
Introduction to Panopticon Streams: Stream Processing with No Coding

Panopticon Streams is the stream processing engine that works with Panopticon Visual Analytics software to form the Panopticon Streaming Analytics platform.
Streams connects directly to a wide range of streaming and historic sources, including Kafka, and supports these critical functions (all four are shown hand-coded in the sketch after this list):
  • Real-Time Data Prep: Combines streaming data with historic data
  • Calculation Engine: Calculates performance metrics based on business needs
  • Aggregation Engine: Aggregates data as needed
  • Alerting Engine: Highlights anomalies against user-defined thresholds
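For context, here is the kind of hand-written Kafka Streams code those four functions would otherwise require. This is a minimal sketch, not Panopticon's implementation: the topic names (price-ticks, previous-close, alerts), the return calculation, and the 5% threshold are all illustrative assumptions.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;

import java.time.Duration;

public class MetricsAndAlerts {
    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        // Streaming input: live prices keyed by symbol (topic name is hypothetical)
        KStream<String, Double> ticks = builder.stream("price-ticks",
                Consumed.with(Serdes.String(), Serdes.Double()));

        // Historic/reference input: previous closing prices, read as a table
        KTable<String, Double> prevClose = builder.table("previous-close",
                Consumed.with(Serdes.String(), Serdes.Double()));

        // Real-time data prep + calculation engine: join the stream to the
        // table and compute a performance metric (return vs. previous close)
        KStream<String, Double> returns = ticks.join(prevClose,
                (price, close) -> (price - close) / close);

        // Aggregation engine: largest absolute return per symbol, per one-minute window
        KTable<Windowed<String>, Double> maxAbsReturn = returns
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Double()))
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
                .aggregate(() -> 0.0,
                        (symbol, ret, max) -> Math.max(max, Math.abs(ret)),
                        Materialized.with(Serdes.String(), Serdes.Double()));

        // Alerting engine: publish only the windows that breach the threshold
        maxAbsReturn.toStream()
                .filter((windowedSymbol, maxAbs) -> maxAbs > 0.05)
                .map((windowedSymbol, maxAbs) -> KeyValue.pair(windowedSymbol.key(), maxAbs))
                .to("alerts", Produced.with(Serdes.String(), Serdes.Double()));

        return builder;
    }
}
```

In Panopticon Streams, each of those steps is a node in a visual data flow rather than a line of code.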
Built for Business Users – Not IT Engineers
Although Panopticon Streams is built on Apache Kafka and Kafka Streams, it requires no coding. It is designed for use by business people.
Users build directed data flows in a standard web browser, gaining the benefits of Kafka without its complexity: there is no need to write a single line of Java, Scala, or even KSQL. Once a data flow is designed, users can start the stream processing model and begin consuming and visualizing its output. Within minutes of receiving the software, they can be up and running, designing and deploying their own real-time streaming business processes.
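Even before any business logic, a hand-coded Kafka Streams application carries boilerplate that Panopticon Streams users never see. A minimal sketch, assuming a local broker at localhost:9092 and illustrative topic names:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class PlainKafkaStreamsApp {
    public static void main(String[] args) {
        // Configuration every hand-coded Kafka Streams application needs
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                Serdes.String().getClass().getName());

        // Simplest possible topology: copy one topic to another unchanged
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic").to("output-topic");

        // Lifecycle management: start the topology and close it cleanly on shutdown
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```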
With Panopticon Streams, firms can make full use of their existing data infrastructures:
  • Kafka
  • Tick Databases (kdb+, OneTick)
  • Message Buses (Solace, AMPS, RabbitMQ, ActiveMQ, more)
  • NoSQL (InfluxDB, Mongo, Cassandra, Elastic, more)
  • Hadoop (Hive, Impala, Spark, Livy)
  • SQL (MSSQL, Oracle, Sybase, Vertica, Greenplum, Netezza, Postgres, MySQL, more)
  • Rest APIs & Web Sockets
  • Flat Files (XML, JSON, Text)
  • Cubes (MS SSAS, ActivePivot)
  • Market data (Thomson Reuters TREP RT)
  • CEP (Kx kdb+tick, OneTick CEP, TIBCO StreamBase, SAP ESP)
In a standard web browser, build stream processing data flows that (for contrast, a hand-coded equivalent of several of these operations follows the list):
  • Subscribe to streaming data inputs, including Kafka streams and others
  • Retrieve from historic and reference data sources
  • Join data streams and tables
  • Aggregate streams within defined time windows
  • Conflate streams
  • Create calculated performance metrics
  • Filter streams
  • Branch streams
  • Union and merge streams
  • Pulse output
  • Create alerts based on performance metrics against defined thresholds
  • Output to Kafka or email, or write to databases such as kdb+, InfluxDB, or any SQL database
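As a point of comparison, here are a few of the operations above (subscribe, filter, branch, union/merge, output) written by hand against the Kafka Streams API. The topic names, the 100-unit size threshold, and the Long order quantities are illustrative assumptions; conflation, pulsed output, and email or database sinks are Panopticon-specific and have no one-line Kafka Streams equivalent.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;

import java.util.Map;

public class BranchAndMerge {
    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        // Subscribe: order quantities keyed by symbol (topic name is hypothetical)
        KStream<String, Long> orders = builder.stream("orders",
                Consumed.with(Serdes.String(), Serdes.Long()));

        // Filter: drop null or zero-quantity records
        KStream<String, Long> valid = orders.filter(
                (symbol, qty) -> qty != null && qty != 0L);

        // Branch: route large and small orders down separate paths
        Map<String, KStream<String, Long>> branches = valid
                .split(Named.as("size-"))
                .branch((symbol, qty) -> Math.abs(qty) >= 100L, Branched.as("large"))
                .defaultBranch(Branched.as("small"));

        // Union/merge: recombine the branches into a single stream
        KStream<String, Long> merged = branches.get("size-large")
                .merge(branches.get("size-small"));

        // Output: write back to Kafka; email or database sinks would attach here
        merged.to("routed-orders", Produced.with(Serdes.String(), Serdes.Long()));

        return builder;
    }
}
```

Every one of those operators maps to a drag-and-drop node in the Panopticon Streams browser interface, which is the point: the topology is the same, but no one has to write or maintain the Java.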