
SignalFlow: What? Why? How?

CaitlinHalla
Splunk Employee

What is SignalFlow?


Splunk Observability Cloud’s analytics engine, SignalFlow, opens up a world of in-depth analysis on your incoming telemetry data. SignalFlow is the computational backbone that powers all charts and detectors in Splunk Observability Cloud, but this statistical computation engine also allows you to write custom programs in the SignalFlow programming language (modeled after Python). Writing SignalFlow programs definitely isn’t required for a complete observability practice, but if you want to perform advanced computations on your Splunk Observability Cloud data, SignalFlow is your friend. In this post, we’ll explore the whys and hows of SignalFlow so you can run computations, aggregations, and transformations on your data and stream the results to detectors and charts for custom, in-depth observability analytics.

Why use SignalFlow?


Most Splunk Observability Cloud use cases don’t require complex computations. Out-of-the-box charts and detectors make building a complete observability solution easy. But sometimes there is a need for more detailed and advanced insight. SignalFlow can be used to:

  • Define custom behavior or conditions for fine-tuned control over your monitoring so you can tailor your charts and detectors to your specific needs
  • Aggregate metrics from different applications, cloud providers, or environments to unify data
  • Troubleshoot by correlating metrics across many different sources – using SignalFlow during an incident enables deep, real-time investigation into root cause
  • Detect trends over time or compare against historical data to increase resiliency, reduce downtime, and plan capacity – e.g., correlate resource usage with user activity over time to optimize resource allocation
  • Stream metric data to background analytics jobs – e.g., execute computations across a population over time
  • Create reports or visualizations in third-party UIs by streaming data out of Splunk Observability Cloud – e.g., as a service provider, you can stream data to your own UIs and expose it to your customers
  • Correlate business metrics with infrastructure and/or application metrics to understand how performance impacts customers – e.g., how latency impacts customer renewal rates

There are many reasons why you might need to tap into SignalFlow; generally, if you need customized analytics, SignalFlow is the answer. So let’s see how it works!

How to use SignalFlow


You can define SignalFlow queries directly in the Splunk Observability Cloud UI or programmatically using the SignalFlow API. If you open up or create a chart in the UI, you’ll see the chart builder view:

[Screenshot: the chart builder view]

If you select View SignalFlow, you can dive right into the SignalFlow that powers the chart and use it as a template for additional programs: 

[Screenshot: the View SignalFlow panel showing the chart’s underlying SignalFlow program]

The same is true for detectors. If you open up a detector, you can select the kebab icon and choose Show SignalFlow:

[Screenshots: the Show SignalFlow menu option and the detector’s underlying SignalFlow program]

SignalFlow programs outside of the Splunk Observability Cloud UI typically live within code configurations for detectors and dashboards (see our Observability as Code post). When you create a chart or detector using the API, you can specify a SignalFlow program as part of the request. Here’s an example of defining a detector using Terraform, where the program_text is the SignalFlow program: 

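A sketch of what this can look like, with illustrative names and thresholds (the signalfx_detector resource and its program_text attribute come from the Splunk Terraform provider):

    resource "signalfx_detector" "cpu_high" {
      name        = "CPU utilization high"
      description = "Alerts when mean CPU utilization is at or above 90 for 1 minute"

      # program_text holds the SignalFlow program
      program_text = <<-EOF
        signal = data('cpu.utilization').mean(by=['host'])
        detect(when(signal >= 90, '1m')).publish(label='CPU utilization at or above 90')
      EOF

      rule {
        detect_label = "CPU utilization at or above 90"
        severity     = "Critical"
      }
    }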

You can also use the SignalFlow API to run programs in the background and receive the results asynchronously in your client.

Let’s take a look at some SignalFlow functions and methods we can use to build out charts and detectors. 

SignalFlow Functions and Methods in Charts


Most SignalFlow programs begin with a data() block. The data function is used to query data and is the main way to create stream objects, which are similar to a time-parameterized NumPy array or pandas DataFrame. Queries can run against both real-time incoming streams and historical data from the systems you monitor. In SignalFlow, you specify streams as queries that return data. Here’s a template for the data function:

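In sketch form, with placeholders standing in for your own values (rollup and extrapolation are optional parameters):

    data('<metric name>', filter=<filter expression>, rollup='<rollup policy>', extrapolation='<extrapolation policy>')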

We can expand on the data function in many ways. For example, here’s what it would look like to query for the CPU utilization metric and filter by host:

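Something like this, where the host value is illustrative:

    data('cpu.utilization', filter=filter('host', 'my-host'))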

We can also chain methods and functions onto our data block. Here are examples of using the mean method to look at mean CPU utilization, mean CPU utilization by Kubernetes cluster, and mean CPU utilization over the last hour:

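Along these lines (the kubernetes_cluster dimension name is assumed to exist on the metric):

    # mean CPU utilization
    data('cpu.utilization').mean()

    # mean CPU utilization by Kubernetes cluster
    data('cpu.utilization').mean(by=['kubernetes_cluster'])

    # mean CPU utilization over the last hour
    data('cpu.utilization').mean(over='1h')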

Operations like mean, variance, percentile, exclude, ewma, timeshift, rate of change, standard deviation, map(lambda), and others are available as methods on numerical streams.

Here’s an example where our data stream, signal, is the CPU utilization with a filter of host, and we can add functions to timeshift by a week and two weeks, and then find the max value: 

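A sketch of that program, with an illustrative host value (max() here takes the point-by-point maximum across the three streams):

    signal = data('cpu.utilization', filter=filter('host', 'my-host'))
    week_ago = signal.timeshift('1w')
    two_weeks_ago = signal.timeshift('2w')
    max_cpu = max(signal, week_ago, two_weeks_ago)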

Comparing the max CPU utilization for two separate time series can’t be accomplished using the chart plot editor in the Splunk Observability Cloud UI, so this is a case where SignalFlow is necessary.

To actually output these stream results to a chart, we need to call the publish() method: 

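Continuing the sketch above:

    max_cpu.publish(label='Max CPU utilization')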

We’ve now built out a chart using SignalFlow 🎉! We can also do this with detectors – read on.

SignalFlow Functions and Methods in Detectors


Detectors evaluate conditions involving one or more streams, and typically compare streams over periods of time – e.g., disk utilization is greater than 80% for 90% of the last 10 minutes. When building detectors using SignalFlow, we still start with our data streams, and then transform them using boolean logic:

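For example, a when statement that is true when mean CPU utilization is at or above 90 for 1 minute (the metric and threshold are illustrative):

    signal = data('cpu.utilization').mean(by=['host'])
    cpu_too_high = when(signal >= 90, '1m')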

Note: when setting static thresholds in the UI, you can only use greater than or less than comparisons. But as you can see with SignalFlow, we can also specify greater than or equal to static thresholds.

We can use these when statements on their own or combine them with and, or, and not statements to publish our alert conditions and build out our detectors:

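Building on the when statement above (the memory stream and the labels are illustrative):

    # a single condition
    detect(cpu_too_high).publish(label='CPU utilization at or above 90')

    # conditions combined with and
    memory_too_high = when(data('memory.utilization').mean(by=['host']) >= 80, '1m')
    detect(cpu_too_high and memory_too_high).publish(label='CPU and memory critical')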

The detect streams in this example are similar to data streams. Detect streams turn our boolean statement – when our signal is greater than or equal to 90 for 1 minute – into an event stream. When the statement evaluates to true, an event fires and is published to the event stream, which is what triggers an alert.

Note: event streams are evaluated and published in real time as metrics are ingested, enabling you to find problems faster and reduce your MTTD (mean time to detection).

Every publish method call in a SignalFlow detect statement corresponds to a rule on the Alert Rules tab in the Splunk Observability Cloud UI. The label inside the publish block is displayed next to the number of active alerts in the Alert Rules tab: 

[Screenshot: the Alert Rules tab, with the published label shown next to the number of active alerts]

You can create your detectors using the SignalFlow API, but if you want to use SignalFlow to build detectors directly in the Splunk Observability Cloud detector UI, you can append /#/detector/v2/new to your organization URL to do so: 

[Screenshot: the SignalFlow detector editor in the Splunk Observability Cloud UI]

Wrap up


While working with SignalFlow is not required, it can help customize and advance your observability practice. A great place to start is editing the SignalFlow behind existing charts and detectors in the Splunk Observability Cloud UI or using observability as code with SignalFlow programs. In no time, you’ll be building out background SignalFlow jobs and streaming customized analytics to meet all your observability and business needs.

New to Splunk Observability Cloud? Try it free for 14 days
