Dashboards & Visualizations

split and unsplit variables in same chart

marksheinbaum
Explorer

Not sure if this is feasible. Basically, I would like a chart that shows the average of a statistic for different nodes alongside a distinct count of those nodes, so the two searches would be something like:

1. index=xxx sourcetype=yyy | timechart avg(stat1) by node

2. index=xxx sourcetype=yyy | timechart dc(node)

Both searches would show up on the same timechart panel for the same period with the same time span.

Sorry if this is unclear; happy to clarify.

I tried eventstats, append, appendcols, and join, but they do not seem to work for this. Could be I'm misusing them, though; a sketch of the appendcols attempt is below.
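
For reference, the appendcols attempt looked roughly like this (index, sourcetype, and field names are the placeholders from above). Note that appendcols matches rows purely by position, so the two timecharts only line up if they produce identical time buckets:

index=xxx sourcetype=yyy | timechart avg(stat1) by node
| appendcols
    [ search index=xxx sourcetype=yyy
    | timechart dc(node) as nodes ]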


marksheinbaum
Explorer

Thanks for the info. I tried both solutions and they are functionally equivalent, although the "untable" approach only includes time buckets where there are data points, i.e., if the search time range extends beyond the events within it, the resulting timechart only covers the span of the events included. I inspected each search within Splunk for a relatively short time range and saw the following, so the foreach approach seems to be more efficient:

foreach approach: This search has completed and has returned 30 results by scanning 174 events in 1.185 seconds

untable approach: This search has completed and has returned 11 results by scanning 528 events in 1.674 seconds


bowesmana
SplunkTrust

There's always a way to get where you want to go with Splunk.

The issue you have is that timechart with a split-by no longer ends up with a field called node; each value of node is now a column name.

You could use stats by _time and other data mangling, but you'd have to handle missing time buckets in the average, so a simple solution is to effectively count the columns, like this:

| timechart avg(stat1) by node
| eval _nodes=0
| foreach * [ eval _nodes=_nodes + if(isnotnull('<<FIELD>>'), 1, 0) ]
| rename _nodes as nodes
Note the underscore in front of the field name: this prevents Splunk from including the field in the * matching for foreach.

It will create one more column, called nodes, containing the count of nodes present in each time bucket.
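
If you want to try this without real data, here is a minimal self-contained sketch; the makeresults events, the node values, and the stat1 field are all synthetic stand-ins for the search above:

| makeresults count=50
| eval _time = _time - (random() % 3600)
| eval node = "node" . ((random() % 3) + 1)
| eval stat1 = random() % 100
| timechart span=10m avg(stat1) by node
| eval _nodes=0
| foreach * [ eval _nodes=_nodes + if(isnotnull('<<FIELD>>'), 1, 0) ]
| rename _nodes as nodes

Because _time and _nodes start with an underscore, neither is picked up by the * in foreach, so only the per-node columns are tested.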

marksheinbaum
Explorer

Thanks for all the replies. Looks like there are two approaches to explore. The foreach approach seems to work fine; I'd like to explore the other as well. Sorry for the response delay; I had PTO and some other things to do.


ITWhisperer
SplunkTrust

The problem with this is that you will get the same count for all time periods, whether or not the node was "present" in that time period. The original (second) search uses dc(node), which only counts the unique instances of node present in each time period.


bowesmana
SplunkTrust

Yes, I realised that as soon as I posted, so I added the isnotnull test.

ITWhisperer
SplunkTrust

Try something like this:

| timechart span=1h avg(stat1) by node
| untable _time node avg
| appendpipe
    [| stats count as avg by _time
    | eval node="Nodes"]
| xyseries _time node avg

marksheinbaum
Explorer

Sorry, I don't understand this. What is the intent of the appendpipe and xyseries? The end result should be a timechart containing the average of some measurement and a count of distinct "nodes".


ITWhisperer
SplunkTrust
SplunkTrust

The appendpipe effectively reprocesses the stats events returned by the initial timechart, but in order to do this they have to be broken out of the chart format, which is what the untable does. The xyseries then puts the events back into the chart format, with the additional column for the count of nodes in each time period.
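
To see the shape changes concretely, here is a minimal sketch on synthetic data (the makeresults events, node values, and stat1 field are stand-ins for the real search); run it with the final xyseries line removed to see the flat _time/node/avg rows that appendpipe operates on:

| makeresults count=50
| eval _time = _time - (random() % 7200)
| eval node = "node" . ((random() % 3) + 1)
| eval stat1 = random() % 100
| timechart span=30m avg(stat1) by node
| untable _time node avg
| appendpipe
    [| stats count as avg by _time
    | eval node="Nodes"]
| xyseries _time node avg

Because untable emits one row per node per time bucket (and drops null cells), the count here equals dc(node) for each bucket.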
