Splunk Search

How do I graph a continuous timechart over a long period of time?



I have implemented a dashboard in Splunk Enterprise that uses a timechart (among other things) to graph network jitter values pulled from syslog files. The purpose of the chart is to show historic "real-time" jitter values: the user enters a time range, and the jitter values in that range are graphed over time. The timechart must use a span of 1s so we can see every single data point (events can happen at any second), and the x-axis should be continuous so we can see what happened in "real time". I understand that this will produce a lot of empty data points and that the browser's result truncation limit will have to be overridden (depending on how many data points I need to graph).

I am somewhat new to Splunk and I am able to get the correct graph for certain cases. The problem occurs when I set my time range for my search too long.

Here is my search so far:

<search id="baseSearch">
  <query><![CDATA[sourcetype=syslog AND
(jitter OR CC:)
| rex field=_raw "source=.(?P<Multicast_Address>\d*.\d*.\d*.\d*)"
| search Multicast_Address=$multicast_address_token$
| rex field=_raw "[Jj]itter\s+\((?P<Jitter>\d+)"
| rex field=_raw "^(?:[^\(\n]*\(){3}(?P<dropped_packets>\d+)"
| search Jitter>=$jitter_start_token$ OR dropped_packets>=1
| fillnull value=-
| rex field=_raw "^\w+\s+\d+\s+(?P<UTC_Time>[^ ]+)(?:[^ \n]* ){3}(?P<UTC_Date>\d+\-\d+\-\d+)"
| rename name AS "Stream Name", host AS "Device IP", UTC_Time AS "UTC Time", UTC_Date AS "UTC Date", dropped_packets AS "Dropped/Lost Packets"]]></query>
</search>


    <input type="text" searchWhenChanged="true" token="trunication_token">
      <label>truncation limit</label>
    </input>

    <chart>
      <search base="baseSearch">
        <query>timechart fixedrange=f cont=t span=1s limit=0 list(Jitter) by Multicast_Address</query>
      </search>
      <option name="charting.chart.showMarkers">true</option>
      <option name="charting.data.count">0</option>
      <option name="charting.chart">line</option>
      <option name="charting.axisY2.enabled">undefined</option>
      <option name="charting.drilldown">all</option>
      <option name="charting.chart.nullValueMode">zero</option>
      <option name="charting.axisTitleX.text">UTC Time</option>
      <option name="charting.axisTitleY.text">Jitter (ms)</option>
      <option name="charting.chart.resultTruncationLimit">$trunication_token$</option>
      <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.axisX.scale">linear</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.axisY2.scale">inherit</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
      <option name="charting.legend.placement">right</option>
    </chart>

Once the search time range gets too long, the chart stops being continuous: it plots only the points that have data and no longer fills the gaps with zero. I have set the charting.chart.resultTruncationLimit option with a token, so I know that is not the problem. I suspect some sort of data or time limit, but I am not sure which.

Again, I am new to Splunk, so if the way I have gone about this timechart is all wrong, or this search is not ideal, please let me know too!

Any help would be great!!!




First, some general guidance:

As a general good practice, you should take the rex-extracted fields and turn them into normal field extractions so that Splunk can use them as directly searchable fields. I generally suggest people use rex for one-off stuff, ad-hoc searches, and for figuring out what a usable field should be, then convert it to a field extraction for future use. Anything that gets scheduled, repeated, or dashboarded should have as little inline regex as possible.
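As a sketch of that conversion (the stanza and extraction names are illustrative, reusing the regexes from the search above), an inline rex can be moved into props.conf on the search head as an EXTRACT rule:

    # props.conf -- illustrative; assumes the sourcetype is "syslog"
    [syslog]
    EXTRACT-jitter = [Jj]itter\s+\((?P<Jitter>\d+)
    EXTRACT-mcast  = source=.(?P<Multicast_Address>\d*.\d*.\d*.\d*)

After refreshing the configuration, Jitter and Multicast_Address become search-time fields, so the dashboard search can filter on them directly without running rex on every event.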

Potential issue:
The span needs to go up as the range in the time selector increases. 1-second increments for anything more than an hour become hard to read and increasingly expensive for Splunk to return. The chart can only draw so many points, so that is where I would start tweaking.



Thank you for your help! I appreciate the general advice and have just implemented it in my code.

As for the graph: I understand it produces a lot of data, and that increasing the span would show fewer points while keeping the graph continuous. However, if I keep the span at 1s, at some point Splunk seems to ignore cont=true and span=1s and graphs the data non-continuously. I understand why Splunk might do this (most likely too much data to plot), but I am curious exactly WHEN Splunk decides to do it. There is clearly some limit I am exceeding (I don't think it is the truncation limit, because I have tested that), but I can't find any documentation about it.



That's a great question, I'm not sure what the technical limit is. Support may be able to dig that out.
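That said, one place such caps typically live is limits.conf. This is a guess at what to check, not a confirmed cause: statistical commands like timechart are bounded by maxresultrows, which defaults to 50,000 rows -- and at span=1s, one row per second works out to roughly 13.9 hours of bins before the cap is hit.

    # limits.conf (server-side; the value shown is the documented default)
    [searchresults]
    # Maximum number of result rows a search can return; with span=1s
    # this would cap a timechart at ~50,000 one-second bins (~13.9 hours).
    maxresultrows = 50000

If your continuous charts break down at around that range, this setting (or a related limits.conf cap) would be worth raising with Support.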

I've always looked at it from a usability standpoint. Instead of 1s spans, candlestick charting is probably better; from there you can move to larger increments, which should stay consistent, since it doesn't sound like the individual 1-second gaps are significant.

Generally, this is how I build graphs

Suggested Span:
1s = anything less than 5 minutes
1m = anything less than 4 hours
1 hour = anything less than 2 days
4 hours = anything less than 7 days
12 hours = anything less than 30 days
1 day = anything less than 6 months
1 week = anything less than 1 year
1 month = anything less than 2 years
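One way to wire that table into the dashboard (the token name span_token and the labels are illustrative) is a dropdown input that feeds the timechart span:

    <input type="dropdown" token="span_token" searchWhenChanged="true">
      <label>Chart span</label>
      <choice value="1s">1 second (under 5 min)</choice>
      <choice value="1m">1 minute (under 4 hours)</choice>
      <choice value="1h">1 hour (under 2 days)</choice>
      <choice value="4h">4 hours (under 7 days)</choice>
      <choice value="12h">12 hours (under 30 days)</choice>
      <choice value="1d">1 day (under 6 months)</choice>
      <choice value="1w">1 week (under 1 year)</choice>
      <choice value="1mon">1 month (under 2 years)</choice>
      <default>1m</default>
    </input>

The chart query then becomes timechart fixedrange=f cont=t span=$span_token$ limit=0 list(Jitter) by Multicast_Address, so the user picks a span to match the time range they selected.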

I haven't had to graph anything beyond that. Those spans should keep the chart continuous while remaining usable. If you are reporting on a lot of data, you may also want to look at accelerated searches, summary indexing, or tstats to help performance.
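As a rough sketch of the summary-indexing route (the index name jitter_summary and the per-second aggregation are assumptions, not from this thread, and it presumes Jitter and Multicast_Address have been made proper field extractions as suggested above), a scheduled search can pre-aggregate jitter and write it to a summary index with collect:

    sourcetype=syslog (jitter OR CC:)
    | bin _time span=1s
    | stats avg(Jitter) AS Jitter by _time Multicast_Address
    | collect index=jitter_summary

The dashboard then searches index=jitter_summary instead of the raw syslog data, which is far fewer events for the timechart to bin.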
