Splunk Search

Calculating throughput

ghildiya
Explorer

In Splunk logs, I have to monitor some specific events. The identifier I use to target those events is the text 'EVENT_PROCESSED'. So my search query is:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED*

It fetches all of my target events. Please note that EVENT_PROCESSED is not an extracted field; it is just text in the event logs.

Now my aim is to find the throughput for these events. So I do this:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED* | timechart span=1s count as throughput

Is this the correct way of determining the throughput rate? If I change span to some other value, say 1h, then I change it to:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED* | timechart span=1h count/3600 as throughput

Is this the correct way?
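
For reference, timechart on its own does not accept arithmetic on an aggregation, so count/3600 as throughput is not valid timechart syntax; the usual pattern is to bucket the counts first and then divide with eval. A minimal sketch of that pattern, reusing the same base search:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED*
| timechart span=1h count
| eval throughput=count/3600

With span=1h, count/3600 is the average number of events per second within each hour bucket. timechart also offers a built-in per_second() rate function, which needs a numeric field to sum, e.g. | eval n=1 | timechart span=1h per_second(n) as throughput.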


spitchika
Path Finder

With your first query, you just need to state the throughput in a "per second" unit. With span=1h, you can still use count alone and state the throughput in a "per hour" unit. If you still want the per-second calculation, store the count in another field, e.g. | eventstats count as "TotalCount", and then do the calculation with eval.


spitchika
Path Finder

Your query might look like this:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED*
| eventstats count as "TotalCount"
| eval throughput=TotalCount/3600
| timechart span=1h values(throughput)


ghildiya
Explorer

This displays the graph as dots, even for a line chart, while a line chart is expected to show a continuous curve.


spitchika
Path Finder
Let me check

spitchika
Path Finder

[screenshot: spitchika_0-1595869228904.png]

This works perfectly for your requirement.

index=abc host=* source=/var/opt/appworkr/logs/logname "item"
| timechart span=1h count
| eval Throughput=round(count/3600,0)
| timechart span=1h values(Throughput)
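
One possible simplification, if the double bucketing is not needed: since the first timechart already groups the events by hour, the eval result can be charted directly, which also keeps the line continuous:

index=abc host=* source=/var/opt/appworkr/logs/logname "item"
| timechart span=1h count
| eval Throughput=round(count/3600,0)
| fields - count

Here fields - count simply drops the raw count column so that only Throughput is plotted.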
