Splunk Search

Calculating throughput

ghildiya
Explorer

In Splunk logs, I have to monitor some specific events. The identifier I use to target those events is the text 'EVENT_PROCESSED'. So my search query is:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED*

It fetches all of my target events. Please note that EVENT_PROCESSED is not an extracted field and is just text in the event logs.

Now my aim is to find the throughput for these events. So I do this:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED* | timechart span=1s count as throughput

Is this the correct way of determining the throughput rate? If I change the span to some other value, say 1h, then I change it to:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED* | timechart span=1h count/3600 as throughput

Is this the correct way?
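
For reference, timechart does not accept arithmetic such as count/3600 in its aggregation clause, so the division has to happen in a separate eval step. A minimal sketch of that pattern (the field names events_per_hour and throughput_per_sec are illustrative, not from the original post):

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED* | timechart span=1h count as events_per_hour | eval throughput_per_sec=round(events_per_hour/3600,2)

Each 1h bucket's count is divided by the 3600 seconds it covers, giving an average per-second rate for that hour.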


spitchika
Path Finder

With your first query (span=1s), you can state the throughput in "per second" units. With span=1h, you can still use count alone and state the throughput in "per hour" units. If you still want to do the conversion, store the count in another field, e.g. | eventstats count as "TotalCount", and then do the calculation using eval.


spitchika
Path Finder

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED* | eventstats count as "TotalCount" | eval throughput=TotalCount/3600 | timechart span=1h values(throughput)

Your query might look like this.
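
One caveat with this sketch: eventstats count counts every event in the whole search time range, not per hour, so TotalCount/3600 only equals an hourly rate when the range is exactly one hour. A per-bucket variant (field names here are illustrative) could bin the events first and count within each bucket:

index=testIndex namespace=testNameSpace host=*testHost* log=*EVENT_PROCESSED* | bin _time span=1h | stats count as hourly_count by _time | eval throughput_per_sec=round(hourly_count/3600,2)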


ghildiya
Explorer

This displays the graph as dots, even for a line chart, while a line chart is expected to show a continuous curve.


spitchika
Path Finder
Let me check

spitchika
Path Finder

[screenshot: spitchika_0-1595869228904.png]

This works perfectly for your requirement.

index=abc host=* source=/var/opt/appworkr/logs/logname "item"
| timechart span=1h count
| eval Throughput=round(count/3600,0)
| timechart span=1h values(Throughput)
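
As a design note, the second timechart re-buckets results that the first timechart has already bucketed by _time; a slightly simpler equivalent (a sketch under the same assumptions) just drops the raw count so only the computed column is charted:

index=abc host=* source=/var/opt/appworkr/logs/logname "item" | timechart span=1h count | eval Throughput=round(count/3600,0) | fields _time, Throughput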
