Splunk Search

What to do when appendcols command can't handle larger counts?

f5x6kb8
Explorer

We need to determine a 30-day average based on the count of two events, a request and a response. The issue is that each generates upwards of 30K events... hourly. The search below works great for short durations, but once the duration increases, the count data from the appendcols is all over the map.
Any ideas would be greatly appreciated!!

index=blah blah blah 
| search field="A Response*"  
| timechart span=1h count as response  
| appendcols [ search field="A Request*"   
| timechart span=1h count as request ]  
| eval reciprocal=round(response/request,2)*100
1 Solution

DalJeanis
Legend

Eliminate appendcols by just processing the data once for both types.

index=blah blah blah 
 | search field="A Response*" OR field="A Request*"
 | bin _time span=1h
 | eval request=if(like(field,"A Request%"),1,0)
 | eval response=if(like(field,"A Response%"),1,0)
 | timechart span=1h sum(request) as request, sum(response) as response  
 | eval reciprocal=round(response/request,2)*100

The search might also be written like this. I'm not sure which is more efficient, or whether the .* at the end is needed.

  | regex field="^A Re(sponse|quest).*"
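
The same single-pass counting can also be folded directly into the timechart aggregation, skipping the two intermediate eval fields. This is a sketch of the identical logic (the if/null() form makes count() skip non-matching events), not a tested alternative:

index=blah blah blah 
 | search field="A Response*" OR field="A Request*"
 | timechart span=1h count(eval(if(like(field,"A Request%"),1,null()))) as request, count(eval(if(like(field,"A Response%"),1,null()))) as response
 | eval reciprocal=round(response/request,2)*100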



f5x6kb8
Explorer

Worked like a charm! Thank you very much for taking the time. Have a Great Weekend!


niketn
Legend

You need to summarize the data per hour (which reduces each day's events from roughly 30K*24 down to 24 summary rows). Then you will be able to run subsearches like appendcols without dropping data.

Refer to the sitimechart and collect commands in the Splunk documentation for reference.
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Sitimechart
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Collect
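
As a rough sketch of that approach (the summary index names here are placeholders, not anything from the original search), a pair of scheduled hourly searches would pre-aggregate each event type into a summary index:

index=blah field="A Response*"
 | sitimechart span=1h count as response
 | collect index=summary_responses

index=blah field="A Request*"
 | sitimechart span=1h count as request
 | collect index=summary_requests

The reporting search then reads roughly 24 rows per day instead of hundreds of thousands of raw events, so the appendcols subsearch no longer hits its limits:

index=summary_responses
 | timechart span=1h count as response
 | appendcols [ search index=summary_requests | timechart span=1h count as request ]
 | eval reciprocal=round(response/request,2)*100

Verify against the docs above that timechart finalizes the si-summarized counts as expected in your version.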

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

somesoni2
Revered Legend

It may also be possible to avoid the appendcols altogether. We can have a look if you share your full search.
