Splunk Search

How to edit my search to measure appendcols results by minutes or greater?

bfong
Engager

Hi all,

Looking to measure the cache hit rate percentage for a source (or sources), listing time, source, cache hits, total hits, and cache hit rate. All fields return the information I need, except for the time field, which returns results in seconds. Using the time picker also returns the same results. Is there a way to provide broader data cuts than seconds? Thanks!

index="[myIndex]" source="[mySource]" status!=null earliest=-15m@h latest=now | stats count as total by source, time | appendcols [search index="[myIndex]" source="[mySource]" cache=HIT earliest=-15m@m latest=now | stats count as hit by source, time ]| eval perc=round((hit/total)*100,2) | fields time,source,hit,total,perc
1 Solution

lguinn2
Legend

You could do this much more efficiently with this search, and solve your problem as well:

index="[myIndex]" source="[mySource]" (status!=null OR cache=HIT) earliest=-15m@m latest=now 
| bin _time span=5m
| stats count(eval(cache="HIT")) as hit count(eval(status!="null")) as total by _time source
| eval perc=round((hit/total)*100,2) 

FYI, you are not testing whether or not the status field is null; you are testing whether status is not equal to the literal string "null".
While this may effectively be the same thing, you should probably be testing status=*, which explicitly means "only return events with a value in the status field."
In that case, an even better version of the search would be:

index="[myIndex]" source="[mySource]" (status=* OR cache=HIT) earliest=-15m@m latest=now 
| bin _time span=5m
| stats count(eval(cache="HIT")) as hit count(eval(isnotnull(status))) as total by _time source
| eval perc=round((hit/total)*100,2) 
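
If 5-minute buckets are still too fine a cut, widen the span argument of bin. Here is a minimal variation of the search above, assuming the same placeholder index and source; the span=1h value and the -24h@h time range are only example choices for broader cuts:

index="[myIndex]" source="[mySource]" (status=* OR cache=HIT) earliest=-24h@h latest=now 
| bin _time span=1h
| stats count(eval(cache="HIT")) as hit count(eval(isnotnull(status))) as total by _time source
| eval perc=round((hit/total)*100,2) 

bin accepts any span Splunk understands (1m, 5m, 1h, 1d, and so on), so the same search covers the "minutes or greater" cases from the original question.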
