Reporting

Complex search overlaying different time frames

twistedsixty4
Path Finder

Hey all,
I'm working on two saved searches. The first counts the total firewall denies per hour for the last month, then averages them by weekday/hour; the second is just the count of firewall denies per hour for the last seven days. Neither of these is a timechart; instead I use eval to create a sort field by day of the week and then by hour, so the first entry is "Monday: 00" and the last is "Sunday: 23". My problem is that I'm on a very large network and these searches are very slow if I don't accelerate them (the monthly search took six hours the first time it ran).

As far as I have gathered, I need to combine these searches in order to graph them on one chart, but I haven't been able to figure out how to apply two different time frames within a single search. And even if I can, what can I do to stop this search from taking six hours?

Here are my searches:
First:

(index=cisco* OR index=f5 OR index=app) linecount=1
| search action="blocked"
| eval hour = case(
    date_hour=="0","00", date_hour=="1","01", date_hour=="2","02", date_hour=="3","03",
    date_hour=="4","04", date_hour=="5","05", date_hour=="6","06", date_hour=="7","07",
    date_hour=="8","08", date_hour=="9","09", date_hour=="10","10", date_hour=="11","11",
    date_hour=="12","12", date_hour=="13","13", date_hour=="14","14", date_hour=="15","15",
    date_hour=="16","16", date_hour=="17","17", date_hour=="18","18", date_hour=="19","19",
    date_hour=="20","20", date_hour=="21","21", date_hour=="22","22", date_hour=="23","23",
    date_hour=="01","01", date_hour=="02","02", date_hour=="03","03", date_hour=="04","04",
    date_hour=="05","05", date_hour=="06","06", date_hour=="07","07", date_hour=="08","08",
    date_hour=="09","09")
| eval date_wday = strftime(_time, "%A")
| eval day:hour = date_wday + ": " + hour
| eval sort_field = case(date_wday=="Monday",1, date_wday=="Tuesday",2, date_wday=="Wednesday",3,
    date_wday=="Thursday",4, date_wday=="Friday",5, date_wday=="Saturday",6, date_wday=="Sunday",7)
| stats count by sort_field, day:hour
| stats avg(count) by sort_field, day:hour
| fields - sort_field
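(Side note for readers: that whole case() just zero-pads date_hour. A single strftime on _time should give the same thing, and the accepted solution below ends up doing exactly this:)

| eval hour = strftime(_time, "%H")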

Second:

(index=cisco* OR index=f5 OR index=app) linecount=1
| search action="blocked"
| eval hour = case(
    date_hour=="0","00", date_hour=="1","01", date_hour=="2","02", date_hour=="3","03",
    date_hour=="4","04", date_hour=="5","05", date_hour=="6","06", date_hour=="7","07",
    date_hour=="8","08", date_hour=="9","09", date_hour=="10","10", date_hour=="11","11",
    date_hour=="12","12", date_hour=="13","13", date_hour=="14","14", date_hour=="15","15",
    date_hour=="16","16", date_hour=="17","17", date_hour=="18","18", date_hour=="19","19",
    date_hour=="20","20", date_hour=="21","21", date_hour=="22","22", date_hour=="23","23",
    date_hour=="01","01", date_hour=="02","02", date_hour=="03","03", date_hour=="04","04",
    date_hour=="05","05", date_hour=="06","06", date_hour=="07","07", date_hour=="08","08",
    date_hour=="09","09")
| eval date_wday = strftime(_time, "%A")
| eval day:hour = date_wday + ": " + hour
| eval sort_field = case(date_wday=="Monday",1, date_wday=="Tuesday",2, date_wday=="Wednesday",3,
    date_wday=="Thursday",4, date_wday=="Friday",5, date_wday=="Saturday",6, date_wday=="Sunday",7)
| stats count by sort_field, day:hour
| fields - sort_field

Thanks for the help.


UPDATE:

I've tried to factor the searches together, but it's not returning the right results.

(index=cisco* OR index=f5 OR index=app) linecount=1
| search action="blocked"
| eval hour = case(
    date_hour=="0","00", date_hour=="1","01", date_hour=="2","02", date_hour=="3","03",
    date_hour=="4","04", date_hour=="5","05", date_hour=="6","06", date_hour=="7","07",
    date_hour=="8","08", date_hour=="9","09", date_hour=="10","10", date_hour=="11","11",
    date_hour=="12","12", date_hour=="13","13", date_hour=="14","14", date_hour=="15","15",
    date_hour=="16","16", date_hour=="17","17", date_hour=="18","18", date_hour=="19","19",
    date_hour=="20","20", date_hour=="21","21", date_hour=="22","22", date_hour=="23","23",
    date_hour=="01","01", date_hour=="02","02", date_hour=="03","03", date_hour=="04","04",
    date_hour=="05","05", date_hour=="06","06", date_hour=="07","07", date_hour=="08","08",
    date_hour=="09","09")
| eval date_wday = strftime(_time, "%A")
| eval day:hour = date_wday + ": " + hour
| eval sort_field = case(date_wday=="Monday",1, date_wday=="Tuesday",2, date_wday=="Wednesday",3,
    date_wday=="Thursday",4, date_wday=="Friday",5, date_wday=="Saturday",6, date_wday=="Sunday",7)
| append
    [ search earliest=-7d@h latest=@h
      | stats count as "7d Count" by sort_field, day:hour
      | fields - sort_field ]
| stats count by sort_field, day:hour
| stats avg(count) by sort_field, day:hour
| fields day:hour, avg(count), "7d Count"
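(For what it's worth, I believe part of why this doesn't return right is that the subsearch never creates sort_field or day:hour, since none of the evals are repeated inside the brackets. Something like the sketch below would at least populate those fields, though an append subsearch is still subject to time and result limits on a dataset this size, which is why I ended up dropping the approach entirely:)

| append
    [ search (index=cisco* OR index=f5 OR index=app) linecount=1 action="blocked" earliest=-7d@h latest=@h
      | eval hour = strftime(_time, "%H")
      | eval date_wday = strftime(_time, "%A")
      | eval day:hour = date_wday + ": " + hour
      | stats count as "7d Count" by day:hour ]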

UPDATE #2:

I've been doing some digging, and I'm wondering: is it possible to change my last stats function to something like this?

stats avg(count) as "30d Average", count(where(earliest=-7d@h latest=@h)) as "7d Count" by sort_field,day:hour 
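As far as I know, stats has no count(where(...)) syntax. The closest working idiom should be an eval() expression inside an aggregation function, which needs _time still present on each row, so the counts have to be bucketed per hour first. A sketch of what I mean (since each day:hour occurs only once in a 7-day window, the sum is just that hour's count):

... | bucket _time span=1h
| stats count by sort_field, day:hour, _time
| stats avg(count) as "30d Average",
    sum(eval(if(_time >= relative_time(now(), "-7d@h"), count, 0))) as "7d Count"
    by sort_field, day:hour
| fields - sort_field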
1 Solution

twistedsixty4
Path Finder

Alright, I finally figured it out. My problem was that the time limit on a subsearch wouldn't let me properly combine these two searches. The searches themselves are fundamentally identical; it's just that one applies an avg function on top. It took a lot of different approaches to finally get to what actually worked.

What I ended up doing was binning _time to span=1h, then using an if statement coupled with a relative_time function to keep only the count values within the last 168 hours. Here's my search if anyone is interested.

(index=cisco* OR index=f5 OR index=app) linecount=1
| search action="blocked"
| eval date_hour = strftime(_time, "%H")
| eval date_wday = strftime(_time, "%A")
| eval day:hour = date_wday + ": " + date_hour
| eval sort_field = case(date_wday=="Monday",1, date_wday=="Tuesday",2, date_wday=="Wednesday",3,
    date_wday=="Thursday",4, date_wday=="Friday",5, date_wday=="Saturday",6, date_wday=="Sunday",7)
| bucket _time span=1h
| stats count by sort_field, day:hour, _time
| eval last_week = if(_time >= relative_time(now(), "-168h@h"), count, null())
| stats avg(count) as 30d_Average, values(last_week) as 7d_Count by sort_field, day:hour
| fields - sort_field
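To see why this works: bucket gives one row per hour, the if()/relative_time() nulls out any count older than 168 hours, and since every weekday:hour combination occurs exactly once in any 168-hour window, values(last_week) yields a single number while avg(count) averages every occurrence in the full 30-day range. Here's a minimal self-contained toy (28 days of synthetic hourly rows via makeresults, with random() standing in for the real deny counts) that demonstrates the same slot logic without touching the firewall indexes:

| makeresults count=672
| streamstats count as n
| eval _time = relative_time(now(), "@h") - n*3600
| eval count = (random() % 100) + 1
| eval date_wday = strftime(_time, "%A")
| eval day:hour = date_wday . ": " . strftime(_time, "%H")
| eval last_week = if(_time >= relative_time(now(), "-168h@h"), count, null())
| stats avg(count) as 30d_Average, values(last_week) as 7d_Count by day:hour

The resulting table, with day:hour on the x-axis and 30d_Average and 7d_Count as two series, charts directly as the single-chart overlay I was after.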

NOTE:

In case you didn't notice, I also spent a good chunk of time factoring down my search to make it more readable.

