Splunk Search

Query to analyze whether log volume from hosts has decreased over time

sudhir_gandhe
Explorer

We use Splunk as a central logging server for both security and IT operations. I would like to know if there is a way to write an alert that will trigger when someone changes the application log level and the volume of log data coming into Splunk decreases. I want to be able to monitor that by host.

lguinn2
Legend

This won't check the log level, but it will tell you every source that has supplied less data than its average. The calculation is based on hourly rates, and the average is the hourly average over the last week; because the current hour is compared against the same hour of the day across the past week, this takes into account the fact that many sources have normal periodic variations in volume.

source=* earliest=-7d@h latest=@h
|  eval currentData = if(_time > relative_time(now(),"-1h@h"),1,0)
|  eval currentHour = strftime(relative_time(now(),"-1h@h"),"%H")
|  bucket _time span=1h
|  eval hour = strftime(_time,"%H") 
|  where hour = currentHour
|  stats count(eval(currentData=0)) as histCount count(eval(currentData=1)) as currentCount by host source _time
|  stats avg(histCount) as AvgEvents max(currentCount) as currentEvents by host source
|  where currentEvents < AvgEvents
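
Since the goal is an alert, one way to schedule this search and fire when it returns any rows is a savedsearches.conf stanza along these lines. This is a minimal sketch, assuming an hourly schedule; the stanza name and email address are placeholders:

[alert_low_log_volume_by_host_source]
enableSched = 1
# run a few minutes after the top of every hour
cron_schedule = 5 * * * *
# the search above, as one logical line (trailing backslashes continue the value)
search = source=* earliest=-7d@h latest=@h \
| eval currentData = if(_time > relative_time(now(),"-1h@h"),1,0) \
| eval currentHour = strftime(relative_time(now(),"-1h@h"),"%H") \
| bucket _time span=1h \
| eval hour = strftime(_time,"%H") \
| where hour = currentHour \
| stats count(eval(currentData=0)) as histCount count(eval(currentData=1)) as currentCount by host source _time \
| stats avg(histCount) as AvgEvents max(currentCount) as currentEvents by host source \
| where currentEvents < AvgEvents
# trigger when the search returns one or more results
counttype = number of events
relation = greater than
quantity = 0
# email notification (address is a placeholder)
action.email = 1
action.email.to = ops-team@example.com
alert.track = 1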

If you have many indexes, hosts or sources, you might want to break this search down so that it isn't running against everything at once. You might also want to look at the Deployment Monitor - it does something very similar but uses summary indexing, so you should probably consider that option as well (a related tstats-based sketch follows below).
I am not sure if the Splunk-on-Splunk (SOS) app has anything like this, but it has a lot of cool stuff for managing your Splunk environment. You should definitely have this free app installed on your Splunk systems.
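
In a similar spirit to the summary-indexing approach (though it is not the same thing), you could do the hourly counting with tstats so the raw events are never retrieved. This is a sketch of my own, not what the Deployment Monitor does, and it is not exactly equivalent to the search above (the current, partial hour is excluded from the historical average here):

| tstats count where index=* earliest=-7d@h latest=@h by host, source, _time span=1h
| eval currentData = if(_time >= relative_time(now(),"-1h@h"),1,0)
| eval currentHour = strftime(relative_time(now(),"-1h@h"),"%H")
| eval hour = strftime(_time,"%H")
| where hour = currentHour
| stats avg(eval(if(currentData=0,count,null()))) as AvgEvents max(eval(if(currentData=1,count,0))) as currentEvents by host, source
| where currentEvents < AvgEvents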

Finally, this search could also be used to find sources that are sending more than the usual amount of data, and you could use a percentile instead of an average, and so on.
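
For example, here is the same search using the 25th percentile of the historical hourly counts instead of the average (perc25 is an arbitrary choice); flipping the final comparison to > would instead catch sources that are sending more than usual:

source=* earliest=-7d@h latest=@h
|  eval currentData = if(_time > relative_time(now(),"-1h@h"),1,0)
|  eval currentHour = strftime(relative_time(now(),"-1h@h"),"%H")
|  bucket _time span=1h
|  eval hour = strftime(_time,"%H")
|  where hour = currentHour
|  stats count(eval(currentData=0)) as histCount count(eval(currentData=1)) as currentCount by host source _time
|  stats perc25(histCount) as Low25Events max(currentCount) as currentEvents by host source
|  where currentEvents < Low25Events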

sudhir_gandhe
Explorer

Perfect! Thank you very much.
