Splunk Search

Query to detect whether log volume from hosts has decreased over time

sudhir_gandhe
Explorer

We use Splunk as a central logging server for both security and IT operations. I would like to know if there is a way to write an alert that will trigger when someone changes the application log level and the volume of log data coming into Splunk decreases. I want to be able to monitor this per host.

1 Solution

lguinn2
Legend

This won't check the log level, but it will list every source that has supplied less than its average amount of data. The comparison is hourly: the most recent complete hour is measured against the average for that same hour of day over the last week, which accounts for the normal periodic variations in volume that many sources have.

source=* earliest=-7d@h latest=@h
|  eval currentData = if(_time > relative_time(now(),"-1h@h"),1,0)
|  eval currentHour = strftime(relative_time(now(),"-1h@h"),"%H")
|  bucket _time span=1h
|  eval hour = strftime(_time,"%H") 
|  where hour = currentHour
|  stats count(eval(currentData=0)) as histCount count(eval(currentData=1)) as currentCount by host source _time
|  stats avg(histCount) as AvgEvents max(currentCount) as currentEvents by host source
|  where currentEvents < AvgEvents
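
If this fires too often on normal fluctuation, the final test could be tightened to require a substantial drop; for example, alert only when the last hour is below half the average (the 0.5 factor is an arbitrary starting point to tune):

|  where currentEvents < 0.5 * AvgEvents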

If you have many indexes, hosts, or sources, you might want to break this search down so that it isn't running over everything at once. You might also want to look at the Deployment Monitor app, which does something very similar but uses summary indexing; it is worth considering as well.
I am not sure whether the Splunk-on-Splunk (SOS) app has anything like this, but it has a lot of useful tools for managing your Splunk environment. It is a free app that is worth installing on your Splunk systems.

Finally, this search could also be used to find sources that are sending more than the usual amount of data. You could also use a percentile instead of an average, or adjust the comparison in other ways.
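
As a sketch of that spike-detection variant, the same pipeline can end with a percentile comparison instead of an average (perc90 is Splunk's 90th-percentile stats function; the choice of the 90th percentile here is arbitrary):

source=* earliest=-7d@h latest=@h
|  eval currentData = if(_time > relative_time(now(),"-1h@h"),1,0)
|  eval currentHour = strftime(relative_time(now(),"-1h@h"),"%H")
|  bucket _time span=1h
|  eval hour = strftime(_time,"%H")
|  where hour = currentHour
|  stats count(eval(currentData=0)) as histCount count(eval(currentData=1)) as currentCount by host source _time
|  stats perc90(histCount) as P90Events max(currentCount) as currentEvents by host source
|  where currentEvents > P90Events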


sudhir_gandhe
Explorer

Perfect! Thank you very much.


