Splunk Search

Diff on device Feature Request

mbrose
New Member

Would it be possible to alert on a device if its log volume increases?

Let's say you bring a new device into Splunk and let it run for a few weeks. Now I have a benchmark for the normal amount of logs it generates. Then activate a diff-type search on the device after those few weeks of normal logs. If the log volume increases, I get an alert to take a look!


rtadams89
Contributor

There are quite a few different ways to accomplish this; martin_mueller's suggestion is a good one. However, speaking from experience, this is a very difficult thing to monitor without generating a ton of false positives. It all depends on the data you are watching. If, for example, you were to set up such an alert for Windows Event Log events, you would find that it probably trips every Monday morning when a bunch of users access the network at 8:00 AM. If you adjust the thresholds to ignore this, you risk missing a similar, genuinely unexpected spike at 11:00 PM on a Sunday.

I set up some similar alerts and spent a good week writing horrendous search queries that were many lines long. I ended up comparing the data coming in over the last hour to the average coming in during the same day/hour over the last 4 weeks. After fiddling with this for two months and still not getting what I wanted, I gave up, just graphed the data on a chart, and got an intern to sit and watch it for anomalies.
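
For what it's worth, here is a rough sketch in SPL of that same-day/hour comparison (untested; "sourcetype=your_data" and the 1.5 multiplier are placeholders to tune for your data):

sourcetype=your_data earliest=-28d@h latest=@h
| bin _time span=1h
| stats count by _time
| eval hourofweek=strftime(_time, "%a %H")
| eventstats avg(count) as baseline by hourofweek
| tail 1
| where count > baseline * 1.5

This buckets counts by hour, tags each bucket with its day-of-week/hour-of-day pair, compares the most recent complete hour against the average for that same pair across the window, and fires when it runs 50% hot. Note the baseline includes the hour being tested, which slightly dampens spikes.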


martin_mueller
SplunkTrust

Indeed, the type of data is very relevant to how you want to monitor it. If, for example, you have traffic data that tracks local waking hours, you could compute averages per hour of day over a few weeks and compare each hour with its average. To refine this further you can incorporate the day of the week, so that in the end you compare Sunday 1am to 2am with the average volume seen in that hour on the last few Sundays, and likewise for all the other 167 hours in a week 🙂
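
To illustrate, a sketch of that hour-of-week comparison (untested; the standard-deviation band in place of a fixed percentage is my own tweak, so the threshold adapts to hours with naturally noisy volume):

your search here earliest=-35d@h latest=@h
| bin _time span=1h
| stats count by _time
| eval hourofweek=strftime(_time, "%a %H")
| eventstats avg(count) as baseline, stdev(count) as spread by hourofweek
| tail 1
| where count > baseline + 2 * spread

With exactly five weeks of history, each of the 168 hour-of-week slots gets five samples to average over.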


martin_mueller
SplunkTrust

No need for a new feature. Just write a saved search that computes, say, the daily event count for the past few weeks, then calculates an average and raises an alert if yesterday was significantly above average. Something like this (untested):

your search here earliest=-30d@d latest=@d
| timechart span=1d count
| eventstats avg(count) as average
| tail 1
| where count > average * 1.1

Now schedule that to run daily and create an alert that triggers whenever there is a result. Bounding the search with latest=@d keeps the last timechart bucket as yesterday's full day rather than today's partial one.
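
If you would rather manage it in configuration than through the UI, the scheduled alert could look roughly like this in savedsearches.conf (the stanza name, cron schedule, and email address are placeholders; dispatch.earliest_time/dispatch.latest_time replace the inline time bounds):

[daily_volume_alert]
search = your search here | timechart span=1d count | eventstats avg(count) as average | tail 1 | where count > average * 1.1
dispatch.earliest_time = -30d@d
dispatch.latest_time = @d
enableSched = 1
cron_schedule = 0 6 * * *
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
actions = email
action.email.to = you@example.com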
