I am trying to get to the bottom of a question I have been chasing for a couple of weeks. I manage our Splunk Enterprise Security instance at work, and I am having trouble with our network traffic correlation rules. While digging into them, I discovered that they are intertwined with the Extreme Search app through commands like xswhere that reference median values of network traffic.
Now, when I enable this search, it triggers every time it runs and shows up in our Incident Review dashboard every few minutes. As I understood it, we had the search running for months to let it establish a baseline, but it never stopped creating notables.
So I have been trying to tune this rule, and the further I go, the more confused I seem to get. Here is the logic I have been able to work out behind the search "Unusual Volume of Network Activity":
The correlation search runs every 30 minutes over a 30-minute window. The search is as follows:
| tstats summariesonly=false allow_old_summaries=true dc(All_Traffic.src) as src_count,count from datamodel=Network_Traffic.All_Traffic | localop | xswhere count from count_30m in network_traffic is extreme or src_count from src_count_30m in network_traffic is extreme | eval const_dedup_id="Network - Unusual Volume of Network Activity - Rule"
The xswhere command takes the computed values (count and src_count) and compares them against the Extreme Search contexts count_30m and src_count_30m to decide whether each value is "extreme".
So I went looking for where src_count and count_30m come from, and I found them in the Extreme Search app. When I look at count_30m there, I see an empty graph and a table of values that looks like this:
count_30m | minimal | low | medium | high | extreme
1298157.87500 | 0.0000000000 | 0.0000000000 | 0.0000000000 | 0.0000000000 | 1.0000000000
My understanding is that when the correlation search runs, it generates a value and compares it against this table. If that value is at or above the count_30m boundary where "extreme" equals 1.000, the search creates a notable, which then lands in our Incident Review dashboard.
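For anyone else digging into this: the context a rule compares against can be inspected directly in search. This is only a sketch — I am assuming the context name (count_30m) and container (network_traffic) from the xswhere clause above, and the exact command syntax may differ by Extreme Search version:

| xsDisplayContext count_30m in network_traffic

This should render the concept curves (minimal/low/medium/high/extreme) that xswhere evaluates values against, which makes it easier to see whether the thresholds are sane.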
Here is our problem: that extreme threshold is way too low. Our network traffic is always well above that number, so the search thinks we have unusually high network traffic. I want to know why Extreme Search isn't giving us an effective baseline on our network data (the correlation search is not turned on, but the count_30m context gen search is and has been running for close to a year).
How can we edit the settings on this search or reset the search to re-establish a baseline?
We want to use extreme searches like this in our networking correlation rules, but the Extreme Search documentation doesn't explain very well how to configure them or create new ones.
TL;DR: I'm looking for help tuning correlation rules that use Extreme Search.
To me, it appears as though the context generation searches are not running, or, if they are, they aren't producing the right values.
Check that "Network - Port Activity By Destination Port - Context Gen" is enabled and running regularly.
That context gen runs the following search to generate results:
| tstats `summariesonly` count as dest_port_traffic_count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest_port,_time span=1d | `drop_dm_object_name("All_Traffic")` | `context_stats(dest_port_traffic_count, dest_port)` | search size>0
Run this over the last 30 days (earliest=-30d@d) and check that it is producing results.
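To confirm which context gen searches are actually enabled and on a schedule, a REST search along these lines can help. This is a sketch — the title filter is an assumption you may need to adjust for your environment, and the fields come from the saved/searches REST endpoint:

| rest /services/saved/searches | search title="*Context Gen*" | table title disabled is_scheduled cron_schedule next_scheduled_time

Anything with disabled=1 or is_scheduled=0 would explain a stale or empty context.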
Thank you for responding. I ran this search over the last 30 days and I can see that my context gen searches have been running. If they are running but not producing the right results, is there a way to fix them? Or can we reset any values they currently have and simply restart context generation from scratch?
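In case it helps anyone who lands here: one possible reset path is to delete the context and let the scheduled context gen rebuild it. This is only a sketch — xsDeleteContext is an Extreme Search command, but the exact syntax here is an assumption (names taken from the rule above), and you should verify against the Extreme Search docs before deleting anything in production:

| xsDeleteContext count_30m in network_traffic

After deleting, re-run (or wait for) the scheduled context gen search over a representative time window so the next xswhere invocation compares against freshly generated values rather than the old baseline.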