Splunk Enterprise Security

Throttle unless count increases

tracegordon
Engager

In an Enterprise Security correlation search I have a report that emails out when an email address is seen across multiple unique accounts. Each unique account gets its own row, and a field called dc_accounts shows how many unique accounts the email address is present on.

I'm trying to determine the best way to throttle within the correlation search. The search time window spans 90 days, and I only want rows to show up on the report again if the count increases (so accounts are not re-reviewed). I think the best way is to throttle on the fields account, email and dc_accounts, but only let a result through again if dc_accounts has increased from the previous value. So if the account and email values are the same and dc_accounts is less than or equal to the previous value, I don't want to see it.

Is there a way to achieve this? I have thought about writing the output of each run to a lookup file and then using an inputlookup earlier in the search to retrieve the previous value and compare, but I'm not sure if that is the only option available.
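If I went that route, the compare step at the end of the search might look roughly like this (email_dc_state is just a placeholder name for a CSV lookup, with its own lookup definition, that I would maintain):

| lookup email_dc_state email OUTPUT dc_accounts AS prev_dc_accounts
| where dc_accounts > coalesce(tonumber(prev_dc_accounts), 0)

with the outputlookup side keeping the highest dc_accounts seen so far per email.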

The relevant portion of the search is below. There are multiple roles per account, which is why the name and user_id fields are multi-value.

| dedup first_name, last_name, acct_no, user_id, email 
| stats list(first_name) AS first_name list(last_name) AS last_name list(user_id) AS user_id values(acct_no) AS acct_nos by email 
| stats list(first_name) AS first_name list(last_name) AS last_name list(user_id) AS user_id dc(acct_nos) AS dc_accounts by email,acct_nos 
| search dc_accounts>1 
| mvexpand first_name 
| mvexpand last_name 
| mvexpand user_id 
| dedup email, acct_nos, first_name, last_name, user_id 
| stats values(email) AS email values(first_name) AS first_name values(last_name) AS last_name values(user_id) AS user_id by acct_nos, dc_accounts 
| eval first_name=mvjoin(first_name, " ; ") 
| eval last_name=mvjoin(last_name, " ; ") 
| eval user_id=mvjoin(user_id, " ; ") 
| table first_name, last_name, acct_nos, user_id, email, dc_accounts 
| sort 0 -dc_accounts,email

DalJeanis
SplunkTrust

If you want to alert only when the dc goes up, then you might need to write out records when you alert and read in your prior alerts to decide whether to alert again.

The answer linked below provides some code that was specific to throttling for an hour, but a similar strategy could work for you.

https://answers.splunk.com/answers/548711/how-to-throttle-an-alert-using-more-than-one-field.html
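One way to handle the "write out records when you alert" part, if you go with the lookup you described (reusing your placeholder email_dc_state name), is to fold the rows that fired back into that same lookup and keep the highest count per email, roughly:

| fields email dc_accounts
| inputlookup append=true email_dc_state
| stats max(dc_accounts) AS dc_accounts by email
| outputlookup email_dc_state

Tacking that onto the end of the correlation search would replace the alert's final results with the state rows, so it may be cleaner to maintain the lookup from a second scheduled search.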


You could run your query for 91 days, keep _time on the records, sort them in time order, and use streamstats with time_window=90d to calculate your values.

Then use streamstats again with current=f window=1 to copy the dc number forward from yesterday's record and compare.
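A rough sketch of that idea (comparing each record to the previous one per email rather than strictly day to day), using the field names from your search and assuming the raw events still carry acct_no and _time at that point:

| sort 0 _time
| streamstats time_window=90d dc(acct_no) AS dc_accounts by email
| streamstats current=f window=1 last(dc_accounts) AS prev_dc_accounts by email
| where dc_accounts > coalesce(prev_dc_accounts, 0)

You would still want to collapse that down to one row per email before alerting, and 90 days of raw events may be a lot for streamstats to hold in its window, so test it on a shorter range first.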

However, your search already looks overly complicated. I'm very suspicious of the list() calls followed by three separate mvexpands: that creates a cross product of every first_name with every last_name and every user_id, with no business case for it. They should probably be replaced by a single fan-out like the one below, which keeps each first_name aligned with its matching last_name and user_id, although I suspect the whole search can be simplified much further...

| eval myFan = mvrange(0, mvcount(first_name))
| mvexpand myFan
| eval first_name = mvindex(first_name, myFan)
| eval last_name = mvindex(last_name, myFan)
| eval user_id = mvindex(user_id, myFan)

Could you elaborate a little more for us on what the underlying record and account structure is? Just three or four fake records would probably be enough to clear it up.
