Splunk Enterprise Security

How can I trigger an alert for a search over 60 min, but not trigger it again when I run the same search over 4 hours, 8 hours, and 24 hours?

askrei
Engager

I am trying to create Notable Events using the Splunk ES risk framework, and I want to set up multiple correlation searches, each with a different time interval and a different urgency. To start, I have searches that look for risk_score>100 over 60 min, 4 hrs, 8 hrs, and 24 hrs. If a risk object has a risk score > 100 in 60 min, the first alert fires. After that, when the 4hr, 8hr, and 24hr searches run, I would not want another Notable Event to be triggered on the same risk_object. Similarly, if an alert fires for a risk score > 100 within 4 hours, I would not want another Notable Event to be created when the 8hr and 24hr queries run.
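For context, my base 60 min search looks roughly like this (just a sketch against the ES Risk data model; the time window comes from the correlation search schedule):

| tstats sum(All_Risk.risk_score) as risk_score from datamodel=Risk.All_Risk by All_Risk.risk_object, All_Risk.risk_object_type
| rename "All_Risk.*" as *
| where risk_score > 100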

ngatchasandra
Builder

Hi askrei,

  • you can set the alert to run every 60 min

  • you can configure it to stop after a certain number of triggers (expiration time)

  • you can disable it manually

So, having an alert fire again after the expiration time when userX accumulates another 125 pts of risk between 1pm-2pm will not, in my view, be possible with Splunk.


maciep
Champion

How often are you going to run each of those searches? Do you want to reset that check daily? Meaning, if a high risk object is found during the 60m search on Monday, would you still want it to fire if it's seen again during the 4-hour search on Tuesday?

I'm thinking maybe a subsearch to look back over the previous searches' criteria and ignore what should have fired already. Or possibly a lookup to keep track of what has fired.
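Rough sketch of the subsearch idea, assuming the risk_object value actually makes it into the notable index (that's the part you'd have to verify; the field name there is my assumption):

[your base risk score search producing a risk_object field]
| search NOT
    [ search index=notable earliest=-24h risk_object=*
      | dedup risk_object
      | table risk_object ]

The subsearch returns the risk_object values that already generated a notable in the last 24 hours, and the NOT drops them from the outer results.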


askrei
Engager

The idea is to run the search every 60 min and look back 60 min, every 4 hrs and look back 4 hrs, etc. If a high risk object is found during the 60 min search, I would want a notable event to be created, but I would not want another notable event when the searches run across 4, 8 and 24 hrs. However, the search should create a new notable event if the risk score increases again by 100 points within 60 min.

Example: userX accumulates a risk score of 150 between 10am-11am on Monday and a notable event is created. Since a notable has been created, I would not want a new notable to fire when the 4, 8, and 24 hr searches run. However, if the same user, userX, accumulates another 125 pts of risk between 1pm-2pm, I would want another Notable Event to be created.

I tried using a subsearch against the notable index but could not find a way to pass the risk_object through.


maciep
Champion

Two "answers", one much shorter than the other.

  1. Can't you include the name of the risk_object, its score, and which search triggered it by appending those fields to the notable name? Then all of that info would be available for you to parse back out of the name in the notable index.

  2. Use a lookup to keep track of the data somehow.

Something with fields like

risk_object,score,search_scope
comp1,120,1
serv3,200,4
userA,170,24

So that you can then do some lookups during your search like...

[search to find risk score] | [let's say we now have fields newScore and this_search_scope] | lookup risk_lookup risk_object OUTPUT score search_scope | where newScore >= score + 100 AND this_search_scope <= search_scope | [create a notable] | [maybe pipe to outputlookup here or do that in a different search]

So keep track of the objects that you alerted on, how high the risk score was, and which search generated the notable initially (1, 4, 8, 24). Then when you run your searches, compare the score of the current search with what you have in the lookup, and also decide whether you want to alert again based on which search is running now and which one caused the alert initially.
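To make that more concrete, here's a sketch of what the 4-hour search might look like (risk_lookup and the field names are made up, and the exact suppression condition in the where clause is up to you):

| tstats sum(All_Risk.risk_score) as newScore from datamodel=Risk.All_Risk by All_Risk.risk_object
| rename "All_Risk.risk_object" as risk_object
| eval this_search_scope=4
| lookup risk_lookup risk_object OUTPUT score search_scope
| fillnull value=0 score search_scope
| where newScore >= score + 100 AND (search_scope=0 OR this_search_scope<=search_scope)

Anything that survives the where is what you'd turn into a notable.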

Not sure exactly how to get all of that info together and updated in a lookup, but I think this is the best way to go about it.
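For the update step, one option (again just a sketch) is to merge the rows that just alerted back into the same lookup at the end of the search:

[the search above that just fired]
| eval score=newScore, search_scope=this_search_scope
| table risk_object score search_scope
| inputlookup append=true risk_lookup
| dedup risk_object
| outputlookup risk_lookup

inputlookup append=true pulls the existing rows in after the new ones, and dedup keeps the newest row per risk_object before writing the file back out.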
