Hi meetmash, sorry for the delay. What I want to achieve is to have Splunk ES perform an adaptive response. That AR is supposed to be attached to my detection rules and notify us on an alternative channel (like Slack, Webex, Teams, etc.). However, we only want to get notified when the urgency of the notable (after the risk correlation) is higher than low. I've been digging into this for some time and figured out that the event_id, which is required to get the urgency, can be retrieved through the macro `get_event_id_meval`. There is also a macro named `get_urgency`, but that one does not take the risk calculation into account. Ultimately, I found other macros that seem to influence the final urgency and ended up with the following script: [...]
from time import sleep
import splunklib.results as results

# `helper` is provided by the adaptive response (modular alert) framework;
# `service` is an already-connected splunklib.client.Service instance.
orig_sid = helper.settings.get('sid')
events = helper.get_events()
for event in events:
    orig_rid = event.get('rid')
    kwargs_blockingsearch = {"output_mode": "json", "earliest_time": "-10m", "latest_time": "now"}
    search_query = f"""search index=notable orig_sid={orig_sid} orig_rid={orig_rid}
        | eval `get_event_id_meval`, rule_id=event_id
        | `get_correlations`
        | `get_urgency`
        | `risk_correlation`"""
    try:
        # Run the search and poll until the job is done
        job = service.jobs.create(search_query, **kwargs_blockingsearch)
        while not job.is_done():
            sleep(.2)
        result_reader = results.JSONResultsReader(job.results(output_mode="json"))
        urgency_levels = {"critical", "high", "medium"}
        for result in result_reader:
            if isinstance(result, dict):
                if result.get("urgency") in urgency_levels:
                    """Here comes the code to notify us on the alternative channel"""
                else:
                    """Urgency not higher than low, ignore"""
    except Exception as e:
        """some exception logging"""

I tried hard-coding some known sid and rid and the script worked fine. However, attaching this as an AR to a detection rule doesn't trigger the notification. Any clue what I am missing?