Splunk Enterprise Security

What is Macro : modular_action_invocations(Apps: Splunk_SA_CIM)?

jkay2016
Engager

Hi

I noticed quite a number of jobs running in the background attributed to the macro "modular_action_invocations". From the job activity page, the jobs are owned by users and linked to the Search app. Some of these jobs take quite a while to complete, ranging anywhere from seconds to hours. This affects our operations when the concurrent search limit is reached.

Appreciate some help if anyone can provide more information on the macro.
I'm aware that I can increase the CPU/hardware resources to mitigate the situation, but I would like to understand it better first.

Thanks
Jim

Additional note:
Not sure if it is related, but most of my team's analysts use the Incident Review module in Enterprise Security to manage notable events. At any point in time, there are 3-4 analysts working in it.

Macro : modular_action_invocations(2)

Apps : Splunk_SA_CIM


gabriel_vasseur
Contributor

This issue arose again for us this week. Users were reporting very poor performance from Splunk, although it didn't look like the platform was heavily used. The most obvious symptom was a large number of ad-hoc searches running modular_action_invocations, each taking several minutes to finish.
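As a rough way to quantify it, you can pull the completed searches out of the audit index. This is only a sketch: it assumes the macro name survives into the audited search string (if it doesn't on your version, filter on the expanded tstats text instead) and that total_run_time is populated on the info=completed events:

index=_audit action=search info=completed search="*modular_action_invocations*"
| stats count avg(total_run_time) as avg_seconds max(total_run_time) as max_seconds by user
| sort - count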

It was interesting to read in https://community.splunk.com/t5/Splunk-Enterprise-Security/modular-actions-invocations-macro/m-p/390... that this search is triggered when a notable is expanded within Incident Review, so it is bound to happen a lot.

Looking at the macro definition and running it manually, I couldn't reproduce the slowness: it seemed to complete fine. I checked what @lakshman239 recommended and everything seemed to be in order.

That said, I did notice that the macro starts with "tstats summariesonly=false", and I know from experience that tstats searches can be surprisingly slow even when data model summaries cover virtually all of the search time window, so I changed it to "tstats summariesonly=true".
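If you want to try the same thing, the override lives in a local macros.conf for Splunk_SA_CIM (or you can edit the macro via Settings > Advanced search > Search macros). The stanza below is only a sketch: the args and definition shown are illustrative, not the shipped ones, so copy the real definition field from Splunk_SA_CIM/default/macros.conf and only flip the summariesonly flag:

# $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/local/macros.conf
# Illustrative only: take the real args/definition from default/macros.conf
# and change summariesonly=false to summariesonly=true.
[modular_action_invocations(2)]
args = action_name, search_id
definition = tstats summariesonly=true count from datamodel=Splunk_Audit.Modular_Actions where Modular_Actions.action_name="$action_name$" Modular_Actions.sid="$search_id$" by Modular_Actions.action_mode Modular_Actions.action_status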

This seems to have helped, although it's early to tell.

I'm aware that I've introduced the risk of the search not returning results for recent notables; however, we don't really use fancy adaptive responses, so I don't think it will impact us.

I'm also aware I've made upgrading ES in the future more difficult: if that macro is updated by the upgrade, my local tweak will override the upgraded version and it might break things.

But desperate times call for desperate measures. I hope this helps and/or someone can provide a better solution.


gabriel_vasseur
Contributor

We have the same problem. Lots of analysts are hitting their maximum concurrent search quotas and getting their searches queued, sometimes for several minutes, even though they are not running much at all. I'm disappointed no one answered that question. Did you figure out any solution yourself?

lakshman239
Influencer

You could do a couple of things:
1. Review the search quotas for analysts and increase them to reduce queuing [if you have the disk space] - see the sketch after this list.
2. Modular action invocations are used within adaptive response action invocations.
3. Check the 'Audit' > 'Data Model Audit' page within the ES app and see whether 'Splunk_Audit' is 100% complete, along with its size and run duration. A healthy instance should show 100% and is_inprogress = 0 [it can show 1 occasionally, but not constantly, and with a reasonable size based on your retention etc.].

https://docs.splunk.com/Documentation/CIM/4.12.0/User/SplunkAuditLogs
4. Review the number of concurrent and skipped searches on the search heads to understand capacity utilisation; if the boxes are heavily loaded, it could impact analyst performance.
5. Review hardware utilisation - CPU/memory trends - to ensure there is no resource starvation.
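For point 1, the per-role quotas live in authorize.conf. A minimal sketch with example numbers only - the ess_analyst role name and the values are assumptions, so apply them to whichever roles your analysts actually hold and size them to your search head capacity:

# $SPLUNK_HOME/etc/system/local/authorize.conf (or the relevant app's local directory)
[role_ess_analyst]
# Max concurrent historical searches per user holding this role
srchJobsQuota = 12
# Max concurrent real-time searches per user holding this role
rtSrchJobsQuota = 4
# Disk space (MB) a user's search artifacts may consume
srchDiskQuota = 1000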
