Splunk Enterprise Security

DMA - datamodel artifacts filling up the dispatch dir

Builder

As the default ES DMA schedule is every 5 min and the _ACCELERATE_DM_Splunk_SA_CIM*_ACCELERATE jobs' TTL is 24 h, our dispatch directory is filling up.

A 24 h TTL with 5 min runs means 288 jobs per data model,
which is obviously rewarding us with the famous "The number of search artifacts in the dispatch directory is higher than recommended" warning.

Is it expected for these jobs' TTL to be 24 h? I was somehow expecting it to be 2p (twice the scheduling period).
I ran btool limits list --debug, but there isn't any local conf overriding the defaults.

NOTE:
results obtained from:

| rest /services/search/jobs/ 
| search label=_ACCELERATE_DM_Splunk_SA_CIM_*ACCELERATE_

SplunkTrust

I don't think the TTL is that long. If you change your search to something like

| rest /services/search/jobs
| search label=<yourdmsearch>
| fields - fieldMeta*
| table label, defaultSaveTTL, defaultTTL, diskUsage, dispatchState, "eai:acl.ttl", id, isDone, resultCount, runDuration, sid, ttl, updated

you can see that the TTL is actually 5 minutes. What do you see for the above search in your case? You may want to start with one DM at a time, e.g. ACCELERATE_DM_Splunk_SA_CIM_Authentication_ACCELERATE*

You can also refer to the Data Model Audit dashboard to see whether you have high run durations or any errors, to help debug further.

Also, what are your Splunk Core, CIM, and ES versions?


Builder

Thanks Lakshman
Splunk 7.2.4, ES 5.2.2

Confirming eai:acl.ttl = 86400 for all these DMA jobs, and ttl = eai:acl.ttl minus the time elapsed since updated.

Taking a look at the dispatch dir of those jobs is also consistent with the behavior I mentioned:

dispatch]$ head scheduler__nobody_U3BsdW5rX1NBX0NJTQ__RMD5b63315d8c5a60cfd_at_1553793000_27707/metadata.csv
access,owner,app,ttl
"read : [ splunk-system-user ], write : [ splunk-system-user ]","splunk-system-user","Splunk_SA_CIM",86400

Builder

Can anyone shed some light on this?
I'm not expecting these jobs to be needed for this long, so will I just have to set up a cron job to remove _ACCELERATE_DM_Splunk_SA_CIM*_ACCELERATE jobs older than 2 h or so?
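If it comes to a cron job, a minimal sketch could look like the following. The dispatch path under SPLUNK_HOME and the scheduler__nobody_U3BsdW5rX1NBX0NJTQ* directory prefix (matching the metadata.csv example above, where U3BsdW5rX1NBX0NJTQ is the encoded app name) are assumptions here; verify the pattern against your own dispatch dir before deleting anything.

```shell
#!/bin/sh
# Sketch: prune accelerated-DM dispatch artifacts older than 2 hours.
# ASSUMPTIONS: SPLUNK_HOME default location and the directory-name prefix;
# adjust both to match what you actually see in your dispatch dir.
DISPATCH="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"

if [ -d "$DISPATCH" ]; then
    # -mmin +120: only directories not modified in the last 2 hours
    find "$DISPATCH" -maxdepth 1 -type d \
         -name 'scheduler__nobody_U3BsdW5rX1NBX0NJTQ*' \
         -mmin +120 -exec rm -rf {} +
fi
```

Run it from cron a few times a day; dry-run first by replacing the -exec with -print.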


SplunkTrust

You could set up a job to clean the directories a few times a week, or even daily if you need to.
You can either increase the limit (I think that option is shown in the GUI too) or reduce the time to live. This doc gives an idea of what you can get rid of: https://docs.splunk.com/Documentation/Splunk/7.2.5/Search/Dispatchdirectoryandsearchartifacts
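If reducing the time to live is the route you take, a rough sketch of where those knobs usually live is below. Whether either scope actually governs the DMA summarization jobs in question is an assumption; check with btool and test on a non-production instance first.

```
# limits.conf -- default TTL (in seconds) for search artifacts
[search]
ttl = 600

# savedsearches.conf -- per-search override; "2p" means twice the
# scheduling period (the stanza name here is a placeholder)
[my_scheduled_search]
dispatch.ttl = 2p
```

Either way, keep the TTL long enough that any consumers of the summaries (dashboards, ad hoc tstats) are not affected.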

Does that help?

Skalli
