Alerting

Why are real-time scheduled search alert jobs filling my dispatch and how do I prevent this?

mataharry
Communicator
Too many search jobs found in the dispatch directory (found=4596, warning level=4000). This could negatively impact Splunk's performance, consider removing some of the old search jobs. 

We see this error often on my search head.

I tried to clean my jobs and empty the dispatch directory, but the warning came back a few hours later.
Looking at the artifacts, most are quite recent (from the last 24 hours)
and belong to real-time scheduled searches linked to alerts.
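
To see which searches own these artifacts, a REST search like the following can help (a rough sketch; it assumes the jobs endpoint exposes a "label" field carrying the saved-search name for scheduled jobs):

    | rest /services/search/jobs count=0
    | stats count by label
    | sort - count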

1 Solution

yannK
Splunk Employee

The reason is that your alerts with email and tracking actions create many jobs, and the default retention for their artifacts in the dispatch directory is 24 hours, so they quickly exceed 4000 artifacts in less than a day.

The workaround is to drastically reduce your jobs' expiration time (a combined example follows the list):

  • For alerts with tracking, change the default tracking expiration to 1 hour instead of 24 hours,
    in the manager > searches and reports > "advanced edit" panel,
    in the manager > searches and reports > edit page,
    or in savedsearches.conf:

    alert.expires = 1h
    # it was 24h in the defaults

  • For alerts triggering an email, also change the email artifact expiration to one hour,
    in the manager > searches and reports > "advanced edit" panel,
    or in savedsearches.conf:

    action.email.ttl = 3600
    # it was 86400 (24h) in the defaults
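
Putting both together, a combined stanza in savedsearches.conf would look something like this (a sketch only; "My alert" is a placeholder saved-search name, and the values simply mirror the settings above):

    [My alert]
    # expire the triggered-alert tracking artifact after 1 hour (default 24h)
    alert.expires = 1h
    # expire the emailed-results artifact after 3600 seconds (default 86400, i.e. 24h)
    action.email.ttl = 3600

Note that edits made directly in savedsearches.conf typically need a restart or configuration reload to take effect, while changes made through the manager apply immediately.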


nnmiller
Contributor

There's also a known issue in search head clusters (SHC) on version 6.2.6 that can be resolved with an upgrade:

SHC: $SPLUNK_HOME/var/run/splunk/dispatch may fill with old artifacts after 6.2.6 upgrade.

mataharry
Communicator

In my case it was not a search head cluster, but thanks for the hint.
