Alerting

Why are real-time scheduled search alert jobs filling my dispatch and how do I prevent this?

mataharry
Communicator
Too many search jobs found in the dispatch directory (found=4596, warning level=4000). This could negatively impact Splunk's performance, consider removing some of the old search jobs. 

I see this error often on my search head.

I tried to clean my jobs and empty the dispatch directory, but the warning came back a few hours later.
Looking at the artifacts, most are quite recent (last 24h)
and belong to real-time scheduled searches linked to alerts.

1 Solution

yannK
Splunk Employee

The reason is that you have many jobs with alerting, email, and tracking set up with the default retention of 24 hours for the job artifacts in the dispatch directory, so they quickly exceed 4000 artifacts in less than one day.

The workaround is to drastically reduce your jobs' expiration time (a combined savedsearches.conf example follows the list below).

  • for the alerts with tracking, change the default tracking expiration to 1 hour instead of 24h
    in the manager > searches and reports > "advanced edit" panel
    in the manager > searches and reports > edit page
    or in savedsearches.conf

    alert.expires = 1h
    # it was 24h in the defaults

  • for alerts triggering an email, also change the expiration to one hour
    in the manager > searches and reports > "advanced edit" panel
    or in savedsearches.conf

    action.email.ttl = 3600
    # it was 86400 (24h) in the defaults
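
If you edit savedsearches.conf directly, both settings can live in the same saved-search stanza. A minimal sketch, assuming a hypothetical alert named "Errors in last 5 minutes" (the stanza name and recipient below are placeholders, not from the original post):

    [Errors in last 5 minutes]
    # hypothetical stanza name - use the name of your own alert
    alert.track = 1
    # keep tracked alert artifacts for 1 hour instead of the 24h default
    alert.expires = 1h
    action.email = 1
    action.email.to = admin@example.com
    # keep email action artifacts for 1 hour (value is in seconds)
    action.email.ttl = 3600

A restart or a configuration reload is typically needed for manual .conf edits to take effect, and artifacts already in dispatch keep their old expiration until they age out.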



nnmiller
Contributor

There's also a known issue with search head clustering (SHC) on version 6.2.6 that can be resolved with an upgrade:

SHC: $SPLUNK_HOME/var/run/splunk/dispatch may fill with old artifacts after 6.2.6 upgrade.

mataharry
Communicator

In my case it was not a search head cluster, but thanks for the hint.
