Alerting

Why are real-time scheduled search alert jobs filling my dispatch and how do I prevent this?

mataharry
Communicator
Too many search jobs found in the dispatch directory (found=4596, warning level=4000). This could negatively impact Splunk's performance, consider removing some of the old search jobs. 

We see this error often on our search head.

I tried cleaning the jobs and emptying the dispatch directory, but the warning came back a few hours later.
Looking at the artifacts, most are quite recent (from the last 24 hours)
and belong to real-time scheduled searches linked to alerts.
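
For reference, this is roughly how I counted them on the search head (a sketch using the REST jobs endpoint; the field names isRealTimeSearch, isSavedSearch, and label are assumptions and may differ on your version):

    | rest /services/search/jobs count=0
    | search isRealTimeSearch=1 isSavedSearch=1
    | stats count by label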

1 Solution

yannK
Splunk Employee

The reason is that you have many jobs with alerting, email, and tracking configured, each keeping its job artifact in the dispatch directory for the default retention of 24 hours, so they quickly exceed 4000 artifacts in less than a day.

The workaround is to drastically reduce your jobs' expiration time; a combined example stanza follows the list below.

  • For the alerts with tracking, change the default tracking expiration to 1 hour instead of 24 hours,
    in the manager > searches and reports > "advanced edit" panel,
    in the manager > searches and reports > edit page,
    or in savedsearches.conf:

    alert.expires = 1h
    # the default is 24h

  • For alerts that trigger an email, also change the expiration to one hour,
    in the manager > searches and reports > "advanced edit" panel,
    or in savedsearches.conf:

    action.email.ttl = 3600
    # the default is 86400 seconds (24h)
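
Put together, a minimal savedsearches.conf stanza might look like the sketch below. The stanza name, search string, schedule, and email address are hypothetical placeholders; only alert.track, alert.expires, and action.email.ttl are the settings discussed above.

    [Errors in the last hour]
    search = index=_internal log_level=ERROR
    cron_schedule = */5 * * * *
    enableSched = 1
    # hypothetical alert with tracking: keep the job artifact 1 hour instead of the 24h default
    alert.track = 1
    alert.expires = 1h
    # hypothetical email action: keep its artifact 3600 seconds (1h) instead of 86400 (24h)
    action.email = 1
    action.email.to = ops@example.com
    action.email.ttl = 3600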


nnmiller
Contributor

There's also a known issue with search head clustering (SHC) on version 6.2.6 that is resolved by upgrading:

SHC: $SPLUNK_HOME/var/run/splunk/dispatch may fill with old artifacts after 6.2.6 upgrade.

mataharry
Communicator

In my case it was not a search head cluster, but thanks for the hint.
