Alerting

Why are real-time scheduled search alert jobs filling my dispatch and how do I prevent this?

mataharry
Communicator
Too many search jobs found in the dispatch directory (found=4596, warning level=4000). This could negatively impact Splunk's performance, consider removing some of the old search jobs. 

We see this error often on our search head.

I tried to clean my jobs and empty the dispatch directory, but the warning came back a few hours later.
Looking at the artifacts, they are mostly quite recent (last 24 hours)
and belong to real-time scheduled searches linked to alerts.

1 Solution

yannK
Splunk Employee

The reason is that you have many jobs with alerting, email, and tracking set up, and with the default retention of 24 hours for job artifacts in dispatch, they quickly exceed 4,000 artifacts in less than a day.

The workaround is to drastically reduce your jobs' expiration time.

  • For alerts with tracking, change the default tracking expiration to 1 hour instead of 24 hours
    in Manager > Searches and reports > "advanced edit" panel,
    in Manager > Searches and reports > edit page,
    or directly in savedsearches.conf (see the combined example after this list):

    alert.expires = 1h
    # the default is 24h

  • For alerts that trigger an email, also change the email TTL to one hour
    in Manager > Searches and reports > "advanced edit" panel,
    or in savedsearches.conf:

    action.email.ttl = 3600
    # the default is 86400 (24h)
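
For reference, here is a minimal sketch of a complete savedsearches.conf stanza combining both settings. The stanza name, search string, schedule, and recipient address are hypothetical placeholders; only alert.expires and action.email.ttl are the settings discussed above.

    # savedsearches.conf, e.g. in $SPLUNK_HOME/etc/apps/<your_app>/local/
    # Hypothetical example stanza; adjust the name, search, and schedule to your alert.
    [Example error alert]
    search = index=_internal log_level=ERROR
    cron_schedule = */5 * * * *
    enableSched = 1
    # keep triggered-alert tracking, but expire the artifact after 1 hour (default 24h)
    alert.track = 1
    alert.expires = 1h
    # send an email on trigger, and expire its artifact after 3600 seconds (default 86400)
    action.email = 1
    action.email.to = ops@example.com
    action.email.ttl = 3600

Changes made directly in the .conf file usually require a Splunk restart or a configuration reload before the scheduler picks them up.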


nnmiller
Contributor

There's also a known issue with search head clustering (SHC) in version 6.2.6 that can be resolved with an upgrade:

SHC: $SPLUNK_HOME/var/run/splunk/dispatch may fill with old artifacts after 6.2.6 upgrade.

mataharry
Communicator

In my case it was not a search head cluster, but thanks for the hint.
