Alerting

Why are real-time scheduled search alert jobs filling my dispatch and how do I prevent this?

mataharry
Communicator
Too many search jobs found in the dispatch directory (found=4596, warning level=4000). This could negatively impact Splunk's performance, consider removing some of the old search jobs. 

We see this error often on my search head.

I tried to clean my jobs and empty the dispatch directory, but the warning came back a few hours later.
Looking at the artifacts, they are mostly quite recent (last 24h)
and belong to real-time scheduled searches linked to alerts.
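To see how close you are to the warning threshold, you can count the job artifacts directly. This is a minimal sketch; the `SPLUNK_HOME` default below is an assumption, so point it at your actual installation:

```shell
# Count job artifacts in the dispatch directory.
# /opt/splunk is an assumed default -- adjust for your install.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
DISPATCH="$SPLUNK_HOME/var/run/splunk/dispatch"
# Each immediate subdirectory is one search job artifact.
find "$DISPATCH" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l
```

If the count stays near 4000 and the directories are mostly less than a day old, the artifacts are being recreated faster than they expire, which matches the alert-retention explanation below.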

1 Solution

yannK
Splunk Employee

The reason is that you have many alerts with email and tracking actions, and the job artifacts in the dispatch directory keep the default retention of 24 hours, so they quickly exceed 4000 artifacts in less than a day.

The workaround is to drastically reduce your jobs' expiration time.

  • For alerts with tracking, change the default tracking expiration from 24 hours to 1 hour,
    in Manager > Searches and reports > "advanced edit" panel,
    in Manager > Searches and reports > edit page,
    or directly in savedsearches.conf:

    alert.expires = 1h
    # it was 24h in the defaults

  • For alerts that trigger an email, also change the expiration to one hour,
    in Manager > Searches and reports > "advanced edit" panel,
    or directly in savedsearches.conf:

    action.email.ttl = 3600
    # it was 86400 (24h) in the defaults
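Putting both settings together, an override for a single alert might look like the fragment below. This is a sketch: the stanza name "My example alert" is a placeholder for your saved search's actual name, and the file would typically live under an app's `local` directory.

```ini
# $SPLUNK_HOME/etc/apps/<your_app>/local/savedsearches.conf
[My example alert]
# Keep the tracked alert's job artifact for 1 hour (default is 24h)
alert.expires = 1h
# Keep the artifact behind the emailed results link for 3600 seconds (default 86400)
action.email.ttl = 3600
```

After editing the file directly, the saved searches need to be reloaded (or Splunk restarted) for the new TTLs to take effect on subsequent runs.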


nnmiller
Contributor

There's also a known search head cluster (SHC) issue in version 6.2.6 that can be resolved with an upgrade:

SHC: $SPLUNK_HOME/var/run/splunk/dispatch may fill with old artifacts after 6.2.6 upgrade.

mataharry
Communicator

In my case it was not a search head cluster, but thanks for the hint.
