"The number of search artifacts in the dispatch directory is higher than recommended (count=5155, warning threshold=5000) and could have an impact on search performance. Remove excess search artifacts using the "splunk clean-dispatch" CLI command, and review artifact retention policies in limits.conf and savedsearches.conf. You can also raise this warning threshold in limits.conf / dispatch_dir_warning_size."
After reviewing the forums for a while, I found mentions of the dispatch directory at /var2/splunk/splunk/var/run/splunk/dispatch, with the suggestion to delete the oldest artifacts.
Somewhere it is also mentioned that it is recommended these jobs not run in real time.
Looking via the web UI, I find more than 700 jobs, of which 10 are in progress and have been running for more than 25 minutes. These jobs correspond to applications such as:
It may be that some of your alerts are set to live for a larger number of days, so Splunk can't delete those jobs.
dispatch.ttl = <integer>[p]
* Indicates the time to live (ttl), in seconds, for the artifacts of the
scheduled search, if no actions are triggered.
* If the integer is followed by the letter 'p', the ttl is calculated as a
multiple of the execution period for the scheduled search.
For example, if the search is scheduled to run hourly and ttl is set to 2p,
the ttl of the artifacts is set to 2 hours.
* If an action is triggered, the ttl is changed to the ttl for the action. If
multiple actions are triggered, the action with the largest ttl is applied
to the artifacts. To set the ttl for an action, refer to the
alert_actions.conf.spec file.
* For more information on the ttl for a search, see the limits.conf.spec file,
[search] stanza, ttl setting.
* Default: 2p, which is 2 times the period of the scheduled search
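Applied to a specific saved search, that looks like this in savedsearches.conf (the stanza name is a placeholder for one of your own scheduled searches):

    [My Scheduled Search]
    dispatch.ttl = 2p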
You can use a REST search like the one below to check whether this setting has been changed for any of your alerts:
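A minimal sketch of such a search, using the rest command against the saved/searches endpoint (the fields shown, such as dispatch.ttl and alert.expires, are assumptions based on what that endpoint typically returns; verify against your version):

    | rest /servicesNS/-/-/saved/searches
    | search is_scheduled=1
    | table title dispatch.ttl alert.expires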