Splunk Search

Alerts that go into waiting status cause next alerts to not trigger?



We have several alerts whose corresponding jobs occasionally go into status "waiting" and stay there. The next executions of these alerts are then not triggered, of course, so we get quite a few skipped jobs.
The jobs overview states the jobs are in status "Parsing"; however, when I copy the corresponding search and execute it in another search window, it finishes quite fast.
Please see also the screenshot below. The job seems to get stuck at the following point (these are the last entries in search.log):
12-05-2022 06:40:02.915 INFO ChunkedExternProcessor [15318 searchOrchestrator]
- Running process: /vol1/opt/splunkdev2/splunk/bin/python3.7


I increased all the limits and quotas I could come up with to lift any restrictions on concurrency, but it did not help.
How would I investigate it further?
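One starting point (a sketch, assuming a standard deployment where the internal scheduler logs are searchable) is to query `index=_internal` for the affected saved searches and see which executions were skipped or deferred and why. The `savedsearch_name` value below is a placeholder for your own alert name:

```
index=_internal source=*scheduler.log* savedsearch_name="<your alert name>"
    (status=skipped OR status=deferred OR status=continued)
| stats count by savedsearch_name, status, reason
```

The `reason` field usually distinguishes concurrency-quota skips from other causes, which should narrow down whether the stuck "waiting" jobs are actually what blocks the subsequent runs.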