What parameter can i modify in limits.conf to solve that?
You can't just tweak limits.conf to make your schedules work efficiently.
This symptom usually means that you either have too many scheduled searches (reports, alerts, correlation searches) defined altogether or - more probably - you have them squished into the same "schedule spots" as @kkrises suggested. For example, you're trying to run all your searches at 5 minutes past the hour. That's the typical case, and the fix is to spread the execution times more evenly throughout the hour, day, or however often your searches run.
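As a sketch of what "spreading the schedule spots" means in practice, here is a hypothetical savedsearches.conf fragment (the stanza names are made up for illustration; `cron_schedule` is the real setting that controls when a scheduled search runs):

```
# savedsearches.conf -- hypothetical staggered schedules
# Instead of all searches running at :05, offset each one.

[Alert - Failed Logins]
cron_schedule = 5 * * * *      # runs at :05 every hour

[Alert - Privilege Escalation]
cron_schedule = 17 * * * *     # runs at :17 every hour

[Report - Daily Traffic Summary]
cron_schedule = 35 2 * * *     # once daily at 02:35, away from the busy top of the hour
```

The idea is simply that no two heavy searches contend for the same scheduler slot.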
In some cases you could try to run more searches at once but that's more tricky and requires more troubleshooting and diagnostics.
@Valen1 - For delayed searches case, I did the below to fix it.
- The Monitoring Console on your search head helps determine why searches are delayed. Inside the Monitoring Console, go to Search > Scheduler Activity: Instance and look for the skip ratio under the Runtime statistics dashlet.
- Identify searches trying to run at the same time, and reschedule them by tweaking their cron schedules; this reduces the skip ratio.
- Identify searches that don't complete before their next scheduled run; run them in the Search app and find the average time they take to complete. For searches over large indexes or data models (especially network data), try narrowing the earliest time range to one hour, or use summary indexes.
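To find the searches behind a high skip ratio, you can also query the scheduler's own logs directly. A sketch (field and sourcetype names here match what I've seen in scheduler.log, but verify against your own _internal data):

```
index=_internal sourcetype=scheduler status=skipped
| stats count BY savedsearch_name reason
| sort - count
```

This groups skipped executions by saved search and skip reason, so the worst offenders and the cause (e.g. hitting the concurrent-search limit) surface at the top.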
Hope this helps and an upvote is appreciated. Thank you.
@Valen1 - It's not under limits.conf, instead you can find it under "Set feature indicator threshold" under Health Check of Monitoring Console.
Refer - https://docs.splunk.com/Documentation/Splunk/9.0.0/DMC/Configurefeaturemonitoring
Though I would say this usually indicates that the search head or indexers are slow in performing searches.
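If you prefer to manage those thresholds as configuration rather than through the UI, the Health Check indicators live in health.conf. A rough sketch (the exact feature and indicator names vary by Splunk version, so check the health.conf spec for your release before copying this):

```
# health.conf -- sketch only; confirm stanza/indicator names for your version
[feature:searchscheduler]
indicator:searches_delayed:yellow = 5
indicator:searches_delayed:red = 10
```

Raising the threshold only silences the indicator; it does not make the delayed searches run any sooner.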
I hope this helps!!!