Splunk Search

Scheduler - high delay in dispatching

verbal_666
Builder

Hi there.
Is this an indexer issue, or a search head one?
We have many, many (more than 200) scheduled saved searches, plus interactive dashboards running with automatic refreshes, etc.
Recently I have been seeing a very high delay between a search's scheduled time and the time it is actually dispatched...

_time                    savedsearch_name   Scheduled_Time        Dispatch_Time         Time_Diff
2020-03-12 16:15:19.941  Saved_Search1      03/12/2020 16:05:00   03/12/2020 16:15:19   10:19
2020-03-12 16:15:19.626  Saved_Search2      03/12/2020 16:05:00   03/12/2020 16:15:19   10:19
2020-03-12 16:15:19.446  Saved_Search3      03/12/2020 16:05:00   03/12/2020 16:15:18   10:18
2020-03-12 16:15:19.162  Saved_Search4      03/12/2020 16:05:00   03/12/2020 16:15:18   10:18
[...]
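
(For reference, the table above comes from a search over the scheduler log along these lines; field names are as I see them in scheduler.log, so adjust if yours differ.)

index=_internal sourcetype=scheduler status=success
| eval delay_sec = dispatch_time - scheduled_time
| eval Time_Diff = tostring(delay_sec, "duration")
| convert ctime(scheduled_time) AS Scheduled_Time ctime(dispatch_time) AS Dispatch_Time
| sort - delay_sec
| table _time savedsearch_name Scheduled_Time Dispatch_Time Time_Diff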

Can the system be improved? How?

Splunk Enterprise 7.0.0
SHs: 3 nodes, clustered - no CPU issues
Indexers: 4 nodes, not clustered - some CPU issues; we recently added 2 vCPUs per node and the issues were resolved

Thanks.

1 Solution

richgalloway
SplunkTrust

Splunk recommends the indexer tier have twice as many CPUs as the SH tier. That's to ensure indexers have enough cores available to run searches and index data at the same time.

Consider having your dashboards refresh less often.

If they don't already, change the dashboards to use base searches and post-processing as much as possible. Even better is to load the results of a scheduled search instead of launching new searches each time the dashboard is viewed.
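
For example, a panel can load the results of an already-scheduled search with loadjob and then post-process them, rather than kicking off a new search each time (the owner/app/search name below are placeholders):

| loadjob savedsearch="nobody:search:Saved_Search1"
| stats count by host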

Check the scheduler log for any messages that might explain the delays.
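
Something along these lines should surface searches the scheduler skipped or deferred, and why (status values and the reason field vary a little by version):

index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
| stats count by savedsearch_name, reason
| sort - count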


verbal_666
Builder

I raised the limits.conf values on all SH nodes:

[search]
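# base_max_searches: constant added on top of the per-CPU search limit
# max_searches_per_cpu: maximum concurrent historical searches allowed per CPU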
base_max_searches = xx
max_searches_per_cpu = xx

[scheduler]
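# max_searches_perc: percentage of the total search limit the scheduler may use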
max_searches_perc = xx

... after restarting, once the cluster came back online, dispatching seems to be more efficient.
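
For illustration only (hypothetical numbers, not my actual values), the concurrency cap these settings control works out roughly like this:

# total concurrent searches = max_searches_per_cpu * number_of_cpus + base_max_searches
#                           = 1 * 6 + 6 = 12 per search head
# scheduler share           = max_searches_perc % of that total
#                           = 50% of 12 = 6 concurrent scheduled searches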

Monitoring the system.


richgalloway
SplunkTrust

Why are the values secret? Someone else might learn from your answer.


verbal_666
Builder

They are not "secret". I raised my values; anyone can raise their own 😉
because the right numbers depend entirely on your particular infrastructure.



verbal_666
Builder

Thanks.
So the problem is "physiological" (inherent to the load)... no real workaround. I have already tried to optimize the system, for example by making dashboards use shorter time ranges by default and setting auto-refresh intervals to at least 5 minutes.
We already have 12 vCPUs per indexer node (4 × 12), while the SHs run with 6 vCPUs per node.
We will keep an eye on this... thanks.
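
(Quick sanity check against the 2x recommendation above, using the numbers from this thread:)

# Indexer tier: 4 nodes x 12 vCPUs = 48 cores
# SH tier:      3 nodes x  6 vCPUs = 18 cores -> 2x = 36 cores
# 48 >= 36, so the indexer tier meets the recommended 2:1 ratio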
