
Scheduler concurrency_limit issue with burst of deferred searches

rafiki
Explorer

Hello community,

My client has experienced a severe issue on a Search Head Cluster over the past few days due to the scheduler's behavior.

We had a scheduled search that took too long to run between two schedule cycles and ended up with concurrent jobs running (even though its concurrency_limit = 1). After a while, the scheduler started a burst of deferrals (up to 26k deferred events per minute across ~600 individual saved searches).
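For reference, something like the search below should surface scheduled searches whose run time exceeds their schedule interval (a rough sketch; it assumes the run_time, savedsearch_name and app fields of scheduler.log, which may vary with your version):

index="_internal" sourcetype=scheduler host=<MySHMaster> status=success earliest=-72h
| stats max(run_time) AS max_run_time_s count BY app savedsearch_name
| sort - max_run_time_s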

The particularly strange behavior, for me, lies in the concurrency_limit reported on the deferred schedules during the burst: 408 instead of the usual 1. (408 corresponds to the maximum search concurrency of the SHC: 4 SHs x 102.)
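A breakdown of the deferral reasons could give a hint about which limit is actually being hit; I am thinking of something like this (a sketch, assuming the reason field is populated on status="continued" events in your version):

index="_internal" sourcetype=scheduler host=<MySHMaster> status="continued" earliest=-72h
| stats count BY reason concurrency_limit
| sort - count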

The burst terminated by itself after a while.
We experienced several other bursts in lower proportions; the big episodes disappeared after we corrected the scheduled search mentioned above.

Do you have any idea why concurrency_limit changed on the fly? (No change was performed by a human.)

The graph shows deferred events by concurrency_limit; concurrency_limit=408 is plotted as an overlay so the global behavior of the other values stays visible.

index="_internal" AND sourcetype=scheduler AND host=<MySHMaster> status="continued" earliest=-72h
| timechart span=1min count by concurrency_limit
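A complementary view, splitting only the 408-limit deferrals by saved search, should show whether the burst comes from the problematic search alone or is cluster-wide (same assumptions on scheduler.log fields as above):

index="_internal" sourcetype=scheduler host=<MySHMaster> status="continued" concurrency_limit=408 earliest=-72h
| timechart span=1min count by savedsearch_name limit=10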


Regards,
