
Scheduler concurrency_limit issue with a burst of deferred searches

rafiki
Explorer

Hello community,

My client has experienced a severe issue on a Search Head Cluster over the past few days due to scheduler behavior.

We had a scheduled search whose run time was too long to fit between two consecutive scheduled runs, so concurrent jobs of it were running (even though its concurrency_limit = 1). After a while, the scheduler started a burst of deferrals (up to 26k deferred events per minute across ~600 distinct savedsearches).
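For anyone who wants to reproduce the diagnosis, a search along these lines can surface scheduled searches whose run time exceeds their schedule interval (a sketch: the 300-second threshold is hypothetical and should match your shortest cron interval):

index=_internal sourcetype=scheduler status=success earliest=-72h
| stats count avg(run_time) AS avg_run_time max(run_time) AS max_run_time BY savedsearch_name
| where max_run_time > 300
| sort - max_run_time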

The particularly strange behavior, for me, is the concurrency_limit reported on the deferred events during the burst: 408 instead of the usual 1. (408 corresponds to the maximum search concurrency of the SHC: 4 search heads × 102.)
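To see which limit each deferral actually hit, the deferred events can be broken down by their reason (a sketch; on recent versions, deferred events in scheduler.log carry a reason field alongside concurrency_limit, which should distinguish the per-search cap from the cluster-wide cap):

index=_internal sourcetype=scheduler status="continued" earliest=-72h
| stats count BY reason, concurrency_limit
| sort - count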

The burst ended by itself after a while.
We experienced several other bursts of lower magnitude, but the big episodes disappeared after we corrected the scheduled search mentioned above.
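For reference, the per-search cap lives in savedsearches.conf; a minimal sketch of the relevant settings (stanza name hypothetical) looks like this:

[my_long_running_search]
# Cap concurrent instances of this single search
max_concurrent = 1
# Optionally let the scheduler slide the run inside a window rather than defer it outright
schedule_window = auto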

Do you have any idea why concurrency_limit changed on the fly? (No change was made by a human.)

Below is a graph of deferred events by concurrency_limit; concurrency_limit=408 is shown as an overlay so the global behavior of the other values remains visible.

index="_internal" AND sourcetype=scheduler AND host=<MySHMaster> status="continued" earliest=-72h
| timechart span=1min count by concurrency_limit

[Graph: deferred events per minute by concurrency_limit]

Regards,
