Monitoring Splunk

Scheduler concurrency_limit issue with burst on deferred searches

rafiki
Explorer

Hello community,

My client has experienced a severe issue on a Search Head Cluster over the past few days due to the scheduler's behavior.

We had a scheduled search that took too long to finish between two schedule intervals and ended up with concurrent jobs running (even though its concurrency_limit = 1). After a while, the scheduler started a burst of deferred executions (up to 26k defers per minute across ~600 individual savedsearches).
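For context, the per-search cap that shows up as concurrency_limit=1 is configured per saved search in savedsearches.conf. A minimal sketch of what the problematic search's stanza presumably looked like (the stanza name, base search, and cron schedule are hypothetical, not taken from the actual deployment):

# savedsearches.conf -- hypothetical stanza illustrating the situation above
[my_long_running_search]
search = index=main <long-running search here>
cron_schedule = */5 * * * *    # assumed interval, shorter than the search's runtime
enableSched = 1
max_concurrent = 1             # one instance at a time; presumably what
                               # scheduler.log reports as concurrency_limit=1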

What strikes me as particularly strange is the concurrency_limit reported on the deferred schedules during the burst: 408 instead of the usual 1. (408 corresponds to the maximum search concurrency of the SHC: 4 search heads × 102.)
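For what it's worth, 408 matches the cluster-wide historical search limit rather than any per-search setting. Each search head's cap is derived from limits.conf values rather than set directly; a sketch of the arithmetic, assuming 96 CPU cores per search head (an assumption on my part) and the default multipliers:

# limits.conf -- defaults shown; the per-SH cap is derived from these
[search]
base_max_searches = 6        # flat baseline added per search head
max_searches_per_cpu = 1     # multiplier applied per CPU core

# Per-SH historical limit: max_searches_per_cpu * nr_cpus + base_max_searches
# Assuming 96 cores per SH:          1 * 96 + 6 = 102
# SHC-wide across 4 search heads:    4 * 102    = 408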

The burst ended by itself after a while.
We experienced several other bursts of lower magnitude, but the big episodes disappeared after we fixed the scheduled search mentioned above.

Do you have any idea why the concurrency_limit would change on the fly? (No change was made by a human.)

The graph below shows deferred events by concurrency_limit; concurrency_limit=408 is plotted as an overlay so the global behavior of the other values stays visible.

index="_internal" AND sourcetype=scheduler AND host=<MySHMaster> status="continued" earliest=-72h
| timechart span=1min count by concurrency_limit

[Screenshot: timechart of deferred searches by concurrency_limit, with the 408 series overlaid]
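As a side note, a variant of the same search can attribute the defers to individual saved searches and show which concurrency_limit value each one was deferred under; savedsearch_name is a standard scheduler.log field:

index=_internal sourcetype=scheduler host=<MySHMaster> status="continued" earliest=-72h
| stats count AS defers, values(concurrency_limit) AS limits BY savedsearch_name
| sort - defers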

Regards,
