
Scheduler concurrency_limit issue with a burst of deferred searches

rafiki
Explorer

Hello community,

My client has experienced a severe issue on a Search Head Cluster over the past few days due to the scheduler's behavior.

We had a scheduled search that took too long to finish between two scheduled runs, so concurrent jobs were running (even though its concurrency_limit = 1). After a while, the scheduler started a burst of deferrals (up to 26k deferred events per minute across ~600 individual saved searches).

What I find particularly strange is the concurrency_limit set on the deferred schedules during the burst: 408 instead of the usual 1. (408 corresponds to the maximum search concurrency of the SHC: 4 search heads x 102.)

The burst ended by itself after a while.
We experienced several other bursts of smaller magnitude; the big episodes disappeared after we corrected the scheduled search mentioned above.

Do you have any idea why the concurrency_limit changed on the fly? (No change was performed by a human.)
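In case it matters, the per-search setting can be read back via REST to confirm nothing changed there. This is only a rough sketch: the search name is a placeholder, and I'm assuming the max_concurrent setting exposed by the saved/searches endpoint is what normally shows up as concurrency_limit=1 in scheduler.log.

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="<MyScheduledSearch>"
| table title max_concurrent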

The graph of deferred events by concurrency_limit is below; concurrency_limit=408 is plotted as an overlay so the overall behavior of the other values remains visible.

index="_internal" AND sourcetype=scheduler AND host=<MySHMaster> status="continued" earliest=-72h
| timechart span=1min count by concurrency_limit

[screenshot: timechart of deferred events by concurrency_limit]
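If it helps narrow this down, a variant of the same search can break the deferrals down by saved search and by the reason the scheduler logs. Again a rough sketch, assuming the savedsearch_name and reason fields are populated on the status=continued events:

index="_internal" sourcetype=scheduler host=<MySHMaster> status="continued" earliest=-72h
| stats count by savedsearch_name, reason, concurrency_limit
| sort - count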

Regards,
