Splunk Search

Real time search tuning

daniel333
Builder

hello,

Splunk 6.13/CentOS 6.4

I recently had a Splunk outage. My monitoring software showed plenty of I/O, CPU, and RAM available, yet forwarders were reporting that the TCP output queues to the receiving indexers were full.

I popped into Splunk on Splunk and looked at the fill ratios for all four queue stages, which are normally 0%. The fourth (indexing) queue was maxed out, even though throughput was actually lower than average. After some poking around I discovered that a set of real-time dashboards had been created by our NOC and sent out to the general population. Once I disabled the RT searches, the queues dropped right back to 0%.
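If anyone wants to keep this from recurring, one option is to remove the real-time search capability from the affected roles in authorize.conf on the search head. A minimal sketch (the role name here is hypothetical; substitute your own):

```
# authorize.conf (search head)
# "noc_user" is a made-up role name for illustration.
[role_noc_user]
# Revoke the real-time search capability for this role
rtsearch = disabled
```

Users in that role can still run historical searches; only real-time search is blocked.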

The abusive RT dashboards aside, I feel there is some performance tuning I am missing. With plenty of system resources available, I'd like to understand why these queues backed up so badly and what I can do to get better performance out of the indexing queue... ideally without installing 10 more indexers 🙂
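For anyone else watching for this, the per-queue fill data splunkd writes to metrics.log can be charted directly. A sketch, assuming the standard `group=queue` metrics fields (`current_size_kb`, `max_size_kb`):

```
index=_internal source=*metrics.log* group=queue name=indexqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m max(fill_pct) AS indexqueue_fill_pct
```

Swap `name=indexqueue` for `parsingqueue`, `aggqueue`, or `typingqueue` to watch the other pipeline stages.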


daniel333
Builder

I tried both a traditional real-time search and indexed_realtime_use_by_default = true, and although indexed real-time performed slightly better, in both cases the queues maxed out.
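For reference, the setting I toggled lives in the [realtime] stanza of limits.conf; it makes real-time searches run against already-indexed data instead of streaming events out of the indexing pipeline:

```
# limits.conf
[realtime]
# Serve real-time searches from indexed data rather than
# tapping the indexing pipeline directly
indexed_realtime_use_by_default = true
```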
