Splunk Search

Real time search tuning

daniel333
Builder

hello,

Splunk 6.13/CentOS 6.4

I recently had a Splunk outage. My monitoring software showed plenty of IO, CPU, and RAM available, yet forwarders were reporting that the TCP queues were full on the receiving indexers.

I popped into Splunk on Splunk and looked at the fill ratios for all 4 queue stages, which are normally 0. The 4th (indexing) queue was maxed, and we actually had lower-than-average throughput. After some poking around I discovered a set of real-time dashboards that had been created by our NOC and sent out to the general population. Once I disabled RT, the queues went right back to 0%.

The abusive RT dashboards aside, I feel there is some performance tuning I am missing. With plenty of system resources available, I'd like to understand why these queues backed up so badly and what I can do to get better performance out of the indexing queue... ideally without installing 10 more indexers 🙂
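For anyone wanting to reproduce the queue-fill view outside the Splunk on Splunk app, a search along these lines over `metrics.log` shows the per-queue fill percentage (a sketch; `indexqueue` is one of the standard queue names logged in `group=queue` events, and the field names may vary slightly by version):

```
index=_internal source=*metrics.log* group=queue name=indexqueue
| eval fill_pct=round(current_size_kb/max_size_kb*100, 2)
| timechart span=1m perc90(fill_pct) AS indexqueue_fill_pct BY host
```

Swapping `name=indexqueue` for `parsingqueue`, `aggqueue`, or `typingqueue` shows the other stages, which helps confirm which stage is the actual bottleneck rather than just the victim of back-pressure.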


daniel333
Builder

I tried both traditional real-time search and indexed_realtime_use_by_default = true, and although indexed real-time performed slightly better, the queues maxed out in both cases.
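For reference, the setting lives in the `[realtime]` stanza of limits.conf on the search head (a minimal sketch; deploy via your usual app/config layer rather than editing system defaults):

```
# $SPLUNK_HOME/etc/system/local/limits.conf (or an app's local/ directory)
[realtime]
# Run real-time searches against already-indexed data instead of
# tapping the indexing pipeline directly; lighter on the indexers.
indexed_realtime_use_by_default = true
```

Indexed real-time trades a small latency increase for much less contention with the indexing pipeline, which is why it tends to perform better under load.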
