Getting Data In

Why are indexing queues full on the search head, but nothing has been indexing?

Haybuck15
Explorer

Hey Guys,

So, I've got a weird one. According to my monitoring console, the indexing queues on my search head are all pegged at 100%, and have been for a long time. The thing is, nothing is actually indexing on it. It's forwarding its internal logs to my indexers, and I'm not running any summary indexes on it.

Is there a way to figure out what's blocking it up? It's not a huge priority beyond the fact that the system is slow compared to my other search heads, and my boss wants to figure out why it's flagging, partly for academic reasons. The only thing I really have to go on is that when I check the Introspection API on this search head, it shows my corporate_security role with read access, something that isn't on any other system, including the other search heads.

Note, I don't have a Search Head cluster, but I am running a clustered pair of Indexers.
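
For reference, queue fill can also be checked straight from metrics.log on the search head itself (a rough sketch, run locally on that box; the host value is a placeholder, and the group=queue fields are the standard metrics.log ones, so verify against your version):

index=_internal host="yourQueuedSearchHead" source=*metrics.log* group=queue | stats max(current_size_kb) AS current_kb, max(max_size_kb) AS max_kb by name

Whichever queue shows current_kb at or near max_kb is the one that's actually backed up.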

0 Karma

woodcock
Esteemed Legend

Are ports/ACLs/routes in place to allow your Search Head to send to your indexers (9997/9998)?
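
One quick way to check from the search head itself (a sketch; run it locally, since if forwarding is broken these events won't be on the indexers; TcpOutputProc is the standard splunkd forwarding component, and the host value is a placeholder):

index=_internal host="yourQueuedSearchHead" source=*splunkd.log* component=TcpOutputProc (log_level=WARN OR log_level=ERROR)

Connection failures or "blocked" messages there would point at the output side rather than at indexing itself.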

0 Karma

esix_splunk
Splunk Employee

Do you have an outputs.conf on your SH that forwards its logs to your indexers?
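
For context, the usual search-head forwarding setup in outputs.conf looks roughly like this (the group name, hostnames, and port are placeholders, not your actual config):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997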

0 Karma

Haybuck15
Explorer

Yes, yes I do. In fact, I have one on every Splunk system in the environment that's not a Universal Forwarder.

0 Karma

adonio
Ultra Champion

Any errors or warnings in the internal index for that particular search head?

0 Karma

Haybuck15
Explorer

Nope, none whatsoever. In fact, the largest index on that search head is 3 MB, and that's only because it did some self-indexing before I configured it when it was originally stood up.

0 Karma

adonio
Ultra Champion

The search head outputs its data. Try this search:

index = _ internal host = "yourQueuedSearchHead" log_level = warn* OR log_level = error

Any results?

0 Karma

Haybuck15
Explorer

So, it looks like the search is coming back empty, although the same search returns results for every other Splunk instance in the deployment. Is it possible that the indexing queues would fill up if it can't forward its internal logs? I mean, I wouldn't think so.
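
One way to check that theory would be to look for blocked queues directly on the search head (a sketch; run it locally, since its _internal data may not be making it to the indexers, and the host value is a placeholder):

index=_internal host="yourQueuedSearchHead" source=*metrics.log* group=queue blocked=true | stats count by name

If an output/tcpout queue shows up there alongside the indexing queues, that would suggest backpressure from forwarding rather than an indexing problem.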

0 Karma

adonio
Ultra Champion

Wild guess here: check available disk space on this particular search head. If there's no space, or very little, it can prevent Splunk from indexing locally and from sending data from the search head to the indexer layer.
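
The threshold for that is minFreeSpace in server.conf; Splunk pauses indexing when free space on the relevant partition drops below it. A minimal sketch (the value is in MB and purely illustrative; check server.conf.spec for your version's default):

[diskUsage]
minFreeSpace = 5000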

0 Karma

Haybuck15
Explorer

296 GB / 299 GB Free, so that's not it.

0 Karma

jkat54
SplunkTrust

There's a typo in adonio's search.

index = _internal host = "yourQueuedSearchHead" log_level = warn* OR log_level = error
0 Karma

Haybuck15
Explorer

I typed it by hand rather than copy-pasting it, so that's not the issue. Even the search below pulls back nothing.

index=_* host="yourQueuedSearchHead"

0 Karma

jkat54
SplunkTrust

Are you replacing "yourQueuedSearchHead" with the host name of your queued search head?

0 Karma

sbbadri
Motivator

Maybe you can try clearing the dispatch directory or increasing the indexing queue size.
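
For reference (a sketch, not a fix specific to this case): the dispatch directory lives under $SPLUNK_HOME/var/run/splunk/dispatch, and queue sizes are set in server.conf, roughly like this (the queue name and size here are illustrative; check server.conf.spec for your version):

[queue=indexQueue]
maxSize = 32MB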

0 Karma