skipped indexing of internal audit...

twinspop
Influencer

I keep getting this message bulletin:

"Skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block."

This is showing up on my search head, a VM. Its search peers are highly underutilized 16-core beasties with gobs of free SAN disk space and 64 GB of RAM. The search head is configured to forward all events to the cluster and not index anything locally; it's also the license master for the cluster.

So is the search head generating this message? One of the search peers? I've checked splunkd.log, but I really don't see anything that jumps out at me.

v4.2 linux x86_64 on all systems involved.
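
For reference, the relevant outputs.conf on the search head looks roughly like this (output group and host names sanitized):

# forward everything, keep nothing locally
[tcpout]
defaultGroup = primary_peers
indexAndForward = false

[tcpout:primary_peers]
server = peer01:9997, peer02:9997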

1 Solution

Genti
Splunk Employee

This is a known issue in 4.2 (SPL-37407), which is fixed in the first maintenance release, 4.2.1.
Are you by any chance forwarding data from the search head to an indexer? Is the tcpout queue full and blocked?
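
A quick way to check, assuming the search head's _internal data is reaching the indexers, is a search along these lines:

index=_internal source=*metrics.log* group=queue name=tcpout* blocked=true | stats count by host, name

Any hits mean the output queue has been backing up.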

Cheers,
.gz

tprzelom
Path Finder

If you have an indexer with an outputs.conf file sending to itself, it will generate this error, and metrics.log will show that all the queues are full.

If you're using the deployment server, check your serverclass.conf; you may need to create a blacklist entry to prevent the indexer from receiving the outputs file.
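
A blacklist entry of roughly this shape would do it (the server class, app, and host names below are just placeholders):

# keep the forwarding outputs app off the indexer itself
[serverClass:forwarder_outputs]
whitelist.0 = *
blacklist.0 = indexer01*

[serverClass:forwarder_outputs:app:send_to_indexers]
restartSplunkd = true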

chicodeme
Communicator

What did this end up being?

hcpr
Path Finder

I'm also wondering about this; we have the same problem here.

mznikkip
Engager

If the queues are blocked, how can this be remediated? I am forwarding data from one indexer to another (different environments), so I'm sure this is the problem.

Genti
Splunk Employee

Try looking in your metrics.log for blocked=true; that should tell you whether the queues were blocked.
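
For example, something like this will surface any blocked queues, tcpout included (assuming a standard $SPLUNK_HOME layout):

grep 'blocked=true' $SPLUNK_HOME/var/log/splunk/metrics.log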

twinspop
Influencer

Thanks! Yes, the search head was forwarding to the indexers. I've since stopped it because I was losing more than 90% of my logs. I don't know how to find out whether the tcpout queue was full, although I don't see any reason why it should have been.
