Search Head not indexing _internal and summarization errors

Path Finder

Our Splunk Search Head is no longer indexing _internal logs (splunkd.log etc.); searches still run but are really slow. We see the following errors:

08-29-2018 11:08:56.126 +0200 WARN  AdminManager - Handler 'summarization' has not performed any capability checks for this operation (requestedAction=list, customAction="", item="").  This may be a bug.
08-29-2018 11:08:58.109 +0200 WARN  AdminManager - Handler 'summarization' has not performed any capability checks for this operation (requestedAction=list, customAction="", item="").  This may be a bug.
08-29-2018 11:08:59.044 +0200 WARN  AdminManager - Handler 'summarization' has not performed any capability checks for this operation (requestedAction=list, customAction="", item="").  This may be a bug.
08-29-2018 11:09:00.047 +0200 WARN  TcpOutputProc - Forwarding to indexer group cluster blocked for 5560 seconds.

Does anyone know what might be causing this?


Motivator

Have a look at splunkd.log on the indexers. Are you seeing this issue only for this host (the SH), or for other hosts as well?
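If you want to scan splunkd.log quickly rather than eyeball it, something like this works (a minimal sketch; the default path `$SPLUNK_HOME/var/log/splunk/splunkd.log` is an assumption — adjust for your install):

```python
def recent_problems(log_path, levels=("WARN", "ERROR"), limit=20):
    """Return the last `limit` WARN/ERROR lines from a splunkd.log file."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            # splunkd.log puts the level between spaces after the timestamp
            if any(f" {lvl} " in line for lvl in levels):
                hits.append(line.rstrip())
    return hits[-limit:]

# Example (path is an assumption -- point it at your $SPLUNK_HOME):
# for line in recent_problems("/opt/splunk/var/log/splunk/splunkd.log"):
#     print(line)
```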


SplunkTrust

How full is the disk on the search head?
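Worth checking because Splunk stops indexing when free space on the relevant partition drops below `minFreeSpace` (5000 MB by default). A quick standard-library sketch (the path is whatever filesystem holds your `$SPLUNK_HOME`):

```python
import shutil

def percent_used(path="/"):
    """Return the percentage of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

# A nearly full disk (or < minFreeSpace free) would explain indexing
# stopping while searches keep running, just slowly.
print(f"{percent_used('/'):.1f}% used")
```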


SplunkTrust

Hi @mmoermans,

The log line below clearly indicates that your search head is unable to send data to the indexers:

08-29-2018 11:09:00.047 +0200 WARN  TcpOutputProc - Forwarding to indexer group cluster blocked for 5560 seconds.

There are multiple possible reasons for this, for example: a firewall blocking traffic between the search head and the indexers, or full queues on the indexers (due to low IOPS, or a high data-processing load on the indexers).

I'd suggest starting with a telnet command to check connectivity between the search head and the indexers. For indexer queue status, you can use the Monitoring Console if you have it configured in your environment.
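The telnet check can also be scripted across all indexers; a minimal sketch (the hostnames are placeholders, and 9997 is the conventional Splunk receiving port — use whatever your outputs.conf specifies):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder indexer names -- replace with your own:
# for idx in ["idx1.example.com", "idx2.example.com"]:
#     print(idx, "reachable" if can_connect(idx, 9997) else "BLOCKED")
```

Note that a successful TCP connect only rules out a hard firewall block; full queues on the indexer side can still stall forwarding even when the port is open.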


Splunk Employee

Keep us posted on this @mmoermans, as I'm sure others have a similar question. Good luck and thanks for posting!


Path Finder

Indexer connectivity is fine:
TcpOutputProc - Connected to idx=ip:port using ACK.

The queues are empty and our 12 indexers are below 20% CPU usage.
I/O looks fine as well; is there something else that might be causing this?


SplunkTrust

In that case I'd suggest opening a support case, if you have paid support.
