Getting Data In

Unbalanced search load on indexer cluster

nwales
Path Finder

I have six indexers, one search head and a cluster manager on different hardware.

During quiet times in terms of user searches the indexers all show similar load. As soon as people start looking at the UI and running searches the load on 2 and sometimes 3 of the indexers rockets to huge load averages and the number of searches is much higher compared to the rest of the indexers which appear to be doing almost nothing.

Is there anything I can do about this?


riqbal47010
Path Finder

How can we verify that data is being forwarded to all indexers, and that both load-balancing values are justified, specifically for syslog data?

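One way to check how events are distributed across the indexers (a sketch; narrow the `index=` filter to your syslog indexes as needed) is a tstats count split by splunk_server:

```
| tstats count where index=* by splunk_server
```

If the counts are heavily skewed toward one or two indexers, the forwarder load balancing is not spraying data evenly.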

mahamed_splunk
Splunk Employee

This generally happens if your forwarders send data to only 3 of the indexers and it gets replicated to the 3 remaining indexers. By default, the indexer that receives the data from a forwarder acts as the primary indexer for that data and will answer all search requests for it.

The best practice recommendation is to spray your data from the forwarders across all the indexers in the pool. This ensures that all indexers actively participate in searches and share the load.
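A minimal sketch of the forwarder-side outputs.conf this describes, listing every indexer as a load-balancing target (the hostnames and port are placeholders, not values from the thread):

```
# outputs.conf on each forwarder -- hostnames and port are placeholders
[tcpout]
defaultGroup = all_indexers

[tcpout:all_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997, idx4.example.com:9997, idx5.example.com:9997, idx6.example.com:9997
# Switch to a new target every 30 seconds (the default) so load spreads over time
autoLBFrequency = 30
```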


nwales
Path Finder

We use DNS round robin across all six indexers, which are physically identical, and according to the S.o.S app the indexed volumes are comparable across the cluster, so I don't think that is the cause.

Right now we have five indexers running mostly idle, each with between 7 and 10 splunk processes, and one with a load average of 90 and 55 splunk processes (it has 32 logical cores).

At other times we have had two or three running very hot while the others remain idle, which causes major issues with front-end searching to the point that it is almost unusable.

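One possible explanation for this pattern (an assumption here, not something the thread confirms): with long-lived TCP streams such as forwarded syslog, a forwarder can stay pinned to one indexer because it never reaches a safe point to switch targets. The outputs.conf setting that forces the switch regardless is sketched below:

```
# outputs.conf on the forwarder -- forces a target switch on schedule,
# even if the forwarder is mid-stream on a long-lived connection
[tcpout]
forceTimebasedAutoLB = true
```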