Getting Data In

Unbalanced search load on indexer cluster

nwales
Path Finder

I have six indexers, one search head and a cluster manager on different hardware.

During quiet times, in terms of user searches, the indexers all show a similar load. As soon as people start using the UI and running searches, the load on two and sometimes three of the indexers rockets to huge load averages, and the number of searches on them is much higher than on the rest of the indexers, which appear to be doing almost nothing.

Is there anything I can do about this?

0 Karma

riqbal47010
Path Finder

How can we verify that data is being forwarded to all of the indexers, and that both LB values are appropriate, specifically for syslog data?
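
A minimal SPL sketch, run from the search head, that can show how events are spread across the indexers; the sourcetype name "syslog" and the 24-hour window are assumptions, so adjust them to match your environment:

    | tstats count where index=* sourcetype=syslog earliest=-24h by splunk_server
    | sort - count

If one splunk_server holds a disproportionate share of the events, the forwarder-side load balancing (or the syslog input feeding it) is the first place to look.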

0 Karma

mahamed_splunk
Splunk Employee

This generally happens if your forwarders are sending data to only three of the indexers and it is then replicated to the other three. By default, the indexer that receives the data from a forwarder holds the primary copy of that data and answers all search requests for it.

The best practice is to spray data from your forwarders across all the indexers in the pool. This ensures that every indexer actively participates in searches and shares the load.
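
As a minimal outputs.conf sketch for the forwarders, assuming six indexers named idx1 through idx6 listening on port 9997 (the hostnames, port, and group name are placeholders):

    # outputs.conf on each forwarder: list every indexer so auto load
    # balancing rotates through the whole pool
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1:9997, idx2:9997, idx3:9997, idx4:9997, idx5:9997, idx6:9997
    # switch targets periodically (30 seconds is the default) so long-lived
    # streams such as syslog do not stick to a single indexer
    autoLBFrequency = 30
    # for high-volume continuous streams, force a switch at event boundaries
    forceTimebasedAutoLB = true

Listing all six indexers explicitly lets the forwarder's automatic load balancing distribute incoming data, and therefore the primary copies, across the whole pool.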

0 Karma

nwales
Path Finder

We use DNS round robin across all six indexers, which are physically identical, and according to the S.o.S app the indexed volumes are comparable across the cluster, so I don't think it is that.

Right now we have five indexers running mostly idle, each with between 7 and 10 splunk processes, and one with a load average of 90 and 55 splunk processes (it has 32 logical cores).

At other times we have had two or three running very hot while the others remain idle, which causes major issues with front-end searching to the point that it is almost unusable.
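
One way to quantify that skew is the introspection data the Monitoring Console relies on; this is only a sketch, and the field names may differ between Splunk versions, so verify them against your _introspection index:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
    | timechart span=5m dc(data.search_props.sid) AS concurrent_searches by host

This shows how many concurrent searches each indexer is servicing over time, which should make the imbalance visible alongside the load averages.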

0 Karma