Getting Data In

Load Balancing

aoliullah
Path Finder

I have set up load balancing across two indexers, 10.0.0.5 and 10.0.0.6.

I didn't specify the autoLB frequency, so I assume the forwarder spends the default 30 seconds on each indexer.

My issue is that all my data seems to go to 10.0.0.6 first and then to 10.0.0.5. Shouldn't it be the other way around?

Also, I indexed 4 MB worth of data, and it only seems to be present in the index on 10.0.0.6. Is it right to assume that everything was indexed within the 30 seconds spent on 10.0.0.6, so there was nothing left to send when the forwarder moved to 10.0.0.5, hence the empty index on 10.0.0.5?

This is my outputs.conf by the way:

[tcpout]
disabled = false
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
autoLB = true
server = 10.0.0.5:9997,10.0.0.6:9997
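
If I did want to set the switching interval explicitly rather than rely on the default, I believe the setting would be autoLBFrequency (in seconds), something like:

[tcpout:default-autolb-group]
disabled = false
autoLB = true
autoLBFrequency = 30
server = 10.0.0.5:9997,10.0.0.6:9997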

1 Solution

martin_mueller
SplunkTrust

The order of indexers is not necessarily the order you specify in the file. In the grand scheme of running Splunk for days and years, who gets what first is not that important.

When indexing a small amount of data, it's likely that 100% of it ends up on one indexer. Pushing 4 MB in 30 seconds only needs a bit over 100 KB/s, which is no problem at all. Additionally, when reading a single file the forwarder tends to finish that file before switching to another indexer. Again, in the grand scheme of monitoring many files, or one regularly appended file over time, things will balance out.
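
If you really want the forwarder to switch on time even in the middle of a file, I believe there is a forceTimebasedAutoLB setting in outputs.conf for that, roughly like this (use it with care, as on unparsed forwarder data it can cut a chunk mid-event):

[tcpout:default-autolb-group]
autoLB = true
forceTimebasedAutoLB = true
server = 10.0.0.5:9997,10.0.0.6:9997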

If you want to see the switches in action, try a search like this:

index=_internal host=your_forwarder 10.0.0.*

You should see it connect to one, then the other, and so on. Additionally, try this:

index=_internal group=tcpin_connections

That should show data from both indexers, indicating each has a connection from the forwarder. Around the time your 4 MB file was indexed, you should also see the kb field go up for the .6 indexer as it received data over that tcpin connection.
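
If you want to see the distribution over time, a search along these lines should work (assuming your forwarder shows up in the sourceHost field; use sourceIp instead if that's what your setup reports):

index=_internal source=*metrics.log* group=tcpin_connections sourceHost=your_forwarder | timechart span=1m sum(kb) by host

Each indexer logs its own tcpin_connections metrics, so splitting by host shows how many kilobytes each indexer received from your forwarder per minute.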

TL;DR: Looks fine to me, add more data over time to see balancing.

