How to split one massive log file across multiple indexers?

thisissplunk
Builder

Is it possible to ingest one huge log file (100 GB uncompressed) and round-robin CHUNKS of the data to multiple indexers?

Right now, each archive log file read in by the forwarder is sent to a single indexer. For me, this means that one month's worth of logs (100 GB) is going to one indexer, defeating any efficiency gain from having multiple indexers.

Edit: I realize my round-robin statement before my edit may have confused people. My current process IS round-robining the entire files to my two different indexers. I don't want this: it sends one 100 GB file to one indexer, then the next 100 GB file to the other. I want it to "round robin" CHUNKS of the files instead, switching the indexer it sends data to every 100-500 MB or so.

DETAILED ANSWER: The answer was twofold:

  • Use "autoLBVolume = X" in outputs.conf (see the sketch after this list)
  • Use "EVENT_BREAKER_ENABLE = true" in the sourcetype stanza in props.conf so that Splunk doesn't see the file as one huge stream (streams cannot be load balanced mid-file). See forceTimebasedAutoLB in outputs.conf for more details there.
1 Solution

micahkemp
Champion

Considering the update to the question, look into this setting in outputs.conf:

autoLBVolume = <bytes>
* After the forwarder sends data of autoLBVolume to some indexer, a new
  indexer is selected randomly from the list of indexers provided in the
  server attribute of the target group stanza.
* autoLBVolume is closely related to autoLBFrequency. autoLBVolume is first
  used to determine if the forwarder needs to pick another indexer. If the
  autoLBVolume is not reached, but the autoLBFrequency is reached, the
  forwarder switches to another indexer as the forwarding target.
* A non-zero value means volume-based forwarding is turned on; a value of 0
  means it is turned off.
* Defaults to 0 (bytes).


thisissplunk
Builder

I have restarted Splunk (needed for the batch stanza to kick off), and these are the load-balancing values furthest down in that btool output:

autoLBFrequency = 10
autoLBVolume = 10000000

Now that I think about it, autoLBFrequency was never working either, as it takes many minutes for these single files to reach the indexer.

Edit: Ahhh, I think I need to do this because the files are one stream?
forceTimebasedAutoLB = [true|false]
* Forces existing streams to switch to a newly elected indexer every
  autoLB cycle.
* On universal forwarders, use the EVENT_BREAKER_ENABLE and
  EVENT_BREAKER settings in props.conf rather than forceTimebasedAutoLB
  for improved load balancing, line breaking, and distribution of events.
* Defaults to false.

Or rather, this:
EVENT_BREAKER_ENABLE = [true|false]
* When set to true, Splunk will split incoming data with a light-weight
  chunked line breaking processor so that data is distributed fairly
  evenly amongst multiple indexers. Use this setting on the UF to indicate
  that data should be split on event boundaries across indexers,
  especially for large files.
* Defaults to false.

SUCCESS. Setting EVENT_BREAKER_ENABLE = true in the sourcetype stanza in props.conf on the universal forwarder did the trick.
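
For anyone following along, a sketch of the working props.conf stanza on the forwarder (my_archive_logs is a placeholder sourcetype name; the breaker shown is just the default newline pattern):

[my_archive_logs]
EVENT_BREAKER_ENABLE = true
# the default EVENT_BREAKER splits on newlines; adjust for multi-line events
EVENT_BREAKER = ([\r\n]+)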

thisissplunk
Builder

Hmm, so I implemented this but it doesn't seem to be working. Does my outputs.conf look correct, under "/opt/splunkforwarder/etc/system/local/outputs.conf"?

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = X.X.X.1:9997,X.X.X.2:9997
autoLBVolume = 10000000

Or is there a way to check the live value? It still seems to be sending entire 100 GB files to one indexer at a time. In fact, it doesn't seem like the default "autoLBFrequency = 30" is working either. Maybe it's something with my architecture?


micahkemp
Champion

That looks right at first glance. You may want to run:

splunk btool outputs list --debug

just to make sure it is actually taking precedence, but I can't imagine it's not.
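
With --debug, btool prefixes each line with the file it came from, which makes the effective settings and their sources easy to spot. Illustrative output only, assuming the config posted above:

/opt/splunkforwarder/etc/system/local/outputs.conf    [tcpout:default-autolb-group]
/opt/splunkforwarder/etc/system/local/outputs.conf    autoLBVolume = 10000000
/opt/splunkforwarder/etc/system/local/outputs.conf    server = X.X.X.1:9997,X.X.X.2:9997
/opt/splunkforwarder/etc/system/default/outputs.conf  autoLBFrequency = 30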

And I have to ask: did you restart Splunk on this forwarder?


thisissplunk
Builder

Thanks, this was it. Should have taken a deeper look at outputs!


ddrillic
Ultra Champion

You are probably ending up using the batch reader mode. It's painful ;-)

My experience with it is described in "Where does the forwarder enqueue files?"
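
For context, batch-reader behavior typically comes from a batch input in inputs.conf along these lines (the path and sourcetype are placeholders; move_policy = sinkhole deletes each file once it has been indexed):

[batch:///path/to/archived/logs]
move_policy = sinkhole
sourcetype = my_archive_logs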


mayurr98
Super Champion

hey @thisissplunk

Run these commands on the forwarder from /opt/splunkforwarder/bin:

./splunk add forward-server indexer1:9997 -method autobalance
./splunk add forward-server indexer2:9997 -method autobalance
./splunk add forward-server indexer3:9997 -method autobalance
./splunk add forward-server indexer4:9997 -method autobalance
...and so on for all of your indexers.

This will automatically write your outputs.conf file with auto load balancing. Since you mentioned round robin in your question: the round-robin method of load balancing is no longer supported. You can find this in the outputs.conf documentation:
https://docs.splunk.com/Documentation/Splunk/7.0.1/Admin/Outputsconf
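
A sketch of the outputs.conf those commands should produce (exact stanza names can vary by Splunk version; this mirrors the config shown earlier in the thread):

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = indexer1:9997,indexer2:9997,indexer3:9997,indexer4:9997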

I hope that helps you!


micahkemp
Champion

In outputs.conf you can configure multiple indexers to forward to, which accomplishes a degree of load balancing. From the outputs.conf docs:

[tcpout:indexers]
server = 10.1.1.197:9997, 10.1.1.200:9997

This should result in your data being split among multiple indexers, even though it was one input file.


thisissplunk
Builder

This is what we do right now and it's sending each huge file to one indexer. I'd like it to take the huge file and split it across all of our indexers.
