<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Heavy forwarders are not auto load-balancing evenly in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Heavy-forwarders-are-not-auto-load-balancing-evenly/m-p/182673#M36603</link>
    <description>&lt;P&gt;The autoLB feature should function pretty well when viewed over a longer timespan, so there is probably some other factor at play here.&lt;/P&gt;

&lt;P&gt;My question to you is: is there something about indexer13 that makes it capable of receiving more data in a shorter time than the others? Here are some suggestions.&lt;/P&gt;

&lt;P&gt;Could it be faster network cards (10Gbit vs. 1Gbit) or trunking on the network cards of indexer13? Something like that?&lt;BR /&gt;
Are power-saving features disabled on that indexer?&lt;BR /&gt;
Are there routing differences or different VLANs for the indexers with different load?&lt;BR /&gt;
Is there packet loss on some of the connections to the indexers?&lt;BR /&gt;
Is there queue blocking on some of the indexers that receive little data?&lt;/P&gt;

&lt;P&gt;This could have many different causes, but it is probably not related to the configuration on the heavy forwarders. &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
    <pubDate>Tue, 27 Jan 2015 14:37:34 GMT</pubDate>
    <dc:creator>jofe</dc:creator>
    <dc:date>2015-01-27T14:37:34Z</dc:date>
    <item>
      <title>Heavy forwarders are not auto load-balancing evenly</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Heavy-forwarders-are-not-auto-load-balancing-evenly/m-p/182672#M36602</link>
      <description>&lt;P&gt;I'm having a problem where I'm not seeing an even distribution of data across my indexers. I have 21 indexers (indexer04-indexer24) receiving data from six heavy forwarders.&lt;/P&gt;</description>

&lt;P&gt;My outputs.conf on my heavy forwarders looks like this:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[tcpout:myServerGroup]
autoLBFrequency=15
autoLB=true
disabled=false
forceTimebasedAutoLB=true
writeTimeout=30
maxConnectionsPerIndexer=20
server=indexer04:9996,indexer05:9996,indexer06:9996,&amp;lt;snip&amp;gt;,indexer24:9996
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;However, when I run a simple test search, for example:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=main earliest=-1h@h latest=now | stats count by splunk_server | sort count desc
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The event count is massively disproportionate across the indexers: indexer13 has twice the events of the next busiest indexer, and the least busy indexers have only a sixth of indexer13's count. Likewise, our external hardware monitoring shows indexer13 under a heavier load.&lt;/P&gt;

&lt;P&gt;I've stopped indexer13 temporarily, and the other indexers pick up the slack, but immediately after bringing indexer13 back up it becomes the king of traffic again.&lt;/P&gt;

&lt;P&gt;I've broken it down by heavy forwarder (see the example search below), and every single one of them sends more events to indexer13 as well. I'm at a loss; indexer04-indexer24 all share the same configuration, though indexer13-24 are beefier on the hardware side since they are newer builds.&lt;/P&gt;
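
&lt;P&gt;For reference, a per-forwarder breakdown like that can be pulled from the indexers' internal metrics. This is only a sketch; in the tcpin_connections metrics, sourceIp identifies the sending forwarder and host the receiving indexer:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) AS total_kb by sourceIp, host
| sort - total_kb
&lt;/CODE&gt;&lt;/PRE&gt;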

&lt;P&gt;Are there any settings I'm perhaps missing to get this evenly distributed to my indexers?&lt;/P&gt;</description>
      <pubDate>Thu, 14 Aug 2014 18:39:14 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Heavy-forwarders-are-not-auto-load-balancing-evenly/m-p/182672#M36602</guid>
      <dc:creator>rjdargi</dc:creator>
      <dc:date>2014-08-14T18:39:14Z</dc:date>
    </item>
    <item>
      <title>Re: Heavy forwarders are not auto load-balancing evenly</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Heavy-forwarders-are-not-auto-load-balancing-evenly/m-p/182673#M36603</link>
      <description>&lt;P&gt;The autoLB feature should function pretty well when viewed over a longer timespan, so there is probably some other factor at play here.&lt;/P&gt;

&lt;P&gt;My question to you is: is there something about indexer13 that makes it capable of receiving more data in a shorter time than the others? Here are some suggestions.&lt;/P&gt;

&lt;P&gt;Could it be faster network cards (10Gbit vs. 1Gbit) or trunking on the network cards of indexer13? Something like that?&lt;BR /&gt;
Are power-saving features disabled on that indexer?&lt;BR /&gt;
Are there routing differences or different VLANs for the indexers with different load?&lt;BR /&gt;
Is there packet loss on some of the connections to the indexers?&lt;BR /&gt;
Is there queue blocking on some of the indexers that receive little data? (See the example search below this list.)&lt;/P&gt;
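
&lt;P&gt;One way to check for queue blocking is to look at the indexers' internal metrics. This is only a sketch; adjust the time range to your environment:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count
&lt;/CODE&gt;&lt;/PRE&gt;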

&lt;P&gt;This could have many different causes, but it is probably not related to the configuration on the heavy forwarders. &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 27 Jan 2015 14:37:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Heavy-forwarders-are-not-auto-load-balancing-evenly/m-p/182673#M36603</guid>
      <dc:creator>jofe</dc:creator>
      <dc:date>2015-01-27T14:37:34Z</dc:date>
    </item>
    <item>
      <title>Re: Heavy forwarders are not auto load-balancing evenly</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Heavy-forwarders-are-not-auto-load-balancing-evenly/m-p/182674#M36604</link>
      <description>&lt;P&gt;The issue here ended up being that we were running a version of the heavy forwarders with a bug: they would regularly pick a single indexer preferentially over all others. We're still in the Splunk 5 world, so we moved forward a few releases and the problem was solved.&lt;/P&gt;
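
&lt;P&gt;For anyone hitting the same thing: one way to confirm which forwarder versions are connecting is the tcpin_connections metrics on the indexers. This is only a sketch; the version and hostname fields come from the forwarder connection metadata in metrics.log:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) AS forwarder_version by hostname
&lt;/CODE&gt;&lt;/PRE&gt;</description>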
      <pubDate>Wed, 10 Jun 2015 16:18:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Heavy-forwarders-are-not-auto-load-balancing-evenly/m-p/182674#M36604</guid>
      <dc:creator>rjdargi</dc:creator>
      <dc:date>2015-06-10T16:18:56Z</dc:date>
    </item>
  </channel>
</rss>