<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Heavy Forwarders Dropping Logs in All Apps and Add-ons</title>
    <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490784#M60429</link>
    <description>&lt;P&gt;It appears that you are splitting the output at your HFs.  Whenever you do this, if EITHER of the outputs backs up (as it can, and with TCP at some point definitely will), then BOTH of your destinations become blocked, because they share a single output queue.  Fix whichever destination is blocking and both will catch up.&lt;/P&gt;</description>
    <pubDate>Sat, 14 Mar 2020 18:06:31 GMT</pubDate>
    <dc:creator>woodcock</dc:creator>
    <dc:date>2020-03-14T18:06:31Z</dc:date>
    <item>
      <title>Heavy Forwarders Dropping Logs</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490779#M60424</link>
      <description>&lt;P&gt;I have 2 heavy forwarders receiving UF logs from about 2000 Windows servers.  The traffic is split: it goes to our indexers and is also routed out via syslog to 2 F5 VIPs.  For a specific server I see about 500k logs in 24 hours in Splunk, but on the receiving end of the syslog there are only 14 events.  I'm pretty sure the HFs are overloaded and I've put in a request to have 2 more built, but I'm also wondering whether there is any further tuning I can do.  I am not finding anything specific to HFs online.  Thanks.&lt;/P&gt;</description>
      <pubDate>Wed, 11 Mar 2020 14:13:52 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490779#M60424</guid>
      <dc:creator>tiaatim</dc:creator>
      <dc:date>2020-03-11T14:13:52Z</dc:date>
    </item>
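A minimal sketch of the dual-output setup described above, for orientation: one S2S (tcpout) path to the indexers plus a syslog output group on the HF. The group names, hostnames, and ports are placeholders, not the poster's actual configuration.

```ini
# outputs.conf on the heavy forwarder -- hypothetical sketch.
# Group names and server addresses are placeholders.

[tcpout]
defaultGroup = primary_indexers

# S2S output to the indexers (the "Splunk TCP 9997" path).
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# Syslog output group pointing at an F5 VIP (UDP 514).
# Note: selecting WHICH events go to a syslog group is normally
# done separately in props.conf/transforms.conf via _SYSLOG_ROUTING;
# this stanza only defines the destination.
[syslog:f5_vip]
server = vip1.example.com:514
type = udp
```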
    <item>
      <title>Re: Heavy Forwarders Dropping Logs</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490780#M60425</link>
      <description>&lt;P&gt;You cannot load-balance &lt;CODE&gt;S2S&lt;/CODE&gt; that way with syslog.  You need &lt;CODE&gt;NiFi&lt;/CODE&gt;, &lt;CODE&gt;Cribl&lt;/CODE&gt;, or &lt;CODE&gt;DSP&lt;/CODE&gt; to do it right.&lt;/P&gt;</description>
      <pubDate>Wed, 11 Mar 2020 14:32:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490780#M60425</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2020-03-11T14:32:26Z</dc:date>
    </item>
    <item>
      <title>Re: Heavy Forwarders Dropping Logs</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490781#M60426</link>
      <description>&lt;P&gt;Take load balancing out of the equation: one HF is going to one VIP and the other to a different VIP; we are not load-balancing from the HFs to the F5s.  Apologies for the confusion.&lt;/P&gt;</description>
      <pubDate>Wed, 11 Mar 2020 14:34:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490781#M60426</guid>
      <dc:creator>tiaatim</dc:creator>
      <dc:date>2020-03-11T14:34:37Z</dc:date>
    </item>
    <item>
      <title>Re: Heavy Forwarders Dropping Logs</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490782#M60427</link>
      <description>&lt;P&gt;I am confused by your statement that your &lt;CODE&gt;HFs'&lt;/CODE&gt; traffic is being split to your indexers with a route out &lt;CODE&gt;via syslog&lt;/CODE&gt; to 2 F5 VIPs.&lt;BR /&gt;
The &lt;CODE&gt;via syslog&lt;/CODE&gt; part makes no sense to me.  HFs do not talk to indexers &lt;CODE&gt;via syslog&lt;/CODE&gt;; they talk only &lt;CODE&gt;via S2S&lt;/CODE&gt;.  You need to be clearer about what you are doing.&lt;/P&gt;</description>
      <pubDate>Sat, 14 Mar 2020 17:14:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490782#M60427</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2020-03-14T17:14:39Z</dc:date>
    </item>
    <item>
      <title>Re: Heavy Forwarders Dropping Logs</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490783#M60428</link>
      <description>&lt;P&gt;They are temporarily sending a syslog feed to a 3rd party via a syslog stanza in &lt;CODE&gt;outputs.conf&lt;/CODE&gt;.  So the HFs are sending to our indexers via regular Splunk TCP on 9997 and also routing out syslog on UDP 514.  I found the issue, though: the HFs were simply overwhelmed with too much data, and I offloaded some log sources to another HF.&lt;/P&gt;</description>
      <pubDate>Sat, 14 Mar 2020 17:28:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490783#M60428</guid>
      <dc:creator>tiaatim</dc:creator>
      <dc:date>2020-03-14T17:28:05Z</dc:date>
    </item>
    <item>
      <title>Re: Heavy Forwarders Dropping Logs</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490784#M60429</link>
      <description>&lt;P&gt;It appears that you are splitting the output at your HFs.  Whenever you do this, if EITHER of the outputs backs up (as it can, and with TCP at some point definitely will), then BOTH of your destinations become blocked, because they share a single output queue.  Fix whichever destination is blocking and both will catch up.&lt;/P&gt;</description>
      <pubDate>Sat, 14 Mar 2020 18:06:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490784#M60429</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2020-03-14T18:06:31Z</dc:date>
    </item>
    <item>
      <title>Re: Heavy Forwarders Dropping Logs</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490785#M60430</link>
      <description>&lt;P&gt;Thanks.  One output is UDP/514 and the other is TCP/9997 to the Splunk indexers.  I've been looking at how to performance-tune the NIC, but where would I see that the outputs are being blocked?  I see a bunch of reset errors on the NIC, by the way.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Apr 2020 21:08:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Heavy-Forwarders-Dropping-Logs/m-p/490785#M60430</guid>
      <dc:creator>tiaatim</dc:creator>
      <dc:date>2020-04-09T21:08:43Z</dc:date>
    </item>
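On the "where would I see that they are being blocked?" question: a common way to check for a blocked output queue is to search the forwarder's own `_internal` metrics. A sketch, assuming the HF is forwarding its internal logs to the indexers (the index and log names below are standard, but verify against your deployment):

```
index=_internal host=<your_hf> source=*metrics.log* group=queue blocked=true
| stats count by name
```

Queue entries in `metrics.log` carry `blocked=true` while a queue is full; seeing it on the output queue (or on upstream queues that back up behind it) indicates the shared output path is stalling, which matches the behavior described earlier in this thread.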
  </channel>
</rss>

