<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic 8.0.1 upgraded Heavy Forwarder - TcpOutputProc - Possible duplication of events in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/8-0-1-upgraded-Heavy-Forwarder-TcpOutputProc-Possible/m-p/480531#M82380</link>
    <description>Since upgrading a heavy forwarder to Splunk 8.0.1, TcpOutputProc has been logging "Possible duplication of events" warnings for most channels. Full post and replies below.</description>
    <pubDate>Wed, 30 Sep 2020 04:22:39 GMT</pubDate>
    <dc:creator>JDukeSplunk</dc:creator>
    <dc:date>2020-09-30T04:22:39Z</dc:date>
    <item>
      <title>8.0.1 upgraded Heavy Forwarder - TcpOutputProc - Possible duplication of events</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/8-0-1-upgraded-Heavy-Forwarder-TcpOutputProc-Possible/m-p/480531#M82380</link>
      <description>&lt;P&gt;We have a support ticket open, but I thought I'd also ask the community. Since upgrading our Splunk to 8.0.1, this one HF has been spewing "TcpOutputProc - Possible duplication of events" warnings for most channels, as well as "TcpOutputProc - Applying quarantine to ip=xx.xx.xx.xx port=9998 _numberOfFailures=2".&lt;/P&gt;
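
&lt;P&gt;To see which peers are being quarantined, something like this works (the rex field names here are just illustrative):&lt;/P&gt;

&lt;P&gt;index=_internal host=ghdsplfwd01lps component=TcpOutputProc "Applying quarantine"&lt;BR /&gt;
| rex "ip=(?&lt;dest_ip&gt;\S+)\s+port=(?&lt;dest_port&gt;\d+)"&lt;BR /&gt;
| stats count by dest_ip, dest_port&lt;/P&gt;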

&lt;P&gt;We upgraded on the 15th, near midnight. This is a count of those errors from that host.&lt;BR /&gt;
2020-02-14  0&lt;BR /&gt;
2020-02-15  623&lt;BR /&gt;
2020-02-16  923874&lt;BR /&gt;
2020-02-17  396920&lt;BR /&gt;
2020-02-18  678568&lt;BR /&gt;
2020-02-19  602100&lt;BR /&gt;
2020-02-20  459284&lt;BR /&gt;
2020-02-21  1177642&lt;/P&gt;
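
&lt;P&gt;For reference, those daily counts came from roughly this search:&lt;/P&gt;

&lt;P&gt;index=_internal host=ghdsplfwd01lps component=TcpOutputProc log_level=WARN "Possible duplication"&lt;BR /&gt;
| timechart span=1d count&lt;/P&gt;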

&lt;P&gt;Here is a count from the indexer cluster showing the number of blocked=true events. One would expect these to be similar in count if the indexers were telling the HF to go elsewhere because their queues were full.&lt;/P&gt;

&lt;P&gt;index=_internal host=INDEXERNAMES sourcetype=splunkd source=/opt/splunk/var/log/splunk/metrics.log blocked=true component=Metrics&lt;BR /&gt;
| timechart span=1d count by source&lt;/P&gt;

&lt;P&gt;2020-02-14  7&lt;BR /&gt;
2020-02-15  180&lt;BR /&gt;
2020-02-16  260&lt;BR /&gt;
2020-02-17  15&lt;BR /&gt;
2020-02-18  18&lt;BR /&gt;
2020-02-19  2415&lt;BR /&gt;
2020-02-20  1&lt;BR /&gt;
2020-02-21  2&lt;/P&gt;
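
&lt;P&gt;To eyeball the mismatch on one chart, a sketch combining the two searches above:&lt;/P&gt;

&lt;P&gt;index=_internal ((host=ghdsplfwd01lps component=TcpOutputProc log_level=WARN "Possible duplication") OR (host=INDEXERNAMES component=Metrics blocked=true))&lt;BR /&gt;
| eval series=if(component=="TcpOutputProc", "hf_dup_warnings", "indexer_blocked")&lt;BR /&gt;
| timechart span=1d count by series&lt;/P&gt;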

&lt;P&gt;Lastly, it's not just one source or channel; it's everything from the host.&lt;/P&gt;

&lt;P&gt;index=_internal component=TcpOutputProc host=ghdsplfwd01lps log_level=WARN duplication&lt;BR /&gt;
| rex field=event_message "channel=source::(?&lt;channel&gt;[^|]+)"&lt;BR /&gt;
| stats count by channel&lt;/P&gt;

&lt;P&gt;/opt/splunk/var/log/introspection/disk_objects.log  51395&lt;BR /&gt;
/opt/splunk/var/log/introspection/resource_usage.log    45470&lt;BR /&gt;
mule-prod-analytics 42192&lt;BR /&gt;
/opt/splunk/var/log/splunk/metrics.log  28283&lt;BR /&gt;
web_ping://PROD_CommerceHub 27881&lt;BR /&gt;
web_ping://V8_PROD_CustomSolr5  27877&lt;BR /&gt;
web_ping://V8_PROD_WebServer4   27873&lt;BR /&gt;
web_ping://EnterWorks PRD   27871&lt;BR /&gt;
web_ping://RTP DEV  27870&lt;BR /&gt;
web_ping://Ensighten    27869&lt;BR /&gt;
web_ping://RTP  27867&lt;BR /&gt;
bandwidth   20570&lt;BR /&gt;
cpu 19949&lt;BR /&gt;
iostat  19946&lt;BR /&gt;
ps  19821&lt;/P&gt;

&lt;P&gt;Any ideas? &lt;/P&gt;</description>
      <pubDate>Wed, 30 Sep 2020 04:22:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/8-0-1-upgraded-Heavy-Forwarder-TcpOutputProc-Possible/m-p/480531#M82380</guid>
      <dc:creator>JDukeSplunk</dc:creator>
      <dc:date>2020-09-30T04:22:39Z</dc:date>
    </item>
    <item>
      <title>Re: 8.0.1 upgraded Heavy Forwarder - TcpOutputProc - Possible duplication of events</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/8-0-1-upgraded-Heavy-Forwarder-TcpOutputProc-Possible/m-p/480532#M82381</link>
      <description>&lt;P&gt;The HF is still "sick", but here are some things we did that seemed to help.&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;Edited the outputs.conf so that this HF outputs to forwarders within its own site (rough sketch after this list).&lt;/LI&gt;
&lt;LI&gt;Removed useACK=true from outputs.conf &lt;/LI&gt;
&lt;/OL&gt;
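
&lt;P&gt;For illustration, the edited outputs.conf now looks roughly like this (the group and server names are placeholders, not our real ones):&lt;/P&gt;

&lt;P&gt;[tcpout]&lt;BR /&gt;
defaultGroup = site1_forwarders&lt;BR /&gt;
&lt;BR /&gt;
[tcpout:site1_forwarders]&lt;BR /&gt;
server = fwd01.example.local:9998, fwd02.example.local:9998&lt;BR /&gt;
# useACK = true was removed here; without indexer acknowledgement the&lt;BR /&gt;
# duplication warnings stop, but so do the resends that caused them&lt;/P&gt;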

&lt;P&gt;I'm a little concerned about #2 there. We could still be having issues with the outputs, only now the events are being dropped on the floor. In other words, the condition may still be present; we have simply turned off the logging by removing useACK.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Mar 2020 15:11:26 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/8-0-1-upgraded-Heavy-Forwarder-TcpOutputProc-Possible/m-p/480532#M82381</guid>
      <dc:creator>JDukeSplunk</dc:creator>
      <dc:date>2020-03-06T15:11:26Z</dc:date>
    </item>
    <item>
      <title>Re: 8.0.1 upgraded Heavy Forwarder - TcpOutputProc - Possible duplication of events</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/8-0-1-upgraded-Heavy-Forwarder-TcpOutputProc-Possible/m-p/480533#M82382</link>
      <description>&lt;P&gt;Hi&lt;/P&gt;

&lt;P&gt;If you have many separate transforms in props.conf for individual sources/sourcetypes etc., try to combine them into one line, e.g.&lt;/P&gt;

&lt;P&gt;TRANSFORMS-foo = foo1&lt;BR /&gt;
TRANSFORMS-bar = bar1&lt;/P&gt;

&lt;P&gt;To&lt;/P&gt;

&lt;P&gt;TRANSFORMS-foobar = foo1, bar1&lt;/P&gt;
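
&lt;P&gt;In context that looks roughly like this (the stanza and transform names are made up for illustration):&lt;/P&gt;

&lt;P&gt;# props.conf&lt;BR /&gt;
[my_sourcetype]&lt;BR /&gt;
# one TRANSFORMS class instead of two; foo1 and bar1 still run left to right&lt;BR /&gt;
TRANSFORMS-foobar = foo1, bar1&lt;/P&gt;

&lt;P&gt;# transforms.conf&lt;BR /&gt;
[foo1]&lt;BR /&gt;
REGEX = ^foo&lt;BR /&gt;
DEST_KEY = queue&lt;BR /&gt;
FORMAT = indexQueue&lt;BR /&gt;
&lt;BR /&gt;
[bar1]&lt;BR /&gt;
REGEX = ^bar&lt;BR /&gt;
DEST_KEY = queue&lt;BR /&gt;
FORMAT = nullQueue&lt;/P&gt;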

&lt;P&gt;This helped in our case after an upgrade from 6.6.5 to 7.3.3.&lt;/P&gt;

&lt;P&gt;Ismo&lt;/P&gt;</description>
      <pubDate>Fri, 06 Mar 2020 18:57:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/8-0-1-upgraded-Heavy-Forwarder-TcpOutputProc-Possible/m-p/480533#M82382</guid>
      <dc:creator>isoutamo</dc:creator>
      <dc:date>2020-03-06T18:57:37Z</dc:date>
    </item>
  </channel>
</rss>

