<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Need suggestions troubleshooting periodic dropping of logs from a forwarder. On indexer "Local side shutting down" in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441744#M77015</link>
    <description>&lt;P&gt;I checked the file handles thing after reading this and that all looks okay. Thanks for the suggestion.&lt;/P&gt;

&lt;P&gt;Here is another interesting one (linked from that one): &lt;A href="https://answers.splunk.com/answers/43285/error-tcpinputproc-error-encountered-for-connection-from-timeout.html"&gt;https://answers.splunk.com/answers/43285/error-tcpinputproc-error-encountered-for-connection-from-timeout.html&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 24 Oct 2018 21:37:01 GMT</pubDate>
    <dc:creator>wrangler2x</dc:creator>
    <dc:date>2018-10-24T21:37:01Z</dc:date>
    <item>
      <title>Need suggestions troubleshooting periodic dropping of logs from a forwarder. On indexer "Local side shutting down"</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441741#M77012</link>
      <description>&lt;P&gt;I've got a metrics alert that runs every hour and sends me an email when the volume in my dhcp index is over a certain amount. The alert uses the stats command to sum MB (via &lt;CODE&gt;| eval MB=round(kb/1024,3)&lt;/CODE&gt;) for series="dhcp" and then uses &lt;CODE&gt;where MB &amp;gt; max&lt;/CODE&gt;. This tripped &lt;STRONG&gt;yesterday&lt;/STRONG&gt; so I went to do a top by MAC and DevName in the dhcp index and nothing returned in search for the last hour. So there are two curiosities here:&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;Why do the metrics data show indexed log data when...&lt;/LI&gt;
&lt;LI&gt;the dhcp index doesn't have indexed log data during the same period?&lt;/LI&gt;
&lt;/OL&gt;

&lt;P&gt;Meanwhile, I checked for a connection on the indexer from the forwarder on the SSL port we use to take logs, and it was there (established). Wondering whether it was a forwarder problem (which has never been problematic before) or something wrong on the indexer, I finally decided to restart splunkd on the indexer (mainly because the admin for the forwarder was out sick). Voila, I started seeing logs in the dhcp index again.&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;This morning&lt;/STRONG&gt; when I came in, I checked the dhcp index and no logs. I ran this search for metrics (pretty much the alert search without the where clause) with timepicker set to Last 60 minutes:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal source="/opt/splunk/var/log/splunk/metrics.log*" group=per_index_thruput series="dhcp"
| rename series as index
| eval MB=round(kb/1024,3)
| stats sum(MB) as MB by index
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;It came back with 200 MB. Crazy weird just like yesterday morning. According to this search the most recently received logs are the current time:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| metadata type=hosts index=dhcp | search host=dhcp*
| stats count by host totalCount firstTime recentTime
| convert timeformat="%m/%d/%Y %T" ctime(recentTime) ctime(firstTime)
| rename totalCount as Count recentTime as "Last Update" firstTime as "Earliest Update"
| fieldformat Count=tostring(Count, "commas")
| fields - count&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;As in when I just ran this it was 10:15 and 'Last Update' came back with 10:15:19. Yet &lt;CODE&gt;index=dhcp&lt;/CODE&gt; search comes back with nothing.&lt;/P&gt;
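
&lt;P&gt;For anyone comparing the two the way I did: a quick sanity check on what is actually searchable per hour (as opposed to what metrics.log claims was indexed) is a tstats count bucketed by time. This is just a diagnostic sketch, not part of my alert:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| tstats count where index=dhcp by _time span=1h&lt;/CODE&gt;&lt;/PRE&gt;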

&lt;P&gt;Also, looking at Settings-&amp;gt;indexes, I see that the &lt;STRONG&gt;Latest Event&lt;/STRONG&gt; for the dhcp index is shown as four hours ago (now 10:33 in the morning).&lt;/P&gt;

&lt;P&gt;I took a look at splunkd.log, grepping it for the IP of the forwarder. There are some fairly recent (8:28 this morning) ClientSessionsManager action=Phonehome result=Ok checksum=0 log entries, and then there is this shortly thereafter:&lt;/P&gt;

&lt;P&gt;&lt;CODE&gt;10-23-2018 08:29:09.302 -0700 ERROR TcpInputProc - Error encountered for connection from src=IP_redacted:39683. Local side shutting down&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;I've redacted the IP address in this, but it is the IP of the forwarder.&lt;/P&gt;

&lt;P&gt;The admin for the forwarder (a Solaris 10 box currently running 6.2.1.4) was in today and I got a look at the logs. I see it connected to my indexer yesterday after I restarted splunkd there; then I'm seeing this in the forwarder's logs from this morning (nothing after that connection yesterday until these):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;10-23-2018 08:28:19.635 -0700 INFO  TcpOutputProc - Connection to 128.195.xxx.xxx:9998 closed. default Error in SSL_read = 131, SSL Error = error:00000000:lib(0):func(0):reason(0)
10-23-2018 08:28:19.799 -0700 INFO  TcpOutputProc - Connected to idx=128.195.xxx.xxx:9998
10-23-2018 08:28:35.707 -0700 INFO  TcpOutputProc - Connection to 128.195.xxx.xxx:9998 closed. default Error in SSL_read = 131, SSL Error = error:00000000:lib(0):func(0):reason(0)
10-23-2018 08:28:35.909 -0700 INFO  TcpOutputProc - Connected to idx=128.195.xxx.xxx:9998
10-23-2018 08:29:07.386 -0700 INFO  TcpOutputProc - Connection to 128.195.xxx.xxx:9998 closed. sock_error = 32. SSL Error = error:00000000:lib(0):func(0):reason(0)
10-23-2018 08:29:07.518 -0700 WARN  TcpOutputProc - Applying quarantine to ip=128.195.xxx.xxx port=9998 _numberOfFailures=2
10-23-2018 08:29:54.178 -0700 INFO  TcpOutputProc - Removing quarantine from idx=128.195.xxx.xxx:9998
10-23-2018 08:29:54.339 -0700 INFO  TcpOutputProc - Connected to idx=128.195.xxx.xxx:9998
&lt;/CODE&gt;&lt;/PRE&gt;
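
&lt;P&gt;A quick way to spot these connection problems from the indexer side is to search _internal for TcpInputProc/TcpOutputProc warnings and errors (a sketch using the standard splunkd fields; if the forwarder forwards its own _internal logs, its TcpOutputProc errors show up here too):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=_internal sourcetype=splunkd (component=TcpInputProc OR component=TcpOutputProc) (log_level=WARN OR log_level=ERROR)
| stats count by host component log_level
| sort - count&lt;/CODE&gt;&lt;/PRE&gt;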

&lt;P&gt;I asked the admin to restart splunkd on the Solaris box, and he did. I see an 'INFO  PubSubSvr - Subscribed' phonehome in my splunkd logs and I see a new connection from the forwarder on my indexer. I don't see any errors involving the forwarder's IP address in my indexer's splunkd logs. However, I still do not see any logs in the dhcp index.&lt;/P&gt;

&lt;P&gt;Oh, wait. Another quick look at the Settings-&amp;gt;Indexes shows the &lt;STRONG&gt;Latest Event&lt;/STRONG&gt; for the dhcp index as three hours ago -- it was four the last time I looked. Switching the search on the dhcp index to All time (real time) I can see logs streaming in with timestamps from earlier this morning -- it is playing "catch up," with events coming in ~8,000/minute.&lt;/P&gt;

&lt;P&gt;My indexer is running RHEL 7 and:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;$ cat etc/splunk.version
VERSION=6.5.2
BUILD=67571ef4b87d
PRODUCT=splunk
PLATFORM=Linux-x86_64&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The forwarder has been faithfully sending me logs until this business began yesterday.&lt;/P&gt;

&lt;P&gt;Any ideas what has been going wrong?&lt;/P&gt;</description>
      <pubDate>Tue, 23 Oct 2018 18:13:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441741#M77012</guid>
      <dc:creator>wrangler2x</dc:creator>
      <dc:date>2018-10-23T18:13:19Z</dc:date>
    </item>
    <item>
      <title>Re: Need suggestions troubleshooting periodic dropping of logs from a forwarder. On indexer "Local side shutting down"</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441742#M77013</link>
      <description>&lt;P&gt;More trouble last night. After working well all day, I saw this in the logs (on the indexer/deployment server):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;10-23-2018 11:01:07.879 -0700 ERROR TcpInputProc - Error encountered for connection from src=128.200.xx.xxx:35411. Local side shutting down
10-23-2018 11:01:21.860 -0700 ERROR TcpInputProc - Error encountered for connection from src=128.200.xx.xxx:43490. Local side shutting down
10-23-2018 11:01:40.685 -0700 ERROR TcpInputProc - Error encountered for connection from src=128.200.xx.xxx:43684. Local side shutting down
10-23-2018 11:01:40.803 -0700 ERROR TcpInputProc - Error encountered for connection from src=128.200.xx.xxx:43946. Local side shutting down
10-23-2018 11:02:10.822 -0700 INFO ClientSessionsManager - Adding client: ip=128.200.xx.xxx uts=sunos-sparc id=118563b1987538f45848645415b48e19 name=ECB32768-B062-47DC-B652-34D79B6B2B45
10-23-2018 11:02:10.822 -0700 INFO ClientSessionsManager - ip=128.200.xx.xxx name=ECB32768-B062-47DC-B652-34D79B6B2B45 New record for sc=OIT_SC_GLOBAL_CONF-FILES app=OIT_DA_GLOBAL_CONF-FILES: action=Phonehome result=Ok checksum=0
10-23-2018 11:02:10.822 -0700 INFO ClientSessionsManager - ip=128.200.xx.xxx name=ECB32768-B062-47DC-B652-34D79B6B2B45 New record for sc=OIT_NSP_INDEX_01 app=OIT_NSP_INDEX_DHCP_01: action=Phonehome result=Ok checksum=0
10-23-2018 11:02:10.822 -0700 INFO ClientSessionsManager - ip=128.200.xx.xxx name=ECB32768-B062-47DC-B652-34D79B6B2B45 New record for sc=OIT_NSP_INDEX_01 app=OIT_NSP_INDEX_IDM_01: action=Phonehome result=Ok checksum=0
10-23-2018 11:02:10.823 -0700 INFO ClientSessionsManager - ip=128.200.xx.xxx name=ECB32768-B062-47DC-B652-34D79B6B2B45 New record for sc=OIT_NSP_INDEX_01 app=OIT_NSP_INDEX_LDAP_01: action=Phonehome result=Ok checksum=0&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Tue, 29 Sep 2020 21:47:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441742#M77013</guid>
      <dc:creator>wrangler2x</dc:creator>
      <dc:date>2020-09-29T21:47:41Z</dc:date>
    </item>
    <item>
      <title>Re: Need suggestions troubleshooting periodic dropping of logs from a forwarder. On indexer "Local side shutting down"</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441743#M77014</link>
      <description>&lt;P&gt;A related thread about &lt;EM&gt;Local side shutting down&lt;/EM&gt; at &lt;A href="https://answers.splunk.com/answers/96860/error-encountered-for-connection-from-src-10-100-100-137-48221-local-side-shutting-down.html"&gt;Error encountered for connection from src=10.100.100.137:48221. Local side shutting down&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 24 Oct 2018 18:17:34 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441743#M77014</guid>
      <dc:creator>ddrillic</dc:creator>
      <dc:date>2018-10-24T18:17:34Z</dc:date>
    </item>
    <item>
      <title>Re: Need suggestions troubleshooting periodic dropping of logs from a forwarder. On indexer "Local side shutting down"</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441744#M77015</link>
      <description>&lt;P&gt;I checked the file handles thing after reading this and that all looks okay. Thanks for the suggestion.&lt;/P&gt;

&lt;P&gt;Here is another interesting one (linked from that one): &lt;A href="https://answers.splunk.com/answers/43285/error-tcpinputproc-error-encountered-for-connection-from-timeout.html"&gt;https://answers.splunk.com/answers/43285/error-tcpinputproc-error-encountered-for-connection-from-timeout.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 24 Oct 2018 21:37:01 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441744#M77015</guid>
      <dc:creator>wrangler2x</dc:creator>
      <dc:date>2018-10-24T21:37:01Z</dc:date>
    </item>
    <item>
      <title>Re: Need suggestions troubleshooting periodic dropping of logs from a forwarder. On indexer "Local side shutting down"</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441745#M77016</link>
      <description>&lt;P&gt;I was rummaging around Splunk Answers looking for information about the heartbeat function that forwarders use. In the Splunk Answers topic &lt;A href="https://answers.splunk.com/answers/317310/what-does-the-heartbeatfrequency-setting-do-in-out.html"&gt;What does the heartbeatFrequency setting do in outputs.conf?&lt;/A&gt; rphillips_splunk states:&lt;/P&gt;

&lt;P&gt;&lt;EM&gt;heartbeatFrequency =&lt;BR /&gt;
1. How often (in seconds) to send a heartbeat packet to the receiving server.&lt;BR /&gt;
2. Heartbeats are only sent if sendCookedData=true.&lt;BR /&gt;
3. Defaults to 30 (seconds).&lt;/EM&gt;&lt;/P&gt;

&lt;P&gt;&lt;EM&gt;Heartbeat is a mechanism for the forwarder to know that the receiver (i.e. indexer) is alive. If the indexer does not send a return packet to the forwarder, the forwarder will declare this receiver unreachable and not forward data to it. By default a packet is sent every 30s.&lt;/EM&gt;&lt;/P&gt;
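
&lt;P&gt;Putting those two settings together, the relevant part of a forwarder's outputs.conf would look something like this (the group name and indexer address are placeholders, not my actual config):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer.example.com:9998
# default is true; heartbeats are only sent when cooked data is enabled
sendCookedData = true
# seconds between heartbeat packets to the receiver (default 30)
heartbeatFrequency = 30&lt;/CODE&gt;&lt;/PRE&gt;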

&lt;P&gt;I have been under the impression that the indexer does not need to communicate with forwarders except on port 8089 to send Deployment App bundles. I'm wondering what port the indexer uses to talk to the forwarder to convey this return packet rphillips_splunk mentions.&lt;/P&gt;

&lt;P&gt;sendCookedData is a parameter in the tcpout stanza in outputs.conf. It defaults to true, so it is in effect on all newly installed forwarders. I'm wondering what the effect would be on the forwarder if a firewall rule were blocking the indexer from getting the return packet to the forwarder after receiving the heartbeat transmission.&lt;/P&gt;</description>
      <pubDate>Thu, 25 Oct 2018 16:16:54 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Need-suggestions-troubleshooting-periodic-dropping-of-logs-from/m-p/441745#M77016</guid>
      <dc:creator>wrangler2x</dc:creator>
      <dc:date>2018-10-25T16:16:54Z</dc:date>
    </item>
  </channel>
</rss>

