<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Splunk offline command has been running for days on several indexers (Getting Data In)</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Splunk-offline-command-has-been-running-for-days-on-several/m-p/467263#M80533</link>
    <description>Splunk offline command has been running for days on several indexers.</description>
    <pubDate>Sun, 07 Jun 2020 01:03:28 GMT</pubDate>
    <dc:creator>scottj1y</dc:creator>
    <dc:date>2020-06-07T01:03:28Z</dc:date>
    <item>
      <title>Splunk offline command has been running for days on several indexers.</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Splunk-offline-command-has-been-running-for-days-on-several/m-p/467263#M80533</link>
      <description>&lt;P&gt;I have a situation similar to the question &lt;A href="https://answers.splunk.com/answers/518111/splunk-offline-command-running-for-hours.html" target="_blank"&gt;"Splunk Offline command - running for hours"&lt;/A&gt;; however, in my case several indexers have been running the &lt;STRONG&gt;offline --enforce-counts&lt;/STRONG&gt; command for days. One was started last Friday, so it has been running for a week.&lt;/P&gt;
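&lt;P&gt;For reference, this is what was run on each peer being decommissioned (a minimal sketch; the /opt/splunk install path is taken from the log excerpt below):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Run on each indexer to take it offline once the cluster can meet
# its replication and search factors without it.
/opt/splunk/bin/splunk offline --enforce-counts
&lt;/CODE&gt;&lt;/PRE&gt;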
&lt;P&gt;When I check &lt;CODE&gt;splunkd.log&lt;/CODE&gt;, I can still see it copying buckets. For example:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;05-29-2020 14:02:01.562 +0000 INFO  DatabaseDirectoryManager - idx=main Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/main/db', pendingBucketUpdates=1 .  Reason='Updating manifest: bucketUpdates=1'
&lt;/CODE&gt;&lt;/PRE&gt;
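&lt;P&gt;To watch what is still pending, I have been assuming the cluster master's fixup endpoint is the right thing to poll (hostname and credentials below are placeholders, not my real ones):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Buckets still needing work to meet the replication factor
curl -k -u admin:changeme \
  "https://cluster-master:8089/services/cluster/master/fixup?level=replication_factor&amp;output_mode=json"

# Same check for the search factor
curl -k -u admin:changeme \
  "https://cluster-master:8089/services/cluster/master/fixup?level=search_factor&amp;output_mode=json"
&lt;/CODE&gt;&lt;/PRE&gt;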
&lt;P&gt;There are also a huge number of entries like this:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;05-29-2020 14:45:05.923 +0000 WARN  AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user  (in splunkd.log: 1911 entries on the 1st host, 1256 on the 2nd, 1277 on the 3rd host that has been running for a week, 1226 on the 4th)

05-29-2020 14:45:53.476 +0000 ERROR SearchProcessRunner - launcher_thread=0 runSearch exception: PreforkedSearchProcessException: can't create preforked search process: Cannot send after transport endpoint shutdown  (in splunkd.log: 19962 entries on the 1st host, 20273 on the 2nd, 1829 on the 3rd host that has been running for a week, 19101 on the 4th)
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;And on the one where it's been running for a week:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;05-29-2020 14:43:33.464 +0000 WARN  DistBundleRestHandler - Failed to find data processor for endpoint=full-bundle
05-29-2020 14:44:26.520 +0000 WARN  ReplicatedDataProcessorManager - Failed to find processor with key=delta-bundle since no such entry exists.
05-29-2020 14:44:26.520 +0000 WARN  BundleDeltaHandler - Failed to find data processor for endpoint=delta-bundle   (3092 total entries for both in splunkd.log)
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;I see in the cluster master's Indexer Clustering dashboard that they are still decommissioning, although I don't know what the &lt;STRONG&gt;Buckets&lt;/STRONG&gt; entry indicates. Is it the number of buckets left to replicate?&lt;/P&gt;
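&lt;P&gt;On the guess that the dashboard number comes from what the master reports per peer, I assume something like this would show each peer's bucket count (host and credentials are placeholders again):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Per-peer status and bucket counts as the cluster master sees them
curl -k -u admin:changeme \
  "https://cluster-master:8089/services/cluster/master/peers?output_mode=json" \
  | grep -E '"label"|"status"|"bucket_count"'
&lt;/CODE&gt;&lt;/PRE&gt;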
&lt;P&gt;All of the indexers are running version 8.0.1, except for a handful in the cluster that are not being decommissioned and have been upgraded to 8.0.3. The cluster master is still on 8.0.1.&lt;/P&gt;
&lt;P&gt;What can I do to speed this up? No solution was posted on the other question.&lt;/P&gt;</description>
      <pubDate>Sun, 07 Jun 2020 01:03:28 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Splunk-offline-command-has-been-running-for-days-on-several/m-p/467263#M80533</guid>
      <dc:creator>scottj1y</dc:creator>
      <dc:date>2020-06-07T01:03:28Z</dc:date>
    </item>
  </channel>
</rss>

