Getting Data In

Splunk offline command has been running for days on several indexers.

scottj1y
Path Finder

I have a similar situation to the one described in the question "Splunk Offline command - running for hours", but in my case several indexers have been running the offline --enforce-counts command for days. One was started last Friday, so it has been running for a week.
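
For reference, the command that was run on each of these peers was essentially the documented one (assuming a default /opt/splunk install):

/opt/splunk/bin/splunk offline --enforce-counts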

When I check splunkd.log, I can still see it copying buckets. For example:

05-29-2020 14:02:01.562 +0000 INFO  DatabaseDirectoryManager - idx=main Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/main/db', pendingBucketUpdates=1 .  Reason='Updating manifest: bucketUpdates=1'
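
I have mostly been watching that activity by tailing the log on each peer, something along the lines of:

tail -f /opt/splunk/var/log/splunk/splunkd.log | grep DatabaseDirectoryManager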

There are also a huge number of entries like this:

05-29-2020 14:45:05.923 +0000 WARN  AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user  (1911 entries in splunkd.log on the 1st host, 1256 on the 2nd, 1277 on the 3rd host that has been running for a week, and 1226 on the 4th)

05-29-2020 14:45:53.476 +0000 ERROR SearchProcessRunner - launcher_thread=0 runSearch exception: PreforkedSearchProcessException: can't create preforked search process: Cannot send after transport endpoint shutdown  (19962 entries in splunkd.log on the 1st host, 20273 on the 2nd, 1829 on the 3rd host that has been running for a week, and 19101 on the 4th)
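
Those per-host counts are just from grepping splunkd.log on each peer, along the lines of:

grep -c "Denied session token" /opt/splunk/var/log/splunk/splunkd.log
grep -c "PreforkedSearchProcessException" /opt/splunk/var/log/splunk/splunkd.log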

And on the one that has been running for a week:

05-29-2020 14:43:33.464 +0000 WARN  DistBundleRestHandler - Failed to find data processor for endpoint=full-bundle
05-29-2020 14:44:26.520 +0000 WARN  ReplicatedDataProcessorManager - Failed to find processor with key=delta-bundle since no such entry exists.
05-29-2020 14:44:26.520 +0000 WARN  BundleDeltaHandler - Failed to find data processor for endpoint=delta-bundle   (3092 total entries for both in splunkd.log)

I can see in the master's Indexer Clustering dashboard that they are still Decommissioning (although I don't know what the Buckets column indicates; is it the number of buckets left to replicate?).
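
To get more detail than the dashboard shows, I assume the master's REST API can be queried directly for per-peer state and bucket counts, something like this (master host and credentials are placeholders):

curl -k -u admin "https://<master>:8089/services/cluster/master/peers?output_mode=json"

or, run as a search on the master itself:

| rest /services/cluster/master/peers splunk_server=local | table label status bucket_count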

All the indexers being decommissioned are running version 8.0.1; a handful of others in the cluster that are not being decommissioned have been upgraded to 8.0.3. The master node is still on 8.0.1.
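
For what it's worth, the version mix above was checked per node; I believe a search like | rest /services/server/info | table splunk_server version shows it at a glance from a search head.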

What can I do to speed this up? No solution was posted on the other question.
