
Indexer cluster - splunk offline taking excessive time

mike_k
Path Finder

I am in the process of doing some maintenance on my indexer cluster (1 cluster master, 2 peer indexers).

I put my cluster master in maintenance mode, then went onto the first indexer and placed it in offline mode. Splunk responded on the indexer by indicating "reassigning primaries ... this may take a few minutes". This process, however, appears to be taking quite a while (it has currently been running for 3 hours).
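For reference, the sequence of commands was roughly the following (a sketch of the standard CLI steps; the $SPLUNK_HOME path is whatever your install uses):

    # On the cluster master: put the cluster into maintenance mode before taking a peer down
    $SPLUNK_HOME/bin/splunk enable maintenance-mode
    $SPLUNK_HOME/bin/splunk show maintenance-mode

    # On the first peer indexer: take it offline (this is the command that is still printing dots)
    $SPLUNK_HOME/bin/splunk offline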

If I look at the splunkd.log on this indexer, I see the following:

  • A couple of the following errors at the start:
    • ERROR BucketMover - aborting move because could not remove existing ....... (reason='Access is denied.')
  • At around the time I took the indexer offline, the above ERROR messages stopped, and I can see the indexer shutting down, finishing with:
    • INFO HotDBManager - closing hot mgr for idx=main
    • INFO IndexProcessor - request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE
    • IndexProcessor - shutting down: end
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_LastIndexerLevel"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_AWSMetering"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_TcpInput2"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_RemoteQueueInputWorker"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_RemoteQueue"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_RemoteFileSystem"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_LocalFileSystem"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_SearchDispatch"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_LoadLDAPUsers"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_MetricsManager"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_Pipeline"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_Queue"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_CallbackRunner"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_HttpClient"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_DmcProxyHttpClient"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_Duo2FAHttpClient"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_ApplicationLicenseChecker"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_S3ConnectionPoolManager"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_TelemetryMetricBuffer"
    • INFO ShutdownHandler - shutting down level "ShutdownLevel_ConfLiveUpdateProcessor"
    • INFO ShutdownHandler - Shutdown complete in 193.3 seconds
    • INFO loader - All pipelines finished.

Note: I have only included what appear to be the significant lines at the end of the splunkd.log file.

To my mind this looks as though the indexer has successfully transitioned to offline; however, the command prompt still appears to be processing (it is still printing dots to the screen). It would be good to get a second opinion on whether I should terminate the command prompt that is running the "splunk offline" command.
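For context, this is roughly how I have been checking whether splunkd itself has finished while the offline command keeps printing dots (a sketch assuming a Linux-style install under $SPLUNK_HOME; adjust paths for your platform):

    # Ask Splunk whether splunkd is still running
    $SPLUNK_HOME/bin/splunk status

    # Confirm the shutdown messages at the tail of splunkd.log
    tail -n 50 $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -E 'ShutdownHandler|Shutdown complete|All pipelines finished'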


mike_k
Path Finder

Thanks for that reply.

At the end of the day, I figured that the log indicated it had finished, so I terminated the splunkd process and shut down/rebooted the server (which I had to do anyway as part of the maintenance activity).

It appears to have come back up (just looking at it now) and rejoined the cluster. The search factor and replication factor have gone back to green. It all appears to be working again, but it is quite weird.
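For completeness, I verified the recovery from the cluster master with roughly the following (a sketch of the standard commands, run on the cluster master):

    # Check that both peers are Up and the search/replication factors are met
    $SPLUNK_HOME/bin/splunk show cluster-status

    # With the peer back and healthy, take the cluster out of maintenance mode
    $SPLUNK_HOME/bin/splunk disable maintenance-mode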


isoutamo
SplunkTrust

This is quite interesting if it is still printing those dots even though the logs say it has already shut down.

The first message, "Access is denied", could mean that there are some files with the wrong ownership. You should run chown -R <splunk>:<splunk> on $SPLUNK_HOME and on all data directories, which you probably have somewhere else.
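For example, something like the following (a sketch; the splunk user/group and the extra data path are placeholders, substitute your own):

    # Fix ownership of the Splunk installation and any separately mounted index/data directories
    chown -R splunk:splunk $SPLUNK_HOME
    chown -R splunk:splunk /data/splunk_indexes    # example path for index data stored outside $SPLUNK_HOME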

I suppose that you are running it on Linux, so run "ps -ef" and check whether there are still any Splunk-related processes and who owns them.
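For example (a sketch; the bracketed pattern just stops grep from matching itself):

    # List any remaining Splunk processes and show which user owns them
    ps -ef | grep -i '[s]plunk'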

r. Ismo
