A couple of times now, the TA-ObserveIT add-on has caused Splunk to shut itself down. How can that be?
We see the following in splunkd.log:
10-02-2020 07:41:27.869 -0400 ERROR ExecProcessor - message from "python /opt/apps/splunk/etc/apps/TA-ObserveIT/bin/observeit_api.py" File "/opt/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/solnlib/packages/splunklib/binding.py", line 1221, in request
10-02-2020 07:41:27.869 -0400 ERROR ExecProcessor - message from "python /opt/apps/splunk/etc/apps/TA-ObserveIT/bin/observeit_api.py" raise HTTPError(response)
10-02-2020 07:41:27.869 -0400 ERROR ExecProcessor - message from "python /opt/apps/splunk/etc/apps/TA-ObserveIT/bin/observeit_api.py" HTTPError: HTTP 500 Internal Server Error -- {"messages":[{"type":"ERROR","text":"External handler failed with code '-1' and output: ''. See splunkd.log for stderr output."}]}
10-02-2020 07:41:27.903 -0400 ERROR ExecProcessor - message from "python /opt/apps/splunk/etc/apps/TA-ObserveIT/bin/observeit_api.py" ERRORHTTP 500 Internal Server Error -- {"messages":[{"type":"ERROR","text":"External handler failed with code '-1' and output: ''. See splunkd.log for stderr output."}]}
10-02-2020 07:41:27.946 -0400 INFO PipelineComponent - Performing early shutdown tasks
10-02-2020 07:41:27.950 -0400 INFO IndexProcessor - handleSignal : Disabling streaming searches.
10-02-2020 07:41:27.951 -0400 INFO IndexProcessor - request state change from=RUN to=SHUTDOWN_SIGNALED
10-02-2020 07:41:27.951 -0400 INFO IndexProcessor - handleSignal : Disabling streaming searches.
10-02-2020 07:41:27.951 -0400 INFO IndexProcessor - request state change from=RUN to=SHUTDOWN_SIGNALED
10-02-2020 07:41:27.951 -0400 INFO UiHttpListener - Shutting down webui
10-02-2020 07:41:27.961 -0400 INFO UiHttpListener - Shutting down webui completed
10-02-2020 07:41:28.689 -0400 INFO IndexProcessor - ingest_pipe=0: active realtime streams have hit 0 during shutdown
10-02-2020 07:41:28.849 -0400 INFO IndexProcessor - ingest_pipe=1: active realtime streams have hit 0 during shutdown
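The HTTP 500 response says "See splunkd.log for stderr output", so the next step is to pull every line the modular input script wrote to stderr. A minimal sketch, assuming a standard Splunk layout under /opt/apps/splunk (the SPLUNK_HOME implied by the script path in the log):

```shell
#!/bin/sh
# Surface the ExecProcessor stderr lines emitted by the ObserveIT
# modular input. The log path assumes the default layout under the
# SPLUNK_HOME seen in the excerpt above (/opt/apps/splunk).
SPLUNKD_LOG="/opt/apps/splunk/var/log/splunk/splunkd.log"

# Show the most recent ExecProcessor messages from observeit_api.py,
# which include the Python traceback preceding the shutdown.
grep 'ExecProcessor.*observeit_api\.py' "$SPLUNKD_LOG" | tail -n 50
```

Running this right after a crash should show the full traceback (including the HTTPError above), which usually narrows down whether the failure is in the add-on's credential/storage handler or in the ObserveIT API call itself.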