Getting Data In

Why do our Heavy Forwarders randomly shut down one of their parallel pipelines?

mortenklow
Explorer

Hi,

We've recently tested out a new path for data to flow into our Splunk environment from Universal Forwarders.
We have rolled this out to a small portion of our UFs for now, and we've already run into an issue a few times that I can't figure out.

Data flow is as follows:
UFs sending logs with [httpout] -> Load Balancer -> A set of Heavy Forwarders receiving data via [http://..], and forwarding data with [tcpout] -> Index cluster receiving data with [tcp://..].

The heavy forwarders are there to do some filtering, and also to route specific data to other Splunk environments. The heavy forwarders are also configured with parallelIngestionPipelines=2.
I can also mention that all parts of this environment are running on Windows.
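
To make the setup concrete, the relevant config looks roughly like this (a sketch only: hostnames, ports and the HEC token are placeholders, and the filtering/routing props/transforms on the heavy forwarders are omitted):

# UF outputs.conf (placeholder uri and token)
[httpout]
httpEventCollectorToken = <token>
uri = https://splunk-lb.example.local:8088

# HF inputs.conf (receives the httpout traffic)
[http://from_ufs]
disabled = 0
token = <token>

# HF server.conf
[general]
parallelIngestionPipelines = 2

# HF outputs.conf (forwards on to the index cluster)
[tcpout]
defaultGroup = idx_cluster

[tcpout:idx_cluster]
server = idx1.example.local:9997, idx2.example.local:9997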

The issue:
A few times I've suddenly had 100% queue fill on all four data queues (parsing/aggregation/typing/indexing) on one of the Heavy Forwarders, and only on one of its pipelines, not the other. After looking in splunkd.log, it turns out that one pipeline seems to have simply shut down (I didn't know this was even possible).
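
For reference, a metrics.log search roughly like the following shows the per-pipeline queue fill (this is only a sketch: the host is a placeholder, and it relies on the ingest_pipe field that metrics.log includes when parallel pipelines are enabled):

index=_internal host=my-heavy-forwarder source=*metrics.log* group=queue (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m max(fill_pct) by ingest_pipe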

At first, splunkd.log is filled with the usual over and over:

07-01-2022 12:35:55.724 +0200 INFO  TailReader [2092 tailreader1] - Batch input finished reading file='D:\Splunk\var\spool\splunk\tracker.log'
07-01-2022 12:35:59.129 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Connected to idx=X.X.X.X:9997:0, pset=1, reuse=0. autoBatch=1
07-01-2022 12:35:59.602 +0200 INFO  AutoLoadBalancedConnectionStrategy [7160 TcpOutEloop] - Connected to idx=Y.Y.Y.Y:9997:0, pset=0, reuse=0. autoBatch=1

Until it suddenly logs this, once or a few times:

07-01-2022 12:36:15.628 +0200 WARN  TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = FSDNhUnXitGgjKJ.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-1\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-1|WinEventLog|\n
07-01-2022 12:36:18.966 +0200 WARN  TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = WpC6rhVY9yDblPB.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-2\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-2|WinEventLog|\n

Then in the same millisecond this happens:

07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-01-2022 12:36:18.966 +0200 INFO  TcpOutputProc [4360 indexerPipe_1] - Waiting for TcpOutputGroups to shutdown
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-01-2022 12:36:19.968 +0200 INFO  TcpOutputProc [4360 indexerPipe_1] - Received shutdown control key.
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - shutting down: start
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_audit Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_configtracker Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_internal Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_introspection Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_telemetry Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=history Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=main Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=summary Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - request state change from=RUN to=SHUTDOWN_IN_PROGRESS
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_audit Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_configtracker Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_internal Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_introspection Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_telemetry Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=history Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=main Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=summary Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_audit
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_configtracker
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_internal
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_thefishbucket
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_introspection
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics_rollup
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_telemetry
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=history
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=main
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=summary
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - shutting down: end

So it seems like pipeline-1 shut down, while pipeline-0 continues to work (this repeats over and over, with only pset=0):

07-01-2022 12:36:29.527 +0200 INFO  AutoLoadBalancedConnectionStrategy [7160 TcpOutEloop] - Connected to idx=X.X.X.X:9997:0, pset=0, reuse=0. autoBatch=1

Pipeline-1 still seems to receive data somehow, as I can see the queues for that one pipeline are constantly at 100% in metrics.log, and we're also losing logs to our indexers.
And then this starts appearing in splunkd.log over and over until I restart Splunk:

07-01-2022 12:36:30.522 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:32.525 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:34.528 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:36.532 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?

I've had this happen at least 3-4 times in the last week. The heavy forwarders are running Splunk 9.0.0, but I also had this happen once a few weeks ago when I was running 8.2.4. I didn't think too much of it at the time, as I was in the process of replacing them with new servers anyway, and with 9.0.0. But the logs and the symptoms on 9.0.0 are exactly the same as they were that one time on 8.2.4.

I tried looking into the "Pipeline data does not have indexKey" part, but as this data is coming from UFs with identical configs where all input stanzas have an index defined, I couldn't find anything wrong there.
That said, even if I were missing an index for some log sources, the Heavy Forwarders should not just randomly shut down a pipeline anyway.

Has anyone seen this happen before, or does anyone have suggestions for what I should try or check?
It really seems like a bug to me, but of course I don't know that.


Best Regards
Morten


hrawat_splunk
Splunk Employee
Received shutdown control key.

This log is an indication of the splunkd service getting a shutdown signal.
But if you see no other processors shutting down around that time, that's definitely unexpected. If you have time, can you check whether any of the UFs was shutting down a few seconds before the above log was seen on the HF?


hrawat_splunk
Splunk Employee

You will have to check for any _reload in splunkd_access.log just before 07-01-2022 12:36:18.966. Without a reload or a restart, it's not possible for that thread to shut down.

Check the logs before the indexKey-not-found log.
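
For example, a quick search roughly like this (the host is a placeholder) should surface any reload activity around that time:

index=_internal host=my-heavy-forwarder source=*splunkd_access.log* "_reload"
| table _time, _raw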


mortenklow
Explorer

I had this happen again yesterday, and there is no mention of _reload anywhere in splunkd_access.log before the timestamp where one of the pipelines shut down.
This time I didn't have the indexKey-not-found warning at the same timestamp as the shutdown; it was logged about 20 seconds before. But again, this indexKey-not-found warning is also something that gets logged every now and then without any pipelines shutting down, so it might be two different issues?

I had the shutdown of one pipeline start at 14:21:19.265. I'm not pasting in the whole thing here, as it's exactly the same log events as before. This is how it started; X.X.X.X would be one of my indexers, and I have four pipelines now, which is why you see four of those connections (I'm experimenting with different settings):

07-12-2022 14:20:48.157 +0200 WARN  TcpOutputProc [1744 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = Hb4RKZfh8VK1AFB.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-1\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-1|WinEventLog|\n
07-12-2022 14:21:02.533 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:228] [get_server_roles] [8064] Fetched server roles, roles=['deployment_client', 'search_peer', 'kv_store']
07-12-2022 14:21:02.533 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:256] [get_cluster_mode] [8064] Fetched cluster mode, mode=disabled
07-12-2022 14:21:02.533 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:30] [should_run] [8064] should run test, sh=False
07-12-2022 14:21:07.750 +0200 INFO  TailReader [6328 tailreader1] - Batch input finished reading file='D:\Splunk\var\spool\splunk\tracker.log'
07-12-2022 14:21:15.004 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Found currently active indexer. Connected to idx=X.X.X.X:9998:0, reuse=1.
07-12-2022 14:21:15.074 +0200 INFO  AutoLoadBalancedConnectionStrategy [7316 TcpOutEloop] - Found currently active indexer. Connected to idx=X.X.X.X:9998:0, reuse=1.
07-12-2022 14:21:16.073 +0200 INFO  AutoLoadBalancedConnectionStrategy [5668 TcpOutEloop] - Found currently active indexer. Connected to idx=X.X.X.X:9998:0, reuse=1.
07-12-2022 14:21:16.198 +0200 INFO  AutoLoadBalancedConnectionStrategy [4084 TcpOutEloop] - Found currently active indexer. Connected to idx=X.X.X.X:9998:0, reuse=1.
07-12-2022 14:21:17.828 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:228] [get_server_roles] [4608] Fetched server roles, roles=['deployment_client', 'search_peer', 'kv_store']
07-12-2022 14:21:17.828 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:256] [get_cluster_mode] [4608] Fetched cluster mode, mode=disabled
07-12-2022 14:21:19.264 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-12-2022 14:21:19.264 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-12-2022 14:21:19.264 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-12-2022 14:21:19.265 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-12-2022 14:21:19.265 +0200 INFO  TcpOutputProc [6160 indexerPipe_0] - Received shutdown control key.
07-12-2022 14:21:19.265 +0200 INFO  IndexProcessor [6160 indexerPipe_0] - shutting down: start
.....
The rest of the shutdown of pipeline 0 is identical to what I posted before.

Here is what's in splunkd_access.log around the same time. I have masked a few things; 1.2.3.4 would be my monitoring console, for instance:

127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:02.542 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:02.569 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.739 +0200] "GET /services/properties/telemetry/general/sendSupportUsage HTTP/1.0" 200 5 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.751 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.763 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.787 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.799 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.809 +0200] "GET /services/properties/telemetry/general/sendSupportUsage HTTP/1.0" 200 5 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.821 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.833 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.079 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.091 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.103 +0200] "GET //servicesNS/nobody/splunk_secure_gateway/storage/collections/data/meta/soc2_opt_in HTTP/1.0" 404 140 "-" "Python-httplib2/0.13.1 (gzip)" - - - 2ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.116 +0200] "GET //services/server/info?output_mode=json HTTP/1.0" 200 2074 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.912 +0200] "GET /services/server/info HTTP/1.1" 200 5935 "-" "curl" - - - 1ms
1.2.3.4 - - [12/Jul/2022:14:21:23.318 +0200] "POST /services/admin/auth-tokens HTTP/1.1" 201 871 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
1.2.3.4 - splunk-system-user [12/Jul/2022:14:21:23.323 +0200] "GET /services/server/info HTTP/1.1" 200 1477 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
1.2.3.4 - - [12/Jul/2022:14:21:23.344 +0200] "POST /services/admin/auth-tokens HTTP/1.1" 201 871 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 2ms
1.2.3.4 - splunk-system-user [12/Jul/2022:14:21:23.355 +0200] "GET /services/admin/bundles/SplunkMonitoringConsole?count=-1 HTTP/1.1" 200 1068 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
1.2.3.4 - splunk-system-user [12/Jul/2022:14:21:23.363 +0200] "GET /services/data/indexes?mode=minimum&count=0&datatype=all HTTP/1.1" 200 1351 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
1.2.3.4 - splunk-system-user [12/Jul/2022:14:21:23.370 +0200] "GET /services/messages?count=1000&search=name%21%3Dremote:* HTTP/1.1" 200 493 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.501 +0200] "GET /services/properties/telemetry/general/sendSupportUsage HTTP/1.0" 200 5 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.513 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.539 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.544 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.565 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.600 +0200] "GET /services/properties/telemetry/general/sendSupportUsage HTTP/1.0" 200 5 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.611 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.633 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms

I went over several of these shutdowns, and there is no mention of _reload in splunkd_access.log around or before any of those occurrences.
I guess even if there was, it should not just reload itself and/or shut down, but it would have been a clue as to what is going on.


mortenklow
Explorer

I have two heavy forwarders running behind a load balancer, and I currently have four parallel pipelines on each.
I just now had three pipelines shut down on one of them and two pipelines shut down on the other, within minutes of each other, right as people start logging on to work. So it might be load dependent, or there is just a higher chance of this happening with more logs flowing through, I guess.

For now I'm going to move the UFs back to the old solution, where they use tcpout directly to the indexers. Not ideal for our needs, but that will have to do for now.
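
(That old solution is just the plain direct tcpout, roughly like this sketch, with placeholder indexer names:)

# UF outputs.conf, old solution: send straight to the indexers
[tcpout]
defaultGroup = idx_cluster

[tcpout:idx_cluster]
server = idx1.example.local:9997, idx2.example.local:9997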

I should probably open a support ticket about this (after my four weeks of vacation, which start in a few days), unless you or anyone else has more ideas about what to test or check here.


hrawat_splunk
Splunk Employee

Whenever you see something like

 Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] 

The log contains the executable/script name. It simply means that the script/exe named in the log crashed.
As of now it's a known issue for the exe in question.
https://community.splunk.com/t5/Getting-Data-In/Why-is-splunk-winevtlog-exe-crash-low-thruput-high-c...

Regarding the blocked queues, did you restart Splunk?

request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE

Or did TcpOut reload itself?
One of the reasons for a TcpOut reload is if outputs.conf gets modified, either by the DS or by a call to the tcpout REST endpoint.

Did you see any log like "WARN DC:DeploymentClient - Restarting Splunkd"?
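
For example, sketches like these (host is a placeholder) should surface a deployment-client-driven restart, or a tracked change to outputs.conf if config change tracking is enabled (it writes to the _configtracker index in 9.0):

index=_internal host=my-heavy-forwarder source=*splunkd.log* "DC:DeploymentClient" "Restarting Splunkd"

index=_configtracker host=my-heavy-forwarder "outputs.conf"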

mortenklow
Explorer

Hi,

Thank you for the reply!

Glad to know there is a known issue with splunk-winevtlog.exe that will be fixed in 9.0.1.
I assume this bug also affects Universal Forwarders, as it's pretty much the same executable?
We've had UFs stop sending Windows event logs many times over the last year, and I've noticed splunk-winevtlog.exe had stopped running, but I never got around to reporting it as it was so sporadic.


Regarding the blocked queue messages and the shutdown message, I did eventually restart splunk on the heavy forwarder in question, but that shutdown message you quoted came long before that.
There are no entries matching 'DeploymentClient' at all in splunkd.log leading up to this.

What happened is: right after the "Pipeline data does not have indexKey" entry in the log, at the exact same timestamp, one of the pipelines (for lack of a better way of explaining it) shut down immediately.

I realized I split up the different parts of the log in my previous post, so I've included the log below in its entirety, from the start of the error to shutdown complete.

Splunk continues to run, and the other pipeline continues to forward data as expected, but the "crashed" pipeline sat at 100% on all four queues and was not doing anything from what I could tell, though I don't know how to verify that.
I didn't always have two pipelines, by the way; when I ran with just one pipeline, the same thing happened and the same log messages were logged. Splunk continued to run, but then the one and only pipeline had 100% queues.

07-01-2022 12:36:18.966 +0200 WARN  TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = WpC6rhVY9yDblPB.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-1\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-1|WinEventLog|\n
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-01-2022 12:36:18.966 +0200 INFO  TcpOutputProc [4360 indexerPipe_1] - Waiting for TcpOutputGroups to shutdown
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-01-2022 12:36:19.968 +0200 INFO  TcpOutputProc [4360 indexerPipe_1] - Received shutdown control key.
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - shutting down: start
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_audit Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_configtracker Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_internal Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_introspection Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_telemetry Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=history Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=main Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=summary Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - request state change from=RUN to=SHUTDOWN_IN_PROGRESS
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_audit Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_configtracker Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_internal Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_introspection Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_telemetry Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=history Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=main Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=summary Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_audit
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_configtracker
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_internal
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_thefishbucket
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_introspection
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics_rollup
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_telemetry
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=history
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=main
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=summary
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - shutting down: end
07-01-2022 12:36:23.531 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
