Getting Data In

Why do our Heavy Forwarders randomly shut down one of their parallel pipelines?

mortenklow
Explorer

Hi,

We've recently tested out a new path for data to flow into our Splunk environment from Universal Forwarders.
We have rolled this out to a small portion of our UFs for now, and we've already run into an issue a few times that I can't figure out.

Data flow is as follows:
UFs sending logs with [httpout] -> Load Balancer -> A set of Heavy Forwarders receiving data via [http://..], and forwarding data with [tcpout] -> Index cluster receiving data with [tcp://..].

The Heavy Forwarders are there to do some filtering, and also to route specific data to other Splunk environments. They are also configured with parallelIngestionPipelines=2.
I can also mention that all parts of this environment are running on Windows.
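
For reference, here is a minimal .conf sketch of what this path can look like; all host names, ports and the token below are placeholders, not values from our environment:

# UF outputs.conf - send via httpout to the load balancer (placeholder uri/token)
[httpout]
httpEventCollectorToken = <token-configured-on-the-heavy-forwarders>
uri = https://lb.example.com:8088

# HF inputs.conf - enable the HTTP input and a token for the UFs (placeholders)
[http]
disabled = 0
port = 8088

[http://from_ufs]
token = <token-configured-on-the-heavy-forwarders>

# HF server.conf - the parallel pipelines mentioned above
[general]
parallelIngestionPipelines = 2

# HF outputs.conf - forward on to the index cluster over S2S
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997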

The issue:
A few times, all four data queues (parsing/aggregation/typing/indexing) have suddenly hit 100% on one of the Heavy Forwarders, and only on one of its pipelines, not the other. After looking in splunkd.log, it turns out that the pipeline seems to have simply shut down (I didn't know this was even possible).
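
(For what it's worth, a minimal SPL sketch for watching per-pipeline queue fill from metrics.log; the host is a placeholder, and the field names assume the standard group=queue metrics entries:)

index=_internal host=<your_hf> source=*metrics.log* group=queue
    name IN (parsingqueue, aggqueue, typingqueue, indexqueue)
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| eval pipe = coalesce(ingest_pipe, 0)
| timechart span=1m max(fill_pct) by pipe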

At first, splunkd.log is filled with the usual messages, over and over:

07-01-2022 12:35:55.724 +0200 INFO  TailReader [2092 tailreader1] - Batch input finished reading file='D:\Splunk\var\spool\splunk\tracker.log'
07-01-2022 12:35:59.129 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Connected to idx=X.X.X.X:9997:0, pset=1, reuse=0. autoBatch=1
07-01-2022 12:35:59.602 +0200 INFO  AutoLoadBalancedConnectionStrategy [7160 TcpOutEloop] - Connected to idx=Y.Y.Y.Y:9997:0, pset=0, reuse=0. autoBatch=1

Until it suddenly logs this, once or a few times:

07-01-2022 12:36:15.628 +0200 WARN  TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = FSDNhUnXitGgjKJ.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-1\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-1|WinEventLog|\n
07-01-2022 12:36:18.966 +0200 WARN  TcpOutputProc [4360 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = WpC6rhVY9yDblPB.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-2\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-2|WinEventLog|\n

Then in the same millisecond this happens:

07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-01-2022 12:36:18.966 +0200 INFO  TcpOutputProc [4360 indexerPipe_1] - Waiting for TcpOutputGroups to shutdown
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-01-2022 12:36:18.966 +0200 INFO  AutoLoadBalancedConnectionStrategy [4860 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-01-2022 12:36:19.968 +0200 INFO  TcpOutputProc [4360 indexerPipe_1] - Received shutdown control key.
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - shutting down: start
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_audit Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_configtracker Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_internal Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_introspection Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_telemetry Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=history Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=main Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=summary Sync before shutdown
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - request state change from=RUN to=SHUTDOWN_IN_PROGRESS
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_audit Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_configtracker Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_internal Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_introspection Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_metrics_rollup Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_telemetry Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=_thefishbucket Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=history Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=main Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  IndexWriter [4360 indexerPipe_1] - idx=summary Handling shutdown or signal, reason=1
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_audit
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_configtracker
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_internal
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_thefishbucket
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_introspection
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_metrics_rollup
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=_telemetry
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=history
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=main
07-01-2022 12:36:19.968 +0200 INFO  HotDBManager [4360 indexerPipe_1] - closing hot mgr for idx=summary
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE
07-01-2022 12:36:19.968 +0200 INFO  IndexProcessor [4360 indexerPipe_1] - shutting down: end

So it seems like pipeline 1 shut down, while pipeline 0 continues to work (this repeats over and over, with only pset=0):

07-01-2022 12:36:29.527 +0200 INFO  AutoLoadBalancedConnectionStrategy [7160 TcpOutEloop] - Connected to idx=X.X.X.X:9997:0, pset=0, reuse=0. autoBatch=1

Pipeline 1 still seems to receive data somehow, as I can see the queues for that pipeline sitting constantly at 100% in metrics.log, and we're also losing logs to our indexers.
And then this starts appearing in splunkd.log over and over until I restart Splunk:

07-01-2022 12:36:30.522 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:32.525 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:34.528 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?
07-01-2022 12:36:36.532 +0200 ERROR PipelineComponent [8012 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?

I've had this happen at least 3-4 times in the last week. The Heavy Forwarders are running Splunk 9.0.0, but I also had it happen once a few weeks ago when I was running 8.2.4. I didn't think too much of it at the time, as I was in the process of replacing them with new servers on 9.0.0 anyway. But the logs and the symptoms are exactly the same on 9.0.0 as they were that one time on 8.2.4.

I tried looking into the "Pipeline data does not have indexKey" part, but as this is coming from UFs with identical configs where all input stanzas have an index defined, I couldn't find anything wrong there.
That said, even if the index were missing for some log sources, the Heavy Forwarders should not just randomly shut down a pipeline anyway.

Has anyone seen this happen before, or have any suggestions for what I should try or check?
It really seems like a bug to me, but of course I don't know that.


Best Regards
Morten

1 Solution

hrawat_splunk
Splunk Employee

The root cause of the problem is SPL-210081. It is fixed on the httpout side and requires upgrading the forwarder.

SPL-224974 re-fixes SPL-210081, on the server side. This option is better because:

  1.  You don't have to upgrade tons of forwarders; just upgrade the HF (or whichever layer is receiving httpout).
  2.  It will also take care of all S2S clients (Splunk or third-party software).

SPL-229622 is a new fix, likely to make it into 9.0.3, that is another enhanced version of SPL-224974. But you should be fine with SPL-210081 (UF upgrade) or SPL-224974 (HF upgrade).


hrawat_splunk
Splunk Employee
Received shutdown control key.

This log is an indication of the splunkd service getting a shutdown signal.
But if you see no other processors shutting down around that time, that's definitely unexpected. If you have time, can you check whether any other UF was shutting down a few seconds before the above log was seen on the HF?

0 Karma

mortenklow
Explorer

Hi,

Thanks for your reply, and sorry for the delayed feedback, I've been on vacation.

Finding out whether any UF was shutting down at the same time or right before will be pretty much impossible for us, as we manage 25,000+ UFs through this solution and a lot of them are client computers, which I guess shut down and restart all the time depending on what users do.
We also had to stop ingesting internal logs from UFs at some point because of the huge amount of data this produces; we only turn on ingestion of internal logs for specific sets of UFs when we need to troubleshoot something.

I had to roll back this whole setup to our old setup before I went on vacation, to have a working solution while I was gone. So currently this solution is not in use. But this is something we really need to get working properly, so I have to look further into this.

0 Karma

hrawat_splunk
Splunk Employee

Do you mind sharing splunkd.log?

0 Karma

mortenklow
Explorer

Do you mean the whole file? I would then want to mask quite a lot in there before posting it here, if I'm not going through a support case.
I did post the "relevant" parts of splunkd.log previously here, but I get how it's better to see the whole picture.

Currently I've rolled back to our old solution, so these Heavy Forwarders are not really in use and haven't "crashed" in a while.

I guess you would need to look at splunkd.log from when one of these crashes actually occurs?

Do you think this issue could be related to the one in the other thread you posted a link to, which is fixed in 9.0.1? There are also several "indexer crashes" due to time format config issues that are fixed in 9.0.1; I just saw the release drop.
I might as well upgrade to 9.0.1 just because.

0 Karma

hrawat_splunk
Splunk Employee

>Do you think this issue could be related to the one in the other thread you posted a link to, which is fixed in 9.0.1?
No, it's unrelated to that. Chances are it's still an issue in 9.0.1.

0 Karma

hrawat_splunk
Splunk Employee

>I guess you would need to look at splunkd.log from when one of these crashes actually occurs?
Right.
But I would say open a Splunk support case, attach a diag, and let me know the case number.

0 Karma

mortenklow
Explorer

Thanks, I'll put some load on this setup again and open a support case when I have another crash. Will post the case number here.

0 Karma

mortenklow
Explorer

Issue reproduced.

Ticket has been submitted: 3035621

0 Karma

hrawat_splunk
Splunk Employee

We are already fixing this issue, where a remote node (UF/HF) can pass down the shutdown key to the next layer. The upcoming set of patches (9.0.2 etc.) has the fix. However, with this diag we can verify whether it's the same issue.

0 Karma

mortenklow
Explorer

That's great news, thank you!

I've gotten a bunch of questions from support after opening the case. Should I answer all of those right away, or should I wait for your confirmation of whether this is indeed the issue you're mentioning?

0 Karma

hrawat_splunk
Splunk Employee

I will answer depending on how the diag looks. The information we have seen so far, the "shutdown control" signal received by the HF, points to the known issue. But it's always good to see the complete raw splunkd.log for confirmation.

0 Karma

mortenklow
Explorer

Thanks.

Quick question: Is this a new bug in 9.x, or was this present in 8.x also?
I want to hold off on upgrading the rest of our environment (indexers etc.) if this bug was introduced in 9.x; we're still running 8.x on most Splunk components.

0 Karma

hrawat_splunk
Splunk Employee

You will have to check for any _reload in splunkd_access.log just before

07-01-2022 12:36:18.966

Without a reload or restart, it's not possible for the thread to shut down.

Check the logs before the "indexKey not found" log.
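
(For example, a quick SPL sketch for that check, assuming the HF's own internal logs are searchable; the host and time window are placeholders:)

index=_internal host=<your_hf> source=*splunkd_access.log* "_reload"
    earliest="07/01/2022:12:26:00" latest="07/01/2022:12:37:00"
| table _time _raw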

0 Karma

mortenklow
Explorer

I had this happen again yesterday, and there is no mention of _reload anywhere in splunkd_access.log before the timestamp where one of the pipelines shut down.
This time the "indexKey not found" warning wasn't at the same timestamp as the shutdown; it was logged about 20 seconds before. But then again, this warning is also logged every now and then without any pipelines shutting down, so it might be two different issues?

The shutdown of one pipeline started at 14:21:19.265. I'm not pasting the whole thing here, as it's exactly the same log events as before. This is how it started; X.X.X.X would be one of my indexers, and I now have four pipelines, which is why you see four of those connections (I'm experimenting with different settings):

07-12-2022 14:20:48.157 +0200 WARN  TcpOutputProc [1744 indexerPipe_1] - Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] = \n[_meta] = punct::\n[_stmid] = Hb4RKZfh8VK1AFB.C\n[MetaData:Source] = source::WinEventLog\n[MetaData:Host] = host::HOSTNAME-1\n[MetaData:Sourcetype] = sourcetype::WinEventLog\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::WinEventLog|host::HOSTNAME-1|WinEventLog|\n
07-12-2022 14:21:02.533 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:228] [get_server_roles] [8064] Fetched server roles, roles=['deployment_client', 'search_peer', 'kv_store']
07-12-2022 14:21:02.533 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:256] [get_cluster_mode] [8064] Fetched cluster mode, mode=disabled
07-12-2022 14:21:02.533 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:30] [should_run] [8064] should run test, sh=False
07-12-2022 14:21:07.750 +0200 INFO  TailReader [6328 tailreader1] - Batch input finished reading file='D:\Splunk\var\spool\splunk\tracker.log'
07-12-2022 14:21:15.004 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Found currently active indexer. Connected to idx=X.X.X.X:9998:0, reuse=1.
07-12-2022 14:21:15.074 +0200 INFO  AutoLoadBalancedConnectionStrategy [7316 TcpOutEloop] - Found currently active indexer. Connected to idx=X.X.X.X:9998:0, reuse=1.
07-12-2022 14:21:16.073 +0200 INFO  AutoLoadBalancedConnectionStrategy [5668 TcpOutEloop] - Found currently active indexer. Connected to idx=X.X.X.X:9998:0, reuse=1.
07-12-2022 14:21:16.198 +0200 INFO  AutoLoadBalancedConnectionStrategy [4084 TcpOutEloop] - Found currently active indexer. Connected to idx=X.X.X.X:9998:0, reuse=1.
07-12-2022 14:21:17.828 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:228] [get_server_roles] [4608] Fetched server roles, roles=['deployment_client', 'search_peer', 'kv_store']
07-12-2022 14:21:17.828 +0200 INFO  ExecProcessor [2208 ExecProcessorSchedulerThread] - message from "D:\Splunk\bin\Python3.exe D:\Splunk\etc\apps\splunk_assist\bin\instance_id_modular_input.py" [assist::instance_id_modular_input.py:256] [get_cluster_mode] [4608] Fetched cluster mode, mode=disabled
07-12-2022 14:21:19.264 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-12-2022 14:21:19.264 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-12-2022 14:21:19.264 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Shutting down auto load balanced connection strategy
07-12-2022 14:21:19.265 +0200 INFO  AutoLoadBalancedConnectionStrategy [5784 TcpOutEloop] - Auto load balanced connection strategy shutdown finished
07-12-2022 14:21:19.265 +0200 INFO  TcpOutputProc [6160 indexerPipe_0] - Received shutdown control key.
07-12-2022 14:21:19.265 +0200 INFO  IndexProcessor [6160 indexerPipe_0] - shutting down: start
.....
Rest of the shutdown of pipeline 0, identical to what I posted before

Here is what's in splunkd_access.log around the same time. I have masked a few things; 1.2.3.4 would be my monitoring console, for instance:

127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:02.542 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:02.569 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.739 +0200] "GET /services/properties/telemetry/general/sendSupportUsage HTTP/1.0" 200 5 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.751 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.763 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.787 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.799 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.809 +0200] "GET /services/properties/telemetry/general/sendSupportUsage HTTP/1.0" 200 5 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.821 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:17.833 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.079 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.091 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.103 +0200] "GET //servicesNS/nobody/splunk_secure_gateway/storage/collections/data/meta/soc2_opt_in HTTP/1.0" 404 140 "-" "Python-httplib2/0.13.1 (gzip)" - - - 2ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.116 +0200] "GET //services/server/info?output_mode=json HTTP/1.0" 200 2074 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:18.912 +0200] "GET /services/server/info HTTP/1.1" 200 5935 "-" "curl" - - - 1ms
1.2.3.4 - - [12/Jul/2022:14:21:23.318 +0200] "POST /services/admin/auth-tokens HTTP/1.1" 201 871 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
1.2.3.4 - splunk-system-user [12/Jul/2022:14:21:23.323 +0200] "GET /services/server/info HTTP/1.1" 200 1477 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
1.2.3.4 - - [12/Jul/2022:14:21:23.344 +0200] "POST /services/admin/auth-tokens HTTP/1.1" 201 871 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 2ms
1.2.3.4 - splunk-system-user [12/Jul/2022:14:21:23.355 +0200] "GET /services/admin/bundles/SplunkMonitoringConsole?count=-1 HTTP/1.1" 200 1068 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
1.2.3.4 - splunk-system-user [12/Jul/2022:14:21:23.363 +0200] "GET /services/data/indexes?mode=minimum&count=0&datatype=all HTTP/1.1" 200 1351 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
1.2.3.4 - splunk-system-user [12/Jul/2022:14:21:23.370 +0200] "GET /services/messages?count=1000&search=name%21%3Dremote:* HTTP/1.1" 200 493 "-" "Splunk/9.0.0 (Windows Server 10 Standard Edition; arch=x64)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.501 +0200] "GET /services/properties/telemetry/general/sendSupportUsage HTTP/1.0" 200 5 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.513 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.539 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.544 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.565 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.600 +0200] "GET /services/properties/telemetry/general/sendSupportUsage HTTP/1.0" 200 5 "-" "Python-httplib2/0.13.1 (gzip)" - - - 0ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.611 +0200] "GET //services/server/roles?output_mode=json HTTP/1.0" 200 787 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms
127.0.0.1 - splunk-system-user [12/Jul/2022:14:21:32.633 +0200] "GET //services/cluster/config?output_mode=json HTTP/1.0" 200 2610 "-" "Python-httplib2/0.13.1 (gzip)" - - - 1ms

I went over several of these shutdowns, and there is no mention of _reload in splunkd_access.log around or before any of those occurrences.
Even if there were, it should not just reload itself and/or shut down, but I guess it would have been a clue as to what is going on.

0 Karma

mortenklow
Explorer

I have two Heavy Forwarders running behind a load balancer, and I currently have four parallel pipelines on each.
I just now had three pipelines shut down on one of them and two pipelines shut down on the other, within minutes of each other, right as people start logging on for work. So it might be load dependent, or there's just a higher chance of this happening with more logs flowing through, I guess.

For now I'm going to move the UFs back to the old solution, where they use tcpout directly to the indexers. Not ideal for our needs, but that will have to work for now.

I should probably open a support ticket about this (after my four weeks of vacation, which start in a few days), unless you or anyone else has any more ideas about what to test or check here.

0 Karma

hrawat_splunk
Splunk Employee

OK, we found a known issue that is only applicable to the following workflow.

>Data flow is as follows:
>UFs sending logs with [httpout] ->

With the new httpout, when the local UF is shutting down it can pass the shutdown signal down to the next layer (HF/IDX), causing the pipeline that received it to shut down.

It has been fixed by SPL-210081, backported to 8.1.8/8.2.5.

0 Karma

mortenklow
Explorer

Thanks for the feedback!

It's a bit strange though, since we were told by support on November 4th in our support case that it was fixed by SPL-224974, which is a fix for a vulnerability listed here: https://www.splunk.com/en_us/product-security/announcements/svd-2022-1112.html

We were also told that the versions that include the fix are 9.0.2, 8.1.12, and 8.2.9.

Maybe it is two different bugs?


Either way, I haven't been able to test this with any of the new versions yet, and I probably won't until after the new year.

0 Karma

hrawat_splunk
Splunk Employee

The root cause of the problem is SPL-210081. It is fixed on the httpout side and requires upgrading the forwarder.

SPL-224974 re-fixes SPL-210081, on the server side. This option is better because:

  1.  You don't have to upgrade tons of forwarders; just upgrade the HF (or whichever layer is receiving httpout).
  2.  It will also take care of all S2S clients (Splunk or third-party software).

SPL-229622 is a new fix, likely to make it into 9.0.3, that is another enhanced version of SPL-224974. But you should be fine with SPL-210081 (UF upgrade) or SPL-224974 (HF upgrade).

mortenklow
Explorer

Thank you for the clarification 🙂

0 Karma

hrawat_splunk
Splunk Employee

Whenever you see something like

 Pipeline data does not have indexKey. [_path] = C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe\n[_raw] 


The log contains the executable/script name; it simply means that the script/exe named in the log crashed.
As of now this is a known issue for the exec in question:
https://community.splunk.com/t5/Getting-Data-In/Why-is-splunk-winevtlog-exe-crash-low-thruput-high-c...

Regarding the blocked queues, did you restart Splunk?

request state change from=SHUTDOWN_IN_PROGRESS to=SHUTDOWN_COMPLETE

Or did TcpOut reload itself?
One of the reasons for a TcpOut reload is outputs.conf getting modified, either by the DS or by a call to the tcpout REST endpoint.

Did you see any log "WARN DC:DeploymentClient - Restarting Splunkd"?
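
(A minimal SPL sketch for that check, with the host as a placeholder:)

index=_internal host=<your_hf> source=*splunkd.log* log_level=WARN
    "DC:DeploymentClient" "Restarting Splunkd"
| table _time component _raw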
