Knowledge Management

During an indexer restart or indexer cluster rolling restart, TcpInputProcessor fails to drain its queue.

hrawat
Splunk Employee

During a graceful restart or stop of an indexer or heavy forwarder (anywhere splunktcp is configured), check the last entries written to metrics.log before Splunk finally stops.
If the splunktcpin queue (name=splunktcpin) shows the same value for current_size, largest_size, and smallest_size, while none of the downstream queues from parsingqueue through indexqueue are blocked, then TcpInputProcessor has failed to drain the splunktcpin queue even though parsingqueue and indexqueue are empty.

 

02-18-2024 00:54:28.370 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:54:28.370 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:54:28.368 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:54:28.368 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:53:57.364 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:57.364 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:57.362 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:57.362 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.372 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:26.372 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.370 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:26.370 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.397 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=30, smallest_size=0
02-18-2024 00:52:24.396 +0000 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=16, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
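
One way to spot this pattern after the fact is to search the indexer's own metrics.log in the _internal index. The search below is only a sketch built from the field names shown in the excerpt above; narrow the host and time range to the window just before the restart. It lists splunktcpin samples that were blocked and frozen at the same current/largest/smallest size, per ingestion pipeline:

index=_internal source=*metrics.log* group=queue name=splunktcpin blocked=true
| stats latest(_time) AS last_blocked latest(current_size) AS current_size latest(largest_size) AS largest_size latest(smallest_size) AS smallest_size BY host ingest_pipe
| where current_size=largest_size AND largest_size=smallest_size
| convert ctime(last_blocked)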

 

During a graceful shutdown, pipeline processors are expected to drain their queues.
This issue is fixed in Splunk Enterprise 9.2.1 and 9.1.4.


gjanders
SplunkTrust

This looks very useful. Is there a recommended way to set maxSendQSize?

Do I need to vary it depending on the per-pipeline throughput of the HF?

I'm assuming maxSendQSize would be an in-memory buffer/queue per pipeline, in addition to the overall maxQueueSize?

Finally, I'm assuming this would be useful when there is no load balancer in front of the indexers?
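
One input to that sizing decision is measuring what each pipeline on the HF actually carries. A search along these lines over the HF's metrics.log (using the standard group=thruput fields Splunk emits alongside the queue metrics; the host value is just a placeholder) charts per-pipeline throughput over time:

index=_internal source=*metrics.log* host=<your_hf> group=thruput name=thruput
| timechart span=5m avg(instantaneous_kbps) BY ingest_pipe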


tej57
Contributor

Thank you for the insights, @hrawat.

I believe this should be part of the Monitoring Console as well, to help identify this queue behavior.
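
In the meantime, a search along these lines over metrics.log (reusing the current_size_kb and max_size_kb fields from the excerpt above) approximates that view by charting how full the splunktcpin queue is per pipeline:

index=_internal source=*metrics.log* group=queue name=splunktcpin
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart span=1m max(fill_pct) BY ingest_pipe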

 

Thanks,
Tejas.
