Why are Splunk search jobs stuck at the "finalizing job" status for a few days?

sdubey_splunk
Splunk Employee

Symptoms:

  • It usually happens within a couple of hours after we manually delete the stuck search jobs

  • It only happens to particular searches

  • Once it happens, the affected search jobs stay in the "finalizing" status until we manually delete them (one way to list such jobs is sketched after this list)

  • The job status says 100% completed according to the job inspector (when viewed via "Inspect Job")
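
One way to see which jobs are currently stuck in this state is to query the search-jobs REST endpoint from SPL on the affected search head. This is only a sketch, not part of the original report or fix; the dispatchState filter and the fields shown come from the standard /services/search/jobs endpoint and may need adjusting for your version.

| rest /services/search/jobs splunk_server=local
| search dispatchState="FINALIZING"
| table sid label author dispatchState doneProgress runDuration updated
| sort - runDuration

Jobs that stay in this list with doneProgress at 1 for hours or days match the symptom described above.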


sdubey_splunk
Splunk Employee

Solution:
1. Bump up max_chunk_queue_size in limits.conf on the search heads.
This reduces the need to pause the search result collation queues, which makes hitting the issue less likely.

limits.conf: the parameters below were updated to fix the issue.

[search]
result_queue_max_size = 200000000
max_chunk_queue_size = 5000000
fetch_remote_search_log = false
remote_timeline_fetchall = false

Details about the above parameters:

result_queue_max_size =
* The maximum size, in MB, of search results kept from peers for processing
on the search head before throttling the rate at which data is accepted.
* The “results_queue_min_size” value takes precedence. The number of search
results chunks specified by “results_queue_min_size” will always be
retained in the queue even if the combined size in MB exceeds the
“result_queue_max_size” value.
* Default: 100

max_chunk_queue_size =
* The maximum size of the chunk queue.
* Default: 10000000

Updating the above parameters reduced the need to pause the search result collation queues, which makes hitting the issue less likely, and this fixed our issue.
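
As a follow-up check, it can help to confirm the values the search head is actually using after the change is deployed. The query below is only a sketch and assumes the configs/conf-limits REST endpoint is readable in your environment; running splunk btool limits list search --debug on the search head gives an equivalent answer from the CLI.

| rest /services/configs/conf-limits/search splunk_server=local
| table splunk_server result_queue_max_size max_chunk_queue_size fetch_remote_search_log remote_timeline_fetchall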

oren_s
Loves-to-Learn Lots

Are these changes still relevant as of 2025?

 


gjanders
SplunkTrust

Great question/answer, please accept your answer unless you're waiting for alternative answers!
