Long running searches

sylim_splunk
Splunk Employee

On all search head cluster members running version 8.0.2, we observe CPU utilization growing every day. After roughly two days, the CPU load graph looks like a steady climb.

Our analysis found that several searches are "zombied" and Splunk no longer appears to control them.
These processes run endlessly at the operating system level, consuming more and more CPU over time.
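
For reference, a command like the following can be used to spot such processes at the OS level on a Linux search head (a rough sketch only; it assumes the search processes show "search --id=" in their command line, which may vary by platform):

ps -eo pid,etime,pcpu,args | grep "[s]earch --id=" | sort -k3 -rn | head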

In the UI there is a message that the "Search auto-canceled".


At the end of search.log for such a process we always see:

09-28-2020 14:52:57.907 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:57.907 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:58.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:58.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:59.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL

 

Please help.

1 Solution

sylim_splunk
Splunk Employee

There is a known issue in that version: a deadlock can occur that causes exactly this symptom - long-running searches that keep logging the never-ending cancel messages you found:

09-28-2020 14:52:53.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:54.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
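
If you need to identify which searches are stuck in this cancel loop, one option (assuming the default dispatch location) is to grep the dispatch directories for the repeating message:

grep -l "applied action=CANCEL while status=3" $SPLUNK_HOME/var/run/splunk/dispatch/*/search.log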

This has been fixed in 8.0.2.1 and 8.0.3. Until you upgrade to a fixed version, you can apply the following workaround, which may have a minor impact on search performance.

** Work-around (apply in limits.conf on all search heads; see the verification check below):
[search]
remote_timeline = 0

** Fixed versions: 8.0.2.1 and 8.0.3+
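
To confirm the setting is in effect on each search head (after restarting splunkd so the change is picked up), a quick check like the following should show the value, assuming a default $SPLUNK_HOME:

$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep remote_timeline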


esalesap
Path Finder

It's happening to me on version 8.0.3 right now.
