Splunk Search

Long running searches

sylim_splunk
Splunk Employee

On all Search Head cluster members running version 8.0.2, we observe CPU utilization growing every day. After roughly two days the CPU load graph looks like a steady climb.

Our analysis found that several queries are "zombied" and Splunk no longer appears to control them.
These processes run endlessly at the operating-system level, consuming more and more CPU over time.

In the UI there is a message saying "Search auto-canceled".


At the end of search.log for such a process we always see:

09-28-2020 14:52:57.907 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:57.907 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:58.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:58.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:59.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
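To spot searches stuck in this cancel loop, one option is to count the repeating `action=CANCEL` lines in each dispatched search's search.log. The sketch below is an assumption-laden helper, not a Splunk tool: the dispatch directory path (`$SPLUNK_HOME/var/run/splunk/dispatch` on a default install) and the threshold of 100 repeats are both guesses you should adjust for your environment.

```shell
# Hedged sketch: flag dispatched searches whose search.log shows many repeated
# CANCEL lines, matching the zombie pattern above. The default dispatch path
# and the threshold are assumptions; adjust for your install.
find_stuck_searches() {
  dispatch_dir="${1:-/opt/splunk/var/run/splunk/dispatch}"
  threshold="${2:-100}"
  for log in "$dispatch_dir"/*/search.log; do
    [ -f "$log" ] || continue
    # Count how many times the cancel action was logged for this search.
    n=$(grep -c 'action=CANCEL' "$log")
    [ "$n" -ge "$threshold" ] && echo "$n $log"
  done
  return 0
}
```

Any search it prints is worth cross-checking against long-running splunkd search processes at the OS level.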

 

Please help.

1 Solution

sylim_splunk
Splunk Employee

There's a known issue in that version: a deadlock occurs and causes exactly these symptoms, long-running searches with the never-ending messages you found:

09-28-2020 14:52:53.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:54.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL

This has been fixed in 8.0.2.1 and 8.0.3. Until you upgrade to a fixed version, you can also apply the workaround below, which may have a minor impact on search performance.

** Workaround: add the following to limits.conf on all search heads:
[search]
remote_timeline = 0

** fixed versions: 8.0.2.1 and 8.0.3+
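As a rough sketch of rolling out the workaround, the helper below appends the stanza to a search head's local limits.conf. The default path (`/opt/splunk/etc/system/local/limits.conf`) is an assumption for a default install, and it assumes no existing `[search]` stanza in that file; a Splunk restart is needed afterwards, and `splunk btool limits list search` can confirm the effective setting.

```shell
# Hedged sketch: append the workaround stanza to a local limits.conf.
# Path is an assumption; assumes no existing [search] stanza in this file.
apply_workaround() {
  conf="${1:-/opt/splunk/etc/system/local/limits.conf}"
  printf '[search]\nremote_timeline = 0\n' >> "$conf"
}
```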


esalesap
Path Finder

It's happening to me on version 8.0.3 right now.
