Splunk Search

Long running searches

sylim_splunk
Splunk Employee

On all search head cluster members running version 8.0.2, we are observing that CPU utilization grows every day. After roughly two days the CPU load graph looks like it is steadily climbing.

After analysis we found that several queries are "zombied" and it looks like Splunk no longer controls them.
These processes keep running at the operating system level indefinitely, consuming more and more CPU over time.
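To see the offending processes at the OS level we run something like the command below on the search heads (Linux; the exact process arguments may differ by Splunk version). Long elapsed times combined with steadily growing %CPU point at the zombied searches.

# splunkd search processes sorted by %CPU (highest first)
ps -eo pid,etime,pcpu,args | grep '[s]earch --id' | sort -k3 -rn | head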

In the UI there is a message saying "Search auto-canceled".


At the end of search.log for such a process we always see:

09-28-2020 14:52:57.907 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:57.907 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:58.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:58.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:59.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL

 

Please help.

1 Solution

sylim_splunk
Splunk Employee

There is a known issue in this version: a deadlock can occur that causes exactly these symptoms, long running searches with the never-ending CANCEL messages you found.

09-28-2020 14:52:53.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:54.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL

This has been fixed in 8.0.2.1 and 8.0.3. Until you can upgrade to a fixed version, you can also apply the work-around below, which may have a minor impact on search performance.

** work-around: set the following in limits.conf on all search heads (SH).
[search]
remote_timeline = 0

** fixed versions: 8.0.2.1 and 8.0.3+
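To confirm the setting is actually picked up on each search head, a btool check along these lines should work (assuming the usual $SPLUNK_HOME layout):

# show the effective remote_timeline value and which .conf file it comes from
$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep remote_timeline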


esalesap
Path Finder

It's happening to me on version 8.0.3 right now.
