Splunk Search

Long running searches

sylim_splunk
Splunk Employee

On all Search Head cluster members running version 8.0.2, we observe CPU utilization growing every day; after roughly two days the CPU load graph looks like a steady climb.

Our analysis found that several queries are "zombied" and Splunk appears to have lost control of them. These processes run endlessly at the operating-system level, consuming more and more CPU over time.

In the UI there is a message saying "Search auto-canceled".


At the end of search.log for such a process we always see:

09-28-2020 14:52:57.907 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:57.907 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:58.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:58.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:59.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
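For reference, a rough shell filter for spotting such processes from the OS side. This is a sketch, not an official diagnostic: the "splunkd search" process pattern and the one-day threshold (an etime containing "-") are assumptions; adjust them for your deployment.

```shell
#!/bin/sh
# Filter "pid etime pcpu args" lines for splunkd search processes whose
# elapsed time (etime) contains a "-", i.e. they have run for a day or more.
find_stuck_searches() {
    awk '/splunkd search/ && $2 ~ /-/ {print}'
}

# Typical usage on a search head:
ps -eo pid,etime,pcpu,args | find_stuck_searches
```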

 

Please help.

1 Solution

sylim_splunk
Splunk Employee

There is a known issue in that version: a deadlock occurs that causes exactly these symptoms, long-running searches with the never-ending cancel messages you found:

09-28-2020 14:52:53.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:54.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL

This has been fixed in 8.0.2.1 and 8.0.3. Until you upgrade to a fixed version, you can also apply the workaround below, which may have a minor search performance impact.

** Workaround: in limits.conf on all search heads:
[search]
remote_timeline = 0

** Fixed versions: 8.0.2.1 and 8.0.3+
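A minimal sketch of applying that stanza, assuming the standard $SPLUNK_HOME layout (the /opt/splunk default path is an assumption; run it on every search head cluster member and restart splunkd afterwards):

```shell
#!/bin/sh
# Append the workaround stanza to a limits.conf file, backing up any
# existing copy first. The target path is passed in explicitly.
apply_workaround() {
    limits_file="$1"
    [ -f "$limits_file" ] && cp -p "$limits_file" "$limits_file.bak"
    printf '\n[search]\nremote_timeline = 0\n' >> "$limits_file"
}

# e.g. on each search head (path is an assumption):
# apply_workaround /opt/splunk/etc/system/local/limits.conf
```

After applying it, `$SPLUNK_HOME/bin/splunk btool limits list search` can confirm the effective value before you restart.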


esalesap
Path Finder

It's happening to me on version 8.0.3 right now.
