Splunk Search

Long-running searches

sylim_splunk
Splunk Employee

On all search head cluster members running version 8.0.2, we observe CPU utilization growing every day; after roughly two days the CPU load graph just keeps climbing.

After analysis we found that several queries are "zombied" and it looks like Splunk no longer controls them. These processes run endlessly at the operating system level, consuming more and more CPU over time.
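
For reference, this is roughly how we spot those stuck search processes at the OS level (the exact command-line pattern of the splunkd search processes may vary by version, so treat this as a sketch):

# list splunkd search processes, longest-running first
ps -eo pid,etime,pcpu,args --sort=-etime | grep '[s]earch --id=' | head -20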

In the UI there is a message saying "Search auto-canceled".


At the end of search.log for such a process we always see:

09-28-2020 14:52:57.907 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:57.907 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:58.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
09-28-2020 14:52:58.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:59.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL

 

Please help.

1 Solution

sylim_splunk
Splunk Employee

There is a known issue in this version: a deadlock can occur that causes exactly these symptoms, long-running searches that never finish and keep logging the cancel messages you found.

09-28-2020 14:52:53.906 INFO DispatchExecutor - User applied action=CANCEL while status=3
09-28-2020 14:52:54.906 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL

This has been fixed in 8.0.2.1 and 8.0.3. Until you upgrade to a fixed version, you can apply the work-around below, which may have a minor search performance impact.

** work-around
Set the following in limits.conf on all search heads:
[search]
remote_timeline = 0

** fixed versions: 8.0.2.1 and 8.0.3+
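
If it helps, here is roughly how the work-around can be checked and, on a search head cluster, pushed from the deployer (the app name and paths are only examples):

# verify the effective setting on each search head
$SPLUNK_HOME/bin/splunk btool limits list search | grep remote_timeline

# on an SHC, place the setting in an app on the deployer, e.g.
#   $SPLUNK_HOME/etc/shcluster/apps/limits_workaround/local/limits.conf
# then push the bundle to the members
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<sh_member>:8089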



esalesap
Path Finder

It's happening to me on version 8.0.3 right now.
