Splunk Search

After upgrading to 5.x, some of my existing searches are taking longer to return results, and not as many scheduled searches are running on time. Could the fetch_remote_search_log setting be a factor?

Ellen
Splunk Employee

After upgrading to 5.x, I noticed that some of my searches are taking longer to return results than before. Search performance has slowed down, and not as many scheduled searches are running on time. Could the parameter fetch_remote_search_log be a factor? What is it?

1 Solution

bmignosa_splunk
Splunk Employee

The limits.conf parameter fetch_remote_search_log can contribute to search performance problems in large deployments, especially those with a high volume of searches and heavy use of summary indexing.

This parameter is enabled by default in 5.x; however, btool does not report it, because several 5.x versions ship without a default/limits.conf entry for it. There is, however, an entry in the spec file at etc/system/README/limits.conf.spec:

limits.conf.spec
fetch_remote_search_log =
* If true, will attempt to fetch the search.log from every search peer at the end of the search and store in the job dispatch dir on the search head.
* Defaults to true
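
As a quick check, you can ask btool on the search head for the effective value; on 5.x this typically returns nothing unless the setting has been explicitly overridden in a local limits.conf:

$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep fetch_remote_search_log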

This setting is intended to make search logs available to the user on the search head, without having to go to each search peer and retrieve the search artifact's search.log directly.
This functionality is not needed for scheduled searches.
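
To get a rough sense of how much space the fetched logs and other job artifacts consume on the search head, a simple disk-usage check of the dispatch directory can help. The path below is the default dispatch location; the exact file names of the fetched peer logs vary by version, so this only shows per-job totals:

du -sh $SPLUNK_HOME/var/run/splunk/dispatch/* 2>/dev/null | sort -h | tail -20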

Several internal bug reports have been filed to disable this setting by default and to allow finer-grained control over the functionality, so that it can be enabled for specific types of searches rather than all of them.

SPL-76948: disable search.log collection
SPL-76948: Allow a more fine-grained control of remote search.log collection

Below are a few searches to help identify whether this setting is a possible factor:

Compare scheduler run times before and after the upgrade:
index=_internal SavedSplunker | timechart avg(run_time) median(run_time)
index=_internal source=*scheduler.log | timechart span=1h count, sum(eval(run_time/60)) as run_time

The following graphs scheduler lag; a growing trend indicates scheduled searches falling behind:
index=_internal SavedSplunker | eval lag=dispatch_time - scheduled_time | timechart avg(lag)
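
If scheduled searches are falling behind, it can also help to count how many the scheduler is skipping per hour. This assumes the standard status field in scheduler.log; adjust the field values for your version:

index=_internal source=*scheduler.log status=skipped | timechart span=1h count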

To address these symptoms, add the following configuration to the search heads, e.g. in $SPLUNK_HOME/etc/system/local/limits.conf:

[search]
fetch_remote_search_log = false
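
After adding the stanza, a restart of splunkd is generally required for limits.conf changes to take effect; the btool check shown earlier can then confirm that the override is in place:

$SPLUNK_HOME/bin/splunk restart
$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep fetch_remote_search_log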


hrawat_splunk
Splunk Employee

Do this as soon as possible: it saves network bandwidth, reduces search run time, and avoids unwanted I/O on the search head, which also counts toward the user's disk quota limit.

