Reporting

Skipped Searches on SHC

JDukeSplunk
Builder

Where could I start digging to find out why my Search Head Cluster is skipping so many searches? I want to find out which ones are being skipped and why. Is there a search I can run that will show scheduled searches, and number of skipped?


1 Solution

burwell
SplunkTrust

On the deployer, you can schedule a job to detect skipped scheduled jobs.

`dmc_set_index_internal` search_group=dmc_group_search_head search_group=* sourcetype=scheduler (status="completed" OR status="skipped" OR status="deferred") | stats count(eval(status=="completed" OR status=="skipped")) AS total_exec, count(eval(status=="skipped")) AS skipped_exec by _time, host, app, savedsearch_name, user, savedsearch_id | where skipped_exec > 0

I run mine once an hour. This gives you the names of the skipped scheduled searches; the host you see reported is the cluster captain.

In the deployer, you can go to search -> distributed search instance -> <current search head captain>

There are two panels "Count of Skipped Scheduled Reports" and "Count of Skipped Reports by Name and Reason"

For me, the reason is "max concurrent limit reached".
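If you'd rather query the skip reasons directly than use the DMC panels, the scheduler events in _internal carry a reason field alongside status. A quick ad hoc search along these lines should work (adjust the time range to taste):

index=_internal sourcetype=scheduler status=skipped | stats count by reason, savedsearch_name | sort - count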

By the way, Splunk gives equal weighting to ad hoc searches versus scheduled searches. If you want to give a higher weighting to scheduled searches, so that users may have to wait a bit longer for ad hoc searches to run, you can adjust max_searches_perc in the [scheduler] stanza of limits.conf. By default it is 50%.
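For example, to let the scheduler use up to 75% of the concurrent search slots (75 is only an illustrative value; pick one that suits your load), the limits.conf stanza would look like:

[scheduler]
max_searches_perc = 75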

Read: http://docs.splunk.com/Documentation/Splunk/6.5.2/Report/Configurethepriorityofscheduledreports


bwgates
Explorer

I know this is an old post, but I had to reply based on something I found to be the cause of this issue for me.

I recently modified the search peers via the deployer for my SHC and later came to realize the search peer configuration didn't take effect on all the members. I'm running three members, and each of them had a different list of search peers, each some subset of what the full list should have been.

I just had to do a rolling restart on the SHC and everything went back to normal and search peer distribution was corrected. I'm not sure if it was an issue with the deployer not restarting Splunk on all the members or what, but that resolved my issue.
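In case it helps anyone else hitting the same thing: the rolling restart can be kicked off from the CLI on the captain (use whatever path your splunk binary lives at):

splunk rolling-restart shcluster-members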

You can look at the DMC view Search > Search Head Clustering > Search Head Clustering: Status and Configuration to verify.



JDukeSplunk
Builder

I ended up using this.

index=_internal sourcetype=scheduler status=skipped | stats count(savedsearch_name) as COUNT by savedsearch_name

somesoni2
Revered Legend

This should give you the list of skipped searches:

index=_internal sourcetype=scheduler status=skipped