Deployment Architecture

Orphaned Scheduled Search (cannot delete)


Hi, I'm in a Search Head Cluster environment, and while looking at our scheduling load I found references to schedule IDs (seemingly from the Unix/Linux app) that don't seem to exist.

The report below displays upcoming scheduled searches based on their next execution time.

| rest /servicesNS/-/-/saved/searches
| search disabled=0 is_scheduled=1 next_scheduled_time!=""
| dedup title,next_scheduled_time
| table title cron_schedule next_scheduled_time id | sort next_scheduled_time

This led me to some saved searches that run on cron schedules but can't be found via .conf files or the REST API. In particular, there are two searches from the SA-nix "app" that I can't seem to find.

I've tried "grep -R <search name> /opt/splunk" on both the deployer and the cluster member nodes. I've also looked all over the API and can't find a reference. The exact IDs are below.

They can easily be found by adding id="*" to the search above.

Has anyone experienced these "orphaned" searches before? As you can guess, I used to have SA-Unix (part of this app), but it was removed (maybe improperly) when we migrated from a single host doing everything to a true multi-host cluster.


This is what I did to solve the issue: I reassigned the searches to an existing valid Splunk user and then deleted them via the GUI.

You'll need the URL-encoded name of the saved search and the management URL, which you luckily already have. At the end of the curl command you'll need the username of the valid user you're going to run the command as (for me, my admin user).
This is the generic version:

curl -k https://[HOSTNAME]:8089/servicesNS/nobody/search/saved/searches/[URL ENCODED SEARCH NAME]/acl -d owner=[NEW OWNER USERNAME] -d sharing=app --user [VALID USER to "sign in as"]

You'll probably run something like this (assuming the search lives in the search app and you're reassigning it to your admin user):

curl -k https://[HOSTNAME]:8089/servicesNS/nobody/search/saved/searches/[URL ENCODED SEARCH NAME]/acl -d owner=admin -d sharing=app --user admin

See if that fixes it.

Path Finder

Maybe I shouldn't have used "orphaned" in the title. Really, the problem was that I couldn't find the source app for these, not that I couldn't delete them (though trust me, I've been through that problem before... stupid metadata files). I'm 99% sure these are searches running on the indexers themselves, due to the SA-nix app being installed there. I don't own the index tier in my environment, so I was confused as to why my search head would have these searches, but I think it's just the nature of the SH -> IDX connectivity that causes them to display.
