We have more than 700 jobs with status parsing on the indexer.
We are able to delete these jobs only after stopping the splunk service on the SearchHead, but they keep coming back after starting the splunk service on the SH.
We need your help.
Thanks in advance
Run this and examine the output.
| rest /services/search/jobs isSaved=1
My guess is that what you are seeing are datamodel/report acceleration jobs, summary indexing searches, etc.
I should also suggest opening a ticket with Splunk support to walk you through removing them manually.
That may be the safer option if this is a Production instance with important jobs.
In that case you have something scheduling them.
Find one of the jobs in the job inspector, grab something unique(ish) or rare from the search it is running, then grep your $SPLUNK_HOME/etc folder for user/application searches that contain that search term/phrase.
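As a rough sketch of that grep step (the directory tree and the search phrase below are made-up stand-ins; substitute a rare term from your own job inspector output and point it at your real $SPLUNK_HOME/etc):

```shell
# Build a throwaway tree standing in for $SPLUNK_HOME/etc, with a fake
# scheduled search containing the placeholder phrase we will grep for.
SPLUNK_HOME="$(mktemp -d)"
mkdir -p "$SPLUNK_HOME/etc/apps/search/local"
cat > "$SPLUNK_HOME/etc/apps/search/local/savedsearches.conf" <<'EOF'
[suspect scheduled search]
search = index=foo sourcetype=bar | stats count
EOF

# List every .conf file under etc that contains the rare phrase --
# this is the file (and stanza) that is scheduling your jobs.
matches=$(grep -rIl 'index=foo sourcetype=bar' "$SPLUNK_HOME/etc")
echo "$matches"
```

On a real search head you would run the same `grep -rIl '<rare phrase>' "$SPLUNK_HOME/etc"` and then open the matching savedsearches.conf to see which app and user own the schedule.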
That would suggest you have some malformed jobs with invalid start times.
What is probably happening is that, since the jobs are still in the dispatch directory when you restart, they get resumed.
You could try to manually delete them...
If you understand the risks and the impact of deleting jobs, you can give this a try. Be careful if your currently executing jobs are important to you - or your users.
The basic steps to remove these jobs:
Stop splunk, delete jobs, restart Splunk - watch to see if they come back.
The jobs you are looking for will be in $SPLUNK_HOME/var/run/splunk/dispatch. Take a look in that folder and see if you can identify just the affected jobs by their names or metadata - compare this with the 700 jobs in the job inspector if you can.
If there is commonality in the names or format, then those are your 'bad jobs'.
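To illustrate picking out the suspect folders (the dispatch path and job names below are fabricated; scheduler-spawned jobs usually get directory names starting with `scheduler__`, but verify the pattern against your own dispatch folder before relying on it):

```shell
# Throwaway directory standing in for $SPLUNK_HOME/var/run/splunk/dispatch.
DISPATCH="$(mktemp -d)"
mkdir -p "$DISPATCH/scheduler__admin__search__badjob_at_1700000000_1" \
         "$DISPATCH/1700000000.42"   # ad-hoc user search - leave alone

# List only the directories matching the suspected 'bad job' pattern:
bad_jobs=$(find "$DISPATCH" -mindepth 1 -maxdepth 1 -type d -name 'scheduler__*')
echo "$bad_jobs"
```

Eyeball this list against the job inspector first; nothing is deleted at this stage.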
Stop Splunk on your SH
Selectively delete the job folders for your 700 bad jobs - bearing in mind that their results (which are probably not of much concern) will be lost.
Check to see if any of them come back.
Take care with the delete!
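A minimal sketch of the selective delete, again using a throwaway directory in place of the real dispatch folder (the `scheduler__*` pattern is an assumption - match it to whatever commonality you actually found, and only run this with Splunk stopped):

```shell
# Stand-in for $SPLUNK_HOME/var/run/splunk/dispatch with one bad and
# one good job folder.
DISPATCH="$(mktemp -d)"
mkdir -p "$DISPATCH/scheduler__admin__search__badjob_at_1700000000_1" \
         "$DISPATCH/1700000000.7"

# Dry run first: print what would be removed and check it carefully.
find "$DISPATCH" -mindepth 1 -maxdepth 1 -type d -name 'scheduler__*' -print

# Then delete only those folders (their results are lost, as noted above).
find "$DISPATCH" -mindepth 1 -maxdepth 1 -type d -name 'scheduler__*' \
     -exec rm -rf {} +
```

The `-mindepth 1 -maxdepth 1` guard keeps `find` from matching the dispatch directory itself or descending into job folders, so only whole top-level job directories are removed.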