
700+ running jobs created on Jan 1, 1970

sec_team_albara
New Member

Hello,
We have more than 700 jobs with status parsing on the indexer.
We are only able to delete these jobs after stopping the Splunk service on the search head, but they keep coming back after the Splunk service is started again on the SH.
We need your help.
Thanks in advance


codebuilder
SplunkTrust

Run this and examine the output.

| rest /services/search/jobs isSaved=1

My guess is that what you are seeing is data model/report acceleration jobs, summary indexing searches, or similar.
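
If it helps to see where the volume is coming from, something along these lines should break the jobs down by app, owner, and label (just a sketch - the eai:acl.* and label fields are what I'd expect the jobs endpoint to return, but check the raw | rest output on your version first):

| rest /services/search/jobs
| fillnull value="(blank)" eai:acl.owner
| stats count by eai:acl.app eai:acl.owner label

The fillnull is there so that jobs with a blank owner are not silently dropped by the stats grouping.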

----
An upvote would be appreciated and Accept Solution if it helps!

nickhills
Ultra Champion

Who owns the jobs - are they all the same user?

If my comment helps, please give it a thumbs up!

sec_team_albara
New Member

The owner of the jobs is set to blank: it is not specified.


nickhills
Ultra Champion

I should also suggest opening a ticket with Splunk Support so they can take you through removing the jobs manually.
That may be the better option if this is a production instance with important jobs.

If my comment helps, please give it a thumbs up!

sec_team_albara
New Member

I have already tried stopping the Splunk service on the SH and manually deleting the folders under $SPLUNK_HOME/var/run/splunk/dispatch. The jobs kept coming back.


nickhills
Ultra Champion

In that case you have something scheduling them.
Find one of the jobs in the inspector, grab something unique(ish) or rare from the search that is running, then grep your $SPLUNK_HOME/etc folder for user/application searches that contain that search term/phrase.
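
If running grep is awkward, a REST search over the saved searches can give a similar first pass (a sketch - YOUR_UNIQUE_PHRASE is a placeholder for whatever rare term you pulled from the job, and this only covers saved searches, so the grep remains the more thorough check):

| rest /servicesNS/-/-/saved/searches
| search search="*YOUR_UNIQUE_PHRASE*"
| table title eai:acl.app eai:acl.owner is_scheduled cron_schedule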

If my comment helps, please give it a thumbs up!

nickhills
Ultra Champion

That would suggest you have some malformed jobs with invalid start times.

What is probably happening is that when you restart, the jobs are still in the dispatch directory, so they get resumed.

You could try to delete them manually...
If you understand the risks and the impact of deleting jobs, give this a try. Be careful if your currently executing jobs are important to you - or to your users.

The basic steps to remove these jobs:
Stop Splunk, delete the jobs, restart Splunk - then watch to see if they come back.

The jobs you are looking for will be in $SPLUNK_HOME/var/run/splunk/dispatch. Take a look in that folder and see if you can identify just the affected jobs by their name or metadata - compare this with the 700 jobs in the Job Inspector if you can.

If there is commonality in the names or format, then those are your 'bad jobs'.
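
Since the dispatch directories are named after the search ID, a quick way to line the two up is to list the SIDs of the jobs carrying the bogus 1970 creation time and compare them against the folder names (a sketch - it assumes the published field from the jobs endpoint is where the Jan 1, 1970 timestamp shows up on your version):

| rest /services/search/jobs
| where like(published, "1970%")
| table sid label eai:acl.app eai:acl.owner dispatchState published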

Stop Splunk on your SH
Selectively delete the job folders for your 700 bad jobs - bearing in mind that their results (which are probably not of much concern) will be lost.
Start Splunk

Check to see if any of them come back.
Take care with the delete!

If my comment helps, please give it a thumbs up!