
How to delete 'DONE' jobs in a Search Head Cluster

season88481
Contributor

Hi guys,

Is there a way to delete a DONE or running job in a Search Head Cluster?

Currently some of my users are constantly hitting their disk space usage limit. I tried to delete their jobs (or let them delete their own jobs), but every time I hit the 'Delete' button on the 'Job Manager' page, nothing actually happens. I used the search query below to check whether the disk space is actually being cleaned up:


| rest splunk_server=local /services/search/jobs
| eval diskUsageMB=diskUsage/1024/1024
| rename eai:acl.owner AS owner, optimizedSearch AS searchQuery
| stats sum(diskUsageMB) AS diskUsageMB by sid owner searchQuery
| table owner searchQuery diskUsageMB
| search owner = xxx
| addcoltotals labelfield=owner

This search confirms that the jobs are still counting against the disk space quota even after 'Delete'.
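For reference, a single job can also be deleted directly through the REST API instead of the Job Manager UI. This is only a sketch: the host, credentials, and sid below are placeholders, and in a SHC the request would need to reach the member that actually owns the job artifact.

# Delete one search job by its sid (all values below are placeholders)
curl -k -u admin:changeme --request DELETE https://sh1.example.com:8089/services/search/jobs/<sid>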

Any help will be much appreciated.

Cheers,

naidusadanala
Communicator

Usually the search jobs will expire automatically after 10 minutes.

If they are running lots of searches, you need to increase

srchDiskQuota, the maximum disk space in MB allowed for storing search results, in authorize.conf. Increase this one if they are planning to retrieve a lot of results.
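As a sketch, a role-level override in authorize.conf could look something like this (the role name and value are placeholders, not recommendations):

# authorize.conf (example values only)
[role_power]
# Maximum disk space in MB this role's search jobs may use for results
srchDiskQuota = 500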


season88481
Contributor

Hi naidusadanala,

Thanks for your response. I know how to increase the disk quota for users, but this question is actually asking how to delete a job in a Search Head Cluster environment.

On a single search head, when users delete their jobs, the jobs disappear immediately. But this doesn't work in a SHC.

Cheers,
Season


risgupta
Path Finder

You might need to check the dispatch directory on your Splunk servers. You can manually remove old job directories there to free up the disk space.
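For reference, a minimal sketch of that manual approach on a search head (the sid is a placeholder for the job's dispatch directory name):

# Job artifacts live under the dispatch directory on each search head
ls -lt $SPLUNK_HOME/var/run/splunk/dispatch
# Remove one finished job's artifact directory by its sid (placeholder)
rm -rf $SPLUNK_HOME/var/run/splunk/dispatch/<sid>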

season88481
Contributor

Hi risgupta,

Thanks for your response. Normal users should have the ability to delete their own jobs. They have no access to the Splunk servers, so they can't manually remove the dispatch files.

And this manual approach will not scale in a larger environment; the Splunk admins will not have the bandwidth to remove all 'DONE' jobs in a SHC.

Cheers,
Season


risgupta
Path Finder

For that, there is a specific command:
./splunk clean dispatch
It will clean up the jobs for you, and you can apply a cron schedule to run this command every 10 to 15 minutes (whatever is best for you).
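As a rough sketch, a crontab entry along these lines could automate that (the path, interval, and log file are placeholders; verify first that the command runs non-interactively on your version):

# Clean the dispatch directory every 15 minutes (example schedule only)
*/15 * * * * /opt/splunk/bin/splunk clean dispatch >> /var/log/splunk_dispatch_clean.log 2>&1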

Along with that, you can set the dispatch TTL limit in limits.conf.
For more details you can check this:
https://www.splunk.com/blog/2012/09/12/how-long-does-my-search-live-default-search-ttl.html
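As an illustration, the job TTL can be tuned under the [search] stanza in limits.conf (the 600 seconds below is only an example value, not a recommendation):

# limits.conf (example only)
[search]
# How long a completed search job's artifacts are kept, in seconds
ttl = 600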

Let me know how it goes.
