Monitoring Splunk

Splunk dispatch issue: Why is the Monitoring Console not showing any searches?

mbasharat
Builder

Hi,

Started seeing this error today:

Dispatch Command: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch

Is cleanup needed in the above directory, and only for a few specific jobs? Right now I can't run any searches and the error keeps showing up. Even the Monitoring Console is not showing any searches. Does this mean there is no space available, or is something broken? This is on one search head only.

Thanks,
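
For anyone hitting the same message, a quick way to see how much the dispatch directory itself is consuming (the path below is the one from the error, i.e. the default /opt/splunk install location) is to run this on the search head's shell:

    # Free space on the volume that holds the dispatch directory
    df -h /opt/splunk/var/run/splunk/dispatch

    # Total size of the dispatch directory and the number of search artifacts in it
    du -sh /opt/splunk/var/run/splunk/dispatch
    ls /opt/splunk/var/run/splunk/dispatch | wc -l

If the dispatch directory turns out to be small, the space is being consumed elsewhere on the same volume.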

1 Solution

ranjitbrhm1
Communicator

This is a classic disk-full scenario. Log in to the Splunk server's console and run df -l to see exactly what is going on. If Splunk is what's filling up the space, one quick way to release some is to go to your indexes in Splunk and reduce the maximum size of the indexes you don't need, which forces Splunk to delete excess data from them. The long-term solution, of course, is to buy more storage and reassess your retention policy. Also, as mentioned above, freeing up more disk space may help, depending on how full your drive is.
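
For reference, the 5000 MB figure in the error is Splunk's minFreeSpace threshold from server.conf. Lowering it is only a stopgap (the check exists so searches don't fill the disk completely), but as a minimal sketch, assuming you edit $SPLUNK_HOME/etc/system/local/server.conf on the affected search head:

    [diskUsage]
    # Splunk stops dispatching new searches when free space on the volume
    # holding the dispatch directory drops below this many MB (default 5000)
    minFreeSpace = 3000

A splunkd restart is typically needed for server.conf changes to take effect, and actually freeing space is the real fix.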

mbasharat
Builder

df -l shows me that /opt is 100% and /var is 81%.

Which manual cleanups can be done to free some space quickly? And of course, the resizing and retention will be checked shortly.
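
As a hedged aside, to pin down what is actually filling /opt before deleting anything, something like the following usually narrows it down (paths assume the default /opt/splunk layout; sort -h needs GNU coreutils):

    # Largest consumers under /opt, then under Splunk's variable data
    du -xsh /opt/* 2>/dev/null | sort -h | tail -5
    du -xsh /opt/splunk/var/* 2>/dev/null | sort -h | tail -5
    du -xsh /opt/splunk/var/lib/splunk/* 2>/dev/null | sort -h | tail -5

On a search head, the usual suspects are the dispatch directory, $SPLUNK_HOME/var/log, and any locally indexed data under var/lib/splunk.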

ranjitbrhm1
Communicator

Just like I said earlier: log in to Splunk, go to Indexes, select an index that you don't really need, and reduce its maximum size. That will force a delete. Say one index takes up around 300 GB; change it to, say, 250 GB and it will force a delete of some of the data.
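
For anyone who prefers configuration files over the UI, the size cap the UI exposes corresponds to maxTotalDataSizeMB in indexes.conf. A minimal sketch, using a hypothetical index name and assuming you edit the indexes.conf that defines it:

    [old_dev_data]
    # Cap this index at roughly 250 GB; when the cap is exceeded, Splunk
    # freezes (by default, deletes) the oldest buckets to get back under it
    maxTotalDataSizeMB = 256000

Splunk enforces the new cap on its own housekeeping schedule rather than instantly, so the space may take a little while to come back.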

mbasharat
Builder

I just checked, and it looks like the folks who configured the indexes set each one to a max of 500 GB, which is the total capacity of this development search head. Odd. I also noticed via the UI that Current Size shows 1 MB, Max shows 500 GB, and Event Count is 0. I am scratching my head. Thanks in advance for your responses though! 😉
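
Once searches can run again, a quick way to confirm those numbers for every index at once is a REST-based search along these lines (a sketch, assuming the standard data/indexes endpoint):

    | rest /services/data/indexes splunk_server=local
    | table title currentDBSizeMB maxTotalDataSizeMB totalEventCount
    | sort -currentDBSizeMB

If every index really is near 0 MB, resizing indexes won't buy back any space; the usage is elsewhere (dispatch artifacts, logs, or non-Splunk data on the same volume).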

mbasharat
Builder

Addendum:

Just checked: /opt is allocated 500 GB and it is 99.4% full. It seems the issue is that the artifacts created when a search is triggered can't be written because storage has run out. Cleaning up old dispatch jobs helped temporarily and searches were running, but not anymore. Awaiting a response.

Thanks,
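
Given that the indexes on this search head are essentially empty, the space is almost certainly going to search artifacts and logs under /opt/splunk/var rather than indexed data, so keeping the dispatch directory trimmed is the lever that matters. A sketch of doing that by hand, with the caveat that the clean-dispatch syntax below is from memory and should be checked against the docs for your Splunk version, and the destination path is just a placeholder:

    # clean-dispatch moves (not deletes) artifacts last modified before the
    # given time, so point it at a destination on a volume with free space
    mkdir -p /path/with/space/old-dispatch
    /opt/splunk/bin/splunk cmd splunkd clean-dispatch /path/with/space/old-dispatch -2d@d

    # Alternatively, each finished search job is a subdirectory under dispatch,
    # so removing directories untouched for a couple of days also frees space
    find /opt/splunk/var/run/splunk/dispatch -mindepth 1 -maxdepth 1 -type d -mtime +2 -exec rm -rf {} +

Longer term, lowering the artifact lifetime (dispatch.ttl) on the heaviest scheduled searches keeps the directory from growing back.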
