
Search Peer indexer Minimum Free Disk Space Reached

brdr
Contributor

I've read some Answers on this issue and understand how to solve it by adjusting server.conf. The question I have is how exactly to trace this error back to the object (search, report, alert, etc.) that is causing it. We have a couple thousand of these objects, multiple search clusters, and 20 indexes in the cluster. It would be great to have steps to isolate the offending object.
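(For reference, the server.conf setting in question is the minimum-free-disk threshold; a minimal sketch, using the documented default of 5000 MB:

[diskUsage]
minFreeSpace = 5000
)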

Thanks,
brdr


jnudell_2
Builder

Hi @brdr ,

This message is usually an indication of improperly configured storage for indexing operations. If you're running into a situation where your indexers have less than 5GB (the default threshold for this message) of free disk space for the hot/cold storage volumes, you probably have one of the following situations:
1. You have not properly configured indexes.conf settings for volume management that allow Splunk to clean up space as needed on the hot/cold volumes (there's a rough indexes.conf sketch further down).
2. You have not provided the supported default minimum of 300GB of disk space for /opt/splunk (or wherever Splunk is installed), and search operations are overfilling that space, causing this message.
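A quick first check is to look at the free space on the relevant mount (plain df here, to see the mount's free space rather than per-directory usage; adjust the path to your installation):

df -h /opt/splunk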

From your description, it sounds like #2 in this case. If it were me, I would investigate the server(s) in question to determine which folder is causing the issue (most likely something in /opt/splunk/var). From the CLI (Linux) I would use the following command:

du -sh /opt/splunk/*

This will show how much storage is being used by each directory in /opt/splunk. If it's var, then I would check var as well:
du -sh /opt/splunk/var/*

Depending upon which directory below that is causing the issue, there are different steps to take, but you would have an idea of where the offending data resides.
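If a lot of subdirectories are involved, sorting the per-directory output makes the biggest consumer obvious (a small sketch; paths assume a default Linux install):

du -sh /opt/splunk/var/* 2>/dev/null | sort -h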

If you have NOT allocated the default minimum of 300GB for /opt/splunk, I would highly recommend that you do so. If the hot/cold data shares the same mount point as /opt/splunk, then I would recommend reviewing your indexes.conf to implement volume/index management that does not allow the disk to fill up and rolls data appropriately. Typically, I recommend configuring it to leave 5% - 10% of the volume free.
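As a rough illustration of what that volume-based management can look like in indexes.conf (the volume name, paths, and sizes below are hypothetical examples, not recommendations; size maxVolumeDataSizeMB so that 5% - 10% of the disk stays free):

[volume:hotcold]
path = /opt/splunk/var/lib/splunk
# example cap for a ~1TB disk, leaving roughly 10% headroom
maxVolumeDataSizeMB = 900000

[main]
homePath = volume:hotcold/defaultdb/db
coldPath = volume:hotcold/defaultdb/colddb
# thawedPath cannot reference a volume, so it stays a literal path
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

With a cap like this in place, Splunk rolls the oldest buckets to frozen when the volume limit is reached instead of letting the disk fill up.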

I hope this helps.

nareshinsvu
Builder

I ran into a similar issue.

My indexers had huge files under $SPLUNK_HOME/var/run/searchpeers.

These are copies of unwanted lookup files created on the search heads and replicated to the indexers. After clearing them on the search heads, disk usage on the indexers came down.
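If you want to check for the same thing, something like this shows whether replicated search-head bundles are eating the space (path assumes a default install):

du -sh $SPLUNK_HOME/var/run/searchpeers/* | sort -h | tail

On the search head side, distsearch.conf can also exclude large lookups from bundle replication; a hypothetical example:

[replicationBlacklist]
hugelookup = (.*)my_big_lookup\.csv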

Did you search for such files, or do some housekeeping?


Vijeta
Influencer

@brdr You can use the DMC (Monitoring Console) to review search activity; it will give you the top 10 long-running searches.
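If you'd rather do it with a search than the DMC panels, something along these lines against the audit index should surface the longest-running completed searches (field names as they appear in a default _audit log):

index=_audit action=search info=completed
| stats max(total_run_time) as total_run_time by user savedsearch_name
| sort - total_run_time
| head 10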
