Monitoring Splunk

Why is the dispatch directory on a new indexer added to the cluster taking so much more space than on the other indexers?

damode
Motivator

I am getting the error message below on a new indexer that I recently added to a cluster (which previously had 2 indexers):

 

Search peer NEW_INDEXER has the following message: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.

 

Checking disk space on this indexer, the dispatch directory has already filled to 24 GB, whereas on the old indexers one uses 12 GB and the other 14 GB.

Why is there such a difference in the disk space used for this directory across the indexers?

Also, please advise how this can be fixed (other than just extending the disk space, which I have already asked the storage team to do).
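To see what is actually consuming the space before changing anything, a quick per-indexer check can help. A minimal sketch, assuming the default /opt/splunk path shown in the error message (adjust SPLUNK_HOME if yours differs):

```shell
# Inspect dispatch directory usage on one indexer.
DISPATCH="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"

if [ -d "$DISPATCH" ]; then
    # Total size of the dispatch directory
    du -sh "$DISPATCH"
    # Ten largest search artifacts inside it (size in MB)
    du -sm "$DISPATCH"/* 2>/dev/null | sort -rn | head -10
    # Count of artifacts older than 2 days (candidates for expiry)
    find "$DISPATCH" -maxdepth 1 -mindepth 1 -mtime +2 | wc -l
else
    echo "dispatch directory not found: $DISPATCH"
fi
```

Running this on the new indexer and on one of the old ones makes it easy to compare whether a few huge artifacts or a large number of stale ones account for the difference.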


chinmoya
Communicator

Hi,

You can suppress this message by updating the setting:

Go to: Settings > Server settings > General settings

Reduce "Pause indexing if free disk space (in MB) falls below" from 5000 to 500.

Please note this only changes the threshold at which the message is generated (and indexing is paused) to 500 MB instead of 5 GB (your current setting); it does not free any space.
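For reference, the UI setting above corresponds to the `minFreeSpace` value in server.conf. A sketch of the equivalent change, assuming the standard `[diskUsage]` stanza (verify against the docs for your Splunk version):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[diskUsage]
# Free-space threshold in MB below which Splunk pauses indexing
# and raises the warning; same value as the General settings UI.
minFreeSpace = 500
```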

As to why the dispatch directory on the new indexer is consuming this much space, that would need a detailed investigation of your configurations.
Since your environment is clustered, can you perform one check?

On the cluster master, go to:

Settings > Indexer Clustering > Indexes > Bucket Status
Check whether anything appears under Indexes with Excess Buckets.
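Besides the excess-buckets check, stale search artifacts in dispatch can be inspected and cleaned directly. A hedged sketch, assuming the default install path; the `clean-dispatch` invocation is shown commented out because it moves artifacts and should be run deliberately (check the exact syntax for your Splunk version first):

```shell
# Inspect old dispatch artifacts before cleaning them.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
DISPATCH="$SPLUNK_HOME/var/run/splunk/dispatch"

# Dry run: list artifact directories untouched for more than 7 days
find "$DISPATCH" -maxdepth 1 -mindepth 1 -type d -mtime +7 2>/dev/null || true

# Splunk's supported cleanup moves artifacts older than a cutoff out of
# dispatch into a holding directory you can then delete, e.g.:
# "$SPLUNK_HOME/bin/splunk" clean-dispatch /tmp/old-dispatch -7d
```

If the new indexer is accumulating far more artifacts than its peers, it is also worth reviewing which searches are dispatched to it and whether their TTLs are expiring normally.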
