Monitoring Splunk

Why is the dispatch directory on a new indexer added to the cluster taking so much more space than on the other indexers?

damode
Motivator

I am getting the error message below on a new indexer that I recently added to a cluster (which previously had two indexers):

 

Search peer NEW_INDEXER has the following message: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.

 

Checking disk space on this indexer, it seems it is already filled to 24 GB, whereas on the old indexers one has used 12 GB and the other 14 GB.
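
For reference, this is roughly how I am comparing usage on each indexer (a quick sketch, assuming the default install path shown in the error message):

# Total size of the dispatch directory (search artifacts)
du -sh /opt/splunk/var/run/splunk/dispatch

# Number of search artifacts currently on disk
ls /opt/splunk/var/run/splunk/dispatch | wc -l

# Artifacts older than 7 days, which may point to jobs whose TTL never cleaned up
find /opt/splunk/var/run/splunk/dispatch -maxdepth 1 -type d -mtime +7 | wc -l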

Why is there such a big difference in the disk space used for this directory across the indexers?

Also, please advise how this can be fixed (other than just extending the directory space, which I have already asked the storage team to increase).


chinmoya
Communicator

Hi,

You can remove this message by updating the settings.

Go to: Settings > Server settings > General settings

Reduce "Pause indexing if free disk space (in MB) falls below" from 5000 to 500.

Please note this only changes the setting so that the message is generated when free disk space falls below 500 MB instead of 5 GB (your current setting).
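
If you prefer editing the configuration file instead of the UI, the same setting lives in server.conf; a minimal sketch, assuming the usual $SPLUNK_HOME/etc/system/local/server.conf location (restart splunkd after the change):

[diskUsage]
# "Pause indexing if free disk space (in MB) falls below"; the default is 5000
minFreeSpace = 500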

As to why the dispatch directory on your new indexer is using this much space, that would need a detailed investigation of your configs.
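
In the meantime, if the dispatch directory on the new indexer is mostly old search artifacts, they can be moved out with the clean-dispatch command; this is just a sketch, so please verify the exact syntax against the documentation for your Splunk version (the destination directory and the 7-day cutoff are assumptions):

# Run on the affected indexer: moves dispatch artifacts older than 7 days
# into a holding directory that you can inspect and delete later.
/opt/splunk/bin/splunk cmd splunkd clean-dispatch /opt/splunk/old-dispatch/ -7d@d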
If your environment is clustered, can you perform one check?

On the cluster master, go to:

Settings > Indexer Clustering > Indexes > Bucket Status
Check if there is anything listed under Indexes with Excess Buckets.
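
If you prefer the CLI, a rough equivalent on the cluster master would be something like the following (treat the exact commands and the index name as assumptions to verify for your version):

# Run on the cluster master: list indexes that have more bucket copies
# than the replication and search factors require
/opt/splunk/bin/splunk list excess-buckets

# Limit the check to a single index, e.g. a hypothetical index called "main"
/opt/splunk/bin/splunk list excess-buckets main

# Excess copies can then be removed to reclaim space, e.g.:
# /opt/splunk/bin/splunk remove excess-buckets main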
