
Splunk Indexer chewing up disk space

RajaAhmed
New Member

We have a 12-node Hadoop cluster, and we use Splunk to index all of its log files (HBase, Cloudera Manager, JobTracker, NameNode, and assorted others). The NAS device where we store all the Splunk indexes is now completely full (df -h shows 100%).

How can we limit the disk space used by Splunk indexing to 50 GB? We have tried 'clean event data', but the disk filled up again within a week.


lguinn2
Legend

You can set a maximum size for each Splunk index. When an index reaches its limit, the oldest data is automatically rolled out (aka "frozen"), which by default deletes it.

Go to Splunk Manager and look at the maximum size configured for each index. Set the sizes so that the combined total stays within your available disk space.
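For example, you can cap an index directly in indexes.conf on the indexer. A minimal sketch, assuming an index named "hadoop_logs" (substitute your real index name):

```ini
# indexes.conf -- cap this index at roughly 50 GB total (hot + warm + cold buckets)
# When the cap is reached, the oldest buckets roll to frozen (deleted by default).
[hadoop_logs]
maxTotalDataSizeMB = 51200

# Optionally also freeze events older than 90 days, whichever limit is hit first
frozenTimePeriodInSecs = 7776000
```

Restart the indexer (or reload the config) after editing; if you have multiple indexes sharing the NAS, divide the 50 GB budget across their individual maxTotalDataSizeMB values.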

There is also a volumes feature in Splunk that gives you more fine-grained control over the total disk space used across several indexes.
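With volumes you can enforce one shared cap for everything stored on the NAS instead of sizing each index separately. A sketch, assuming a volume named "primary" and a placeholder mount path:

```ini
# indexes.conf -- one shared 50 GB cap across all indexes stored on this volume
# "primary" and the path are placeholders; point path at your NAS mount
[volume:primary]
path = /mnt/nas/splunk
maxVolumeDataSizeMB = 51200

[hadoop_logs]
homePath = volume:primary/hadoop_logs/db
coldPath = volume:primary/hadoop_logs/colddb
# thawedPath cannot reference a volume; it must be a literal path
thawedPath = $SPLUNK_DB/hadoop_logs/thaweddb
```

When the volume hits its cap, Splunk freezes the oldest buckets among all indexes on that volume, which keeps total usage on the NAS bounded.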

Read more in the documentation at Configure Index Size
