
How do I change the total disk space used by Splunk (hot and cold data)?

phamxuantung
Communicator

Hello,

Our Splunk system just got an increase in disk size, as shown in the image below (we have a master and a 1:1 indexer cluster structure).

[image: screenshot of the new disk space allocation]

This means hot storage increased from 500GB to 1TB and cold from 1.5TB to 3TB.

I have changed the stanzas in splunk/etc/master-apps/_cluster/local/indexes.conf (where we put our per-index settings such as maxTotalDataSizeMB, homePath.maxDataSizeMB, and coldPath.maxDataSizeMB) to match the newly provided disk space. But after I restarted the services on both our indexers and the master, the new disk space limits were not applied; Splunk still uses the old ones. I suspect I missed something here.
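For reference, the change I made is along these lines (the index name here is just an example):

[my_index]
maxTotalDataSizeMB = 1300000
homePath.maxDataSizeMB = 500000
coldPath.maxDataSizeMB = 800000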

Can anyone point me to where I can configure the overall settings? (I'm not familiar with the Splunk structure.)


matt8679
Path Finder

You need to deploy the cluster bundle to the peers. A restart will not apply the new settings.

On the master:

splunk apply cluster-bundle
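If you want to verify the push, a typical sequence on the master looks like this (validate first, then apply, then check the bundle status on the peers):

splunk validate cluster-bundle
splunk apply cluster-bundle
splunk show cluster-bundle-status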

phamxuantung
Communicator

I tried running your command on the master, but it shows an error:

[image: error screenshot]


matt8679
Path Finder

Try this in the CLI, from the path where Splunk is installed on the master:

./splunk apply cluster-bundle
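For example, assuming a default install under /opt/splunk (adjust the path to your environment):

cd /opt/splunk/bin
./splunk apply cluster-bundle

The splunk binary lives in the bin directory, which is usually not on the PATH, hence the ./ prefix.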


PickleRick
SplunkTrust
SplunkTrust

To be honest, I don't understand. If you only expanded your storage and raised your limits, it doesn't mean that Splunk will automatically fill all the storage space.

Firstly, as @richgalloway pointed out, there are volume settings. But even if you're not using volume-based restrictions, storage utilization depends on:

1) Ingestion rate (if you're ingesting, for example, 10MB per day, you won't fill a 1TB drive in 10 days)

2) Size-based retention limits

3) Time-based retention limits

Your size limits might have been increased, but buckets will still get deleted if they get too old.
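For example, time-based retention is controlled per index by frozenTimePeriodInSecs in indexes.conf (the index name and value here are illustrative):

[my_index]
# buckets older than ~1 year are frozen (deleted unless coldToFrozenDir is set)
frozenTimePeriodInSecs = 31536000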

phamxuantung
Communicator

1) We ingest 600GB of data on a daily basis, and the old storage was too small for our needs.

2) Size-based retention limits. There is an index that I want to extend; the old limits were

maxTotalDataSizeMB = 480000
homePath.maxDataSizeMB = 180000
coldPath.maxDataSizeMB = 300000

We have used this config since the start, and now it's not enough anymore, so I set it to

maxTotalDataSizeMB = 1300000
homePath.maxDataSizeMB = 500000
coldPath.maxDataSizeMB = 800000

I restarted the master, index01, and index02, but the new config was not applied (a way to verify this is sketched below).

3) We don't have time-based retention limits.

[image: screenshot of the index configuration]

And we didn't have maxVolumeDataSizeMB in the first place, so why, if I don't add it, doesn't the new config apply when I change the limits for each index?

So I was wondering if there is a step elsewhere that I missed.
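A way to check whether a peer actually picked up the change is btool, run on the peer itself (assuming a default /opt/splunk install; "main" is just an example index name):

cd /opt/splunk/bin
./splunk btool indexes list main --debug | grep -i datasize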


richgalloway
SplunkTrust
SplunkTrust

If you're using volumes (which is a good idea), then you'll need to adjust the maxVolumeDataSizeMB setting.
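A minimal sketch of what volume-based configuration looks like in indexes.conf (the paths, sizes, and index name are examples only):

[volume:hot]
path = /opt/splunk/hot
maxVolumeDataSizeMB = 1000000

[volume:cold]
path = /opt/splunk/cold
maxVolumeDataSizeMB = 3000000

[my_index]
homePath = volume:hot/my_index/db
coldPath = volume:cold/my_index/colddb
# thawedPath cannot reference a volume
thawedPath = $SPLUNK_DB/my_index/thaweddb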

---
If this reply helps you, Karma would be appreciated.