Splunk Enterprise

Big Indexes best practice

rayar
Contributor

Hi

We have very big indexes (300 GB) and also very limited storage.

Is it recommended to split an index into smaller indexes (for storage or performance reasons)?


richgalloway
SplunkTrust

Using more, smaller indexes will not solve a storage limit problem. In fact, it may make it worse because of the additional metadata needed by Splunk and the OS to store the added indexes.  Be smart about how you index data, however.  Don't put everything into a single index.  Use a new index when access or retention rules demand it.

Some possible solutions:

  1. As @to4kawa suggested, use volumes to help prevent the storage system from filling up.
  2. Add more storage.
  3. Index less data.
  4. Retain your data for shorter times.
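Option 1 (volumes) and option 4 (shorter retention) can both be expressed in indexes.conf. A minimal sketch follows; the paths, sizes, and index name are hypothetical, so adjust them to your environment:

```ini
# indexes.conf -- hypothetical paths, sizes, and index name
[volume:hot]
path = /opt/splunk/hotdata
# Cap the volume so hot/warm buckets cannot fill the filesystem
maxVolumeDataSizeMB = 500000

[volume:cold]
path = /mnt/bigdisk/colddata
maxVolumeDataSizeMB = 2000000

[big_index]
homePath   = volume:hot/big_index/db
coldPath   = volume:cold/big_index/colddb
# thawedPath cannot reference a volume
thawedPath = $SPLUNK_DB/big_index/thaweddb
# ~90 days retention; buckets older than this are frozen (deleted by default)
frozenTimePeriodInSecs = 7776000
```

When a volume hits its cap, Splunk rolls the oldest buckets in that volume onward, so the storage system itself never fills up.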
---
If this reply helps you, Karma would be appreciated.

isoutamo
SplunkTrust

Yes, definitely use volumes! That will save your day many times over.
Try to minimize the number of buckets, and use maxDataSize = auto_high_volume unless you are using SmartStore.
And remember, IOPS are your friends.
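The bucket-size advice above is a one-line setting; the index name here is made up for illustration:

```ini
# indexes.conf -- hypothetical index name
[big_index]
# auto_high_volume uses ~10 GB buckets (on 64-bit systems) instead of the
# 750 MB "auto" default, so high-volume indexes create far fewer buckets.
# Do not use this on SmartStore-enabled indexes.
maxDataSize = auto_high_volume
```

Fewer, larger buckets mean fewer files for a search to open, at the cost of coarser-grained retention and replication.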


rayar
Contributor

Thanks a lot for your inputs 

What about searches? Will splitting the data into separate indexes improve search performance, or will performance be the same if I filter the huge index with index=www sourcetype=yyy?


richgalloway
SplunkTrust

You can improve search performance by splitting the data among more indexers (servers).

Using more, smaller indexes may or may not help, since there are other considerations such as the nature of your data and the nature of your searches. Having to open and unzip many buckets could slow down searches. OTOH, finding data in a very large index can also be slow. There's a trade-off, and finding the best balance will take some experimentation.
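To illustrate the filtering point: a search that restricts by index, sourcetype, and time (using the placeholder names from the question) lets Splunk prune non-matching buckets before opening them, so a well-filtered search on one large index can perform close to a search on a dedicated smaller index:

```
index=www sourcetype=yyy earliest=-24h
| stats count by host
```

Splitting mainly pays off when different data sets are routinely searched separately, or when they need different access or retention rules.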


rayar
Contributor

Thanks a lot 

We will try to play with it 


isoutamo
SplunkTrust

Hi

It depends on what kind of data you have in that index (hosts, source types, sources, cardinality) and how much your daily ingest volume is. As a rule of thumb, 300 GB doesn't sound like much if it is, say, one month of data from several sources; that amount probably doesn't need any special arrangements yet. If you are talking about 300 GB per day from a single source, host, or source type, then some arrangements may be needed, based on your queries and analysis needs.

r. Ismo


to4kawa
Ultra Champion