We're building out new Linux Splunk servers on dedicated hardware. These servers have a rather large amount of disk space available (over 2 TB) on RAID 10. They're both going to be indexers, with events auto-load-balanced between them.
We're debating about how we want to allocate that for index space. Is there any particular reason why we would not want to create a 1TB or 1.5 TB or even 2 TB filesystem? As opposed to say, creating multiple smaller filesystems?
We currently have no requirements for particular indexes so right now everything's just going to go into the same big pot.
Thanks
It's much simpler to keep all of your indexes on one volume unless you have a reason not to.
Your best bet is probably to carve off one or more volumes for the operating system, then allocate the majority of the space as a single, dedicated volume to contain all of your Splunk indexes.
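As a rough sketch, the single-volume layout could look like the indexes.conf fragment below. The mount point `/splunkdata` and the size cap are hypothetical, just to show the idea of pointing an index's buckets at the one big filesystem and capping its total size so it fits:

```ini
# indexes.conf -- hypothetical single-volume layout
# Assumes the large dedicated filesystem is mounted at /splunkdata.

[main]
homePath   = /splunkdata/defaultdb/db
coldPath   = /splunkdata/defaultdb/colddb
thawedPath = /splunkdata/defaultdb/thaweddb
# Keep the index's total size within the filesystem (value in MB).
maxTotalDataSizeMB = 1500000
```

Any additional indexes you add later can point their paths at the same filesystem, with their own `maxTotalDataSizeMB` set so the sum stays under the volume's capacity.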
Splunk 4.2 may change things, but right now keeping everything together avoids a lot of tuning considerations. See the following for more information:
http://answers.splunk.com/questions/8730/how-to-manage-the-size-of-my-indexes-to-fit-in-my-volumes
You've probably already found this, but you might also want to look over the information here:
http://www.splunk.com/base/Documentation/latest/admin/HowSplunkstoresindexes
Thanks. That's what I suspected.