Deployment Architecture

SHC and IDXC disk hardware requirements

Glasses
Contributor

After recently reviewing the 8.2.3 hardware requirements, I noticed my deployment is a bit under spec.

For instance, Splunk recommends 800 IOPS and 300 GB for Search Head node disks.

https://docs.splunk.com/Documentation/Splunk/8.2.3/Capacity/Referencehardware#What_storage_type_shou...

Search heads with high ad-hoc or scheduled search loads should use SSDs. An HDD-based storage system must provide no less than 800 sustained IOPS. A search head requires at least 300 GB of dedicated storage space.

Indexers should use SSD or NVMe drives.

In my case, I have a dedicated NVMe "data" drive for all indexed data (except _internal) and an SSD for the OS and the Splunk application (as on the Search Heads).

Does an indexer require the same 800 IOPS / 300 GB disk as a Search Head? Does an indexer need to write information to disk per search execution?

Also, has anyone experienced issues with using GP3 disks on SHCs or IDXCs? (excluding use as a dedicated data drive, for which I use NVMe)

 

Thank you

1 Solution

isoutamo
SplunkTrust

Hi

Definitely, indexers must also have at least 800 IOPS, or even more. You will usually need more than 300 GB of disk space on indexers.
For every search, indexers need temp/working space to serve the query. In most cases they have to do some manipulation of the result sets before sending them back to the SH. They also need to store the query (with lookups etc.) locally before they can read results from the data disks.

Our clients have used gp3 disks for almost as long as they have been available, without issues.
r. Ismo


Glasses
Contributor

Sorry, I just want to clarify in case I am misunderstanding your recommendations...

Do you think the following would work:

GP3 >>> 150 GB / 3000 IOPS / 575 MiB/s

for SHC members? And for IDXC members?

Again, the IDX cluster nodes use a dedicated NVMe disk for data only, and the OS/Splunk application resides on the "local" non-ephemeral/persistent EBS GP3 volume...

 

Thank you!


isoutamo
SplunkTrust
As long as the IOPS are more than 800 (1200), it's OK. With gp3 you will get >800 even with a filesystem smaller than 300 GB, but as an SH usually needs some working space for queries, that's what we are mainly using.

Glasses
Contributor

Thank you very much for clarifying.

As we are constantly trying to cut costs, we will see if 150 GB works for us, especially as our MC doesn't indicate that we are running out of disk space in either cluster.


isoutamo
SplunkTrust
With gp3, 150 GB is OK, as you still get enough IOPS. But with gp2 you need a minimum of 300 GB to get enough.
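For anyone comparing the two volume types, here is a rough sketch of the arithmetic behind this, assuming AWS's published baselines (gp2 scales at 3 IOPS per GiB with a 100 IOPS floor and 16,000 cap, while gp3 starts at a flat 3,000 IOPS baseline regardless of size); the helper names are hypothetical:

```python
# Hypothetical helper illustrating the gp2 vs. gp3 sizing point above.
# Assumed AWS EBS baselines: gp2 = 3 IOPS per GiB (min 100, max 16,000),
# gp3 = flat 3,000 IOPS baseline independent of volume size.

def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a gp2 EBS volume of the given size."""
    return min(max(100, 3 * size_gib), 16_000)

GP3_BASELINE_IOPS = 3_000   # gp3 baseline, independent of size
SPLUNK_MIN_IOPS = 800       # Splunk reference-hardware requirement

for size in (150, 300):
    iops = gp2_baseline_iops(size)
    print(f"gp2 {size} GiB -> {iops} baseline IOPS, "
          f"meets {SPLUNK_MIN_IOPS}: {iops >= SPLUNK_MIN_IOPS}")
```

This matches the point above: a 150 GB gp2 volume tops out at 450 baseline IOPS, below Splunk's 800 sustained IOPS figure, while even a small gp3 volume clears it comfortably.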

Glasses
Contributor

Thank you so much !!!
