Deployment Architecture

For non-clustered search heads in a distributed environment, do I need to apply the same hot/cold bucket capacity planning as my indexers?

Jrubalcaba
Explorer

For a 100% Virtual Environment:

I am planning to deploy Splunk 6.5 on Linux RHEL 7.2 in a distributed search architecture. My indexers will be clustered, and I am doing capacity planning for hot/cold buckets on each indexer. I will have two non-clustered search heads.

For both search heads, do I need to apply the same hot/cold bucket principle, or can I just assign a local disk as part of the VMDK for storage?

I would appreciate any feedback.
Thanks,
Jordi

1 Solution

jtacy
Builder

As long as you're forwarding data from your search heads to your indexers as described in http://docs.splunk.com/Documentation/Splunk/6.5.0/DistSearch/Forwardsearchheaddata, you'll be fine assigning storage as you would for any other app server. The things that use space on the search head, such as search results, don't have the concept of hot/cold.
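
For reference, the forwarding setup in that doc page boils down to an outputs.conf on each search head. Here's a minimal sketch that mirrors the documented example; the output group name and the indexer host:port values are hypothetical placeholders for your environment:

# outputs.conf on the search head
# Turn off local indexing so data, including the internal indexes, goes to the indexers
[indexAndForward]
index = false

[tcpout]
defaultGroup = my_search_peers
# Forward the internal indexes too, not just user data
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:my_search_peers]
server = idx1.example.com:9997, idx2.example.com:9997

A restart of the search head picks up the change, and the indexers just need to be listening on the receiving port (9997 here).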

If you choose not to forward data from the search head, you'll end up with at least the internal indexes including _internal and _audit on your search head, plus any summary indexes you create. You might need to manage hot/cold on your search head if you take that approach.
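
In that case, hot/cold on the search head is managed the same way as on an indexer, through indexes.conf. Here's a rough sketch with a hypothetical summary index name and hypothetical size/retention numbers:

# indexes.conf on the search head
[my_summary]
homePath = $SPLUNK_DB/my_summary/db
coldPath = $SPLUNK_DB/my_summary/colddb
thawedPath = $SPLUNK_DB/my_summary/thaweddb
# Cap total hot/warm/cold usage at roughly 50 GB
maxTotalDataSizeMB = 51200
# Freeze buckets older than roughly 90 days
frozenTimePeriodInSecs = 7776000

homePath holds the hot and warm buckets and coldPath the cold ones, so they can point at different datastores if you ever want to split them.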

Jrubalcaba
Explorer

Thank you, jtacy. I will forward the data to the indexers, since that has several advantages, so I can simply create a regular local disk for the Linux OS. Is there a minimum HDD size for search heads recommended by Splunk?

jtacy
Builder

It looks like Splunk has posted a recommendation in the docs, but your actual usage will mostly depend on the types of searches people run and how long the results are retained. I've gotten away with much less than the recommendation, but users can burn through a lot of disk quickly if their Splunk roles have liberal quotas. Regardless of the size you choose, I would make sure disk space monitoring is in place before your go-live. This is already the norm on RHEL, but I would also use LVM on all file systems where Splunk lives so you can expand them easily. Have fun, it's going to be great!
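
As a sketch of the "monitor and expand" part on RHEL 7 with LVM (the volume group, logical volume, and mount point names below are hypothetical):

# Check usage on the Splunk volume; wire the same check into your monitoring
df -h /opt/splunk

# Grow the logical volume by 20 GB and resize the filesystem in the same step
# (-r/--resizefs runs the right tool for the filesystem, e.g. xfs_growfs on RHEL 7's default XFS)
lvextend -r -L +20G /dev/vg_splunk/lv_opt_splunk

On the quota side, the per-role search disk limit lives in authorize.conf as srchDiskQuota, if you decide to tighten it.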
