Deployment Architecture

Sharing SAN volume for indexer cluster

pachinis
Engager

Dear Splunk experts, Dear community,
I am currently planning a change in our Splunk environment to increase reliability and scalability. We are currently running a single indexer with a number of search heads.
The goal is for the environment to keep operating through any single-host outage, and I would like to set up a cluster of two indexers for this.
We store the indexes on a mirrored SAN so that the data stays available if the main SAN node goes down; the standby node holds a full copy of the data.
It would be possible to split the SAN volume into two equal parts, give each indexer its own partition, and set replication factor = 2. In that case we would store four copies of the data (2 peers * 2 SAN nodes) and have only half the capacity for indexes.
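Spelled out (just restating the arithmetic above, with X standing for the usable capacity the single indexer gets from the mirrored volume today):

    physical copies per event = 2 (cluster RF) * 2 (SAN mirror) = 4
    unique data that fits     = X / 2, since every event lives on both peer partitions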

Is there a better way to store the data in our case, without an excessive number of copies and without losing capacity? Setting RF=1 is not an option, because half of the indexed data would be unavailable if an indexer peer is lost.
Can two indexer peers read from and write to the same SAN partition?
Thank you!

1 Solution

gcusello
SplunkTrust

Hi @pachinis,
a shared SAN is not a good idea, because with shared storage you will not get good performance out of Splunk.
The best approach is to use the Splunk indexer clustering feature: you keep a full copy of the data on two different servers that are continuously kept in sync, both of them can answer search requests during normal operation, and the cluster handles failover when one of them is down.
If you want more copies of the data, you can add more servers, but you do not need two mirrored servers.
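As a rough sketch (host names, the shared secret, and the replication port below are placeholders; newer Splunk releases use manager/manager_uri while older ones use master/master_uri), the cluster is enabled in server.conf like this:

    # server.conf on the cluster manager (ideally a separate third host)
    [clustering]
    mode = manager
    replication_factor = 2
    search_factor = 2
    pass4SymmKey = <your_cluster_secret>

    # server.conf on each of the two indexer peers
    [replication_port://9887]

    [clustering]
    mode = peer
    manager_uri = https://<cluster-manager-host>:8089
    pass4SymmKey = <your_cluster_secret>

    # server.conf on the search heads
    [clustering]
    mode = searchhead
    manager_uri = https://<cluster-manager-host>:8089
    pass4SymmKey = <your_cluster_secret>

With replication_factor = 2 every bucket exists on both peers on their own separate storage, so searches keep working when one peer is down.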

Ciao.
Giuseppe

gcusello
SplunkTrust

Hi @pachinis,

If this answer solves your need, please accept it so that it can help other community members; otherwise, please tell us how we can help you further.

Ciao.

Giuseppe

P.S.: Karma Points are appreciated by all the contributors 😉

codebuilder
SplunkTrust

Shared storage is not a good fit for your use case, in my humble opinion.
Why not carve off two LUNs, present one to each of your two indexers, cluster them, and set your replication factor to 2?
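For example (a minimal sketch, assuming each indexer mounts its own LUN at /splunk_data; the mount point, volume size, and index name are illustrative placeholders), the peers' indexes.conf would simply point at the local mount:

    # indexes.conf, identical on both peers (e.g. pushed from the cluster manager)
    [volume:hot_cold]
    path = /splunk_data
    maxVolumeDataSizeMB = 900000

    [main]
    homePath   = volume:hot_cold/defaultdb/db
    coldPath   = volume:hot_cold/defaultdb/colddb
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb
    # required so the index participates in cluster replication
    repFactor  = auto

Each peer then writes only to its own LUN, and the cluster's replication factor of 2 keeps the second copy of every bucket on the other peer.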

----
An upvote would be appreciated, and please accept the solution if it helps!