Deployment Architecture

Indexer cluster: colddb moved to a SAN share

Nicolas2203
Explorer

Hello,

Just a quick question about a test I have in progress.

I have one indexer cluster with two servers.
I decided to move the colddb path to a SAN share mounted on both indexers.
For one test index, I copied all the colddb buckets into the new space on the SAN, then changed the configuration of my test index and pushed it to both indexers with the master node, and everything seems to be OK.

My question is: how will Splunk manage the colddb if both indexers are pointing at the same share?
I'm not sure whether the indexers will properly handle the case where indexer1 has already saved a bucket in the new colddb path, or whether I will end up with duplicate buckets.
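For reference, the change described above boils down to pointing coldPath at the SAN mount in indexes.conf and pushing it from the master node. A minimal sketch, where the index name and mount path are placeholders, not taken from the thread:

```ini
# indexes.conf, distributed from the cluster master via the configuration bundle
# "test_index" and "/mnt/san_cold" are example names
[test_index]
homePath   = $SPLUNK_DB/test_index/db
coldPath   = /mnt/san_cold/test_index/colddb
thawedPath = $SPLUNK_DB/test_index/thaweddb
```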

Thanks for your clarifications



marnall
Motivator

That sounds like a good recipe for indexer confusion, as each indexer assumes that the buckets on the share are managed only by itself.

On the off-chance that your SAN supports an S3 API, you may be able to set up SmartStore, which would put the data redundancy and availability responsibilities on the SAN instead of the indexers.

https://docs.splunk.com/Documentation/Splunk/9.2.2/Indexer/AboutSmartStore
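For illustration, a SmartStore setup of the kind suggested here is configured in indexes.conf with a remote volume; the volume name, bucket, and endpoint below are placeholders, not details from the thread:

```ini
# indexes.conf — rough SmartStore sketch; all names and endpoints are examples
[volume:remote_store]
storageType = remote
path = s3://example-smartstore-bucket
remote.s3.endpoint = https://s3.san.example.com

[test_index]
remotePath = volume:remote_store/test_index
# under SmartStore, homePath/coldPath serve as local cache locations
homePath   = $SPLUNK_DB/test_index/db
coldPath   = $SPLUNK_DB/test_index/colddb
thawedPath = $SPLUNK_DB/test_index/thaweddb
```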

Nicolas2203
Explorer

Hi marnall, thanks for the answer.

So there is no way to put cold data on NFS network storage without implementing SmartStore?

 


PickleRick
SplunkTrust

No. Regardless of whether it's Splunk or any other solution that assumes full control over its data (in this case, the contents of the colddb directory), configuring multiple instances of "something" over the same set of data is a pretty sure way to disaster.

BTW, SmartStore works differently from your normal storage tiering. Since it's object storage and you can't just access files randomly, it uses a cache manager to bring whole buckets into cache when they're needed. It is good for some use cases, but for others (frequent searching across more historical buckets than fit on warm storage in total) it can cause performance headaches.

Nicolas2203
Explorer

OK, I understand.

What about having two different NFS volumes on the SAN, one volume for each indexer, but with the same mount point name on both indexers?

Can this solution work?


PickleRick
SplunkTrust

You mean - for example - having //1.2.3.4/idx1 and //1.2.3.4/idx2 mounted to /srv/splunk_cold on idx1 and idx2 respectively? Yes, that will work.

Of course performance of searching over NFS will not be stellar and you might regret not using local storage but from the technical point of view it will work.
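In mount terms, that setup would look roughly like this (the export names and mount point are taken from the example above; the mount options are illustrative):

```
# /etc/fstab on idx1 — idx2 mounts its own export (/idx2) at the same path
1.2.3.4:/idx1  /srv/splunk_cold  nfs  defaults,_netdev  0 0
```

Because the mount point is identical on both indexers, the same coldPath value in indexes.conf can be pushed to both, while each indexer still has exclusive control over its own buckets.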

Nicolas2203
Explorer

The storage will be on a Dell EMC array, and considering the Splunk recommendations and the SAN's characteristics, it should work smoothly on paper.

I will test with a few indexes and check.

Thanks a lot for your help !
