Hello,
Just a quick question about a test I have in progress.
I have one indexer cluster with two servers.
I decided to move the colddb path to a SAN share mounted on both indexers.
For one test index, I copied all the colddb buckets into the new space on the SAN, then changed the configuration of my test index, pushed it to both indexers with the master node, and everything seems to be OK.
My question is: how will Splunk manage the colddb if both indexers are pointing at the same share?
There is one thing I'm not sure about: will the Splunk indexers properly handle the case where indexer1 has already saved a bucket in the new colddb path, or will I end up with duplicate buckets?
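As a sketch, the configuration change described here would look something like this in indexes.conf (the index name and mount path are hypothetical placeholders, not the actual values used in this test):

```ini
# Hypothetical indexes.conf stanza pushed from the master node to both indexers.
[my_test_index]
homePath   = $SPLUNK_DB/my_test_index/db
# coldPath moved from local disk to the SAN share mounted on both indexers:
coldPath   = /mnt/san_cold/my_test_index/colddb
thawedPath = $SPLUNK_DB/my_test_index/thaweddb
```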
Thanks for your clarifications
That sounds like a good recipe for indexer confusion, since each indexer assumes it is the sole manager of the buckets on the share.
On the off-chance that your SAN supports a S3 API, you may be able to set up SmartStore, which would put the data redundancy and availability responsibilities on the SAN instead of the indexers.
https://docs.splunk.com/Documentation/Splunk/9.2.2/Indexer/AboutSmartStore
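If the SAN does expose an S3-compatible API, a SmartStore configuration could look roughly like the sketch below (bucket name, endpoint, and credentials are placeholders; verify the SAN's actual S3 endpoint and the settings against the Splunk docs before relying on this):

```ini
# Hypothetical SmartStore setup in indexes.conf.
[volume:remote_store]
storageType = remote
path = s3://splunk-smartstore-bucket
remote.s3.endpoint = https://san.example.local:9021
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>

[my_test_index]
remotePath = volume:remote_store/$_index_name
homePath   = $SPLUNK_DB/my_test_index/db
thawedPath = $SPLUNK_DB/my_test_index/thaweddb
# coldPath is still required syntactically, but SmartStore keeps
# warm/cold data in the remote store and a local cache instead.
coldPath   = $SPLUNK_DB/my_test_index/colddb
```

With this model the indexers only hold cached copies of buckets locally, so the redundancy question moves to the object store.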
Hi marnall, thanks for the answer.
So there is no way to put cold data on NFS network storage without implementing SmartStore?
No. Regardless of whether it's Splunk or any other solution that assumes it has full control over its data (in this case, the contents of the colddb directory), configuring multiple instances of "something" over the same set of data is a pretty sure road to disaster.
BTW, SmartStore works differently from normal storage tiering. Since it's object storage and you can't access files randomly, it uses a cache manager to pull whole buckets into a local cache when they're needed. That works well for some use cases, but for others (frequent searching across many historical buckets that don't fit on warm storage in total) it can cause performance headaches.
Ok I understand.
And what about having two different NFS volumes on the SAN, one volume for each indexer, with the mount point on the OS having the same name on both indexers?
Can this solution work?
You mean - for example - having //1.2.3.4/idx1 and //1.2.3.4/idx2 mounted to /srv/splunk_cold on idx1 and idx2 respectively? Yes, that will work.
Of course performance of searching over NFS will not be stellar and you might regret not using local storage but from the technical point of view it will work.
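A minimal sketch of that per-indexer layout in /etc/fstab (server address, export names, and mount options are illustrative placeholders):

```
# On idx1 (each indexer mounts its own export):
1.2.3.4:/idx1  /srv/splunk_cold  nfs  rw,hard,noatime  0 0

# On idx2:
1.2.3.4:/idx2  /srv/splunk_cold  nfs  rw,hard,noatime  0 0
```

Because the mount point is identical on both hosts, a single coldPath setting (e.g. pointing under /srv/splunk_cold) can be pushed from the master node to both indexers, while each indexer still has exclusive control of its own buckets.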
The storage will be on a Dell EMC array, and considering the Splunk recommendations and the SAN characteristics, it should work smoothly, on paper.
I will test with a few indexes and check.
Thanks a lot for your help!