
Is this how hot/warm buckets on a Nutanix object store and WORM work?

jariw
Path Finder

Hi

I am writing the implementation document for Splunk on Nutanix. I am thinking about backup for disaster recovery, data on the Nutanix object store (so SmartStore), and the fact that the Nutanix object store is WORM.

I would like the deployment to be protected against ransomware attacks. Putting all the data in a backup is not possible because it is too big and changes too fast.

I see that the Nutanix object store is WORM, so that data cannot be altered by ransomware... so far so good. But what about the data in the cache on the indexers themselves? The hot data is not protected by WORM; I think that's the way it works (there must be some downside somewhere).

But what about the warm data in the cache? Suppose there is a ransomware attack: it tries to change the object store but fails, yet it will (I think) change the "warm" data in the cache on the indexers. By then there is a difference between two copies of the same file (in the cache and in the object store), and possibly even between different indexers...
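
For reference, this is roughly how the SmartStore side looks in indexes.conf. This is only a minimal sketch; the volume name, bucket name and endpoint below are hypothetical placeholders for a Nutanix Objects setup, not our actual config:

[volume:nutanix_objects]
storageType = remote
path = s3://splunk-smartstore
remote.s3.endpoint = https://objects.example.local
remote.s3.access_key = <access key>
remote.s3.secret_key = <secret key>

[main]
remotePath = volume:nutanix_objects/$_index_name
homePath = $SPLUNK_DB/main/db
coldPath = $SPLUNK_DB/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb

As I understand it, hot buckets live only under homePath on the indexer; once they roll to warm they are uploaded to the remote volume and the local copies only remain as cache.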


What will splunkd do? Does it work the way I describe above? Please give me your opinion 🙂

greetz

jari


jariw
Path Finder

Okay, that's another point for consideration. So no WORM for the S3 storage? A pity, but understandable.

Of course, the delete command I never thought about, just because they told us to use the retention on the S3 side itself.


isoutamo
SplunkTrust

Hi

I cannot recall whether the Nutanix S3 implementation has a feature for versioning those buckets, and if it does, whether those "backup versions" can be kept as WORM?

I don't think it's an issue if ransomware encrypts the warm data in the cache, as at the same time it also encrypts the Splunk binaries etc. => you need to reinstall anyway. In those cases you lose the hot data, but you could get warm and cold back from a versioned S3.
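
If you want to verify the versioning part yourself, something like this should tell you whether it is on. This is only a rough boto3 sketch; the endpoint URL, bucket name and credentials are placeholders, I have not tested it against Nutanix Objects:

import boto3

# Placeholder endpoint and credentials for an S3-compatible object store;
# replace with the real Nutanix Objects values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.local",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Returns {"Status": "Enabled"} when versioning is on; an empty dict
# means versioning was never enabled on the bucket.
resp = s3.get_bucket_versioning(Bucket="splunk-smartstore")
print(resp.get("Status", "versioning never enabled"))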

r. Ismo


richgalloway
SplunkTrust

I'm pretty sure Splunk will not like having warm buckets on a WORM file system.  While indexed data never changes, the metadata surrounding the data can change.  This is especially true with accelerated data models, but also applies to tsidx reduction and the delete command (and perhaps other features I can't think of ATM).
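
To make that concrete, even a stock setting like tsidx reduction rewrites files inside buckets that have already rolled to warm. The stanza below is only an illustration (the index name and the seven-day threshold are made up, not a recommendation):

[main]
# After a bucket is older than 7 days, splunkd shrinks its tsidx files,
# i.e. it writes into an existing warm bucket on disk.
enableTsidxReduction = true
timePeriodInSecBeforeTsidxReduction = 604800

The delete command behaves similarly: it flags events inside existing buckets as deleted, which again means writing into buckets that a WORM store would treat as immutable.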
