I'm trying to get Splunk working with ZFS on Linux, which 6.4 supposedly supports, per the latest release notes:
When I start splunk, I get this:
homePath='/zroot/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem. Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
I'm not interested in formatting a separate filesystem inside a storage container (e.g. ext4 on a zvol) just for Splunk.
I would not do that if I were you and I cared about my data.
Yes, it will work. It might very well even function long term. But locktest is there for a reason, and setting this variable drops all file-lock checks. By doing this you are entering unsupported territory with your data.
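For reference, the variable being discussed (assuming I'm remembering the name correctly) is `OPTIMISTIC_ABOUT_FILE_LOCKING` in `$SPLUNK_HOME/etc/splunk-launch.conf`:

```
# $SPLUNK_HOME/etc/splunk-launch.conf
# Skips splunkd's startup file-locking (locktest) check on the index
# filesystem. Unsupported territory, per the warning above.
OPTIMISTIC_ABOUT_FILE_LOCKING = 1
```

Restart splunkd after adding it. Again: this only bypasses the check, it doesn't make the configuration supported.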
Obviously I can't definitively say if or when we will restore support for ZFS on Linux, but I can say we are working on it.
I currently run 14 indexers over ~250 spinning disks with raidz2.
I've been running splunk over zfs for over 3 years.
No problems, in fact I initially ran indexers with several different backends and zfs was the most stable.
My main indexer is a 1U supermicro box with a cost of ~5k.
The box has 12 spinning drives (used Hitachi Ultrastars) in raidz2, plus two SSDs as boot disks. Then add a PCIe NVMe drive (roughly 2-4 GB/s) as your cache device. I'd recommend altering ZFS's default settings so it's more aggressive about using the NVMe cache.
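A sketch of what that cache setup and tuning could look like. The pool and device names are placeholders, and the values are illustrative, not a recommendation; the `l2arc_*` module parameters do exist in ZFS on Linux, but tune them for your own workload:

```shell
# Attach the NVMe drive to the pool as an L2ARC cache device
zpool add tank cache /dev/nvme0n1

# Let the L2ARC fill more aggressively than the (conservative) defaults
echo 268435456 > /sys/module/zfs/parameters/l2arc_write_max    # 256 MiB/s steady fill rate
echo 536870912 > /sys/module/zfs/parameters/l2arc_write_boost  # extra rate while the cache warms up
echo 0         > /sys/module/zfs/parameters/l2arc_noprefetch   # also cache prefetched (streaming) reads
```

The `echo` settings don't survive a reboot; put them in `/etc/modprobe.d/zfs.conf` as module options if you want them persistent.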
Get at least 12 cores to stop Enterprise Security from complaining about having fewer than 12.
We run a fairly good chunk of our installation on RAIDZ2: 288 TB raw capacity across 12 indexers so far, which will be doubled soon. Not one issue so far, aside from discovering that option, which was needlessly difficult.
As far as performance goes, since I had 12 identical machines, I ran a bunch of benchmarks comparing 3x8 hardware RAID arrays (mirror striped across all three, as well as a plain linear array), various md(4) and LVM setups, and ZFS mirrors and various RAIDZ layouts. All drives are Crucial 1 TB SSDs (M500, I think; I don't believe we got the M600 here). In general, md(4) sucked massively. LVM fared a bit better, but was still blown out of the water by ZFS. Hardware RAID only pulled ahead on reads, and fell massively behind on writes pretty much across the board. The only place ZFS really "failed" was a for-the-hell-of-it 24-drive RAIDZ3, which had the worst performance of all 🙂
In the end, we went with 3 x 8-wide RAIDZ2, because that hit a sweet spot between performance and redundancy.
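For anyone wanting to replicate that layout, a 3 x 8-wide RAIDZ2 pool looks something like this (pool name and device names are placeholders for your own disks):

```shell
# Three 8-disk raidz2 vdevs striped into one pool;
# ZFS spreads writes across all three vdevs
zpool create splunkpool \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
  raidz2 sdq sdr sds sdt sdu sdv sdw sdx
```

With 24 drives this leaves 18 drives' worth of usable capacity and tolerates up to two failed disks in each vdev.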