Reading up on EFS, it seems ideally designed for a tool such as Splunk; since it is elastic, you can easily eliminate any disk space issues.
Specifically, the AWS site says:
It’s designed for high availability and durability, and provides performance for a broad spectrum of workloads and applications, including Big Data and analytics, media processing workflows, content management, web serving, and home directories.
I am interested to hear views on using it. Has anybody attempted it?
Hello @swilsonGresham, I just came across this question while looking for very similar information. Right now I have designed my Splunk deployment in AWS to use EBS with regular IOPS, and I expect around 50 GB/day of data. I only started considering EFS today, but I wanted to see what the community's experience has been, since @dwaddle pointed out that EFS is NFS v4, which gives you SSD-backed storage but at a fairly low IOPS level. I am not sure how my Splunk instance would behave on it, so if you have used EFS and would like to share your thoughts, that would be awesome.
It might do well for cold buckets. But even then I'm not really sure.
From Amazon's FAQs on the product:
All file systems deliver a consistent baseline performance of 50 MB/s per TB of storage, all file systems (regardless of size) can burst to 100 MB/s, and file systems larger than 1TB can burst to 100 MB/s per TB of storage.
When you have multiple Splunk nodes hitting an EFS volume in parallel, and the aggregate burst throughput is 100 MB/s per TB shared across all of your indexers, you need to do the math. Given the size of your volume and the number of indexers, make sure you don't create a bottleneck.
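The math above is simple enough to sketch. This is just a back-of-the-envelope check using the burst figure from the FAQ quoted above; the function name and the example volume/indexer counts are made up for illustration:

```python
def per_indexer_throughput_mbps(volume_tb, indexer_count):
    """Aggregate EFS burst throughput (100 MB/s per TB, per the AWS FAQ)
    divided evenly across the indexers sharing the volume."""
    aggregate_mbps = 100.0 * volume_tb   # MB/s available to the whole volume
    return aggregate_mbps / indexer_count

# Example: a 2 TB EFS volume shared by 8 indexers leaves 25 MB/s each,
# which may be well below what a busy indexer needs for searches.
print(per_indexer_throughput_mbps(2, 8))  # -> 25.0
```

If that per-indexer number is lower than your expected sustained indexing plus search I/O, EFS will be the bottleneck.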
December 2018 update: Some folks have tried to use EFS for certain Splunk architectures and it has not gone well. As @treinke discovered, use of EFS combined with Splunk Clustering has some distinct issues that make the cluster non-operative. Proceed with caution.
The problem with EFS is that it is mounted as a single large drive. The size? 8.0E. What I was seeing was that the available size was large enough to cause a negative wrap on the value. This caused Splunk to report the available size as a negative number and shut down because the available storage was "too low". I opened a support ticket (950970) on this issue.
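The wrap is easy to reproduce: 8 EiB is 2**63 bytes, exactly one past the maximum of a signed 64-bit integer, so anything that stores the free-space figure in a signed 64-bit field flips negative. This is just an illustration of the arithmetic, not Splunk's actual code:

```python
import ctypes

# EFS advertises an effectively unlimited size of 8.0 EiB.
free_bytes = 8 * 2**60            # 8 EiB = 2**63 bytes

# Interpreting that count as a signed 64-bit integer wraps it negative,
# which is the negative "available space" value Splunk was reporting.
wrapped = ctypes.c_int64(free_bytes).value

print(free_bytes)                 # 9223372036854775808
print(wrapped)                    # -9223372036854775808
```

A negative available-space value then trips Splunk's minimum-free-disk check, which is why the instance shut itself down.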