Knowledge Management

Best practices for installing Splunk with a NAS

olivier120987
New Member

Hey there, I want to install Splunk (standalone) on one machine that has a NAS drive mounted. I know best practices say I should install Splunk, or at least keep my indexes, on /opt/ for performance reasons.

I have 5.6G free on /opt/, which could lead to disk space issues, and 5T on a mounted NAS partition.

I've installed it on /mounted_NAS/ for now, but I wanted to double-check with you whether it would be better to do this differently. For example, I could install everything on /opt/ (5G) and keep my indexes on /mounted_NAS/ (5T).

Or I could just leave everything installed on /mounted_NAS.

Worst case, I could also repartition my disk and grant more space to /opt/ (10-15G), but I'd like to avoid redoing this.

Thoughts?
Thank you

1 Solution

renems
Communicator

As martin_mueller already mentioned, your performance on the indexing side greatly depends on how fast your storage is. In my experience, a NAS is nine times out of ten not the fastest storage available.

However: most searches run in Splunk cover the last 24 hours or so, and newly written data is also recent. So it's the hot/warm buckets in particular that have the largest impact (by far).

My recommendation would be to store your hot/warm buckets on local storage (/opt?), assuming that is the fastest storage you have available, and store the rest on your NAS. An easy way to achieve this is by using volumes in your indexes.conf; the Splunk documentation explains this in a little more detail.
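The volume-based split described above looks roughly like this in indexes.conf. This is a minimal sketch: the paths, size limits, and the index name `my_index` are assumptions for illustration, so adjust them to your environment.

```ini
# Fast local storage for hot/warm buckets (assumed path on /opt)
[volume:fast]
path = /opt/splunk/hot
maxVolumeDataSizeMB = 4000

# Large NAS storage for cold buckets (assumed mount point)
[volume:nas]
path = /mnt/nas/splunk
maxVolumeDataSizeMB = 4000000

# Example index that writes hot/warm data to the fast volume
# and rolls cold buckets to the NAS volume
[my_index]
homePath   = volume:fast/my_index/db
coldPath   = volume:nas/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

Note that thawedPath must be a literal filesystem path; only homePath and coldPath can use the `volume:` reference syntax.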



martin_mueller
SplunkTrust
SplunkTrust

If you're looking for anything closely resembling performance, you'll want fast, large disks for your indexes.
If you're just playing around, any disk will usually do.
Where in the filesystem you put your indexes (/opt, /foo, etc.) doesn't matter; what matters is the IO below that.
Whether 10-15G would be enough depends on what you're doing with that instance.

TL;DR: It depends.
