
Ingest Actions Output Announces Support for File System Destinations

by Splunk Employee on ‎08-06-2024 08:35 AM - last edited on ‎03-13-2025 01:44 PM by Community Manager

Introduction


As part of Splunk Enterprise 9.3, we are proud to announce the general availability of the File System Destination for Ingest Actions. In customer interviews and surveys, local and network file systems were the most requested Ingest Actions external output destination after the currently available offerings. With this release, customers have even more control over where their data lives, enabling Splunk Admins to flexibly optimize their storage options for different data tiers.

101: How do I access and use File System Destination?


Accessing this capability is easy for any customer on a 9.3.0+ version of Splunk Enterprise. First, navigate to Settings > Ingest Actions as you normally would. In the Destinations tab, you’ll notice a new option called “File System” when creating a new destination. In this release, Splunk supports up to eight (8) file system destinations on top of the already supported eight (8) Amazon S3 locations, for a total of 16 external output destinations!


In the GUI wizard, enter a ‘Destination Title’ and the ‘Directory Path’ that events will be written to, and adjust any of the default settings as necessary. For more information on any of the destination settings, check out the RFS Output stanza in Splunk’s outputs.conf spec.
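If you manage configurations directly, the destination created in the wizard is stored as an RFS output stanza in outputs.conf. The snippet below is a minimal, illustrative sketch only: it assumes the rfs:<name> stanza form and a file:// style path, the destination name, directory, and size value are placeholders, and the authoritative setting names and defaults are in the outputs.conf spec referenced above.

# outputs.conf (illustrative sketch; verify setting names against the spec)
[rfs:my_file_destination]
path = file:///lorem/ipsum/fs_local
# Keep appending to the current output file until it reaches this size (placeholder value)
appendToFileUntilSizeMB = 16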


Events are continuously appended to the current output file until it reaches the size specified by the appendToFileUntilSizeMB setting. This setting is adjustable in the destination configuration UI as well as directly in the backend .conf file. Files are partitioned by day, and within each day folder, events are further separated by hour.
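To picture that partitioning, the layout under a destination directory looks roughly like the sketch below. Treat it as illustrative only: the day and hour groupings come from the description above, but the exact folder and file naming is determined by Splunk and may differ from what is shown here.

/lorem/ipsum/fs_local/
    2024-08-06/    <- one folder per day
        (output files for hour 00, hour 01, ...)
    2024-08-07/
        ...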


Because events are formatted in the same way as Ingest Actions Amazon S3 output (newline-delimited JSON with, at minimum, the default fields emitted), you can selectively upload data to an accessible Amazon S3 location, without any additional ETL steps, and access it using Splunk Federated Search for Amazon S3!
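For reference, each line in an output file is a single JSON object representing one event. The record below is purely illustrative: the keys shown (host, source, sourcetype, a timestamp, and the raw event text) stand in for the default fields, and the actual key names and any additional fields are defined by the Ingest Actions output format rather than by this example.

{"host": "web-01", "source": "/var/log/nginx/access.log", "sourcetype": "nginx:access", "time": "1722955200", "event": "10.0.0.1 - - [06/Aug/2024:08:35:00 +0000] \"GET / HTTP/1.1\" 200 512"}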


201: What if I want to mount a file system with NFS?


If you want to mount a file system over the network to enable distributed file sharing, use NFS (Network File System) v4.1 or higher, provisioned with read/write access. The steps below use Ubuntu (apt) packages, and the IP addresses and directory paths are placeholders to replace with your own.

Step 1: Set up the NFS Server

1.1)  Switch to the root account to make system changes.

sudo su

1.2) Install the necessary packages for the NFS server.

sudo apt update

sudo apt install nfs-kernel-server

1.3) Create an output directory on the server.

sudo mkdir -p /lorem/ipsum/nfs_server

1.4) Change the directory ownership so that NFS clients can read/write to the folder of your choice.

sudo chown nobody:nogroup /lorem/ipsum/nfs_server

1.5) Open the exports file and grant the client IP access to the server directory of interest by adding the following entry.

sudo vi /etc/exports

/lorem/ipsum/nfs_server 10.224.24.5(rw,sync,no_subtree_check,no_root_squash,all_squash,insecure)

1.6) Restart the NFS server to apply the updated settings.

sudo systemctl restart nfs-kernel-server
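Optionally (this check is not part of the original steps), confirm that the directory is exported with the options you configured:

sudo exportfs -v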

1.7) Check the firewall status to see whether it is enabled and what traffic is allowed. If the firewall is disabled, there is no need to do step 1.8. If the firewall is enabled and NFS traffic from the client to the server is not allowed, then allow it by following step 1.8.

sudo apt install ufw

sudo ufw status

1.8) Open port 2049 on the server to allow NFS traffic from the client IP.

sudo ufw allow from 10.224.24.5 to any port nfs


Step 2: Set up the NFS Client

2.1)  Switch to the root account to make system changes.

sudo su

2.2) Install the necessary packages for the NFS client.

sudo apt update

sudo apt install nfs-common

2.3) Create the NFS client mount directory.

sudo mkdir -p /lorem/ipsum/fs_local

2.4) Mount the server's exported directory onto the client directory.

sudo mount 10.224.58.143:/lorem/ipsum/nfs_server /lorem/ipsum/fs_local
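Optionally (these commands are not part of the original steps), verify that the mount succeeded and, if you want it to persist across reboots, add an /etc/fstab entry; the nfs4 type and options shown are common choices, so adjust them for your environment.

findmnt /lorem/ipsum/fs_local

df -h /lorem/ipsum/fs_local

Example /etc/fstab entry for remounting automatically at boot (illustrative):

10.224.58.143:/lorem/ipsum/nfs_server /lorem/ipsum/fs_local nfs4 defaults,_netdev 0 0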

And that’s it! Set up a new file system destination in Ingest Actions with the NFS client path (e.g., /lorem/ipsum/fs_local). Any rulesets that write to that local client path should now feed into the NFS server directory as well.
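As a quick end-to-end sanity check (optional, and not part of the original steps), you can write a test file through the client mount and confirm it appears in the server’s exported directory before routing production data through the destination.

On the client:

echo "ingest actions nfs test" | sudo tee /lorem/ipsum/fs_local/nfs_write_test.txt

On the server:

ls -l /lorem/ipsum/nfs_server/nfs_write_test.txt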


Conclusion


This new feature further enables data optimization and data tiering use cases for on-prem and hybrid customers by expanding choice in output destinations. We hope you take advantage of this new capability and share any feedback or feature requests with us. Happy Splunking!

Curious about Ingest Actions capabilities and best practices?
