Greetings everyone. I'm working to aggregate .csv data from a number of sources. Initially it's just a few devices, but the number will grow to millions once the project is complete.
For now, I just need to get our test lab working with some essential infrastructure equipment. All of the equipment is configured to regularly export .csv files via FTP. I'd like to set up a directory on my test server to receive these files and have Splunk monitor that directory. I'm pretty sure this is possible, but it leads to my next question.
If I have numerous devices all dumping files into the same directory, how does Splunk tell which data came from which device?
I'd suggest setting up sub-directories underneath the main directory, one for each system dumping its .csv files there.
That way, when you configure Splunk to ingest those .csv files, it can extract the host value from the sub-directory name using the "Segment on path" option in the input setup.
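As a rough sketch, the drop directories could be created like this (the directory and device names below are just placeholder assumptions, not anything from your setup):

```shell
# Hypothetical layout: one sub-directory per device under the drop directory,
# so the device name becomes a path segment Splunk can use as the host.
mkdir -p csv_drop/switch01 csv_drop/router01 csv_drop/firewall01
ls csv_drop
```

Each device's FTP export would then target its own sub-directory.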
You can get more information here: http://www.splunk.com/base/Documentation/latest/Admin/Setadefaulthostforaninput
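For example, following the default-host approach from that doc, inputs.conf can pin an explicit host per monitored sub-directory. This is only a sketch; the paths, host names, and sourcetype are assumptions for illustration:

```
# One monitor stanza per device directory, each with an explicit host
[monitor:///data/csv_drop/router01]
host = router01
sourcetype = csv

[monitor:///data/csv_drop/switch01]
host = switch01
sourcetype = csv
```

This is more typing than a single stanza, but it makes each device's host assignment explicit.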
Thank you for your help; I really appreciate it. Both of these solutions work, and I'm going to set up a hierarchical structure just to keep things organized. Thanks!
Alternatively, you can use host_segment in inputs.conf to assign hostnames.
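A minimal sketch of what that might look like (the drop-directory path and segment index here are assumptions for illustration):

```
# Monitor the whole drop directory and take the host value from the
# third path segment, e.g. /data/csv_drop/router01/stats.csv -> host=router01
[monitor:///data/csv_drop]
host_segment = 3
sourcetype = csv
```

With this approach a single stanza covers every device, as long as each one writes into its own sub-directory.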
Take a look at these doc entries also: