Deployment Architecture

Ingest a large number of CSV files

msplunk33
Path Finder

I have a backlog of a huge number of .csv files that were skipped by the UF and need to be ingested manually to backfill. What is the easiest and best method? If I manually ingest from the search head, will the transforms.conf and props.conf on the HF and indexers take effect?


ekenne06
Path Finder

Check out this article:

https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtogetdatap...

particularly the section "To add data directly to an index".

You could write a quick Python script to ingest the various CSV files you have. That sends the data through your HF/indexers, so your transforms are applied properly. However, I'm fairly sure uploading data on the search head applies them as well.
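As a starting point, here is a minimal sketch of such a script. Rather than the SDK, it posts each file to Splunk's REST `receivers/simple` endpoint using only the standard library; the host, credentials, index/sourcetype names, and backlog directory are all assumptions to adjust for your environment, not values from this thread.

```python
# Sketch: backfill CSV files via Splunk's REST "receivers/simple" endpoint.
# SPLUNK_HOST, credentials, index, sourcetype, and the backlog directory
# below are placeholder assumptions -- change them for your deployment.
import base64
import glob
import urllib.parse
import urllib.request

SPLUNK_HOST = "https://localhost:8089"   # management port of an HF or indexer
AUTH = base64.b64encode(b"admin:changeme").decode()

def build_upload_url(base, index, sourcetype, source):
    """Compose the receivers/simple URL with its query parameters."""
    params = urllib.parse.urlencode(
        {"index": index, "sourcetype": sourcetype, "source": source}
    )
    return f"{base}/services/receivers/simple?{params}"

def upload_file(path, index="csv_index", sourcetype="csv"):
    """POST one CSV file's raw contents to Splunk (raises on HTTP errors)."""
    url = build_upload_url(SPLUNK_HOST, index, sourcetype, path)
    with open(path, "rb") as fh:
        req = urllib.request.Request(url, data=fh.read(), method="POST")
    req.add_header("Authorization", f"Basic {AUTH}")
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    # Push every backlogged CSV through the indexing pipeline, one at a time.
    for csv_path in glob.glob("/data/backlog/*.csv"):
        upload_file(csv_path)
```

Because the data enters through the indexer's management port, it passes through the parsing pipeline, so props/transforms should apply as they would for forwarded data.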

 

You can also use the oneshot command:

/opt/splunk/bin/splunk add oneshot "C:\csv\test.csv" -sourcetype csv -index csv_index -source test -auth admin:changeme

msplunk33
Path Finder

/opt/splunk/bin/splunk add oneshot "C:\csv\*.csv" -sourcetype csv -index csv_index -source test -auth admin:changeme

Can I use *.csv to upload all the .csv files in that folder in one oneshot command?


ekenne06
Path Finder

Yes, that will capture all the .csv files and process them in the oneshot.
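If the wildcard gives you trouble (a quoted pattern is not expanded by the shell), a per-file loop is a safe alternative. The sketch below only prints the commands for review rather than executing them; the binary path, backlog directory, index, and sourcetype are assumptions to adjust for your environment.

```shell
# Sketch: build one "add oneshot" command per backlogged CSV file.
# SPLUNK_BIN and BACKLOG_DIR are placeholder assumptions.
SPLUNK_BIN="/opt/splunk/bin/splunk"
BACKLOG_DIR="/data/backlog"

build_oneshot_cmd() {
    # Compose the oneshot command for a single file ($1). Printed, not
    # executed, so you can inspect it first.
    printf '%s add oneshot "%s" -sourcetype csv -index csv_index -source "%s"\n' \
        "$SPLUNK_BIN" "$1" "$1"
}

for f in "$BACKLOG_DIR"/*.csv; do
    [ -e "$f" ] || continue       # glob matched nothing; skip
    build_oneshot_cmd "$f"        # pipe the output to `sh` when satisfied
done
```

Printing first also gives you a record of exactly which files were backfilled.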
