
Ingest a large number of CSV files

msplunk33
Path Finder

I have a backlog of a huge number of .csv files that were skipped by the UF and need to be ingested manually to backfill. What is the easiest and best method? If I manually ingest from the search head, will the transforms.conf and props.conf on the HF and indexers take effect?


ekenne06
Path Finder

Check out this article:

https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtogetdatap...

particularly the section "To add data directly to an index".

You could write a quick Python script to ingest the various CSV files you have. This should go through your HF/indexers, so your transforms are applied properly. That said, I'm pretty sure uploading data on the search head would apply them as well.
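
For example, here is a minimal sketch of that approach using the Splunk SDK for Python (splunklib); the host, credentials, index name, and the /data/csv backlog directory are all placeholder assumptions to adjust for your environment:

```python
# Minimal sketch -- assumes splunklib is installed and the host,
# credentials, index, and backlog directory below are placeholders.
import glob
import splunklib.client as client

service = client.connect(
    host="localhost",   # assumed: the HF or indexer you want to send through
    port=8089,          # management port
    username="admin",
    password="changeme",
)

index = service.indexes["csv_index"]  # assumed target index

for path in glob.glob("/data/csv/*.csv"):  # assumed backlog directory
    with open(path) as f:
        # submit() posts the raw data to the instance you connected to,
        # so its index-time props/transforms still apply
        index.submit(f.read(), sourcetype="csv", source=path)
```

Connecting to the HF or an indexer (rather than the search head) makes sure the data passes through the instance that actually holds your props/transforms.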


You can also use the oneshot command:

/opt/splunk/bin/splunk add oneshot "/data/csv/test.csv" -sourcetype csv -index csv_index -source test -auth admin:changeme

msplunk33
Path Finder

/opt/splunk/bin/splunk add oneshot "/data/csv/*.csv" -sourcetype csv -index csv_index -source test -auth admin:changeme

Can I use *.csv to upload all the .csv files in this folder in one oneshot?


ekenne06
Path Finder

Yes, that will capture all the .csv files and process them in a single oneshot.
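
If the quoted wildcard isn't expanded in your environment, a small script that loops over the backlog and runs one oneshot per file gets the same result. A minimal sketch, assuming Splunk lives at /opt/splunk and the files sit in /data/csv (both placeholders):

```python
# Minimal sketch -- /opt/splunk and /data/csv are assumed placeholder paths.
import glob
import subprocess

for path in sorted(glob.glob("/data/csv/*.csv")):
    # One oneshot invocation per file; omitting -source lets Splunk
    # default the source field to the file's path.
    subprocess.run(
        ["/opt/splunk/bin/splunk", "add", "oneshot", path,
         "-sourcetype", "csv", "-index", "csv_index",
         "-auth", "admin:changeme"],
        check=True,
    )
```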
