Ingest a large number of CSV files

msplunk33
Path Finder

I have a backlog of a huge number of .csv files that were skipped by the UF and need to be ingested manually to backfill. What is the easiest and best method? If I manually ingest from the search head, will the transforms.conf and props.conf on the HF and indexers take effect?


ekenne06
Path Finder

Check out this article:

https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtogetdatap...

particularly the section "To add data directly to an index".

You could write a quick Python script to ingest the various CSV files you have. This should go through your HF/indexers, so your transforms are applied properly. That said, I'm pretty sure uploading data from the search head would apply them as well.
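
For illustration, a minimal sketch of that approach using the Splunk Python SDK (splunklib). The host and credentials are placeholders, and the file path is hypothetical; csv_index and the csv sourcetype match the oneshot example below.

import splunklib.client as client

# Connect to the management port of the instance that does the parsing (HF/indexer).
service = client.connect(
    host="splunk-hf.example.com",  # placeholder host
    port=8089,
    username="admin",       # placeholder credentials
    password="changeme",
)

# Upload one file into the target index via the oneshot input.
# The file must be readable from the Splunk server itself.
index = service.indexes["csv_index"]
index.upload("/data/csv/test.csv", sourcetype="csv")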

 

You can also use the oneshot command:

/opt/splunk/bin/splunk add oneshot "/data/csv/test.csv" -sourcetype csv -index csv_index -source test -auth admin:changeme

msplunk33
Path Finder

/opt/splunk/bin/splunk add oneshot "/data/csv/*.csv" -sourcetype csv -index csv_index -source test -auth admin:changeme

Can I use *.csv to upload all the .csv files in this folder in one oneshot command?


ekenne06
Path Finder

Yes, that will capture all the .csv files and process them in one oneshot run.
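
If the quoted wildcard isn't expanded the way you expect on a given platform, looping over the files and running one oneshot per file does the same job and gives you per-file error handling. A minimal sketch in Python, reusing the placeholder paths and credentials from the commands above:

import glob
import subprocess

SPLUNK = "/opt/splunk/bin/splunk"

# One oneshot call per file; -source is omitted so each file keeps its
# own path as the source. check=True raises if a file fails to load,
# so wrap the call in try/except if you want to skip bad files and continue.
for path in sorted(glob.glob("/data/csv/*.csv")):
    subprocess.run(
        [SPLUNK, "add", "oneshot", path,
         "-sourcetype", "csv",
         "-index", "csv_index",
         "-auth", "admin:changeme"],
        check=True,
    )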
