I have a backlog of a huge number of .csv files that were skipped by the UF and need to be ingested manually to backfill. What is the easiest and best method? If I manually ingest from the search head, will the transforms.conf and props.conf on the HF and indexers take effect?
Check out this article, particularly the section "To Add data directly to an index".
You could write a quick Python script to ingest the various CSV files you have. This should go through your HF/indexers, so your transforms are applied properly. However, I'm fairly sure uploading data on the search head would apply them as well.
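A minimal sketch of such a script, assuming a default Linux install path for the splunk binary and a hypothetical backlog folder, index name, and sourcetype (adjust all of these for your environment; authentication flags are omitted here):

```python
#!/usr/bin/env python3
"""Sketch: backfill a folder of .csv files via `splunk add oneshot`."""
import glob
import subprocess

SPLUNK_BIN = "/opt/splunk/bin/splunk"  # assumed install path
CSV_DIR = "/data/backfill"             # hypothetical backlog folder
INDEX = "csv_index"                    # assumed target index
SOURCETYPE = "csv"                     # assumed sourcetype

def build_oneshot_cmd(path, splunk_bin=SPLUNK_BIN,
                      index=INDEX, sourcetype=SOURCETYPE):
    """Return the argv list for one `splunk add oneshot` invocation."""
    return [splunk_bin, "add", "oneshot", path,
            "-sourcetype", sourcetype, "-index", index]

def backfill(csv_dir=CSV_DIR):
    """Ingest every .csv file in csv_dir, one oneshot call per file."""
    for csv_path in sorted(glob.glob(f"{csv_dir}/*.csv")):
        # Parsing-time props/transforms on the instance that indexes
        # the file still apply, since oneshot goes through the normal
        # ingestion pipeline.
        subprocess.run(build_oneshot_cmd(csv_path), check=True)

if __name__ == "__main__":
    backfill()
```

Running each file as its own oneshot call (rather than one wildcard) also makes it easy to log or retry individual files that fail.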
You can also use the oneshot command:
/opt/splunk/bin/splunk add oneshot "C:\csv\test.csv" -sourcetype csv -index csv_index -source test -auth admin:changeme
/opt/splunk/bin/splunk add oneshot "C:\csv\*.csv" -sourcetype csv -index csv_index -source test -auth admin:changeme
Can I use *.csv to upload all the .csv files in this folder with oneshot?
Yes, that will capture all the .csv files and process them with oneshot.