Hi,
I have a few queries regarding data ingestion from a .csv file. I am interested in knowing the following:
1. What is the best way to bring data from a .csv file into Splunk?
2. Are there any prerequisites to be satisfied before indexing a .csv file?
3. Are there any limitations when indexing data from a .csv file?
4. Are there any restrictions on indexing data from a .csv file (maximum file size allowed, maximum rows or columns in a single .csv file, maximum number of files allowed, etc.)?
5. Is there any Splunk documentation available about this? If so, please share the link.
Thanks much!
Thanks Giuseppe!
Hi @ramganeshn,
glad it worked, see you next time!
If this answer solves your need, please accept it so other Community members can find it, or tell me how I can help you more.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉
Hi @ramganeshn,
csv files are among the most common data sources in Splunk!
Answering your questions:
1)
see the monitor files and directories input, and for your csv files use the built-in sourcetype "csv", or your own custom sourcetype with the option INDEXED_EXTRACTIONS=CSV;
if you ingest one csv file using the guided UI procedure [Settings -- Add Data -- Upload files], the wizard helps you with the sourcetype definition.
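As a minimal sketch of the two stanzas involved (the monitored path, index name, and sourcetype name below are placeholder examples, not values from this thread):

```ini
# inputs.conf -- monitor a directory of csv files (path and index are example values)
[monitor:///var/log/myapp/*.csv]
index = main
sourcetype = my_csv

# props.conf -- custom sourcetype with structured extraction at index time
[my_csv]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
```

With INDEXED_EXTRACTIONS=CSV the column headers become indexed fields, so they are available at search time without further extraction.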
2)
there isn't any prerequisite; my only hint is to avoid field names with spaces or special characters,
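If your csv headers do contain spaces or special characters, a small pre-processing script can clean them before ingestion. A minimal sketch in Python (the function name and the underscore convention are my own choices, not anything Splunk requires):

```python
import csv
import re

def sanitize_headers(in_path: str, out_path: str) -> None:
    """Rewrite a CSV so its header row contains only letters, digits, and underscores."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)
        # Collapse runs of spaces/special characters into a single underscore,
        # then trim any leading/trailing underscores left over.
        writer.writerow(re.sub(r"\W+", "_", name).strip("_") for name in header)
        # Data rows pass through unchanged.
        writer.writerows(reader)
```

For example, a header like `First Name,Amount ($)` becomes `First_Name,Amount`, which Splunk can use directly as field names.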
3)
there isn't any limitation beyond those already present in the csv file itself, except maybe the 500 MB upload limit, but a 500 MB csv file is a mad idea!
4)
as for question 2, there isn't any restriction; my only hint is to avoid field names with spaces or special characters,
5)
https://hurricanelabs.com/splunk-tutorials/ingesting-a-csv-file-into-splunk/
https://docs.splunk.com/Documentation/Splunk/latest/Data/Getstartedwithgettingdatain
Ciao.
Giuseppe