I am currently adding a new CSV file every day as a new index in Splunk for a requirement.
The issue: the event timestamp is being set to the time we upload the CSV to Splunk, whereas we want to use the first field in the CSV (for example, the field reportrundt) as the event timestamp.
Format for reportrundt is:
However, while setting up the new index based on this CSV file, Splunk assigns the event timestamp from the date-time of upload and NOT from the field (reportrundt) that we want Splunk to base its event timestamp on...
How are you setting up this input?
If you are using INDEXED_EXTRACTIONS = CSV in props.conf, then the way to specify the timestamp is
[yoursourcetypehere]
INDEXED_EXTRACTIONS = CSV
TIMESTAMP_FIELDS = reportrundt
TIME_FORMAT = %d.%m.%Y %H:%M:%S
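TIME_FORMAT uses strptime-style conversion specifiers, so you can sanity-check the format string against a sample value outside Splunk before reloading any configs. A quick sketch in Python (the sample timestamp below is hypothetical; substitute a real reportrundt value from your CSV):

```python
from datetime import datetime

# Hypothetical sample value for reportrundt; replace with a real value from your CSV.
sample = "05.03.2024 14:30:00"

# The same strptime-style string as the TIME_FORMAT setting.
fmt = "%d.%m.%Y %H:%M:%S"

# If this raises ValueError, the TIME_FORMAT string does not match the data.
parsed = datetime.strptime(sample, fmt)
print(parsed.isoformat())  # 2024-03-05T14:30:00
```

If the parse fails here, it would also fail in Splunk, so this catches a mismatched format string cheaply.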
Although I am not positive that Splunk will actually honor the TIME_FORMAT setting in this case...
If you are not using indexed extractions, then props.conf will be different. If you could show the header line (and maybe one line of the data, obfuscated), that would really help. But even this much might work:
[yoursourcetypehere]
TIME_FORMAT = %d.%m.%Y %H:%M:%S
Personally, I tend to avoid using indexed extractions, and would do this instead:
# in props.conf
[yoursourcetypehere]
TIME_FORMAT = %d.%m.%Y %H:%M:%S
REPORT-ext-fields = extract-CSV-fields

# in transforms.conf
[extract-CSV-fields]
DELIMS = ","
FIELDS = fieldName1, fieldName2, fieldName3
# copied from the CSV file heading, with quotation marks as needed
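To illustrate roughly what the DELIMS/FIELDS pair asks Splunk to do at search time, here is a simplified Python sketch. The field names are the placeholders from the stanza above, and this is only an approximation of Splunk's actual delimiter-based extraction:

```python
import csv
import io

# Placeholder field names matching the FIELDS setting; use your CSV header names.
FIELDS = ["fieldName1", "fieldName2", "fieldName3"]

def extract_fields(raw_event: str) -> dict:
    """Split one raw CSV event on commas and pair the values with FIELDS,
    roughly what the [extract-CSV-fields] stanza asks Splunk to do."""
    # csv.reader handles quoted values that contain commas, which a plain
    # str.split(",") would break on.
    values = next(csv.reader(io.StringIO(raw_event)))
    return dict(zip(FIELDS, values))

print(extract_fields('a,"b,with comma",c'))
# {'fieldName1': 'a', 'fieldName2': 'b,with comma', 'fieldName3': 'c'}
```

Because this happens at search time, you can fix a wrong FIELDS list without re-indexing the data, which is one reason to prefer it over indexed extractions.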
What is the rationale for creating a new index every day? I'm tempted to say "this is a really bad idea," because it usually is a really bad idea, but perhaps there is an important reason for doing it this way.