My apologies in advance for having to ask this question again but I did not get a definitive answer my first time.
I am struggling to get timestamp extraction to work for CSV files.
First, a bit about my setup. The CSV files are being processed by a Universal Forwarder and then the data is sent off to the indexer.
Here is the header line and the first line of data from the csv source:
"Estimated","462819316490","050506831222","LineItem","Amazon Elastic Compute Cloud","840814","855132","191235","BoxUsage","RunInstances","us-east-1a","N","$0.065 per M1 Standard Small (m1.small) Linux/UNIX instance-hour (or partial hour)","2012-12-01 00:00:00","2012-12-01 01:00:00","23.00000000","0.0650000000","1.49500000","0.0650000000","1.49500000"
On the universal forwarder, I set a custom sourcetype in props.conf:
[source::/var/log/billing/462819316490-aws-billing-detailed-line-items-2*]
sourcetype = aws-billing-detailed
CHECK_METHOD = endpoint_md5
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_FORMAT = %Y-%m-%d %H:%M:%S
The desired behavior is for Splunk to set the timestamp from the first of the two time columns in the CSV data (i.e., "2012-12-01 00:00:00").
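As a quick sanity check outside of Splunk: the TIME_FORMAT above uses strptime-style directives, and it does parse that column value cleanly (Python shown here purely for illustration; Splunk applies the format internally):

```python
from datetime import datetime

# The first of the two time columns in the sample row
value = "2012-12-01 00:00:00"

# TIME_FORMAT = %Y-%m-%d %H:%M:%S corresponds to these strptime directives
ts = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
print(ts)  # -> 2012-12-01 00:00:00
```

So the format string itself is not the problem; the question is whether Splunk is looking at the right part of the line.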
The problem is that Splunk is still setting the timestamp to the file date.
Any guidance would be greatly appreciated.
Jon
In my case, I'm eliminating the header line using transforms.conf before indexing. Can I still capture the timestamp from each row of the CSV file? Your answer would help me a lot.
Props.conf
TRANSFORMS-eliminate_header = eliminate_header
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
TZ=UTC
TIMESTAMP_FIELDS = Date,Time
HEADER_FIELD_LINE_NUMBER = 1
Transforms.conf
[eliminate_header]
REGEX = "Date"|"Time"|"Action"|"Category Name"|"Localized Country"|"Policy Name"|"User"|"Workstation"|"Domain"|"Protocol"|"Query"
DEST_KEY = queue
FORMAT = nullQueue
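To visualize what that transform does, here is a rough Python sketch of the regex behavior; the header and data lines below are assumptions for illustration, since the actual file isn't shown in the post:

```python
import re

# Abbreviated form of the REGEX from the transform above: it matches any
# line that contains one of the quoted header field names
header_regex = re.compile(r'"Date"|"Time"|"Action"|"Category Name"')

# Hypothetical header and data lines (assumed layout, not from the post)
header = '"Date","Time","Action","Category Name"'
data = '"2012-12-01","00:00:00","Block","Malware"'

print(bool(header_regex.search(header)))  # -> True  (sent to nullQueue, dropped)
print(bool(header_regex.search(data)))    # -> False (kept and indexed)
```

One thing to watch: the regex fires on any line containing one of those quoted names, so a data row whose field value happens to equal a header name would also be dropped.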
Hello JCBrendsel,
I'm trying to add a CSV file to splunk and I'm also getting the same behaviour as you described above. Have you been able to solve it? How?
Thanks, regards,
Marcus
There might not be anything wrong with what you are doing. Remember that Splunk indexes the data just once: if you change props.conf, it will not change existing data, only future data. To change existing data, you will need to remove it from the Splunk index and reindex it.
But here is another way to do things. In my example below, I don't use MAX_TIMESTAMP_LOOKAHEAD = 0. Also, there is no need to set CHECK_METHOD. Finally, I assume that the number of fields (i.e., commas) is always the same.
[source::/var/log/billing/462819316490-aws-billing-detailed-line-items-2*]
sourcetype = aws-billing-detailed
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = (?:.*?,){13}"
MAX_TIMESTAMP_LOOKAHEAD = 20
BTW, I counted 13 commas.
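To see what that TIME_PREFIX actually skips, here is a quick check against the sample line from the question (Python's re is close enough to Splunk's PCRE for this pattern):

```python
import re

# Sample data line from the AWS billing CSV in the question above
line = ('"Estimated","462819316490","050506831222","LineItem",'
        '"Amazon Elastic Compute Cloud","840814","855132","191235",'
        '"BoxUsage","RunInstances","us-east-1a","N",'
        '"$0.065 per M1 Standard Small (m1.small) Linux/UNIX '
        'instance-hour (or partial hour)",'
        '"2012-12-01 00:00:00","2012-12-01 01:00:00","23.00000000",'
        '"0.0650000000","1.49500000","0.0650000000","1.49500000"')

# TIME_PREFIX: skip 13 comma-terminated fields, then the opening quote,
# leaving Splunk positioned right at the start of the timestamp
m = re.match(r'(?:.*?,){13}"', line)
print(line[m.end():m.end() + 19])  # -> 2012-12-01 00:00:00
```

The 19 characters after the prefix are exactly the timestamp, which is why MAX_TIMESTAMP_LOOKAHEAD = 20 is enough headroom.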
Yes, I was aware that Splunk only indexes the file once. In my case, the billing file is downloaded in full each time it is updated, so it is completely reindexed if the md5 has changed.
As for your other changes, they are merely different ways of stating the same thing and don't address the issue. (Just to be sure, I did try them, to no effect.)
I'm still stumped.