How do I timestamp events in the following file based on trigger time and elapsed time?
"File name","AUTO_231126_012051_0329.CSV","V2.10"
"Title comment","T1"
"Trigger Time","'23-11-26 01:20:51.500"
"CH","U1-2","Event"
"Mode","Voltage"
"Range","200mV"
"UnitID",""
"Comment",""
"Scaling","ON"
"Ratio","+1.00000E+02"
"Offset","+0.00000E+00"
"Time","U1-2[]","Event"
+0.000000000E+00,+2.90500E+00,0
+1.000000000E-01,+1.45180E+01,0
+2.000000000E-01,+7.93600E+00,0
+3.000000000E-01,+3.60100E+00,0
+4.000000000E-01,+3.19100E+00,0
+5.000000000E-01,+3.17300E+00,0
+6.000000000E-01,+3.17300E+00,0
+7.000000000E-01,+3.18400E+00,0
+8.000000000E-01,+3.19400E+00,0
+9.000000000E-01,+3.16500E+00,0
+1.000000000E+00,+3.16000E+00,0
Hi @kyokei,
The "Trigger Time" line is unavailable to subsequent events once it's either discarded as a header or broken out into its own event. If you have the ability to manipulate the source file name, you can add the fractional-seconds value to the file name and reference the source when extracting timestamps:
AUTO_231126_012051_500_0329.CSV
With that change made, you can, for example, combine INDEXED_EXTRACTIONS with TRANSFORMS and INGEST_EVAL to extract CSV fields and set _time for each event:
# inputs.conf
[monitor:///path/to/AUTO_*.CSV]
index = main
sourcetype = sensor_csv
# props.conf
[sensor_csv]
# disable default timestamp extraction and suppress errors
DATETIME_CONFIG = CURRENT
# enable indexed extractions for CSV files
INDEXED_EXTRACTIONS = CSV
# use header line 12 for field names:
# "Time","U1-2[]","Event"
# these will be "cleaned" by Splunk:
# Time
# U1_2
# Event
HEADER_FIELD_LINE_NUMBER = 12
# execute a transform to extract the _time value
TRANSFORMS-sensor_csv_time = sensor_csv_time
# transforms.conf
[sensor_csv_time]
INGEST_EVAL = _time:=strptime(replace(source, ".*(AUTO_\\d{6}_\\d{6}_\\d{3}).*", "\\1"), "AUTO_%y%m%d_%H%M%S_%N")+tonumber(coalesce(replace(_raw, "^(?!\")([^,]+),.*", "\\1"), 0))
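To see what the INGEST_EVAL above computes, here is a rough Python sketch of the same logic (the `event_time` helper is hypothetical, not Splunk code): extract the base timestamp from the file name, then add the per-row offset from the first CSV column, defaulting to 0 for header lines.

```python
import re
from datetime import datetime, timezone

def event_time(source: str, raw: str) -> float:
    """Mimic the INGEST_EVAL above (hypothetical helper, not Splunk code):
    base timestamp from the file name plus the per-row Time offset."""
    # Pull AUTO_yymmdd_HHMMSS_fff out of the source path.
    stamp = re.sub(r".*(AUTO_\d{6}_\d{6}_\d{3}).*", r"\1", source)
    # Python's %f right-pads 1-6 digits, so "500" parses as 0.5 s,
    # analogous to %N in the Splunk strptime above.
    base = datetime.strptime(stamp, "AUTO_%y%m%d_%H%M%S_%f").replace(tzinfo=timezone.utc)
    # Header lines start with a quote; data lines start with the Time offset.
    m = re.match(r'^(?!")([^,]+),', raw)
    offset = float(m.group(1)) if m else 0.0
    return base.timestamp() + offset
```

Each data row then lands at trigger time plus its offset, which matches the search results below.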
# search
index=main sourcetype=sensor_csv
| table _time source Time U1_2 Event

# results
_time source Time U1_2 Event
2023-11-26 01:20:52.500 AUTO_231126_012051_500_0329.CSV +1.000000000E+00 +3.16000E+00 0
2023-11-26 01:20:52.400 AUTO_231126_012051_500_0329.CSV +9.000000000E-01 +3.16500E+00 0
2023-11-26 01:20:52.300 AUTO_231126_012051_500_0329.CSV +8.000000000E-01 +3.19400E+00 0
2023-11-26 01:20:52.200 AUTO_231126_012051_500_0329.CSV +7.000000000E-01 +3.18400E+00 0
2023-11-26 01:20:52.100 AUTO_231126_012051_500_0329.CSV +6.000000000E-01 +3.17300E+00 0
2023-11-26 01:20:52.000 AUTO_231126_012051_500_0329.CSV +5.000000000E-01 +3.17300E+00 0
2023-11-26 01:20:51.900 AUTO_231126_012051_500_0329.CSV +4.000000000E-01 +3.19100E+00 0
2023-11-26 01:20:51.800 AUTO_231126_012051_500_0329.CSV +3.000000000E-01 +3.60100E+00 0
2023-11-26 01:20:51.700 AUTO_231126_012051_500_0329.CSV +2.000000000E-01 +7.93600E+00 0
2023-11-26 01:20:51.600 AUTO_231126_012051_500_0329.CSV +1.000000000E-01 +1.45180E+01 0
2023-11-26 01:20:51.500 AUTO_231126_012051_500_0329.CSV +0.000000000E+00 +2.90500E+00 0
The above response assumes Time is an absolute offset from Trigger Time and not the interval between samples.
You can also extract the date and time from the source file name using a custom datetime.xml configuration, but INGEST_EVAL is easier to maintain.
Hi @kyokei,
use the Add Data function, which you can find at [Settings > Add Data]; it guides you through the correct sourcetype configuration.
For the timestamp, if you want to use the Trigger Time as the timestamp, you could use:
[your_sourcetype]
TIME_PREFIX = \"Trigger Time\",\"'
TIME_FORMAT = %y-%m-%d %H:%M:%S.%3N
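As a sanity check, TIME_PREFIX is a regex matched immediately before the timestamp text, so against the "Trigger Time" line it should leave exactly the timestamp for TIME_FORMAT to parse. A rough Python preview of that behavior (Splunk's %3N has no direct Python equivalent; %f is close enough here):

```python
import re
from datetime import datetime

line = '"Trigger Time","\'23-11-26 01:20:51.500"'
# The prefix regex consumes everything up to the timestamp text.
m = re.search(r'"Trigger Time","\'', line)
stamp = line[m.end():].rstrip('"')
# %f accepts 1-6 fractional digits, so ".500" parses as half a second.
parsed = datetime.strptime(stamp, "%y-%m-%d %H:%M:%S.%f")
```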
Ciao.
Giuseppe
Hi @gcusello ,
With your help I am able to extract the timestamp,
_time = 23-11-26 01:20:51.500 AM, but _time is the same for each event.
23-11-26 01:20:51.500 AM, +0.000000000E+00,+2.90500E+00,0
23-11-26 01:20:51.500 AM,+1.000000000E-01,+1.45180E+01,0
23-11-26 01:20:51.500 AM,+2.000000000E-01,+7.93600E+00,0
23-11-26 01:20:51.500 AM,+3.000000000E-01,+3.60100E+00,0
23-11-26 01:20:51.500 AM,+4.000000000E-01,+3.19100E+00,0
23-11-26 01:20:51.500 AM,+5.000000000E-01,+3.17300E+00,0
How can I achieve the format below during data ingest?
23-11-26 01:20:51.500 AM, +2.90500E+00,0
23-11-26 01:20:51.600 AM, +1.45180E+01,0
23-11-26 01:20:51.700 AM, +7.93600E+00,0
23-11-26 01:20:51.800 AM, +3.60100E+00,0
23-11-26 01:20:51.900 AM, +3.19100E+00,0
23-11-26 01:20:52.000 AM, +3.17300E+00,0
Basically, add the durations in the Time column to the trigger time to create _time.
"Time","U1-2[]","Event"
+0.000000000E+00,+2.90500E+00,0
+1.000000000E-01,+1.45180E+01,0
+2.000000000E-01,+7.93600E+00,0
+3.000000000E-01,+3.60100E+00,0
+4.000000000E-01,+3.19100E+00,0
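The arithmetic I'm after can be sketched like this (a hypothetical illustration outside Splunk, not ingest configuration): each row's Time offset added to the trigger time gives that row's timestamp.

```python
from datetime import datetime, timedelta

# Trigger time from the file header, offsets from the Time column.
trigger = datetime.strptime("23-11-26 01:20:51.500", "%y-%m-%d %H:%M:%S.%f")
rows = [
    ("+0.000000000E+00", "+2.90500E+00", "0"),
    ("+1.000000000E-01", "+1.45180E+01", "0"),
    ("+2.000000000E-01", "+7.93600E+00", "0"),
]
# trigger + offset -> per-row timestamp, formatted to milliseconds.
stamped = [
    ((trigger + timedelta(seconds=float(t))).strftime("%y-%m-%d %H:%M:%S.%f")[:-3], v, e)
    for t, v, e in rows
]
```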
Thanks a lot for your help.
Hi @kyokei,
this is really another question, even if it concerns the same data.
Anyway, if you want to discard part of your events at index time, you have to use the SEDCMD setting in your props.conf:
[your_sourcetype]
SEDCMD-strip_time = s/^([^,]*),(.*)/\2/
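You can preview the intended substitution (dropping the first, Time, column from each data row) with an equivalent regex in Python; note that sed-style replacement patterns need backreferences such as \1, \2 to keep the captured text:

```python
import re

raw = "+1.000000000E-01,+1.45180E+01,0"
# Drop everything up to and including the first comma, keeping the rest.
stripped = re.sub(r"^([^,]*),(.*)$", r"\2", raw)
```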
Ciao.
Giuseppe