Hello everyone,
I have the attached file, which is generated every night by my client's internal system, and I need to index its contents in Splunk to collect metrics.
These files must be indexed based on the date in the file name.
For example, the system generates files named qtd_ramal_diario_04042020.txt, qtd_ramal_diario_05042020.txt, and so on, so each file should be indexed with the timestamp taken from its name.
I also need to extract the values separated by ";" into individual fields named Field1, Field2, and Field3, respectively.
Note that the file varies in size: some days it has many lines, others only a few.
FIELD1 FIELD2 FIELD3
77111010; 8; 614
77111812; 1; 106
77115070; 1; 58
70666287; 4; 171
70662245; 12; 708
77196074; 23; 1439
Is there a way to do this with Splunk?
Below is an example of the generated log:
78122960;2; 132
55002801;3; 279
8068256;8; 466
80661008;4; 134
55258888; 21;1843
76283160;1;25
55735555; 15;1027
55191240;1; 267
80662176;2; 249
790965034;3;93
55159608;1;20
80668021;1;19
76282680;2; 154
80664441;5; 536
71172794;1;28
55196157; 16;1208
55192425;3; 347
55196091;1;23
55192404;1;71
55196032; 24; 996
55196553;2;78
55196040;4;1087
55196426;1; 152
78111816;2; 157
78111847;1;30
78111815;6; 429
78111814;3; 233
55021902;2; 278
55034140;4; 159
550364331;1;80
550561127;2;78
props.conf
[delim_csv]
DATETIME_CONFIG = NONE
FIELD_DELIMITER = ;
FIELD_NAMES = field1,field2,field3
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = 0
TRANSFORMS-time = timestampeval
pulldown_type = 1
disabled = false
transforms.conf
[timestampeval]
INGEST_EVAL = _time=strptime(replace(source,".*?(\d+)\.txt","\1"),"%d%m%Y")
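To see what the INGEST_EVAL above does, the same replace/strptime logic can be reproduced outside Splunk. A minimal Python sketch (the file name is from the thread; the directory in the path is an assumption, and this only mirrors the logic, it is not Splunk code):

```python
import re
from datetime import datetime

# Example source path; the directory is an assumption, the file name is from the thread
source = "/var/log/qtd_ramal_diario_05042020.txt"

# Mirror of replace(source, ".*?(\d+)\.txt", "\1"): keep only the digits before ".txt"
date_str = re.sub(r".*?(\d+)\.txt", r"\1", source)

# Mirror of strptime(..., "%d%m%Y"): two-digit day, two-digit month, four-digit year
event_time = datetime.strptime(date_str, "%d%m%Y")
print(event_time.date())  # 2020-04-05
```

So qtd_ramal_diario_05042020.txt yields an event time of 5 April 2020, which is the DD/MM interpretation the file name uses.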
Please tell me:
1. Why didn't you provide the necessary information from the beginning?
2. I have provided reference materials; which part did you not understand?
Occasionally, I think a questioner may not actually be asking for a solution.
For example:
- people who only say that it doesn't work
- people who don't provide the information needed to build a query
Please answer, because I want to solve your question.
| makeresults
| eval _raw="78122960;2; 132
55002801;3; 279
8068256;8; 466
80661008;4; 134
55258888; 21;1843
76283160;1;25
55735555; 15;1027
55191240;1; 267
80662176;2; 249
790965034;3;93
55159608;1;20
80668021;1;19
76282680;2; 154
80664441;5; 536
71172794;1;28
55196157; 16;1208
55192425;3; 347
55196091;1;23
55192404;1;71
55196032; 24; 996
55196553;2;78
55196040;4;1087
55196426;1; 152
78111816;2; 157
78111847;1;30
78111815;6; 429
78111814;3; 233
55021902;2; 278
55034140;4; 159
550364331;1;80
550561127;2;78"
| multikv noheader=t
| foreach *
[ eval <<FIELD>> = trim('<<FIELD>>')]
| rename Column_* as FIELD*
| fields - _* linecount
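The trimming of stray spaces around the ';' delimiter that the foreach/trim step performs can be sketched in plain Python (field names follow the thread; this is only an illustration of the logic, not the SPL itself):

```python
# Sample lines from the thread; note the inconsistent spacing around ';'
lines = ["78122960;2; 132", "55258888; 21;1843", "76283160;1;25"]

rows = []
for line in lines:
    # Split on ';' and strip surrounding whitespace, like the foreach/trim step above
    field1, field2, field3 = (part.strip() for part in line.split(";"))
    rows.append({"FIELD1": field1, "FIELD2": field2, "FIELD3": field3})

print(rows[1])  # {'FIELD1': '55258888', 'FIELD2': '21', 'FIELD3': '1843'}
```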
As the run-anywhere example above shows, you can also do this at search time.
Thank you for your help.
This file is generated every day; what would the props.conf configuration look like?
Use FIELD_DELIMITER in props.conf.
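For reference, a minimal props.conf sketch for this data, reusing the delimiter settings already shown earlier in the thread (the stanza name is the one used above; adjust to your own sourcetype):

```ini
[delim_csv]
FIELD_DELIMITER = ;
FIELD_NAMES = field1,field2,field3
SHOULD_LINEMERGE = 0
LINE_BREAKER = ([\r\n]+)
```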
@to4kawa, I configured the structure below using props.conf and transforms.conf, but the following points did not work:
- My file names follow the pattern "qtd_ramal_diario_DDMMYYYY.txt". For example, the file "qtd_ramal_diario_05042020.txt" needs to be indexed in Splunk with the date 05/04/2020. How do I set this up in the configuration files?
- I also need every line of the file to become its own event in Splunk, so the complete file is available for analysis.
For example, if the file has 200 lines, it needs to become 200 events in Splunk.
**props.conf**
[linux-nice]
REPORT-fields = commafields
**transforms.conf**
[commafields]
DELIMS = ";"
FIELDS = field1, field2, field3
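Putting the two requirements together, here is a combined sketch built from the settings already shown in this thread (the stanza name is an assumption; adjust it to match your input's sourcetype). LINE_BREAKER with SHOULD_LINEMERGE = 0 produces one event per line, and the INGEST_EVAL derives _time from the DDMMYYYY portion of the file name:

```ini
# props.conf -- one event per line, fields split on ';'
[qtd_ramal_diario]
SHOULD_LINEMERGE = 0
LINE_BREAKER = ([\r\n]+)
DATETIME_CONFIG = NONE
FIELD_DELIMITER = ;
FIELD_NAMES = field1,field2,field3
TRANSFORMS-time = timestampeval

# transforms.conf -- set _time from qtd_ramal_diario_DDMMYYYY.txt
[timestampeval]
INGEST_EVAL = _time=strptime(replace(source,".*?(\d+)\.txt","\1"),"%d%m%Y")
```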