
Why are log events indexed from a gzip archive with a specified source type missing extracted fields?

Engager

Hi there,

I have a custom source type (papertrail) for a tab-delimited source and have verified it works correctly. I manually imported a local directory containing a month's worth of log data directly from a .tsv file; see the screenshot below:

[screenshot: events with all fields extracted correctly]

Since that worked as expected, I then set up an AWS SQS-based S3 input to move Papertrail's nightly archives into Splunk automatically. These archives, however, are daily gzipped files. Splunk does index the gzip file and reports the source type as papertrail, but the fields aren't extracted as in the first picture. Any ideas?

[screenshot: events from the gzip archive with no extracted fields]

1 Solution

SplunkTrust

Hi @statmuse

I'm not sure why you're getting this problem; nothing obvious stands out.

Are you using a sourcetype rename or something like that? Anything unusual in your indexes.conf?

That said, if I had this problem I would personally fix it by moving away from INDEXED_EXTRACTIONS and just doing regular search-time extractions. Splunk's strength is in search-time extractions, so I always use those where possible. If you need a hand with this, I would be happy to help.
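
For a tab-delimited format like this, one way to do that (a minimal sketch, assuming the column order from your sourcetype's FIELD_NAMES; the transform name papertrail_fields is just a placeholder) is a delimiter-based search-time extraction:

# props.conf
[papertrail]
REPORT-papertrail = papertrail_fields

# transforms.conf
[papertrail_fields]
DELIMS = "\t"
FIELDS = "id","generated_at","received_at","source_id","source_name","source_ip","facility_name","severity_name","program","message"

That keeps the same sourcetype and the same column names, but the fields are extracted at search time rather than baked in at index time.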

All the best

Engager

Nothing out of the ordinary in indexes.conf. I'm thinking it might have something to do with the AWS add-on. From its documentation:

If you want to ingest custom logs other than the natively supported AWS log types, you must set s3_file_decoder = CustomLogs. This lets you ingest custom logs into Splunk but does not parse the data. To process custom logs into meaningful events, you need to perform additional configuration in props.conf and transforms.conf to parse the collected data to meet your specific requirements.

Maybe I just need to find out what that additional configuration is?
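
For context, the SQS-based S3 input stanza in the add-on's inputs.conf looks roughly like this (a sketch; the stanza name, account, and queue URL below are placeholders):

[aws_sqs_based_s3://papertrail_archive]
aws_account = my_aws_account
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/papertrail-queue
s3_file_decoder = CustomLogs
sourcetype = papertrail
interval = 300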

Is there a way to do search-time extractions that still uses the papertrail sourcetype, so I don't need to rename fields every time but can use the pre-existing column names from the sourcetype?


SplunkTrust

Yes, you don't need to change the sourcetype.

Just add an extraction like this (expand it out to cover the rest of the fields):

EXTRACT-all = (?<id>[^\t]+)\t(?<generated_at>[^\t]+)\t(?<received_at>[^\t]+)\t(?<source_id>[^\t]+)\t
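
Expanded out to all ten columns from the sourcetype's FIELD_NAMES (assuming message is the last field and runs to the end of the line), that would be:

EXTRACT-all = (?<id>[^\t]+)\t(?<generated_at>[^\t]+)\t(?<received_at>[^\t]+)\t(?<source_id>[^\t]+)\t(?<source_name>[^\t]+)\t(?<source_ip>[^\t]+)\t(?<facility_name>[^\t]+)\t(?<severity_name>[^\t]+)\t(?<program>[^\t]+)\t(?<message>.+)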


Engager

After some further research, I came across https://www.hurricanelabs.com/blog/splunk-case-study-indexed-extractions-vs-search-time-extractions (a link from a colleague), which also advocates search-time extraction over indexed. Going with your suggestion. Thanks!


SplunkTrust

Can you please share the props.conf settings for the sourcetype papertrail?

cheers, MuS


Engager

Yes, here you go:

[papertrail]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
FIELD_DELIMITER = tab
FIELD_NAMES = id, generated_at, received_at, source_id, source_name, source_ip, facility_name, severity_name, program, message
HEADER_FIELD_DELIMITER = tab
INDEXED_EXTRACTIONS = tsv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = papertrail archive format
disabled = false
pulldown_type = 1

Contributor

Can you set the search mode to Verbose and check the fields?
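
For example, something like this (index and time range are up to you) will list which fields are actually being extracted:

sourcetype=papertrail | fieldsummary | table field count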
