Getting Data In

props.conf config for line breaking

ssaenger
Communicator

Hi All,

I am having problems splitting the lines of a log file into separate events.
Sample log entries are below:

[DEBUG 2019-09-26 09:15:57:765] Logger Proxy STARTED
[DEBUG 2019-09-26 09:15:57:765] Logger Servlet Called (13024624) times
[DEBUG 2019-09-26 09:15:57:765] Logger SetResponseDefaults
[FATAL 2019-09-26 09:15:57:765] Logger Proxy - Illegal or missing SubscriberId

Below is my props.conf entry:

[jams_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = ^[\D{5}\s\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}]
MAX_TIMESTAMP_LOOKAHEAD = 31
TIME_PREFIX = ^

I thought it was because I did not have TIME_FORMAT set, but adding it did not work either.
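For reference, I think a TIME_FORMAT matching the timestamps above would be something like the following, but it did not fix the line breaking:

TIME_FORMAT = %Y-%m-%d %H:%M:%S:%3N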

Any help would be much appreciated.

gcusello
SplunkTrust

Hi ssaenger,
First of all, I suggest testing your props.conf using the guided web interface.
Also, have you tried SHOULD_LINEMERGE = false?
I also see that the TIME_PREFIX isn't correct, so try something like this:

[jams_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[\w+\s+\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}.\d{3}\]
MAX_TIMESTAMP_LOOKAHEAD = 29
TIME_PREFIX = ^\[\w+\s+
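Note that LINE_BREAKER needs a capturing group: the text matched by the first group (here the newlines) is discarded and the next event starts right after it, so every event still begins with the "[LEVEL timestamp]" header that TIME_PREFIX expects.

If the timestamp still isn't recognized, you could also add an explicit TIME_FORMAT based on the sample events above (just a sketch, assuming the milliseconds are always colon-separated as in your sample):

TIME_FORMAT = %Y-%m-%d %H:%M:%S:%3N

Once deployed, you can check the effective settings for the sourcetype with:

$SPLUNK_HOME/bin/splunk btool props list jams_log --debug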

Bye.
Giuseppe

ssaenger
Communicator

Thank you Giuseppe,

Yes, from looking at your answer I understand my mistake - slowly learning 🙂

Thanks,
Steve
