Getting Data In

Multi-line messages in monitored NFS files cause multiple events to be created

jdagenais
Explorer

Hello,

We are monitoring application files that are mounted as read-only NFS drives, and sometimes multi-line messages are processed as multiple events.

This is the format of the multi-line messages when they are processed as single events:

Event #1
[LOG|INFO|2010 December 07, 09:59:43 (208)|TRACK_MD|RMI TCP Connection(61072)-100.200.10.22|100.200.10.21 (host.net.com)]
Saving MD:  DataInfo.1891568 LAST 12/7/10 10:59:31.000 AM EST       FROM:   100.200.10.20 (host.net.com)    @ 12/7/10 9:59:43.208 AM EST
[END]

Event #2
[LOG|ERROR|2010 December 07, 09:59:55 (8)|SQL|RMI TCP Connection(61294)-100.200.10.21|100.200.10.21 (host.net.com)]
Trying to saves duplicate points in data filed 1287/OBRC on 12/7/10 9:59:48.000 AM EST
[END]

This is the format when they are processed as multiple events:

Event #1
[LOG|INFO|2010 December 07, 09:59:43 (208)|TRACK_MD|RMI TCP Connection(61072)-100.200.10.22|100.200.10.21 (host.net.com)]

Event #2
Saving MD:  DataInfo.1891568 LAST 12/7/10 10:59:31.000 AM EST       FROM:   100.200.10.20 (host.net.com)    @ 12/7/10 9:59:43.208 AM EST
    [END]

Is there any way to resolve this multi-line NFS issue?

We tried different options such as the following (see the combined sketch after the list):

BREAK_ONLY_BEFORE=\[LOG\|

MAX_TIMESTAMP_LOOKAHEAD=50
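For reference, this is roughly how those two settings sit together in a props.conf stanza; the sourcetype name is a placeholder, and SHOULD_LINEMERGE is shown explicitly because BREAK_ONLY_BEFORE only takes effect when line merging is enabled (it is by default):

[nfs_app_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = \[LOG\|
MAX_TIMESTAMP_LOOKAHEAD = 50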

Thanks, Jean

jdagenais
Explorer

Thanks for the suggestion!

The problem is related to using NFS with high latency, where applications can take a long time (several seconds) to write a complete message.

For example, the application can write the first part of a message and then, 5 seconds later, write the second part that completes it.

I found this post, which explains a solution that seems to work quite well:

http://answers.splunk.com/questions/9750/multline-events-with-pauses-between-lines

This is the description of the change:

time_before_close = <integer>
* Modtime delta required before Splunk can close a file on EOF.
* Tells the system not to close files that have been updated in the past <integer> seconds.
* Defaults to 3.

For my test environment, I am using a value of time_before_close = 60
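A minimal sketch of where that setting lives, assuming the files are read by a monitor input in inputs.conf on the forwarder (the path and sourcetype name are placeholders):

[monitor:///mnt/nfs/app/logs]
sourcetype = nfs_app_log
time_before_close = 60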

Thanks, Jean

ziegfried
Influencer

You could try to use a LINE_BREAKER instead of line merging:

[<your sourcetype>]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[LOG\|
MAX_TIMESTAMP_LOOKAHEAD = 50

or

[<your sourcetype>]
SHOULD_LINEMERGE = false
LINE_BREAKER = \[END\](\s+)
MAX_TIMESTAMP_LOOKAHEAD = 50
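With LINE_BREAKER, events are split in the parsing pipeline: the text matched by the first capturing group is discarded as the event boundary, and the rest of the match (the [LOG| header in the first variant) begins the next event. If timestamp extraction also needs tightening, a combined stanza could look like this; the TIME_PREFIX and TIME_FORMAT values are assumptions based on the sample header [LOG|INFO|2010 December 07, 09:59:43 (208)|...:

[<your sourcetype>]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[LOG\|
MAX_TIMESTAMP_LOOKAHEAD = 50
TIME_PREFIX = \[LOG\|[^|]+\|
TIME_FORMAT = %Y %B %d, %H:%M:%S (%3N)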