Getting Data In

How do I merge a multiline event correctly?

Path Finder

I have a problem that is similar to this topic:

My log looks like this:

Wed Jul 30 02:41:12 TAIST 2015
runstats on table TABLE1 
DB20000I  The RUNSTATS command completed successfully.

Wed Jul 30 02:45:12 TAIST 2015
runstats on table TABLE2
SQLERROR : ... error message

Wed Jul 30 02:47:30 TAIST 2015
runstats on table TABLE3
DB20000I  The RUNSTATS command completed successfully.

I want to group these three lines into one event, so I can see the check status for each table.
But I find that Splunk will not wait for the last line and group the event correctly; it indexes the event as soon as possible.

I mean, Splunk groups the first two lines into one event, and the third line becomes a separate "orphan" event, because the third line is usually written 2-3 seconds later.

My props.conf setting:

TIME_FORMAT = %a %b %d %H:%M:%S TAIST %Y

I have tried a lot of different line-merging settings, like BREAK_ONLY_BEFORE, LINE_BREAKER, MUST_NOT_BREAK_AFTER, etc.
None of them worked.
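For reference, a minimal sketch of the kind of props.conf stanza this usually calls for, breaking events only before the timestamp lines shown above (the sourcetype name `db2_runstats` is an assumption, not from the original post):

```ini
[db2_runstats]
# Merge lines into one event; start a new event only at a timestamp line
# like "Wed Jul 30 02:41:12 TAIST 2015".
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\w{3}\s+\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}\s+TAIST\s+\d{4}
TIME_FORMAT = %a %b %d %H:%M:%S TAIST %Y
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Note that this only controls how lines are grouped into events; it does not control how long the file monitor waits before reading a partially written event, which is the timing problem described here.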

What should I do to tell Splunk to wait for the last line and group the multiline event correctly?


Esteemed Legend

You need to use this inputs.conf setting:

time_before_close = <integer>
* Modtime delta required before Splunk can close a file on EOF.
* Tells the system not to close files that have been updated in past <integer> seconds.
* Defaults to 3.
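Applied to a monitor stanza, that might look like the following sketch (the file path and sourcetype are assumptions for illustration):

```ini
[monitor:///var/log/db2/runstats.log]
sourcetype = db2_runstats
# Do not close the file until it has been idle for 10 seconds,
# giving the slow writer time to finish the last line of an event.
time_before_close = 10
```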

Path Finder

It seems someone else has the same issue, but there is still no answer for this...

I know that writing a script to pre-process those files might be a solution,
but it would make things more complicated and harder to maintain in the future.
Does anyone have a suggestion?


Path Finder

I used the "time_before_close" setting in my previous test.
Unfortunately, it still doesn't work.


Esteemed Legend

What value did you use? I would go as high as 10 seconds. If you cannot make this work, then the only other thing I can think of is to create your own pre-processing script to act as an intermediary and send the events from the original file to another file (with Splunk monitoring the second one) in bundled batches.
