I have a problem similar to this topic:
http://answers.splunk.com/answers/188776/events-are-not-properly-split.html
My log looks like this:
Wed Jul 30 02:41:12 TAIST 2015
runstats on table TABLE1
DB20000I The RUNSTATS command completed successfully.
Wed Jul 30 02:45:12 TAIST 2015
runstats on table TABLE2
SQLERROR : ... error message
Wed Jul 30 02:47:30 TAIST 2015
runstats on table TABLE3
DB20000I The RUNSTATS command completed successfully.
I want to group these three lines into one event, so I can see the check status for each table.
However, I find that Splunk will not wait for the last line and group the event correctly; it indexes the event as soon as possible.
I mean, Splunk groups the first two lines into one event, and the third line becomes a separate "orphan" event, because the third line is usually written 2~3 seconds later.
My props.conf settings:
[reorg_out]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TIME_FORMAT = %a %b %d %H:%M:%S TAIST %Y
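As a side note, a common alternative to line merging is to disable SHOULD_LINEMERGE and break events directly on the timestamp with LINE_BREAKER, which is usually faster. This is only a sketch of that approach (the regex assumes every event starts with a TAIST timestamp line), and it does not by itself solve the "last line arrives late" problem:

```ini
[reorg_out]
SHOULD_LINEMERGE = false
# Break before any line that looks like "Wed Jul 30 02:41:12 TAIST 2015"
LINE_BREAKER = ([\r\n]+)(?=\w{3} \w{3} \d{1,2} \d{2}:\d{2}:\d{2} TAIST \d{4})
TIME_FORMAT = %a %b %d %H:%M:%S TAIST %Y
```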
I have tried many different line-merging settings, such as BREAK_ONLY_BEFORE, LINE_BREAKER, MUST_NOT_BREAK_AFTER, etc.
None of them work.
What should I do to make Splunk wait for the last line and group the multiline event correctly?
You need to use this inputs.conf setting:
time_before_close = <integer>
* Modtime delta required before Splunk can close a file on EOF.
* Tells the system not to close files that have been updated in past <integer> seconds.
* Defaults to 3.
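For example, a monitor stanza in inputs.conf might look like this (the path and sourcetype here are placeholders; raising time_before_close to 10 gives the writer up to 10 seconds of quiet time before Splunk closes the file):

```ini
[monitor:///path/to/reorg.out]
sourcetype = reorg_out
time_before_close = 10
```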
It seems someone else has the same issue but still hasn't found an answer:
http://answers.splunk.com/answers/207258/is-there-a-way-to-tell-splunk-how-long-to-wait-for.html
I know writing a script to pre-process those files might be a solution.
But it would make things complicated and harder to maintain in the future.
Does anyone have a suggestion?
I used the "time_before_close" setting in my previous test.
Unfortunately, it still doesn't work.
What value did you use? I would go as high as 10 seconds. If you cannot make this work, then the only other thing I can think of is to create your own pre-processing script to act as an intermediary and send the events from the original file to another file (with Splunk monitoring the second one) in complete batches.
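To illustrate the pre-processing idea: the core of such a script is just grouping raw lines into complete events, starting a new event at each TAIST timestamp line, and only writing an event out once the next timestamp proves it is finished. This is a minimal sketch of that grouping logic (the function name and the surrounding tail/write loop you would build around it are hypothetical):

```python
import re

# Matches timestamp lines like "Wed Jul 30 02:41:12 TAIST 2015"
TS_PATTERN = re.compile(r"^\w{3} \w{3} \d{1,2} \d{2}:\d{2}:\d{2} TAIST \d{4}$")

def group_events(lines):
    """Group raw log lines into complete multiline events.

    A new event begins at every line matching TS_PATTERN; all lines
    until the next timestamp belong to the current event.
    """
    events, current = [], []
    for line in lines:
        if TS_PATTERN.match(line) and current:
            events.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        events.append("\n".join(current))
    return events
```

In the real intermediary script you would tail the original file, hold back the last (possibly incomplete) event, and append only finished events to the second file that Splunk monitors.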