Getting Data In

Why does event breaking sometimes occur in the middle of multiline events with my current configuration?

antessima
Explorer

We have a forwarder installed on a Linux server that forwards data from application log files to our indexer. On the forwarder, I have inputs.conf set up as follows:

[monitor:///usr/local/copient/logs/.../*_scanLog]
disabled = false
index = transactions
sourcetype = scanlog    

On the indexer, I have props.conf set up as follows:

[scanlog]
MAX_TIMESTAMP_LOOKAHEAD = 28
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = (NEW FILE HANDLE)
MAX_EVENTS = 2000
CHARSET = 
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true

The transactions each take between 0.3 and 0.4 seconds to finish in this particular environment, and about 50% of the test transactions I run get collected as a single event. The other 50%, however, break at some point in the middle. All of the transactions are identical, except for the aforementioned slight variance in processing times. Sometimes the events break after 13 out of the 202 lines in each transaction; sometimes they break much closer to the middle.

A sample of the transaction data:

3 - {24519} [1452798558.77883] xmlInquery beginning at January 14, 2016, 2:09 pm (NEW FILE HANDLE)
3 - {24519} [1452798558.77899] -------------------------------------------
3 - {24519} [1452798558.77903] termnum=3&n=54884070&mesgid=236&message=<Transaction><PosHeader><Discount>-1.28</Discount><Subtotal>2.50</Subtotal><MethodOfPayment>0600</MethodOfPayment><TransactionType>Online</TransactionType><TransactionID>54884070</TransactionID><StoreID>0001</StoreID><MemberID>SCRUBBED</MemberID><CardType>0</CardType><TerminalID>3</TerminalID><PosTimestamp>2016-01-14T14:09:00</PosTimestamp></PosHeader><Basket><LineItem><ItemID>SCRUBBED</ItemID><Qty>2.000</Qty><Total>2.50</Total></LineItem></Basket></Transaction>
3 - {24519} [1452798558.77991] Location Option [1] = 2 (0) SiteSpecificOptionValues
3 - {24519} [1452798558.78005] Setting BoxID was delayed to XML parsing time. 
3 - {24519} [1452798558.78107] lookup boxid returns 
3 - {24519} [1452798558.78111] No UniqueBoxID found, add new box location to boxlocations.
3 - {24519} [1452798558.78332] @promo_engine set boxid=15523, real boxid=3
3 - {24519} [1452798558.78355] Removed Display Mapping for Terminal 15523
3 - {24519} [1452798558.78378] search for lane type of 3, uniqueboxid = 15523
3 - {24519} [1452798558.78421] Unknown BoxID - creating YellowBoxes entry with default values
3 - {24519} [1452798558.78627] Created new yellowbox entry for terminal 15523 (4,3,0)
3 - {24519} [1452798558.78944] CardNum=SCRUBBED;CardType=0;LocationID=1124;BoxID=15523;localServerSerial=3;transnum=54884070
3 - {24519} [1452798558.91473] No instantwin trigger found.
3 - {24519} [1452798558.91675] Checking database for exclusion groups.
3 - {24519} [1452798558.91708] Checking System Wide Exclusion: EligMesg: 1, DiscMesg: 1, non-disPGID: 3 not-EliglePGID: 2
3 - {24519} [1452798558.91772] System Wide Exclusion (66):2 (67): 0 - Eligible: 1 , Discountable: 1
3 - {24519} [1452798558.91876] TransItemID=116
3 - {24519} [1452798558.92187] FOD: SCRUBBED, 0, 2 Items
3 - {24519} [1452798558.93282] hhpk=0 ,userid=SCRUBBED, cardstatus=1, cardtype=0, cardpk=4223918.
3 - {24519} [1452798558.93290] Found userid SCRUBBED in 0.010 seconds
3 - {24519} [1452798558.93694] Set hhPrimaryID=userid due to no hhprimaryid
3 - {24519} [1452798558.93702] Insert into users and cardids tables took 0.004 seconds
3 - {24519} [1452798558.93707] employee= ;CustomerStatusID=1;hhPrimaryID=SCRUBBED;

I tried setting time_before_close in inputs.conf on the forwarder, but I found that 1) it still broke the first transaction in the middle, and 2) it blocked any further messages from the same log file until the time_before_close period expired.

Any ideas?

1 Solution

somesoni2
Revered Legend

Give this property a try. This seems related to the way your data is logged (each transaction is a single event with multiple lines):

multiline_event_extra_waittime = [true|false]
*By default, Splunk Enterprise sends an event delimiter when (1) it reaches EOF of a file it monitors and (2) the last char it reads is a newline.
*In some cases, it takes time for all lines of a multiple-line event to arrive.
*Set to true to delay sending an event delimiter until the time that Splunk Enterprise closes the file, as defined by the time_before_close attribute, to allow all event lines to arrive.
*Defaults to false.
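For reference, a minimal sketch of what the forwarder-side stanza could look like with this setting enabled. The time_before_close value of 1 second here is an assumption; tune it to the maximum duration of your transactions:

[monitor:///usr/local/copient/logs/.../*_scanLog]
disabled = false
index = transactions
sourcetype = scanlog
multiline_event_extra_waittime = true
time_before_close = 1

With multiline_event_extra_waittime = true, the forwarder waits until the file is closed (per time_before_close) before sending the event delimiter, so all lines of a slow-arriving transaction can accumulate first.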



antessima
Explorer

I tried this just now and it does fix that particular problem, but I am noticing that the events are noticeably slower to show up on the indexer than before. Is there a way to have my cake and eat it too, i.e. tell the forwarder or indexer what the beginning and the end of the event are? Our transactions have the same ending sequences, so it is predictable...


somesoni2
Revered Legend

Try setting time_before_close as well, to a value equal to the maximum duration of a transaction.

There are also settings like these that may help:
MUST_NOT_BREAK_AFTER =
* When set and the current line matches the regular expression, Splunk does not break on any
subsequent lines until the MUST_BREAK_AFTER expression matches.
* Defaults to empty.

MUST_NOT_BREAK_BEFORE =
* When set and the current line matches the regular expression, Splunk does not break the
last event before the current line.
* Defaults to empty.
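Since your transactions reliably start with a (NEW FILE HANDLE) line and you say they share a predictable ending sequence, an indexer-side props.conf using these settings might be sketched as follows. Note that hhPrimaryID= is only an assumption based on the last line of your sample; substitute the actual pattern that ends your transactions:

[scanlog]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = NEW FILE HANDLE
MUST_NOT_BREAK_AFTER = NEW FILE HANDLE
MUST_BREAK_AFTER = hhPrimaryID=
MAX_EVENTS = 2000

The idea: once a line matching MUST_NOT_BREAK_AFTER is seen, no event break occurs on subsequent lines until a line matches MUST_BREAK_AFTER, which pins both the start and the end of each multiline transaction instead of relying on timing alone.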
