
Event does not break right

jesperbassoe
Explorer

Hi folks..

I have an issue where I can't get an event to break right.

The event looks like this:

 

 ************************************
 2024.09.03.141001
 ************************************
 sqlplus -S -L swiftfilter/_REMOVED_@PPP @"long_lock_alert.sql"

TAG		  COUNT(*)
--------------- ----------
PPP_locks_count 	 0


TAG		  COUNT(*)
--------------- ----------
PPP_locks_count 	 0

 SUCCESS
 End Time: 2024.09.03.141006

 

Props looks like this:

 

[nk_pp_tasks]
SHOULD_LINEMERGE=false
LINE_BREAKER=End Time([^\*]+)
NO_BINARY_CHECK=true
TIME_FORMAT=%Y.%m.%d.%H%M%S
TIME_PREFIX=^.+[\r\n]\s
BREAK_ONLY_BEFORE_DATE = false

 

Outcome is this:

(Screenshot: jesperbassoe_0-1725365910422.png, showing the single event split into three separate events)

 

When the log file is imported through 'Add Data', everything looks fine and the event is not broken up into three parts.

Any ideas on how to make Splunk not break up the event?


richgalloway
SplunkTrust

The existing props are discarding the End Time value because of the LINE_BREAKER setting.  LINE_BREAKER always throws out the text that matches the first capture group.
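To make that concrete, here is my reading (untested against your exact data) of what the first capture group swallows on the sample event above:

LINE_BREAKER = End Time([^\*]+)
# ([^\*]+) matches ": 2024.09.03.141006" plus the blank lines that follow,
# i.e. everything up to the next run of asterisks; that text becomes the
# event delimiter and is removed, so the End Time value never gets indexed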

Try these settings.

[nk_pp_tasks]
SHOULD_LINEMERGE=false
LINE_BREAKER=End Time:[^\*]+?()
NO_BINARY_CHECK=true
TIME_FORMAT=%Y.%m.%d.%H%M%S
TIME_PREFIX=\*\*+
BREAK_ONLY_BEFORE_DATE = false
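In case it helps, my interpretation of the two changed settings (worth verifying with a test upload):

LINE_BREAKER = End Time:[^\*]+?()
# the first capture group is now empty, so nothing is discarded; the event
# break simply falls after the "End Time: ..." line, before the next banner
TIME_PREFIX = \*\*+
# the timestamp is read from the text that follows the asterisk banner,
# i.e. 2024.09.03.141001 in the sample, using TIME_FORMAT = %Y.%m.%d.%H%M%S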

 

---
If this reply helps you, Karma would be appreciated.

PickleRick
SplunkTrust

Yes, but as I understand it, that's not the issue. If you copy the same contents several times over into a single file and upload it to Splunk via the "Add Data" dialog with the settings @jesperbassoe provided, it does get properly split into separate events. True, the final timestamp is discarded because it is treated as the line breaker, but apart from that the stream is properly broken into events.

The screenshot, however, shows the event butchered into separate parts, which doesn't really match the LINE_BREAKER definition. So the questions are:

1) Where are the settings defined (on which components, and are there any other conflicting, possibly overriding settings)?

2) How is the file ingested (most probably by a monitor input on a UF)?


jesperbassoe
Explorer

@richgalloway You're right. Discarding End Time was a last, desperate attempt to see if that made any difference.

@PickleRick Settings are defined on indexers.

This is the btool output from one of the indexers:

[nk_pp_tasks]
ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE = false
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
DEPTH_LIMIT = 1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
HEADER_MODE =
LB_CHUNK_BREAKER_TRUNCATE = 2000000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER = End Time([^\*]+)
LINE_BREAKER_LOOKBEHIND = 300
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
NO_BINARY_CHECK = true
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y.%m.%d.%H%M%S
TIME_PREFIX = ^.+[\r\n]\s
TRANSFORMS =
TRUNCATE = 10000
detect_trailing_nulls = false
maxDist = 100
priority =
sourcetype =
termFrequencyWeightedDist = false
unarchive_cmd_start_mode = shell
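(For reference, a merged view like the above comes from something along these lines on the indexer; adding --debug would also show which .conf file each value comes from:)

$SPLUNK_HOME/bin/splunk btool props list nk_pp_tasks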

The file is ingested by a monitor input on a UF and delivered directly to the indexers.


jesperbassoe
Explorer

So it turns out the SQL job doesn't write the entire event at once, and Splunk therefore only reads part of the event.

It worked in our TEST environment because I dumped the log file there, so the entire events were already present.

The solution was:

multiline_event_extra_waittime = true
time_before_close = 10
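For completeness, a rough sketch of where these two settings would typically live on the forwarder that monitors the file (the monitor path below is made up for illustration):

# props.conf on the forwarder
[nk_pp_tasks]
multiline_event_extra_waittime = true

# inputs.conf on the forwarder (monitor path is hypothetical)
[monitor:///path/to/long_lock_alert.log]
sourcetype = nk_pp_tasks
time_before_close = 10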