Splunk Search

How to set the BREAK_ONLY_BEFORE?

neha22
Explorer
 

I am not sure how to set BREAK_ONLY_BEFORE. I have tried the setting below. All my logs are in log4j format and start with a timestamp like [2022-04-05 11:18:23,839].

BREAK_ONLY_BEFORE: date 

My logs, which are sent to Splunk through Fluentd, arrive as separate events:

[2022-04-05 11:18:23,839] WARN Error while loading: connectors-versions.properties (com.amadeus.scp.kafka.connect.utils.Version)
java.lang.NullPointerException
    at java.util.Properties$LineReader.readLine(Properties.java:434)
    at java.util.Properties.load0(Properties.java:353)
    at java.util.Properties.load(Properties.java:341)
    at com.amadeus.scp.kafka.connect.utils.Version.<clinit>(Version.java:47)
    at com.amadeus.scp.kafka.connect.connectors.kafka.source.router.K2KRouterSourceConnector.version(K2KRouterSourceConnector.java:62)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:380)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:385)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:355)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:328)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:261)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:253)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
    at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
    at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
    at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
    at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)

VatsalJagani
SplunkTrust

@neha22 - I wouldn't suggest SHOULD_LINEMERGE=true, as it is not efficient performance-wise. Try the configuration below with LINE_BREAKER.

Just FYI, this will only apply to new events coming into Splunk, not to existing events.

Put this configuration (in props.conf) at the forwarder and indexer level.

LINE_BREAKER = ([\n\r]+)\[\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2},\d+\]\s+
SHOULD_LINEMERGE = false
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
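As a quick sanity check outside Splunk, the LINE_BREAKER regex above can be tested against an abbreviated version of the sample log from the question; capture group 1 is the delimiter Splunk discards between events. A minimal Python sketch:

```python
import re

# Same regex as the LINE_BREAKER above; group 1 is the event delimiter.
LINE_BREAKER = r"([\n\r]+)\[\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2},\d+\]\s+"

# Abbreviated sample: one WARN event with its stack trace, then a next event.
sample = (
    "[2022-04-05 11:18:23,839] WARN Error while loading: connectors-versions.properties\n"
    "java.lang.NullPointerException\n"
    "    at java.util.Properties$LineReader.readLine(Properties.java:434)\n"
    "[2022-04-05 11:18:24,001] INFO next event\n"
)

# Splunk breaks an event wherever the full pattern matches; here it should
# match only before the second timestamped line, not inside the stack trace,
# so the stack trace stays attached to its WARN line.
matches = re.findall(LINE_BREAKER, sample)
print(len(matches))  # → 1
```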

I hope this helps!! Consider upvoting if it does!!!

neha22
Explorer

I am using the HEC method

VatsalJagani
SplunkTrust

If you are using the HEC event endpoint (/services/collector/event), then parsing (LINE_BREAKER), merging (SHOULD_LINEMERGE), and timestamp extraction will not be applied.

Use this HEC endpoint instead: /services/collector/raw

Reference - https://docs.splunk.com/Documentation/Splunk/8.2.5/Data/HECRESTendpoints 
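For reference, here is a minimal sketch of posting a raw log line to the /services/collector/raw endpoint using only Python's standard library. The host, token, and sourcetype below are placeholders (assumptions, not values from this thread); passing the sourcetype as a query parameter lets the props.conf settings for that sourcetype apply at index time. The request is built but not sent:

```python
import urllib.parse
import urllib.request

# Placeholder values -- replace with your own HEC host and token.
HEC_HOST = "https://splunk.example.com:8088"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_raw_request(event_text: str, sourcetype: str) -> urllib.request.Request:
    """Build (but do not send) a POST request to the raw HEC endpoint."""
    query = urllib.parse.urlencode({"sourcetype": sourcetype})
    url = f"{HEC_HOST}/services/collector/raw?{query}"
    return urllib.request.Request(
        url,
        data=event_text.encode("utf-8"),
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        method="POST",
    )

req = build_raw_request(
    "[2022-04-05 11:18:23,839] WARN Error while loading ...",
    "my_log4j",  # placeholder sourcetype
)
print(req.full_url)
```

Sending it would be `urllib.request.urlopen(req)` against a reachable HEC listener.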

gcusello
SplunkTrust

Hi @neha22,

I'd define the timestamp format and position and use them for event breaking, something like this:

[your_sourcetype]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
SHOULD_LINEMERGE = True
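As a side check, the TIME_FORMAT above can be verified against the sample timestamp with Python's strptime. Note this is only an approximation: Splunk's `%3N` (milliseconds) has no direct Python equivalent, so `%f` (microseconds, which accepts the same 3-digit field) stands in for it here:

```python
from datetime import datetime

# Splunk: %Y-%m-%d %H:%M:%S,%3N  ->  Python: %f parses the 3-digit millis.
ts = "2022-04-05 11:18:23,839"
parsed = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S,%f")
print(parsed.isoformat())  # → 2022-04-05T11:18:23.839000
```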

Ciao.

Giuseppe

neha22
Explorer

TIME_PREFIX: "^[",
TIME_FORMAT: "([%Y-%m-%d %H:%M:%S,%3N]+)",

I tried the above, but the logs are still displayed as separate events, not as a single one.
