
Assistance with event breaker: first-time user of props.conf & transforms.conf

govardha
Path Finder

I have a two-fold question here; please look at the events below.

1.  I would like to break the events any time the splunkforwarder hits the timestamp, e.g. 20210506231928. Keep in mind certain events could literally be 500KB of data. I am in US Eastern, but the timestamp is in GMT, so ideally I would like Splunk to set _time to the correct time, adjusted for the app publishing in GMT. I am having a tough time visualizing how to write the props.conf/transforms.conf to extract the event properly into _raw, and even more how to test it in my lab environment.

2.  I *think* if I get the first one working, the second one should be pretty simple. I would like to once again use the INGEST_EVAL function on the splunkforwarder, calculate the length of _raw, and discard the event if it is more than 1024 bytes.

INGEST_EVAL = list=if(length(_raw)>1024,"nullqueue"," ")



20210506231928 INFO: Thread 139892443047680:RequestStart session:126
20210506231928 INFO: Thread 139892443047680:CurrentQuery "INTERNAL_QUERY__CHECK_FOR_LOCATOR_CHANGE"
20210506231928 INFO: Thread 139892443047680:GraphQueries NumRequests 1 GlobalQueryID dbserver01.65313.20210506231928.587.7 TopLevelQuery INTERNAL_QUERY__CHECK_FOR_LOCATOR_CHANGE
<GRAPH START_TIME=-1893456000.000000 END_TIME=2147483647.000000 SYMBOL_DATE=0><EP ID=0 NAME=CHECK_FOR_LOCATOR_CHANGE TICK_TYPES=TRD> <PARAMS DB_NAME=[UNDERLYING_REF]/> </EP><SYMBOLS> <SYMBOL NAME="LOCAL::"/></SYMBOLS></GRAPH>
EndGraphQueries
20210506231928 INFO: Thread 139894110279424:ConnectionStart session:114 User bizuer Host 127.0.0.1
20210506231928 INFO: Thread 139894110279424:RequestStart session:114


Any pointers are greatly appreciated. 


s2_splunk
Splunk Employee

What kind of forwarder are you using? What does your source log file look like? Is it already one line per event?

If they are already single-line events, you don't have to worry about event breaking. If they aren't, you can use something like

EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d{14}

in your inputs.conf, and have the proper settings on your indexer to process timestamps and multi-line events. (Note that the capture group should match only the line break; the 14 timestamp digits stay outside it, so they remain at the start of the next event.) If the timestamp doesn't contain a timezone, the timezone of the sending forwarder is used by default; since your application logs in GMT, you can override that with TZ = GMT in props.conf, as in the sketch below. The UI will translate _time into your local time when searching, based on your user timezone settings.
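To make this concrete, here is a minimal sketch of both sides. The monitor path below is just a placeholder, and "yoursourcetype" stands in for whatever sourcetype you assign; adjust both to your environment.

inputs.conf on the UF:

[monitor:///var/log/dbserver/query.log]
sourcetype = yoursourcetype
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d{14}

props.conf on the indexer (or heavy forwarder), assuming every event starts with the 14-digit GMT timestamp:

[yoursourcetype]
# Break the stream at a line break followed by 14 digits
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{14}
# Parse the leading timestamp as GMT
TIME_PREFIX = ^
TIME_FORMAT = %Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 14
TZ = GMT

One thing to be aware of with your 500KB events: TRUNCATE defaults to 10000 bytes, so they would be clipped during parsing anyway. That doesn't matter much here, since your second step drops anything over 1024 bytes, but it's worth knowing if you ever want the large events kept whole.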

INGEST_EVAL has to happen where event parsing occurs, which is either a Heavy Forwarder or the indexer itself. The UF doesn't really have that capability to act on event-level details; it just processes 64KB chunks of data. So, for your second question you can deploy a props.conf on your first parsing Splunk server (depending on your architecture) with 

[yoursourcetype]
TRANSFORMS-trashlongevents=nullQueueLargeEvents

 and a transforms.conf with 

[nullQueueLargeEvents]
INGEST_EVAL = queue=if(len(_raw)>1024,"nullQueue","indexQueue")

Note the queue variable name and the proper camel-casing of the queue names ("nullQueue"/"indexQueue") compared to your attempt; also, the eval function for string length is len(), not length().
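For testing in your lab: once the files are deployed, btool shows you the effective configuration Splunk has actually loaded (paths assume a default install location):

$SPLUNK_HOME/bin/splunk btool props list yoursourcetype --debug
$SPLUNK_HOME/bin/splunk btool transforms list nullQueueLargeEvents --debug

And you can sanity-check the eval expression itself at search time before deploying it, for example using one of your sample lines:

| makeresults
| eval _raw="20210506231928 INFO: Thread 139892443047680:RequestStart session:126"
| eval queue=if(len(_raw)>1024, "nullQueue", "indexQueue")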

You will find this diagram to be very helpful in understanding which Splunk component (UF, HF/Indexer) processes which setting.

HTH


