
AggregatorMiningProcessor - Log ERROR

danillopavan
Communicator

Hello all,

I am facing the error below and I don't know the reason for it. Does anyone know what could trigger this error?

component - AggregatorMiningProcessor

  • Breaking event because limit of 1000 has been exceeded

component - AggregatorMiningProcessor

  • Changing breaking behavior for event stream because MAX_EVENTS (1000) was exceeded without a single event break. Will set BREAK_ONLY_BEFORE_DATE to False, and unset any MUST_NOT_BREAK_BEFORE or MUST_NOT_BREAK_AFTER rules. Typically this will amount to treating this data as single-line only

Many thanks and regards,
Danillo Pavan


Azeemering
Builder

It means that your multiline event has been cut into blocks of 1000 lines; see MAX_EVENTS = 1000 in props.conf. You could increase this value...
But what you really need to do is check the sourcetype and verify that the timestamping is configured correctly. That will make sure the events are parsed properly.
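For example, a minimal sketch of the relevant stanza (the sourcetype name and the raised limit are placeholders, not a recommendation):

[your_sourcetype]
# MAX_EVENTS caps how many lines the line merger will combine into one event.
# Raising it past 1000 silences the "Breaking event" message, but it only
# treats the symptom; correct timestamp/line-breaking settings are the real fix.
MAX_EVENTS = 5000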


danillopavan
Communicator

Hello Azeemering, thanks for your reply.

However, I can see that my events have only 33 lines. I am reading log files that contain 33 lines each, so my events should have only 33 lines; I don't understand why the event is being cut into blocks of 1000 lines. My log file contains several dates/times, so I don't want the event to be broken at every timestamp.

Thanks and regards,
Danillo Pavan


Azeemering
Builder

All right, do you have a sample of the data? What does your props.conf look like for this sourcetype?
If your event contains multiple date/time entries, you need to tell Splunk explicitly which one is the event's timestamp: apply a timestamp prefix (TIME_PREFIX), a maximum timestamp lookahead (MAX_TIMESTAMP_LOOKAHEAD), and a timestamp strftime format (TIME_FORMAT).
Do you have a Monitoring Console to check input/indexing issues for that sourcetype?
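As a rough sketch, those three settings look like this in props.conf (the stanza name, regex, and values below are placeholders and must be adapted to the actual data):

[your_sourcetype]
# Regex that matches the text immediately before the timestamp
TIME_PREFIX = ^Timestamp:\s
# Maximum characters Splunk scans past TIME_PREFIX for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 25
# strptime-style layout of the timestamp itself
TIME_FORMAT = %m/%d/%y %H:%M:%S %Z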


danillopavan
Communicator

Hello Azeemering,

Below are the props and transforms files:

props.conf:
[sourcetype]
TRUNCATE = 10000
DATETIME_CONFIG =
LINE_BREAKER = (= User Time (Seconds) : \d+\n= \w{3} \d{2}\/\d{2}\/\d{2} \d{2}:\d{2}:\d{2} [A-Z]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE = false
TIME_PREFIX = = [A-Z][a-z]+\s
TIME_FORMAT = %m/%d/%y %H:%M:%S %Z
SEDCMD-applychange01=s/[\r\n]\s*[A-z]+.+//g
SEDCMD-applychange02=s/(\*\*+.*)//g
SEDCMD-applychange04=s/(\+\+\+.*)//g
SEDCMD-applychange05=s/(==+[\r\n]*)//g
TRANSFORMS-set= setNullJob,setParsingJob

transforms.conf:
[setNullJob]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setParsingJob]
REGEX = R3BRP#DECOUPLE_NFE
DEST_KEY = queue
FORMAT = indexQueue

And here is a sample of the log file that is being read by Splunk:

===============================================================
= JOB : R3BRP#DECOUPLE_NFE[(0006 12/27/17),(0AAAAAAAAAAIO5A6)].CL_S09_IFIPD_DECOUPLE_NFE_R3BRP_01
= USER : tws 631/S/*ATHOCO/IBM/AUTOMATION_COORD_HORTOLANDIA/
= JCLFILE : / -job IFIPD_DECOUPLE_NFE -user FF_PRO1 -i 23154800 -c a
= Job Number: 64684184

= Wed 12/27/17 09:35:39 BRST

+++ IBM Tivoli Workload Scheduler for Applications, method R3BATCH 8.5.0 (patchrev 1 - 16:42:24 Jun 13 2014)
+++ is called with following parameters:
+++ -t LJ -c R3BRP,SAPECCPINST1,ACSXTWS02 -n 172.22.8.248 -p 31111 -r 1961,1961 -s 0AAAAAAAAAAIO5A6 -d 20171227,1514332800 -l twsuser1 -o /amb/local/tws/sapeccpinst1/TWS/stdlist/2017.12.27/O64684184.0935 -j CL_S09_IFIPD_DECOUPLE_NFE_R3BRP_01,64684184 -- / -job IFIPD_DECOUPLE_NFE -user FF_PRO1 -i 23154800 -c a
+++ EEWO1031I The Tivoli Workload Scheduler home directory was found: ./..
+++ EEWO1027I The RFC connection is established: (1)
+++ EEWO1023I Started the R/3 job at the following date and time: 12/27-09:35 : IFIPD_DECOUPLE_NFE, 09354300
Wed Dec 27 09:35:40 2017
+++ EEWO1007I The job status has been set to EXEC: IFIPD_DECOUPLE_NFE 09354300
+++ EEWO1006I Job status: IFIPD_DECOUPLE_NFE 09354300 FINISHED
+++ EEWO1061I Job IFIPD_DECOUPLE_NFE with job ID 09354300 was executed on SAP application server XXXXXXXXXXX.
+++ EEWO1048I Retrieving the joblog of a job:: IFIPD_DECOUPLE_NFE , 09354300
*** WARNING 914 *** EEWO0914W An internal error has occurred. Either the joblog or the job protocol for the following job does not exist:
Job name: IFIPD_DECOUPLE_NFE
Job ID: 09354300.
*** WARNING 904 *** EEWO0904W The program could not copy the joblog to stdout.
*** WARNING 914 *** EEWO0914W An internal error has occurred. Either the joblog or the job protocol for the following job does not exist:
Job name: IFIPD_DECOUPLE_NFE
Job ID: 09354300.
+++ EEWO1012I BDC sessions are complete at: 12/27-09:36 : 0

+++ EEWO1017I The job completed normally at the following date and time: 12/27-09:36

= Exit Status : 0
= System Time (Seconds) : 0 Elapsed Time (Minutes) : 0
= User Time (Seconds) : 0

= Wed 12/27/17 09:36:18 BRST
