<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: AggregatorMiningProcessor - Log ERROR in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336529#M93424</link>
    <description>&lt;P&gt;It means that your multiline event has been cut into blocks of 1000 lines.&lt;BR /&gt;
see props.conf MAX_EVENTS=1000. You could increase this value...&lt;BR /&gt;
But what you really need to do is check the sourcetype and make sure the timestamping is configured correctly, so that the events are parsed properly.&lt;/P&gt;</description>
    <pubDate>Sat, 27 Jan 2018 22:07:24 GMT</pubDate>
    <dc:creator>Azeemering</dc:creator>
    <dc:date>2018-01-27T22:07:24Z</dc:date>
    <item>
      <title>AggregatorMiningProcessor - Log ERROR</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336528#M93423</link>
      <description>&lt;P&gt;Hello all,&lt;/P&gt;

&lt;P&gt;I am facing the error below and I don't know the reason for it. Does anyone know what could trigger this error?&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;component - AggregatorMiningProcessor&lt;/STRONG&gt;&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;Breaking event because limit of 1000 has been exceeded&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;&lt;STRONG&gt;component - AggregatorMiningProcessor&lt;/STRONG&gt;&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;Changing breaking behavior for event stream because MAX_EVENTS (1000) was exceeded without a single event break. Will set BREAK_ONLY_BEFORE_DATE to False, and unset any MUST_NOT_BREAK_BEFORE or MUST_NOT_BREAK_AFTER rules. Typically this will amount to treating this data as single-line only&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Many thanks and regards,&lt;BR /&gt;
Danillo Pavan&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 17:51:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336528#M93423</guid>
      <dc:creator>danillopavan</dc:creator>
      <dc:date>2020-09-29T17:51:37Z</dc:date>
    </item>
    <item>
      <title>Re: AggregatorMiningProcessor - Log ERROR</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336529#M93424</link>
      <description>&lt;P&gt;It means that your multiline event has been cut into blocks of 1000 lines.&lt;BR /&gt;
see props.conf MAX_EVENTS=1000. You could increase this value...&lt;BR /&gt;
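&lt;/P&gt;

&lt;P&gt;For example, a minimal props.conf sketch (the sourcetype name here is just a placeholder):&lt;/P&gt;

&lt;P&gt;[your_sourcetype]&lt;BR /&gt;
# Allow multiline events of up to 2000 lines instead of the default 1000&lt;BR /&gt;
MAX_EVENTS = 2000&lt;/P&gt;

&lt;P&gt;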
But what you really need to do is check the sourcetype and make sure the timestamping is configured correctly, so that the events are parsed properly.&lt;/P&gt;</description>
      <pubDate>Sat, 27 Jan 2018 22:07:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336529#M93424</guid>
      <dc:creator>Azeemering</dc:creator>
      <dc:date>2018-01-27T22:07:24Z</dc:date>
    </item>
    <item>
      <title>Re: AggregatorMiningProcessor - Log ERROR</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336530#M93425</link>
      <description>&lt;P&gt;Hello Azeemering, thanks for your reply.&lt;/P&gt;

&lt;P&gt;However, I can see that my events have only 33 lines. I am reading log files that contain 33 lines each, so my events should have only 33 lines. I don't understand why an event would be cut into blocks of 1000 lines. My log file contains several dates/times, so I don't want the event to be split at the timestamps.&lt;/P&gt;

&lt;P&gt;Thanks and regards,&lt;BR /&gt;
Danillo Pavan&lt;/P&gt;</description>
      <pubDate>Sun, 28 Jan 2018 20:00:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336530#M93425</guid>
      <dc:creator>danillopavan</dc:creator>
      <dc:date>2018-01-28T20:00:05Z</dc:date>
    </item>
    <item>
      <title>Re: AggregatorMiningProcessor - Log ERROR</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336531#M93426</link>
      <description>&lt;P&gt;All right, do you have a sample of the data? What does your props.conf look like for this sourcetype?&lt;BR /&gt;
If your event contains multiple date/time entries, you need to tell Splunk explicitly which one is the event's timestamp. Apply a timestamp prefix (TIME_PREFIX), a maximum timestamp lookahead (MAX_TIMESTAMP_LOOKAHEAD) and a timestamp strftime format (TIME_FORMAT).&lt;BR /&gt;
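&lt;/P&gt;

&lt;P&gt;A minimal sketch of those three settings, assuming a timestamp of the form "= Wed 12/27/17 09:35:39 BRST" (sourcetype name, regex and lookahead value are illustrative, not tested against your data):&lt;/P&gt;

&lt;P&gt;[your_sourcetype]&lt;BR /&gt;
TIME_PREFIX = ^= \w{3}\s&lt;BR /&gt;
MAX_TIMESTAMP_LOOKAHEAD = 30&lt;BR /&gt;
TIME_FORMAT = %m/%d/%y %H:%M:%S %Z&lt;/P&gt;

&lt;P&gt;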
Do you have a monitoring console to check input / indexing issues for that sourcetype?&lt;/P&gt;</description>
      <pubDate>Sun, 28 Jan 2018 20:49:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336531#M93426</guid>
      <dc:creator>Azeemering</dc:creator>
      <dc:date>2018-01-28T20:49:59Z</dc:date>
    </item>
    <item>
      <title>Re: AggregatorMiningProcessor - Log ERROR</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336532#M93427</link>
      <description>&lt;P&gt;Hello Azeemering,&lt;/P&gt;

&lt;P&gt;Below are the props and transforms files:&lt;/P&gt;

&lt;P&gt;PROPS.&lt;BR /&gt;
[sourcetype]&lt;BR /&gt;
TRUNCATE = 10000&lt;BR /&gt;
DATETIME_CONFIG = &lt;BR /&gt;
LINE_BREAKER = (= User Time (Seconds) : \d+\n= \w{3} \d{2}\/\d{2}\/\d{2} \d{2}:\d{2}:\d{2} [A-Z]+)&lt;BR /&gt;
NO_BINARY_CHECK = true&lt;BR /&gt;
SHOULD_LINEMERGE = false&lt;BR /&gt;
BREAK_ONLY_BEFORE_DATE = false&lt;BR /&gt;
TIME_PREFIX = = [A-Z][a-z]+\s&lt;BR /&gt;
TIME_FORMAT = %m/%d/%y %H:%M:%S %Z&lt;BR /&gt;
SEDCMD-applychange01=s/[\r\n]\s*[A-z]+.+//g&lt;BR /&gt;
SEDCMD-applychange02=s/(**+.*)//g&lt;BR /&gt;
SEDCMD-applychange04=s/(+++.*)//g&lt;BR /&gt;
SEDCMD-applychange05=s/(==+[\r\n]*)//g&lt;BR /&gt;
TRANSFORMS-set= setNullJob,setParsingJob&lt;/P&gt;

&lt;P&gt;TRANSFORM&lt;BR /&gt;
[setNullJob]&lt;BR /&gt;
REGEX = .&lt;BR /&gt;
DEST_KEY = queue&lt;BR /&gt;
FORMAT = nullQueue&lt;/P&gt;

&lt;P&gt;[setParsingJob]&lt;BR /&gt;
REGEX = R3BRP#DECOUPLE_NFE&lt;BR /&gt;
DEST_KEY = queue&lt;BR /&gt;
FORMAT = indexQueue&lt;/P&gt;

&lt;P&gt;Here is a sample of the log file that is being read by Splunk:&lt;/P&gt;

&lt;P&gt;===============================================================&lt;BR /&gt;
= JOB       : R3BRP#DECOUPLE_NFE[(0006 12/27/17),(0AAAAAAAAAAIO5A6)].CL_S09_IFIPD_DECOUPLE_NFE_R3BRP_01&lt;BR /&gt;
= USER      : tws            631/S/*ATHOCO/IBM/AUTOMATION_COORD_HORTOLANDIA/&lt;BR /&gt;
= JCLFILE   : / -job IFIPD_DECOUPLE_NFE -user FF_PRO1 -i 23154800 -c a&lt;BR /&gt;
= Job Number: 64684184&lt;/P&gt;

&lt;P&gt;= Wed 12/27/17 09:35:39 BRST&lt;/P&gt;

&lt;P&gt;+++ IBM Tivoli Workload Scheduler for Applications, method R3BATCH 8.5.0 (patchrev 1 - 16:42:24 Jun 13 2014)&lt;BR /&gt;
+++ is called with following parameters:&lt;BR /&gt;
+++ -t LJ -c R3BRP,SAPECCPINST1,ACSXTWS02 -n 172.22.8.248 -p 31111 -r 1961,1961 -s 0AAAAAAAAAAIO5A6 -d 20171227,1514332800 -l twsuser1 -o /amb/local/tws/sapeccpinst1/TWS/stdlist/2017.12.27/O64684184.0935 -j CL_S09_IFIPD_DECOUPLE_NFE_R3BRP_01,64684184 -- / -job IFIPD_DECOUPLE_NFE -user FF_PRO1 -i 23154800 -c a &lt;BR /&gt;
+++ EEWO1031I The Tivoli Workload Scheduler home directory was found: ./..&lt;BR /&gt;
+++ EEWO1027I The RFC connection is established: (1)&lt;BR /&gt;
+++ EEWO1023I Started the R/3 job at the following date and time: 12/27-09:35 : IFIPD_DECOUPLE_NFE, 09354300&lt;BR /&gt;
Wed Dec 27 09:35:40 2017&lt;BR /&gt;
+++ EEWO1007I The job status has been set to EXEC: IFIPD_DECOUPLE_NFE               09354300&lt;BR /&gt;
+++ EEWO1006I Job status: IFIPD_DECOUPLE_NFE               09354300 FINISHED&lt;BR /&gt;
+++ EEWO1061I Job IFIPD_DECOUPLE_NFE               with job ID 09354300 was executed on SAP application server XXXXXXXXXXX.&lt;BR /&gt;
+++ EEWO1048I Retrieving the joblog of a job:: IFIPD_DECOUPLE_NFE              , 09354300&lt;BR /&gt;
*** WARNING 914 ***  EEWO0914W An internal error has occurred. Either the joblog or the job protocol for the following job does not exist:&lt;BR /&gt;
Job name: IFIPD_DECOUPLE_NFE&lt;BR /&gt;
Job ID: 09354300. &lt;BR /&gt;
*** WARNING 904 ***  EEWO0904W The program could not copy the joblog to stdout. &lt;BR /&gt;
*** WARNING 914 ***  EEWO0914W An internal error has occurred. Either the joblog or the job protocol for the following job does not exist:&lt;BR /&gt;
Job name: IFIPD_DECOUPLE_NFE&lt;BR /&gt;
Job ID: 09354300. &lt;BR /&gt;
+++ EEWO1012I BDC sessions are complete at: 12/27-09:36 : 0 &lt;/P&gt;

&lt;P&gt;+++ EEWO1017I The job completed normally at the following date and time: 12/27-09:36 &lt;/P&gt;

&lt;P&gt;= Exit Status           : 0&lt;BR /&gt;
= System Time (Seconds) : 0     Elapsed Time (Minutes) : 0&lt;BR /&gt;
= User Time (Seconds)   : 0&lt;/P&gt;

&lt;P&gt;= Wed 12/27/17 09:36:18 BRST&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 17:55:28 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/AggregatorMiningProcessor-Log-ERROR/m-p/336532#M93427</guid>
      <dc:creator>danillopavan</dc:creator>
      <dc:date>2020-09-29T17:55:28Z</dc:date>
    </item>
  </channel>
</rss>

