
Problem with monitor file

Communicator

Hi Community!

I have a problem with a big log file. This log

  1. produces ~250 events per minute and
  2. rolls over every ~2:15 hours at a size of 10 MB

If I run a real-time search for that specific source, some events are missing.
I recorded some of these missing events and found them in the index later, with a delay of more than two hours.

On my indexer I sometimes see the following error for that sourcetype:

AggregatorMiningProcessor - Too many events (300K)

It looks like the universal forwarder doesn't send new events to the indexer for a while, and then a huge load is sent all at once.

Do you have an idea what I can do now?

Thanks
Rob


Ultra Champion

So, you say you have ~250 events per minute (that's nothing special by the way, I've seen much more talkative log files), but Splunk is complaining about 300 thousand events with the same timestamp? Sounds like timestamp recognition is seriously broken somewhere.

You mention you struggle to parse the date because of the German month names. Can you try parsing just the time? If I'm not mistaken, Splunk will default to the current date if you only extract the time.
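Time-only extraction could look roughly like this in props.conf on the parsing tier. This is a sketch only: the sourcetype name "myapp" and the TIME_PREFIX regex are assumptions based on the sample line posted later in this thread.

```
# props.conf on the indexer (or heavy forwarder) -- sketch only.
# Assumes sourcetype "myapp" and lines that start like:
#   /* Di Mai 15 2018 10:42:02.9290 */
[myapp]
# Skip past the German weekday, month, day and year; anchor on the clock time
TIME_PREFIX = ^/\* \S+ \S+ \d+ \d+\s
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40
```

With no date in TIME_FORMAT, Splunk falls back to a default date for each event, so verify the assigned dates on a test index before rolling this out.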


Champion

Can you post the full error event from the _internal logs? Also, this may help: http://docs.splunk.com/Documentation/Splunk/7.1.0/Data/Resolvedataqualityissues
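Something along these lines should pull the relevant warnings out of _internal. A sketch only: the data_sourcetype field is taken from the key=value pairs in the warning text.

```
index=_internal sourcetype=splunkd log_level=WARN component=AggregatorMiningProcessor
| stats count, latest(_raw) AS latest_warning BY data_sourcetype
```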


Communicator

I have seen that post, but I'm using 6.4.4 and didn't find that data quality dashboard in the DMC.


SplunkTrust

Is this the complete error message?


Communicator

05-15-2018 12:57:49.175 +0200 WARN AggregatorMiningProcessor - Too many events (300K) with the same timestamp: incrementing timestamps 3 second(s) into the future to insure retrievability - datasource="/opt/myapp/myapp.log", datahost="machine1", data_sourcetype="myapp"


Communicator

I haven't found a solution for transforming my German timestamp
/* Di Mai 15 2018 10:42:02.9290 */
into the event timestamp. But normally, when all events are sent to the indexer, they arrive in near real time; that's another issue, though.
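Since the events arrive in near real time anyway, one blunt workaround is to skip timestamp extraction entirely and stamp events with the indexer's clock. Sketch only; the sourcetype name "myapp" is an assumption.

```
# props.conf -- sketch only, sourcetype name assumed
[myapp]
# Use the time the event is indexed instead of parsing the German timestamp
DATETIME_CONFIG = CURRENT
```

This trades timestamp accuracy for reliability: events get the index time, so any forwarding delay shows up as a shifted timestamp.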


SplunkTrust

As @FrankVI also said, your problem is most likely broken timestamp recognition.
Do you read that file locally, or is it sent to you via syslog?
