Splunk Dev

Problem with monitor file

RobertRi
Communicator

Hi Community!

I have a problem with a big log file. This log

  1. produces ~250 events per minute and
  2. rolls over roughly every 2:15 hours at a size of 10 MB.

If I run a real-time search for that specific source, some events are missing.
I have recorded some of these missing events and found them in the index later, with a delay of more than 2 hours.
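
The delay can be measured by comparing _indextime with _time, for example with a search along these lines (the index name is just a placeholder):

  index=main source="/opt/myapp/myapp.log"
  | eval lag_sec = _indextime - _time
  | timechart span=5m max(lag_sec) AS max_lag, avg(lag_sec) AS avg_lag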

On my indexer I sometimes see the following warning for that sourcetype:

AggregatorMiningProcessor - Too many events (300K)

It looks like the universal forwarder doesn't send new events to the indexer for a while, and then a huge batch arrives all at once.
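
Whether the forwarder really pauses and then sends a burst should be visible in the per-source throughput from its metrics.log, roughly like this (per_source_thruput only records the busiest sources by default, so the file may not appear in every interval):

  index=_internal source=*metrics.log* host=machine1 group=per_source_thruput series="/opt/myapp/myapp.log"
  | timechart span=1m sum(kb) AS kb_forwarded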

Do you have an idea what I can do now?

Thanks
Rob


FrankVl
Ultra Champion

So, you say you have ~250 events per minute (that's nothing special, by the way, I've seen much more talkative log files), but Splunk is complaining about 300 thousand events with the same timestamp? Sounds like something is seriously broken with your timestamping.

You mention you struggle to parse the date because of the German month names. Can you try just parsing the time? If I'm not mistaken, Splunk will default to the current date if you only extract the time.
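
A minimal props.conf sketch for that idea, assuming your sourcetype is myapp and every event starts with the /* Di Mai 15 2018 10:42:02.9290 */ style prefix (this belongs on the indexer, since the universal forwarder doesn't do timestamp parsing, and %b typically only matches English month abbreviations, which is why the date part is skipped entirely):

  [myapp]
  # regex skips "/* Di Mai 15 2018 " so only the time of day is parsed
  TIME_PREFIX = ^/\*\s+\S+\s+\S+\s+\d{1,2}\s+\d{4}\s+
  TIME_FORMAT = %H:%M:%S
  MAX_TIMESTAMP_LOOKAHEAD = 20

The date then defaults to the current date, which should be acceptable for a file that rolls every couple of hours.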


p_gurav
Champion

Can you post the full error event from the _internal logs? Also, this may help: http://docs.splunk.com/Documentation/Splunk/7.1.0/Data/Resolvedataqualityissues
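
Something like this should list those warnings from the _internal index (data_sourcetype should come out of the message through automatic key=value extraction):

  index=_internal sourcetype=splunkd log_level=WARN component=AggregatorMiningProcessor
  | stats count by host, data_sourcetype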


RobertRi
Communicator

I have seen that page, but I'm using 6.4.4 and couldn't find the data quality dashboard in the DMC.


xpac
SplunkTrust

Is this the complete error message?


RobertRi
Communicator

05-15-2018 12:57:49.175 +0200 WARN AggregatorMiningProcessor - Too many events (300K) with the same timestamp: incrementing timestamps 3 second(s) into the future to insure retrievability - data_source="/opt/myapp/myapp.log", data_host="machine1", data_sourcetype="myapp"


RobertRi
Communicator

I haven't found a solution for transforming my German timestamp
/* Di Mai 15 2018 10:42:02.9290 */
into the event timestamp. Normally, when all events are sent to the indexer, they show up in near real time, but that is a separate issue.
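
To compare what Splunk actually extracts against that raw prefix, a quick check along these lines can help (the index name is just a placeholder):

  index=main sourcetype=myapp
  | head 20
  | eval extracted_time = strftime(_time, "%Y-%m-%d %H:%M:%S")
  | table extracted_time, _raw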


xpac
SplunkTrust

As @FrankVl also said, your problem is most likely caused by broken timestamp recognition.
Do you read that file locally, or is it sent to you via syslog?
