Getting Data In

how to handle thousands of events with the same timestamp...

a212830
Champion

Hi,

I have a feed that collects SNMP performance stats every 5 minutes. I am parsing this logfile with a heavy forwarder and selectively picking the events I'm interested in. I keep seeing the following message in my splunkd.log:

12-03-2013 22:40:03.658 -0500 WARN DateParserVerbose - The same timestamp has been used for 400K consecutive times. If more than 200K events have the same timestamp, not all events may be retrieveable. Context: source::/usr/local/nsmutils/export/current/ratedata_vlmmk286.fmr.com_2013-12-03T21-57-33.244|host::vlrtp391|snmp_metrics|

Is the forwarder ignoring events? How can I ensure that all events are being processed?
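
For reference, selective filtering on a heavy forwarder is usually done with a props/transforms pair that routes everything to the nullQueue and then sends the matching events back to the indexQueue. The sketch below assumes the events come in under the snmp_metrics sourcetype seen in the warning context; the keep pattern is only a placeholder, not taken from this post:

props.conf
[snmp_metrics]
TRANSFORMS-routing = drop_all, keep_interesting

transforms.conf
[drop_all]
# send every event to the nullQueue by default
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_interesting]
# placeholder pattern -- replace with the metrics actually being kept
REGEX = ifInOctets|ifOutOctets
DEST_KEY = queue
FORMAT = indexQueue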

hettervik
Builder

I've encountered serious performance issues at search time when a lot of events have the exact same timestamp, so I think "forcing" Splunk to index a lot of events with the same timestamp would be a bad idea. That being said, how are you setting the timestamp on the events: extracting it from the raw events, setting it at index time, or using the timestamp from the file name? If the latter is the case, you could try using the timestamps from the raw events (if any), or set DATETIME_CONFIG = CURRENT in props.conf to set the timestamp to the time of indexing.
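
As a rough sketch of the two alternatives in props.conf (the sourcetype name, TIME_PREFIX, and TIME_FORMAT below are assumptions about the data, not taken from this thread):

props.conf
# Option 1: parse the timestamp out of each raw event
[snmp_metrics]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30

# Option 2: skip timestamp extraction and stamp each event with the time it is indexed
# [snmp_metrics]
# DATETIME_CONFIG = CURRENT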
