Hi,
I'm facing an issue with forwarding data to Splunk. I'm not sure where the data is being dropped, and it happens randomly.
Details:
I have a text (key-value pair) file with 6.5 million lines (events), all carrying the same timestamp (_time).
While ingesting the file into Splunk via a heavy forwarder, Splunk automatically increments _time by +1 second every 100k or 200k events, seemingly at random.
Observation:
If the +1 sec increment happens every 100k events, there is no issue and the data is ingested completely.
If the increment sometimes happens only after 200k+ events, we observe data drop: only 4 to 4.5 million of the 6.5 million events are ingested.
The Splunk log shows this warning:
WARN DateParserVerbose - The same timestamp has been used for 500K consecutive times. If more than 200K events have the same timestamp, not all events may be retrieveable
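A search like this shows how many events landed on each second after ingestion (main and kv_data are placeholders for the actual index and sourcetype):

    index=main sourcetype=kv_data | bin _time span=1s | stats count by _time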
Splunk Environment details:
Splunk Version: 7.2.6
OS: Amazon Linux (AWS)
Could you please advise on the root cause of this issue and a remedy for it?
Thanks in advance!
Mani
Do the events really all have the same timestamp?
See a nicely elaborated answer here:
https://answers.splunk.com/answers/303/whats-max-events-i-can-have-timestamped-with-a-particular-sec...
Thanks, Adonio, for the reply.
Yes, all 6.5 million events have the same timestamp.
My concern is that the data drop happens randomly, not consistently. Why does the +1 sec increment sometimes occur every 100k events and sometimes only after 200k+ events?
Does Splunk version 7.2.6 have the capability to handle this scenario?
Could you please advise a workaround? Are there any limits that need to be updated to handle this?
Thanks,
Mani
You can assign the index-time timestamp to each event instead.
In props.conf, under the relevant sourcetype stanza, add:
DATETIME_CONFIG = CURRENT
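For example, a minimal sketch of the stanza, assuming your sourcetype is called kv_data (substitute the real name). Put it in props.conf on the heavy forwarder, since that is where parsing happens in your setup, and restart the forwarder afterwards:

    [kv_data]
    # use the time of indexing as _time instead of parsing a timestamp from the event
    DATETIME_CONFIG = CURRENT

This makes _time equal the time each event is indexed, so the 6.5 million events get spread across the seconds of the ingestion run instead of piling onto one timestamp and hitting the 200K-events-per-second limit from the warning.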
Read more here:
https://docs.splunk.com/Documentation/Splunk/8.0.0/admin/Propsconf
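After re-ingesting, you can verify the setting took effect by comparing _time with _indextime; with DATETIME_CONFIG = CURRENT the two should match (again, index and sourcetype names are placeholders):

    index=main sourcetype=kv_data | eval lag=_indextime-_time | stats min(lag) max(lag) count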