Getting Data In

Why did Splunk suddenly start indexing more than 100,000 events with the same timestamp?

marcokrueger
Path Finder

I have a single search that stores many events (~500,000) with the same timestamp.
As I understand it, Splunk chunks the data into batches of 100,000 and stores the next 100,000 at the next second.
This worked fine, but last week Splunk stored all 500,000 events at the same timestamp, so I can't read the data because I get

"Events may not be returned in sub-second order due to search memory limits configured in limits.conf:[search]:max_rawsize_perchunk. See search.log for more information."

and the search slows down massively.

Does anyone know why Splunk doesn't chunk the results as usual?

Best regards
Marco


woodcock
Esteemed Legend

Yes, if your timestamping breaks down, Splunk defaults to assigning the untimestampable event the timestamp of the previous event. In that case, you should see splunkd.log entries in index=_internal from your indexers like this:

DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event ...
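You can check for these warnings (assuming a default deployment, where the sourcetype and component fields carry their standard names) with a search along these lines:

index=_internal sourcetype=splunkd component=DateParserVerbose "Failed to parse timestamp"

If this returns hits around the time your data came in, those events inherited the previous event's timestamp instead of having their own parsed.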

The solution is to fix your broken timestamp configuration.
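As a minimal sketch of what that fix can look like: pinning the timestamp extraction explicitly in props.conf on the parsing tier usually prevents Splunk from falling back to the previous event's time. The stanza name and TIME_FORMAT below are placeholders; replace them with your actual sourcetype and the timestamp layout in your raw events:

[your_sourcetype]
# Timestamp starts at the beginning of the raw event (adjust TIME_PREFIX if not)
TIME_PREFIX = ^
# Placeholder strptime pattern; must match your data, e.g. 2024-05-01 12:34:56.789
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
# Don't scan past the timestamp when parsing
MAX_TIMESTAMP_LOOKAHEAD = 30

With an explicit TIME_FORMAT, events are parsed deterministically instead of relying on timestamp auto-detection, which is what fails in the DateParserVerbose warnings above.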
