
Why did Splunk suddenly start indexing more than 100,000 events on the same timestamp?

marcokrueger
Path Finder

I have a single search that stores many events (~500,000) with the same timestamp.
As I understand it, Splunk chunks the data into groups of 100,000 and stores each subsequent 100,000 under the next second.
This has worked fine, but last week Splunk stored all 500,000 events under the same timestamp, so I can't read the data because I get

"Events may not be returned in sub-second order due to search memory limits configured in limits.conf:[search]:max_rawsize_perchunk. See search.log for more information."

and the search slows down massively.
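For reference, the limit named in that message is the max_rawsize_perchunk setting under the [search] stanza of limits.conf; a minimal illustration (the value shown is only a placeholder, not a tuning recommendation):

    [search]
    max_rawsize_perchunk = 100000000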

Does anyone know why Splunk doesn't chunk the results as usual?

Best regards
Marco


woodcock
Esteemed Legend

Yes, if you have a breakdown in your timestamping, Splunk defaults to assigning the untimestampable event the timestamp of the previous event. In this case, you should see splunkd.log entries in index=_internal from your indexers like this:

DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event ...
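A quick way to surface these warnings (assuming splunkd.log is flowing into _internal as it does by default) is a search along these lines:

    index=_internal sourcetype=splunkd component=DateParserVerbose "Failed to parse timestamp"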

The solution is to fix your broken timestamp configuration.
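As a rough sketch, an explicit timestamp configuration in props.conf on the indexers (or heavy forwarders) might look like the following; the sourcetype name and the format string are placeholders you would replace to match your data:

    [your_sourcetype]
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
    MAX_TIMESTAMP_LOOKAHEAD = 25

Pinning down TIME_PREFIX and TIME_FORMAT removes the guesswork of automatic timestamp extraction, which is the usual source of these parse failures.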
