Splunk Search

Why does search slow down drastically after the error "Events may not be returned in sub-second order due to search memory limits"?


Hi folks,

It seems that some searches take an inordinately long time. My search is pretty simple:

index=McAfee cef_product="ePolicy Orchestrator" | top 20 suser

At first the search seems normal, then it displays an error under the search bar (with the red ! triangle):

[indexer hostname] Events may not be returned in sub-second order due to search memory limits in limits.conf:[search]: max_rawsize_perchunk. See search.log for more information.

I reduced max_rawsize_perchunk from 100 MB to 80 MB, based on some suggestions I found here. It didn't help, but I haven't modified it further.
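For reference, the effective value of this setting can be checked with btool (the path assumes a default install location; adjust $SPLUNK_HOME for your environment):

```
# Show the effective [search] stanza, filtered to max_rawsize_perchunk
$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep max_rawsize_perchunk
```

The --debug flag also prints which limits.conf file each value comes from, which helps confirm that a local override is actually being picked up.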

Once the error pops up, the search crawls: the "x of y events matched" counter increments only slowly, about ten events at a time.

I have plenty of RAM, and the CPU doesn't look to be under particularly heavy load. Thanks in advance!


Splunk Employee

@jravida - This error generally shows when you hit the built-in memory limit for max_rawsize_perchunk, usually because of large _raw events. The limit exists to keep the search process's memory usage under control. If you have RAM available on the box, I would try setting this value to 200000000 (200 MB) and let us know how new searches behave!
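A sketch of what that change might look like in a local limits.conf (the 200000000 value is the suggestion above; a restart is needed for it to take effect):

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Maximum size in bytes of an uncompressed raw-data chunk a search
# process holds in memory. Raised from the default, per the suggestion above.
max_rawsize_perchunk = 200000000
```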


Did you ever find a solution to these performance problems? We have the same symptoms when searching on one of our indexes, and we get the same error message.



@hettervi... Just curious: how many events are being matched, and what time range have you selected? Have you narrowed the search down to only the events you run top on? What is your specific use case, including field names and base search?

Assuming the example above, how does the following stats-with-sort version perform?

<Base Search>
| stats count by suser
| sort - count
| head 20


We found the cause of the error. The data stream for the index wasn't being parsed correctly: the milliseconds were not being extracted from the timestamps. That meant that if, say, 100 events all occurred within the same second, they all appeared to Splunk to have happened simultaneously. This apparently confused Splunk and performance degraded. We fixed the parsing, and now searches on the index run fine.
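For anyone hitting the same symptoms: the fix amounts to making sure the sourcetype's timestamp extraction captures sub-second precision. A hypothetical props.conf stanza for events whose timestamps look like 2021-10-05 12:34:56.789 might be (the sourcetype name and time format here are illustrative, not from the thread):

```
# $SPLUNK_HOME/etc/system/local/props.conf (on the parsing tier)
[mcafee:epo:cef]
# %3N captures the milliseconds, so events within the same second
# keep a distinct sub-second order instead of collapsing together
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Note that timestamp parsing happens at index time, so already-indexed events keep their old (second-granularity) timestamps; only newly indexed data benefits.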
