I'm encountering this in Splunk 6 (6.1.2, to be specific).
My saved search is EXTREMELY simple:
That's it. No subsearches, nothing fancy, just writing that data to a summary index.
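For context, it's of this general shape; the index, sourcetype, field names, and stanza name below are hypothetical stand-ins, not my actual search:

    # savedsearches.conf (hypothetical names throughout)
    [summary - hourly events]
    search = index=main sourcetype=access_combined | fields _time, host, status
    dispatch.earliest_time = -1h@h
    dispatch.latest_time = @h
    cron_schedule = 5 * * * *
    enableSched = 1
    # the summary-index alert action writes the results to the summary index
    action.summary_index = 1
    action.summary_index._name = my_summary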
I can run that search manually over and over, and it returns the correct number of events (~850,000) in about 150 seconds, give or take 20-30 seconds. When it runs as a scheduled saved search, it gets to 500,000 records and just quits. There are no errors that I can find; it just stops writing data to the summary index.
The fill_summary_index.py script doesn't fill the gap, either; it just duplicates the portion of the data that was already there.
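I'm invoking it the documented way; the app, saved-search name, time window, and credentials here are placeholders:

    # backfill the last 7 days for the saved search; -dedup true is
    # supposed to skip time spans that already have summary data
    ./splunk cmd python fill_summary_index.py -app search \
        -name "summary - hourly events" -et -7d -lt now \
        -j 2 -dedup true -auth admin:changeme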
I ended up writing a one-off saved search to backfill the missing window by hand, but this is happening about once a week; I can't keep fixing it that way.
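The workaround, roughly (hypothetical index and summary-index names again, with the missing window substituted into earliest/latest):

    index=main sourcetype=access_combined earliest=-4h@h latest=-1h@h
    | fields _time, host, status
    | collect index=my_summary addtime=true

The collect command writes the results straight into the summary index, bypassing the scheduled-search action entirely.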
Is there some setting that caps the number of results a single saved search can write to a summary index? 500,000 seems like an awfully convenient round number.
NOTE: I already have maxresultrows set to 10 million in limits.conf (yeah, it's big, I know, but we need it), so that's not what's truncating the results at 500,000.
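That setting looks like this:

    # limits.conf
    [searchresults]
    # already far above the 500,000 mark where the scheduled run stops
    maxresultrows = 10000000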