Knowledge Management

Is there a setting for the maximum number of results that can be written to a summary index from a single saved search?


Basically the same problem as reported in https://answers.splunk.com/answers/94725/issue-with-summary-indexing-saved-searches-runs-fine-but-su...

I'm encountering this in Splunk 6 (6.1.2, to be specific).

My saved search is EXTREMELY simple:

index="my_index" field="my_field_value"

That's it. No subsearches, nothing fancy, just writing that data to a summary index.
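For context, a summary-indexing saved search like this is typically wired up in savedsearches.conf along these lines (the stanza name, schedule, and index name below are placeholders, not taken from the original post):

```
# savedsearches.conf -- illustrative sketch only
[summarize_my_field]
search = index="my_index" field="my_field_value"
# Run on a schedule so results get written to the summary index
enableSched = 1
cron_schedule = 0 * * * *
# Enable the summary-indexing action and target index
action.summary_index = 1
action.summary_index._name = my_summary_index
```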

I can run that search manually over, and over, and over, and it returns the correct number of events (~850,000) in about 150 seconds, give or take 20-30 seconds. When it runs as the saved search, it gets to 500,000 records and just quits. There are no errors or anything else that I can find; it simply stops writing data to the summary index.

The fill_summary_index.py script doesn't fill the gap, either; it just duplicates the portion of the data that was already there.

I ended up writing a special saved search to manually backfill the portion of time that was missing, but this is happening about once a week; I can't keep manually fixing it that way.

Is there some setting for the maximum number of results that can be written to a summary index from a single saved search? 500,000 seems like an awfully convenient, round number.

NOTE: I already have maxresultrows set to 10 million in limits.conf (yeah, it's big, I know, but we need it), so that's not what's truncating the results at 500,000.
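For reference, the maxresultrows override described above would look something like this (the value is the one stated in the post; the stanza is where that setting lives):

```
# limits.conf
[searchresults]
# Raised well above the 50,000 default -- large, but needed here
maxresultrows = 10000000
```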

1 Solution

In savedsearches.conf, check out dispatch.max_count. It defaults to 500,000.
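A minimal sketch of the fix, assuming the setting is applied per saved search (the stanza name and new limit here are hypothetical; 500,000 is the documented default being overridden):

```
# savedsearches.conf
[summarize_my_field]
# Override the 500,000-result default so all ~850,000 events
# make it into the summary index
dispatch.max_count = 2000000
```

This is a per-search override; setting it in the saved search's own stanza avoids raising the cap globally.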




Ah-ha! I bet that's what it is.

I'm going to try that, then run the backfill script. I'll let you know a.s.a.p. whether that was it.



Looks like that was it.

Thanks for your quick response! I was looking in limits.conf, never thought about looking in savedsearches.conf.