*Forcefully terminated search process with sid=1517416303.2383_ABC123 since its physical memory usage (36521.336000 MB) has exceeded the physical memory threshold specified in limits.conf/search_process_memory_usage_threshold (32768.000000 MB).*
Does anyone have a solution for this issue, where using stats dc(field) results in forceful termination of the search? I cannot raise the memory allowance any higher (currently 32 GB) without risking our search head going down when a user runs this type of query.
Obviously it is caused by a high distinct count, but there is nothing unreasonable about the query itself.
Surely Splunk has seen this many times and has a solution?
Is there some additional configuration that will let us work around the high memory consumption for this type of search?
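For reference, the threshold named in that error lives in limits.conf on the search head. A minimal sketch of the relevant stanza, with the value matching the message above (enable_memory_tracker must be on for the threshold to be enforced):

[search]
enable_memory_tracker = true
search_process_memory_usage_threshold = 32768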

After dealing with this for a few years, it turns out that when dc(field) causes an out-of-memory forceful termination, you can refactor the query to:
... | stats count by field | stats count
The tradeoff is that this form consumes more disk, because the by-clause groupings can spill to disk, whereas dc() holds its entire set of distinct values in memory. Memory consumption stays reasonable, so the search is much less likely to be forcefully terminated for memory. The new risk is termination for exceeding the disk quota allocated to the search.
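To make the refactor concrete, a minimal sketch (the index, sourcetype, and field names here are hypothetical):

The memory-hungry form:
index=web sourcetype=access_combined | stats dc(clientip)

The refactored form:
index=web sourcetype=access_combined | stats count by clientip | stats count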

I had the same issue; I used that strategy and ran out of the search disk quota.
What about the chunk_size parameter? Does it make long dc() searches feasible?
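For anyone else looking at this: chunk_size is a [stats] setting in limits.conf that controls how many results the stats processor handles per chunk, and lowering it is supposed to trade search speed for lower memory use. A sketch only; the value below is hypothetical:

[stats]
chunk_size = 1000000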
To reduce the memory impact of dc() you can change, in limits.conf:
[stats]
dc_digest_bits = 9
But the result can be approximate. =/

Yeah, an estimate is not OK for us; in cases where it is acceptable, estdc can be used instead. Depending on the data set, it sometimes works, and other times the search is still forcefully terminated by splunkd.
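In case it helps someone, estdc is the built-in approximate-distinct-count stats function, so the swap is a one-word change. A minimal sketch (the index and field names are hypothetical):

index=web sourcetype=access_combined | stats estdc(clientip) as approx_distinct_ips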

Can you share your search string? How much data is this searching and for what size time window?
