Splunk Search

Why is there forceful termination of the search process when using stats dc()?

the_wolverine
Champion
*Forcefully terminated search process with sid=1517416303.2383_ABC123 since its physical memory usage (36521.336000 MB) has exceeded the physical memory threshold specified in limits.conf/search_process_memory_usage_threshold (32768.000000 MB).*

Does anyone have a solution for this issue where using stats dc(field) results in forceful termination of the search? I cannot raise the memory allowance any higher (currently 32 GB) without risking our search head going down when a user runs this type of query.

Obviously, it is caused by a high distinct count, but there is nothing unreasonable about the query.

Surely, Splunk has seen this many times and has a solution?
Is there some additional configuration that will allow us to work around the high memory consumption for this type of search?

1 Solution

the_wolverine
Champion

After dealing with this for a few years, it turns out that when dc(field) causes an out-of-memory forceful termination, you can refactor the query to:

... | stats count by field | stats count

The tradeoff is that this type of search consumes more disk, but memory use stays reasonable, making a forceful termination due to memory much less likely. The new risk is termination caused by exceeding the disk quota allocated to the search.
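The refactor works because the intermediate "count by field" results can be spilled to disk between pipeline stages instead of being held in one giant in-memory set. A minimal Python sketch of the same memory-for-disk idea (external aggregation; the chunk size, temp-file handling, and data are illustrative, not Splunk internals):

```python
import heapq
import os
import tempfile

def external_distinct_count(events, chunk_size=10000):
    """Exact distinct count with bounded memory: spill sorted unique
    chunks to disk, then stream-merge them (illustrative sketch of the
    memory-for-disk tradeoff, not Splunk internals)."""
    paths, chunk = [], set()

    def spill():
        fd, path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(sorted(chunk)))
        paths.append(path)
        chunk.clear()

    for value in events:
        chunk.add(str(value))
        if len(chunk) >= chunk_size:   # memory cap reached: spill to disk
            spill()
    if chunk:
        spill()

    files = [open(p) for p in paths]
    try:
        # Merge the sorted runs and count each value exactly once.
        merged = heapq.merge(*((line.rstrip("\n") for line in f) for f in files))
        distinct, prev = 0, None
        for v in merged:
            if v != prev:
                distinct, prev = distinct + 1, v
        return distinct
    finally:
        for f in files:
            f.close()
        for p in paths:
            os.remove(p)
```

Peak memory is bounded by chunk_size regardless of how many distinct values exist, while disk usage grows with the data, which mirrors the disk-quota risk noted above.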


isaiz
Loves-to-Learn Lots

I had the same issue. I used that strategy and ran out of the search disk quota.

What about the chunk_size parameter? Does it make long dc() searches possible?


jeanyvesnolen
Path Finder

To reduce the memory impact of dc() you can set, in limits.conf:

[stats]
dc_digest_bits=9

But the result can be approximate =/

limits.conf reference
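dc_digest_bits sizes the digest Splunk uses for approximate distinct counting (2^bits registers). The general technique behind this kind of digest is a HyperLogLog-style sketch; here is a minimal Python illustration of the accuracy/memory tradeoff (this is not Splunk's actual implementation, and the hash choice and constants are assumptions):

```python
import hashlib
import math

def approx_distinct(values, bits=9):
    """HyperLogLog-style cardinality estimate (illustrative sketch,
    not Splunk's digest implementation).

    'bits' plays a role similar to dc_digest_bits: more bits means
    more registers (2**bits), so better accuracy but more memory.
    """
    m = 1 << bits
    registers = [0] * m
    for v in values:
        # 64-bit hash of the value (hash choice is an assumption).
        h = int.from_bytes(hashlib.sha1(str(v).encode()).digest()[:8], "big")
        idx = h & (m - 1)                            # low bits select a register
        rest = h >> bits                             # remaining (64 - bits) hash bits
        rank = (64 - bits) - rest.bit_length() + 1   # position of the first 1-bit
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)                 # standard HLL bias constant
    est = alpha * m * m / sum(2.0 ** -r for r in registers)
    zeros = registers.count(0)
    if est <= 2.5 * m and zeros:
        # Small-range correction: fall back to linear counting.
        est = m * math.log(m / zeros)
    return int(est)
```

Fewer bits shrink memory but widen the error band, which matches the "approximate" caveat above; the estdc function mentioned below trades accuracy for memory in the same spirit.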


the_wolverine
Champion

Yeah, an estimate is not OK for us; in cases where it is acceptable, estdc can be used. Depending on the data set, it sometimes works and other times the search still fails because splunkd forcefully terminates it.


davpx
Communicator

Can you share your search string? How much data is this searching and for what size time window?
