Monitoring Splunk

How can I generate a search which uses lots of memory?


We are about to enable the enable_memory_tracker feature.

We'll set the following in limits.conf, under the [search] stanza:

enable_memory_tracker = true
search_process_memory_usage_percentage_threshold = 13
search_process_memory_usage_threshold = 4000

In order to test it, how can I generate searches that consume gigabytes of memory?



Use EventGen to generate thousands or millions of random events containing dozens or hundreds of fields and spanning several years.

Send those events to the index of your choice.

Then run a verbose, all-time search using:

index=your_index_name_here | table *

A similar approach that doesn't require EventGen: take a sample file such as /var/log/messages and use a bash script or a simple for loop to copy it a gazillion times into a directory, changing the file name each time. Ingest all those files with a forwarder to populate your index, then run the same search as described above.
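A minimal sketch of that copy loop (SRC, DEST, and the copy count are placeholders; adjust them for your environment and for how many gigabytes you want to ingest):

```shell
#!/bin/sh
# Sketch: clone one sample log file many times into a spool directory,
# then point a forwarder file monitor at that directory.
SRC=/var/log/messages
DEST=/tmp/bulk_sample            # hypothetical spool directory
# Fall back to a generated sample if SRC doesn't exist on this host.
[ -f "$SRC" ] || { SRC=/tmp/sample_messages
                   printf 'Jan  1 00:00:00 host proc[1]: sample event\n' > "$SRC"; }
mkdir -p "$DEST"
i=1
while [ "$i" -le 500 ]; do            # raise the count for more volume
  cp "$SRC" "$DEST/messages.$i.log"   # unique file name per copy
  i=$((i + 1))
done
echo "created $(ls "$DEST" | wc -l) files in $DEST"
```

With the copies in place, configure a monitor input on the destination directory and run the verbose all-time search above against the index they land in.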

Either should work. I'm sure there are other solutions as well, but those two options come to mind first...


The query (index=* OR index=_*) | table * did it, and it produced this message in the UI:

-- The search process with sid=1590066363.14344 was forcefully terminated because its physical memory usage (6456.715000 MB) has exceeded the 'search_process_memory_usage_threshold' (4000.000000 MB) setting in limits.conf.

Where do we enable the MC admin message for this case when it happens?



Glad to hear that worked for you! 😄

There's a canned DMC alert for this you can enable named "DMC Alert - Critical System Physical Memory Usage".

Or, you can create your own, obviously.

If you feel like this reply solved your issue please consider accepting the answer, so others can benefit as well.


Right @codebuilder, but it's a generic message. It seems that this search brings back data about these specific cases:

index=_internal sourcetype=splunkd component=SearchProcessMemoryTracker event_message="*Forcefully terminated*"
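If the canned DMC alert is too generic, one option is to schedule your own alert on that search. A hedged sketch of a savedsearches.conf stanza (the stanza name, schedule, and time range are illustrative choices, not prescribed values):

```
# Example stanza for savedsearches.conf -- name and schedule are examples
[Search process forcefully terminated]
search = index=_internal sourcetype=splunkd component=SearchProcessMemoryTracker event_message="*Forcefully terminated*"
dispatch.earliest_time = -15m
dispatch.latest_time = now
enableSched = 1
cron_schedule = */15 * * * *
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
```

The same alert can equivalently be created through the UI via Settings > Searches, reports, and alerts.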
