I'm using Splunk Enterprise 7.2.6 and I'm trying to export some data from the past with the command below:
/path/to/splunk search "index=my_index earliest=12/02/2018:00:00:00 latest=12/03/2018:23:59:00" -output rawdata -maxout 0 -auth user:password
My server has 64 GB of RAM, and it is almost all free:
                   total   used   free   shared   buffers   cached
Mem:                  62      2     60        0         0        0
-/+ buffers/cache:             1     61
Swap:                  0      0      0
However, the search process eats all the memory and gets killed by the OOM killer.
Could someone explain why the search process needs so much memory for one simple query over just ONE(!) day of data?
4.5 million events should fit in 64 GB, but it also depends on the size of the events. Perhaps you could try exporting 12 hours at a time into separate files and then concatenating them.
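As a rough sketch of that workaround, the script below splits a time range into 12-hour chunks and generates one `splunk search` export command per chunk, each writing to its own file. The binary path, index name, credentials, and file names are placeholders taken from the question, and the end time is rounded to midnight for simplicity; adjust them to your environment.

```python
from datetime import datetime, timedelta

def chunked_commands(start, end, hours=12,
                     index="my_index",
                     splunk_bin="/path/to/splunk"):
    """Yield one CLI export command per time chunk.

    Splunk's CLI accepts timestamps in %m/%d/%Y:%H:%M:%S form.
    Binary path, index, and credentials are placeholders.
    """
    fmt = "%m/%d/%Y:%H:%M:%S"
    cur = start
    part = 0
    while cur < end:
        nxt = min(cur + timedelta(hours=hours), end)
        query = (f"index={index} earliest={cur.strftime(fmt)} "
                 f"latest={nxt.strftime(fmt)}")
        yield (f'{splunk_bin} search "{query}" -output rawdata '
               f"-maxout 0 -auth user:password > part_{part}.raw")
        cur = nxt
        part += 1

# The two days from the question become four 12-hour export commands.
cmds = list(chunked_commands(datetime(2018, 12, 2),
                             datetime(2018, 12, 4)))
```

Running the generated commands and then concatenating the parts (e.g. `cat part_*.raw > full.raw`) gives the same data as the single large export, with a much smaller peak memory footprint per search.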
I'm sorry, but that is unsuitable. Why does Splunk collect the data in memory instead of flushing it to disk directly? I don't need anything especially complex, I just want to grab the data.
What if I need to export data for a whole year? Should I export hour by hour, or maybe minute by minute? Writing a script is not a problem, but it is an inappropriate solution.
In my opinion, this is a serious bug.