Hello.
I'm using Splunk Enterprise 7.2.6 and I'm trying to export some historical data with the command below:
/path/to/splunk search "index=my_index earliest=12/02/2018:00:00:00 latest=12/03/2018:23:59:00" -output rawdata -maxout 0 -auth user:password
My server has 64 GB of RAM, and it is almost all free:
free -mg
                     total    used    free  shared  buffers  cached
Mem:                    62       2      60       0        0       0
-/+ buffers/cache:               1      61
Swap:                    0       0       0
However, the search process eats all the memory and gets killed by the OOM killer.
Could someone explain to me why the search process needs so much memory for one simple query over ONE(!) day of data?
You can add swap space to prevent the OOM killer from terminating huge searches/exports.
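For example (a minimal sketch, assuming root access and enough free disk on /; the 8 GB size and the /swapfile path are just placeholders):

fallocate -l 8G /swapfile    # reserve an 8 GB file for swap (size is an assumption)
chmod 600 /swapfile          # swapon requires the file not be world-readable
mkswap /swapfile             # format the file as swap space
swapon /swapfile             # enable it for the current boot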
How much data is in that one day?
@richgalloway You surprised me with this question. I can only check the following day, and it contains 4,510,739 events, so I would assume the amount of data on the problem day is about the same.
4.5 million events should fit in 64 GB, but it also depends on the size of the events. Perhaps you could try exporting 12 hours at a time into separate files and then concatenating the files; a rough sketch of that approach is below.
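For example, a sketch of such a chunked export that reuses your own CLI invocation (the DAY variable and the output file names are placeholders; it assumes the CLI prints the results to stdout, as your original command did, so each chunk can be redirected to its own file):

#!/bin/bash
SPLUNK=/path/to/splunk
DAY=12/02/2018

# Export the day in two 12-hour chunks, redirecting each chunk to its own file.
$SPLUNK search "index=my_index earliest=${DAY}:00:00:00 latest=${DAY}:12:00:00" -output rawdata -maxout 0 -auth user:password > export_part1.raw
$SPLUNK search "index=my_index earliest=${DAY}:12:00:00 latest=${DAY}:23:59:59" -output rawdata -maxout 0 -auth user:password > export_part2.raw

# Concatenate the chunks into a single file.
cat export_part1.raw export_part2.raw > export_full.raw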
I'm sorry, but that is unsuitable. Why does Splunk collect the data in memory instead of flushing it to disk directly? I'm not asking for anything extraordinarily hard, just to grab the data.
What if I need to export data for a whole year? Should I export it hour by hour, or maybe minute by minute? Writing a script is not a problem, but it is an inappropriate solution.
This is a serious bug, in my opinion.
I agree and understand it's not the best solution, but would you rather stand on principle or get the job done?
You're right. It is the only way to manage the issue.