Reporting

Splunk CLI search uses a huge amount of memory

Hello.

I'm using Splunk Enterprise 7.2.6 and I'm trying to export some historical data with the command below:

/path/to/splunk search "index=my_index earliest=12/02/2018:00:00:00 latest=12/03/2018:23:59:00" -output rawdata -maxout 0 -auth user:password

My server has 64 GB of RAM, and it is almost all free:

free -mg
                   total   used   free  shared  buffers  cached
Mem:                  62      2     60       0        0       0
-/+ buffers/cache:             1     61
Swap:                  0      0      0

However, the search process eats all of the memory and is killed by the OOM killer.

Could someone explain to me why the search process needs so much memory for one simple query over the data for ONE(!) day?

SplunkTrust

How much data is in that one day?

---
If this reply helps you, an upvote would be appreciated.

@richgalloway Your question surprised me. I can only check the following day, and it has 4,510,739 events. So I would assume the amount of data on the problem day is about the same.


SplunkTrust

4.5 million events should fit in 64GB, but it also depends on the size of the events. Perhaps you could try exporting 12 hours at a time into separate files and then concatenating them.
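Something along these lines might work, as a rough sketch that reuses the command from your post (the splunk path, index, credentials, and output file names are placeholders, and the two half-day windows are hard-coded for your 12/02-12/03 range):

#!/bin/bash
# Export each 12-hour window to its own file, then stitch the files together.
for day in 02 03; do
    for window in "00:00:00 11:59:59" "12:00:00 23:59:59"; do
        set -- $window   # $1 = window start, $2 = window end
        /path/to/splunk search \
            "index=my_index earliest=12/${day}/2018:$1 latest=12/${day}/2018:$2" \
            -output rawdata -maxout 0 -auth user:password \
            > "export_2018-12-${day}_${1%%:*}.raw"
    done
done
# Concatenate the per-window files into one result.
cat export_2018-12-*.raw > full_export.raw

If a 12-hour window still gets OOM-killed, the same loop shrinks naturally to hourly windows.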

---
If this reply helps you, an upvote would be appreciated.

I'm sorry, but that is unsuitable. Why does Splunk collect the data in memory instead of flushing it to disk directly? I don't need anything especially complicated, I just want to grab the data.
What if I need to export data for a whole year? Should I do the exports hour by hour, or maybe minute by minute? That's not a problem, I can write a script, but it is an inappropriate solution.
This is a serious bug, in my opinion.

SplunkTrust

I agree and understand it's not the best solution, but would you rather stand on principle or get the job done?

---
If this reply helps you, an upvote would be appreciated.

You're right. It is the only way to manage the issue.
