Monitoring Splunk

mvexpand gives "mvexpand output will be truncated due to excessive memory usage"

marcokrueger
Path Finder

I gave my Splunk 50 GB of memory with
max_mem_usage_mb = 50480
in limits.conf,
but Splunk 5.0.3 still gives me "mvexpand output will be truncated due to excessive memory usage".
The job inspector shows that the incoming data is only a few tens of MB.

Am I missing a hidden config option?

Best regards Marco
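
For reference, a minimal sketch of where that setting lives (the stanza is an assumption; max_mem_usage_mb is commonly set under [default] in limits.conf, and newer Splunk releases also expose per-command stanzas such as [mvexpand]):

# $SPLUNK_HOME/etc/system/local/limits.conf
[default]
# rough per-processor memory ceiling in MB; mvexpand truncates its output once this is exceeded
max_mem_usage_mb = 50480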

1 Solution

marcokrueger
Path Finder

Hi Frank,
in my case I have solved it. I suggest removing all fields you no longer need before you call mvexpand, like ... | fields dns, ip, record | fields - _raw | ...
Perhaps this helps.

By Marco
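
A minimal, self-contained sketch of that pattern on synthetic data (all field names here are illustrative, not from anyone's real search; makeresults produces no _raw, so the explicit drop below only marks where it would go in a real query):

| makeresults count=1
| eval dns="a.example;b.example;c.example", ip="10.0.0.1;10.0.0.2;10.0.0.3"
| eval record=mvzip(split(dns,";"), split(ip,";"))
| fields record
| fields - _raw, _time
| mvexpand record
| eval dns=mvindex(split(record,","),0), ip=mvindex(split(record,","),1)
| table dns, ip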

ClubMed
Path Finder

Wow.

For all my queries, I had been using the following fields command under the assumption that it dropped _raw.

| fields _time, xxx, yyy, zzz, ....

Then one day I started a large mvexpand and ran into the memory limit.

My thought upon seeing this answer was 'Huh? Well, worth a try I guess.'

| fields _time, xxx, yyy, zzz, ....
| fields - _raw

Boom, mvexpand completes successfully. The heck? Apparently the fields command keeps internal fields like _raw unless you explicitly remove them. It actually cut the search time in half too.


amckinnie_splun
Splunk Employee

still works in 2020


gjanders
SplunkTrust

Another workaround is to use the "by" clause of a stats command to split a multivalued field into its values; it doesn't have the memory issue that mvexpand has...

to4kawa
Ultra Champion
| makeresults count=10000
| streamstats count as t
| stats values(t) as multivalue
| fields multivalue
`comment("multivalue extract without mvexpand")`
| stats count by multivalue
| table multivalue

All the multivalue entries must be unique (duplicates collapse into a single row), but that's OK
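
Applied to the dns/ip search discussed further down in this thread, the stats-by workaround might look like the sketch below (a hedged adaptation of Frank's query; the index, sourcetype, and spath paths are copied from his post, and duplicate dns/ip pairs will collapse into a single row because stats groups by value):

index=rest sourcetype=dns:rest:a | head 1
| spath output=dns path={}.name
| spath output=ip path={}.ipv4addr
| fields - _raw
| eval record=mvzip(dns,ip)
| stats count by record
| eval dns=mvindex(split(record,","),0), ip=mvindex(split(record,","),1)
| table dns, ip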

LM_ACN
Engager

still works in October 2019 🙂


sgundeti573
Engager

June 2019 🙂


balmeida
Explorer

Still a good answer in 2019.


rafaelvjb
Explorer

Thanks man, "fields - _raw" fixed my problem


jstubberfield
Engager

This is still an excellent answer in 2018


grittonc
Contributor

Still a good answer in 2017.


fervin
Path Finder

Thanks, that fixed it.


fervin
Path Finder

Hi Marco,

I'm seeing the same behavior in 5.0.3. Did you ever find a solution? I'm trying to get some info from a REST input into a lookup, and the seemingly inefficient technique in the docs for spath to combine multivalued fields bombs at exactly 700 elements. Anybody else seeing this, or have any ideas?

-Frank

EDIT - Upgrading to 5.0.4 and greatly increasing max_mem_usage_mb did not resolve the problem for me, but filtering out the _raw field as Marco suggested did. My working query:

index=rest sourcetype=dns:rest:a | head 1 
| spath output=dns path={}.name 
| spath output=ip path={}.ipv4addr
| fields - _raw 
| eval record=mvzip(dns,ip)
| fields + record
| mvexpand record | eval record = split(record,",") 
| eval dns=mvindex(record,0) | eval ip=mvindex(record,1)   
| table dns,ip

marcokrueger
Path Finder

Hi Frank,
another solution may be to increase the max_mem_usage_mb in your limits.conf?

You can avoid the spath in your query by defining it under Manager » Fields » Field aliases.

best regards
Marco
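
If the field-alias route helps, the knob behind that Manager page is a FIELDALIAS setting in props.conf. A rough sketch of what it generates is below (the stanza and field names are assumptions, not taken from this thread, and an alias only renames a field that is already extracted at search time; it does not parse JSON the way spath does):

# $SPLUNK_HOME/etc/system/local/props.conf
[dns:rest:a]
FIELDALIAS-dns = name AS dns
FIELDALIAS-ip = ipv4addr AS ip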
