Hello, dear Splunk Ninjas!
I have an issue with my Search Head: it is using too much RAM. At first I thought it was our VM's fault, but after troubleshooting a little, I found out that Splunk itself is using a LOT of RAM: 24 GB out of 24.8 GB.
What might be the reason?
With kind regards,
Damiko
PS: I've attached a screenshot of the "htop" output.
Hi @damiko,
This is a troubleshooting problem, so you will need to work through a tree of steps, branching on what you find, to reach the root cause and fix it.
Go to Settings > Monitoring Console > Resource Usage > Resource Usage: Instance.
Check the "Physical Memory Usage by Process Class" panel; it will tell you which part of Splunk is using the most memory (splunkd, searches, KV store, etc.). If the panel is empty, there is an example search at the bottom of this answer that pulls roughly the same data.
Based on that, you need to debug further. For example, if the search processes are taking the most memory, check how many searches are running and which ones take the longest.
If the splunkd service itself is using the most memory, you may need to go through the splunkd logs in the _internal index to check what is going wrong.
Similarly, go through the different parts of Splunk to check what is wrong. The troubleshooting process will take some time, so be patient and carry on. Hope you find the issue soon.
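If the Monitoring Console panel doesn't populate for any reason, you can run a search directly against the _introspection index on the search head instead. Something along these lines should approximate what the panel shows (a sketch; field names like data.process_type and data.mem_used are the usual resource-usage introspection fields and may vary slightly between versions):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| eval process_class=coalesce('data.process_type', 'data.process')
| timechart span=1m sum(data.mem_used) AS memory_used_mb by process_class

data.mem_used is reported in MB, so a process class sitting in the 20,000+ range would account for most of the 24 GB you see in htop.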
Hi there,
Which version of Splunk are you running, and does this happen all the time or only when you run particular searches? I'm asking about the version because the mvexpand search command has some serious memory usage issues in 7.1.x which were only fixed in 7.2.5.1.
Similarly, if it's a particular search that seems to eat up all the memory, we can look at that search and see why, and how to improve its efficiency.
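If you want to narrow it down to individual searches, a sketch like this should list the searches with the highest peak memory (again assuming the default resource-usage introspection data and its data.search_props.* fields):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats max(data.mem_used) AS peak_memory_mb latest(data.search_props.user) AS user latest(data.search_props.type) AS type by data.search_props.sid
| sort - peak_memory_mb
| head 20

If one of the top SIDs turns out to be a search that uses mvexpand, that would line up with the 7.1.x memory issue mentioned above.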
Hello! I'm running Splunk version 7.2.4.
@VatsalJagani I went to Settings > Monitoring Console > Resource Usage > Resource Usage: Instance.
However, the dashboard shows a "No results found" status.
Make sure there are no error messages and that you have access to the _internal index data.
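A quick way to check both things at once is to run a couple of searches like these (sketches; adjust the time range as needed). The first just confirms you can see events in _internal and _introspection at all, and the second surfaces recent splunkd warnings and errors:

| eventcount summarize=false index=_internal index=_introspection

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) earliest=-4h
| stats count by component
| sort - count

If the event counts come back as zero, the role you are logged in with probably doesn't include those internal indexes in its allowed indexes.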
@VatsalJagani After I stopped Splunk with "$Splunk_HOME$/bin/splunk stop", it is not responding to the ./splunk start command...
Do you know why?... 😞
OK, I've copied the splunk binary from the .rpm package and put it into our Splunk installation 🙂
I didn't get that. What do you mean? Please make sure you don't lose your Splunk indexed data.
I downloaded the splunk .rpm archive and copied bin/splunk from it into my bin/ directory.
Now I can start Splunk again.
Check the htop output again; it might be a resource starvation issue.
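If you suspect the whole box is starved rather than one Splunk process, the host-wide introspection data is useful too. Something like this (a sketch, assuming the Hostwide component with data.mem and data.mem_used reported in MB) shows total vs. used memory over time:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart span=1m avg(data.mem_used) AS used_mb avg(data.mem) AS total_mb

If used_mb stays pinned near total_mb even when no heavy searches are running, the host itself is running out of memory rather than a single runaway process.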
Thank you Vatsal,
Trying to navigate through Splunk atm, it's really slow...