free -m
As a result of this command, we found that memory usage is only about 3%, but the swap space is 100% in use.
The same thing happens shortly after restarting Splunk.
Does anyone know the cause of this phenomenon and how to solve it?
The server environment is as follows.
OS: CentOS 7
Splunk Enterprise 9.0.4
Wait. You're mixing different things here.
If you have very low memory usage and there are still some pages swapped out, it means that you have huge chunks of process memory which have not been used for a long time (for example, a daemon which is just sleeping most of the time and whose code and data are rarely accessed). In that case it's indeed better for the OS to swap it out and use the freed memory pages for cache/buffers. One big caveat though - if at some point the process requests access to those swapped-out pages, the kernel will start loading them back from disk. If it's only at the price of dropping some cache pages, probably no one will even notice. But if it needs to swap out some active memory pages... that might get ugly.
And even on modern systems with NVMe disks (which are not that widespread yet), RAM access is way faster than disk transfer.
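To see which processes actually have pages parked in swap, you can read VmSwap from /proc on most Linux systems. A rough sketch (exact formatting may differ between kernels, and smem gives similar output if it happens to be installed):

# List processes with swapped-out memory, largest first (values in kB)
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {name=$2} /^VmSwap:/ {swap=$2} END {if (swap > 0) print swap, "kB", name}' "$f" 2>/dev/null
done | sort -nr | head -20

If splunkd sits at the top but the number stays constant over time, it's most likely the cold pages described above rather than active swapping.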
Splunk, as software running on top of the OS, has no ability to choose between swap and real memory - that is decided purely by the OS.
There have historically been a number of swap-related issues in Linux which may be better addressed or explained by vendor support.
Frequent swap access could impact Splunk performance negatively - you may want to control 'swappiness' with the help of your OS admin.
https://www.techtarget.com/searchdatacenter/definition/Linux-swappiness FYI.
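As a rough sketch of what that tuning looks like on RHEL/CentOS (the value 10 below is only an example - pick the actual value together with your OS admin):

# Check the current value (the default is usually 60)
cat /proc/sys/vm/swappiness
# Lower it for the running system (lost on reboot)
sudo sysctl vm.swappiness=10
# Persist it across reboots
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl --system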
Hello,
We also have a problem with increased swap usage.
OS: RHEL 9.5
RAM: 32GB
SWAP: 16GB
SPLUNK: 9.4.1
# free -m
total used free shared buff/cache available
Mem: 31837 6853 358 0 24953 24984
Swap: 16383 16292 91
I couldn't find any other cause or solution.
I don't have any problems with Splunk operations, so I'm just leaving it as it is.
I totally agree with @PickleRick - you should disable your swap at least temporarily, and after you have confirmed that everything is working and/or fixed the root cause of the swap usage, remove it permanently. When you have dedicated servers for Splunk, they should be sized correctly to run your normal workload.
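For reference, a minimal sketch of how that is usually done on RHEL/CentOS (the fstab device path is only an example - check what your system actually uses):

# Move everything out of swap and disable it on the running system
# (requires enough free RAM to absorb the swapped-out pages)
sudo swapoff -a
free -m          # verify that Swap now shows 0 used
# To make it permanent, comment out the swap entry in /etc/fstab, e.g.:
#   #/dev/mapper/rhel-swap  none  swap  defaults  0 0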
Using the top command, we found that the splunkd process is using 100% of the swap space. However, we cannot determine the root cause because there is no way to check exactly what kind of operation is using the swap. Do you know of any case where the problem of 100% swap usage was solved?
Thank you.
Let me tell you about the exact phenomenon.
Splunk Enterprise is currently running in two separate roles: a search head server and an indexer server.
The server environment is as follows.
OS version: CentOS 7
Splunk version: 9.0.4
ram: 256G
swap: 16G
Memory usage is about 5% on average, but swap usage is 100%.
OK. YMMV but with 256G of RAM I would definitely _not_ want any swap at all.
I know that:
1) Many Linux installers create swap space by default whether it's needed or not.
2) There are still some myths back from... the eighties(?) circulating around that "you should have twice as much swap as RAM". In your case that would be 0.5TB of swap which - as you will surely admit - would be completely ridiculous.
But every use case is different, and in Splunk's case I think it's better to fail early and restart than to let your load go sky high and then crash anyway.
There are two things to tackle here.
One is general memory usage. It can be caused by many different things depending on the component and its activity, but most typically, the more searching you do, the more memory you use.
Another thing is swap. I'm not a big fan of swap use in modern scenarios. OK, some small amount of swap to let the system move some "running but not quite" daemons out of the way might be useful, but nothing more. If your main task (in your case - splunkd) starts swapping out, you're getting into a loop where the system cannot keep up with requests for memory, so it starts swapping, so it cannot allocate any more memory, so it wants to swap some more... I prefer my systems with little or no swap at all. It's very often better to have the process simply killed due to memory exhaustion and restarted than to wait for it to crash badly for the same reason, but only after a long period of heavy I/O use, possibly affecting other components if you are using shared storage infrastructure.
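One quick way to tell cold pages sitting in swap apart from the loop described above is to watch swap activity over time, for example with vmstat (part of procps, available on most distributions):

# si/so = kB per second swapped in from / out to disk; sustained non-zero
# values while splunkd is busy mean active swapping, not just parked pages
vmstat 5

If si and so stay at or near zero even though free reports swap as fully used, the swap is mostly holding memory that nothing is touching.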
Hi @khj
Typically your server will use swap if there is not enough RAM available on the system for the processes that are running.
Please could you let us know how much RAM the server has, and how much is typically being used? It could be that it is under-spec'd for the ES role.
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will