Hello
Trying to figure out why my UF is consuming 37GB of swap space.
I ran some commands and here are the results:
[splunk@server07 ~]$ free -h
                    total   used   free  shared  buffers  cached
Mem:                  94G    93G   1.3G     46G     252M     49G
-/+ buffers/cache:    43G    51G
Swap:                 57G    53G   4.2G
The swap usage by Splunk process:
[splunk@server7 ~]$ grep --color VmSwap /proc/100427/status
VmSwap: 4180 kB
[splunk@server7 ~]$ grep --color VmSwap /proc/100423/status
VmSwap: 37438788 kB
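For anyone checking their own hosts: the per-process VmSwap lines can be totaled with a short awk pipeline (a sketch; feed it whichever /proc/<pid>/status files you care about):

```shell
# Sum the VmSwap lines (reported in kB) from any /proc/<pid>/status files
# piped in, and print the total in GiB.
# Example: cat /proc/100423/status /proc/100427/status | awk ...
awk '/^VmSwap:/ {sum += $2} END {printf "total swap: %.1f GB\n", sum/1024/1024}'
```

With the two values above (4180 kB + 37438788 kB) this reports roughly 35.7 GiB, which matches the ~37GB figure when the kB number is read as decimal GB.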
Anyone have any ideas why it's consuming so much swap?
This doesn't seem normal.
Thanks for the thoughts!
Went ahead and opened a case with support. Will close this and update once support gives an answer.
Thanks everyone
Looks like it was some sort of cache issue. Rolled the Splunk service and it released the swap space.
Thanks!
This is too high; I would recommend raising a support request to check why it is consuming 37GB of swap.
This link gives you some background: https://docs.splunk.com/Documentation/Splunk/7.2.4/Troubleshooting/Troubleshootmemoryusage
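As general Linux context (not Splunk-specific): it is also worth checking how aggressively the kernel is configured to swap, since a high setting can push a long-running process's idle pages out even when RAM looks adequate.

```shell
# Show the kernel's swap aggressiveness (0-100 on most kernels;
# higher values mean the kernel swaps anonymous pages more eagerly).
cat /proc/sys/vm/swappiness
```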
One thread about such cases: Why does an AIX 6.5.2 forwarder have high swap/memory and cpu consumption?
The consensus from the experts there is that a high number of monitored files can cause this behavior.
Is there a way, other than using the localhost:8089 endpoint as described in the link you attached, to find the number of monitored files? The 8089 endpoint on the UF is disabled.
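One rough proxy that needs neither the REST endpoint nor the Splunk CLI is to count the file descriptors splunkd holds open via /proc (a sketch; the PID below is from the post above and must be replaced with your own, and the count includes Splunk's own logs and sockets, so treat it as an upper-bound indicator rather than an exact monitored-file count):

```shell
# Count file descriptors that resolve to real filesystem paths for a given
# splunkd process. Assumes /proc/<pid>/fd is readable as the splunk user.
pid=100423   # hypothetical PID from the post; replace with your splunkd PID
ls -l "/proc/$pid/fd" 2>/dev/null | grep -c ' -> /'
```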
Like I said, CPU isn't spiking.
What is it doing? How many apps are installed on it? What does the log tail show? Any errors? Is it chewing up CPU?
The biggest thing there is the Nix app; other than that, not many apps. splunkd is using about 30% of a single core on average, so I don't see that as an issue. Tailing splunkd.log only shows connections to indexers. No errors.