In a few of our LPARs, we have observed that occasionally the underlying AIX process (/usr/bin/topas_nmon) spawned by the NMON Performance Monitor for Unix and Linux Systems app does not stop after its data collection period completes. Since Splunk calls nmon to collect LPAR stats every 5 minutes, the number of nmon processes keeps growing while the completed processes are not terminated. We noticed that each running process was consuming around 40-50MB of page space memory.
On 1st Sep, we had 300 of these processes stuck on one of our LPARs, each consuming 40-50MB of memory, totaling around 15GB of page space usage.
In other cases, page space was exhausted, we lost the ability to ssh to the box, and the only remedy was to restart the LPAR.
The number of files in the nmon_temp directory starts to grow. Under normal circumstances, there are no files in this directory.
File descriptors get exhausted: under normal circumstances we have 10-15 open file descriptors, but in this case all the file descriptors available to Splunk are exhausted and we start seeing an endless stream of these messages in splunkd.log: "Resetting fd to re-extract header"
Example of the command:
myuser 33096084 1 0 08:12:14 - 0:02 /usr/bin/topas_nmon -f -T -A -d -K -L -M -P -^ -s 60 -c 5 -youtputdir=/splunkfwd/splunkforwarder/etc/apps/TA-nmon/var/nmon_temp/au02qdb201teax2 -ystarttime=08:12:12,Sep13,2016
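To quantify the problem, we count the stuck collectors from the ps output; a minimal illustration (the count_nmon helper is written just for this post, it is not part of the TA):

```shell
# count_nmon: count topas_nmon collector processes in ps-style output read
# from stdin (hypothetical helper, not part of the TA-nmon)
count_nmon() {
  grep -c '/usr/bin/topas_nmon'
}

# Illustration with two captured ps lines; on a live LPAR we would run:
#   ps -ef | count_nmon
ps_sample='myuser 33096084 1 0 08:12:14 - 0:02 /usr/bin/topas_nmon -f -s 60 -c 5
myuser 33096085 1 0 08:17:14 - 0:02 /usr/bin/topas_nmon -f -s 60 -c 5'
printf '%s\n' "$ps_sample" | count_nmon   # prints 2
```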
Please note that:
TA-nmon does run successfully on these boxes at times
We have another /usr/bin/topas_nmon run by the infrastructure team on these boxes that works without issues
AIX version: 6100-09-04-1441
Splunk forwarder version 6.1.3 (Universal forwarder)
TA-nmon/metadata/local.meta:version = 6.1.3, modtime = 1413521379.326427000
Is this a known behaviour? Or is there a cleanup job that fails to execute occasionally? I would appreciate your help.
I had this issue reported once in an AIX deployment.
The problem lies in topas-nmon indeed not terminating its process when its time to live has been reached.
The TA-nmon spawns a new process every 2 hours by default, at which point the previous process is expected to have terminated.
Note that there is a small time window of a few minutes (the nmon parallel run) during which two concurrent processes run, to prevent gaps in performance data.
This is not an expected behaviour of topas-nmon (and I only saw this on AIX), maybe you could report this issue to IBM as topas-nmon is officially supported.
I think that for some reason, nmon processes encounter an unexpected situation and the internal timer gets stuck.
The additional process run by your infrastructure team may not be affected; I assume it runs with a large snapshot value to cover 24 hours, so the situation is different.
Anyway, this is definitely not a good situation: the TA-nmon, and by extension the Splunk forwarder, falls victim to the issue and produces the symptoms you described.
As far as I know this is thankfully a very rare case, and I do think we could easily protect against it.
I can implement safety conditions that will prevent the TA-nmon from spawning new processes when more than x processes are found.
In the worst case, if such an unexpected situation is encountered, no new process would be launched.
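A minimal sketch of such a guard, assuming a hypothetical threshold MAX_NMON and a helper that receives the current process count (not the actual TA-nmon code):

```shell
MAX_NMON=5  # hypothetical value for the "x" threshold

# spawn_allowed: succeed only when the running collector count ($1) is
# below the threshold; on a live system the count would come from e.g.
#   ps -ef | grep -c '[t]opas_nmon'
spawn_allowed() {
  [ "$1" -lt "$MAX_NMON" ]
}

if spawn_allowed 3; then
  echo "spawning new nmon process"
else
  echo "too many nmon processes, skipping spawn"
fi   # prints "spawning new nmon process"
```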
I will be more than happy to exchange with you on this, you can contact me on the app page on Splunk Base.
I just reviewed this post and wanted to give an update.
Since the 1.3.x branch of the TA-nmon, this issue cannot happen anymore.
Due to the changes I have made in the way the TA starts the nmon binaries, in the worst case where the current process does not end, metric collection will stop and no new process can be started as long as the current process has not terminated.
It does not solve the root issue, which is that topas-nmon in some rare and odd cases does not terminate as it should, but at least the issue will no longer have any effect on the server through the TA-nmon.
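As a rough illustration of that kind of protection (my own sketch with a hypothetical PID file path, not the actual TA-nmon implementation), a PID file can gate the launch so a new collector never starts while the previous one is still alive:

```shell
PIDFILE=/tmp/nmon_collector.pid  # hypothetical location

# start_collector: refuse to start when the PID recorded in $PIDFILE is
# still alive (kill -0 checks process existence without signalling it);
# otherwise launch the collector and record its PID.
start_collector() {
  if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "previous collector still running, not starting a new one"
    return 1
  fi
  sleep 2 &   # stand-in for: /usr/bin/topas_nmon -f ... &
  echo $! > "$PIDFILE"
}
```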