
Number of appserver.py processes increasing, causing OOM

hrawat_splunk
Splunk Employee

The search head appears to have rogue python processes (appserver.py) that slowly eat away all memory on the system and eventually cause an OOM, which requires a manual restart of splunkd; then the issue slowly starts creeping up again.
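As a quick way to confirm the symptom on a suspect search head, you can count the appserver.py processes and total their share of physical memory directly from the shell. This is only a sketch; it assumes GNU ps (the pid, etimes, pmem and args format specifiers) and simply mirrors the appserver.py match used in the accepted answer below.

# Count appserver.py processes and sum their %MEM share.
# The bracketed pattern '[a]ppserver.py' keeps grep from matching itself.
ps -eo pid,etimes,pmem,args | grep '[a]ppserver.py' \
  | awk '{count++; mem += $3} END {printf "%d appserver.py processes using %.1f%% of RAM\n", count, mem}'

If the count and the memory share keep climbing between checks, the host is likely affected.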

1 Solution

hrawat_splunk
Splunk Employee

Due to an issue with the cleanup of idle processes, the number of python processes (appserver.py) running on the system grows constantly. The resulting system-wide memory growth from these stale processes eventually causes an OOM.

Run the following search to find out whether any search head is impacted by this issue, and what percentage of total system memory is used by stale processes that have been running for more than 24 hours. If these processes are using more than 15% of total system memory, run the script below to kill them.

 

index=_introspection host=<all search heads> appserver.py data.elapsed > 86400
| dedup host, data.pid
| stats dc(data.pid) AS cnt sum("data.pct_memory") AS appserver_memory_used BY host
| sort - appserver_memory_used

 



On Linux/Unix, you can use the following command to kill the stale processes and reclaim memory.

 

kill -TERM $(ps -eo etimes,pid,cmd | awk '{if ($1 >= 86400) print $2 " " $4}' | grep appserver.py | awk '{print $1}')
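If you want to preview what would be terminated before sending the signal, the same pipeline can be run as a dry run. This is just a sketch of the command above with the kill removed and the matching done in awk:

# Dry run: list elapsed seconds, PID and script for appserver.py processes older than 24 hours.
ps -eo etimes,pid,args | awk '$1 >= 86400 && /appserver\.py/ {print $1, $2, $4}'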

 




waechtler_amaso
Explorer

I see this behaviour too, and also for another process coming from the ITSI app:

  /opt/splunk/etc/apps/SA-ITOA/bin/command_health_monitor.py

Besides killing processes or restarting Splunk as a workaround, do you know whether there are efforts to finally resolve this bug?

Thanks, Jan
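Until a fixed release is in place, the workaround from the accepted answer can presumably be adapted to that script as well; this is only a sketch with the script name swapped in, not an official remediation:

# Terminate command_health_monitor.py processes that have been running for more than 24 hours.
kill -TERM $(ps -eo etimes,pid,args | awk '$1 >= 86400 && /command_health_monitor\.py/ {print $2}')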

 


hrawat_splunk
Splunk Employee

Splunk 9.3.0 has the fix.
