AppD Archive

High CPU after installing the Java agent

CommunityUser
Splunk Employee

Hi Support,

A prospect, SoftSolutions!, is using a self-service trial. They installed the Java agent on the engine tier of the application and everything worked fine.

They then installed a second Java agent on a client of the application. The client is used for trading and quoting, so it handles thousands of transactions per second. The prospect told me the agent was installed with the out-of-the-box configuration.

The problem is that CPU usage is very high whenever the agent is running, and the client application stops working.

The agent log files are attached.

Please let me know if you have any suggestions.

Regards

Alamo


Arun_Dasetty
Super Champion

Hi Alamo,


We do not see anything suspicious in the logs provided. Please note that the logs do not cover a long enough period to debug the issue: both agent logs contain only 2-3 minutes of activity, as is evident from the start and end snippets from agent.log below:

=======================================================================
[Thread-0] 29 gen 2014 17:53:07,336  INFO AgentKernel - Starting Java Agent at Wed Jan 29 17:53:07 CET 2014 ...
..
[AD Thread Pool-Global1] 29 gen 2014 17:53:37,699  INFO DynamicServiceMBeanManager - MBean AppDynamics:type=DynamicServiceManager is registered

[Thread-0] 29 gen 2014 17:43:04,977  INFO AgentKernel - Starting Java Agent at Wed Jan 29 17:43:04 CET 2014 ...
..
[AD Thread Pool-Global1] 29 gen 2014 17:46:14,975  INFO BusinessTransactionRegistry - Sending transactions to register [BusinessTransaction{id=0, name=null, entryPointType=JMS, internalName=TopicMessageConsumerContainer:, componentId=60586 ,applicationId=0, applicationComponentName=null, matchCriteria=MatchCriteria{matchRule=null, namingConfig=DiscoveryNamingConfig{namingSchemeType='destination-name', properties=[]}}, createdOn=null, background=false, configuration=null, createdNodeId=0, enabledForEUM=false, eumAutoEnablePossible=null}, BusinessTransaction{id=0, name=null, entryPointType=JMS, internalName=TopicMessageConsumerContainer.onMessage, componentId=60586 ,applicationId=0, applicationComponentName=null, matchCriteria=MatchCriteria{matchRule=null, namingConfig=DiscoveryNamingConfig{namingSchemeType='destination-name', properties=[]}}, createdOn=null, background=false, configuration=null, createdNodeId=0, enabledForEUM=false, eumAutoEnablePossible=null}]
======================================================================

Since we see many async interceptors on the it/softsolutions and bsh/ packages, it is worth trying the following to see if it makes any difference:
Option 1: Stop the JVM, open C:\AppDynamics\conf\app-agent-config.xml, add the following lines under the <fork-config> section, save the changes, then restart the JVM and monitor CPU usage under new load:

<!-- exclude softsolutions packages -->
<excludes filter-type="STARTSWITH" filter-value="it/softsolutions/"/>
<excludes filter-type="STARTSWITH" filter-value="bsh/"/>

from BCT.logs:
===============
Applying method interceptor async.handoffAsyncHandOffExecutionTracker at it/softsolutions/nexrates/console/gui/core/Installer$1.run (()V) id:93
Applying method interceptor async.handoffAsyncHandOffIdentificationTracker at javolution/util/FastMap$7.<init> ((Ljavolution/util/FastMap;)V) id:94
...
Applying method interceptor async.handoffAsyncHandOffIdentificationTracker at bsh/Interpreter.<init>
================
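
For orientation, here is a minimal sketch of how those excludes would sit inside the <fork-config> section of app-agent-config.xml; the commented placeholder stands in for whatever entries your file already contains, and is not a literal value to copy:

<fork-config>
    <!-- ... excludes/includes already present in your app-agent-config.xml ... -->
    <!-- added: skip async hand-off tracking for the SoftSolutions and BeanShell classes -->
    <excludes filter-type="STARTSWITH" filter-value="it/softsolutions/"/>
    <excludes filter-type="STARTSWITH" filter-value="bsh/"/>
</fork-config>

Note that the filter values are slash-separated internal class-name prefixes, matching the it/softsolutions/... and bsh/... names seen in BCT.logs above.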

Option 2: Disable the agent for node "nexRatesConsoleGIO" from the node dashboard -> Agents section in the Controller UI, under the "nexRatesServerApp" application, and monitor CPU.

Option 3: Increase the JVM heap and PermGen sizes, restart the JVM, and monitor CPU under new load, as we see an OutOfMemoryError in the agent logs:
[AD Thread Pool-Global0] 29 gen 2014 17:45:36,354  INFO ErrorProcessor - Sending ADDs to register [ApplicationDiagnosticData{key='java.lang.OutOfMemoryError:', name=OutOfMemoryError, diagnosticType=ERROR, configEntities=null, summary='java.lang.OutOfMemoryError'}]
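
As a sketch only, heap and PermGen can be raised with the standard HotSpot startup options below; the sizes are placeholders to be tuned for your workload, and the PermGen options apply to Java 7 and earlier:

java -Xms1024m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=256m ... (rest of your existing startup command)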

Please keep us posted on how it goes after making the suggested changes.

Regards,

Arun
