All TKB Articles in Learn Splunk


Question: Our application uses the JTurbo JDBC driver to connect to a Microsoft SQL Server database. We cannot see the JDBC calls to the SQL Server backend. How can we fix this?

Answer: Refer to the article Using Node Properties to Detect JDBC Backends and try using the following JDBC node properties and values:

jdbc-connections=com.ashna.jturbo.driver.c
jdbc-statements=com.ashna.jturbo.driver.w
jdbc-prepared-statements=com.ashna.jturbo.driver.x
jdbc-callable-statements=com.ashna.jturbo.driver.y

Add all of the above JDBC properties to the respective node, then apply load to the application.
This article represents the latest collected wisdom from AppDynamics field engineers. For information on how to configure the Java Agent with TIBCO, see Configure the Java Agent for TIBCO BusinessWorks.

Required Agent version
BusinessWorks support requires version 3.9.6 or higher of the AppDynamics Java Agent.

To get visibility of database calls
AppDynamics provides agent node properties that can be used to detect and instrument JDBC backends that are not detected automatically. The node properties are described in Knowledge Base: Using Node Properties to Detect JDBC Backends. You edit the value of these properties from the Node Dashboard; see Edit Registered Node Property. Tip: Separate multiple class names with a ',' (comma) in the agent node property configuration.

For TIBCO, use the following JDBC node property values:

jdbc-statements: value="tibcosoftwareinc.jdbc.base.BaseStatement,tibcosoftwareinc.jdbc.base.BaseCallableStatement,tibcosoftwareinc.jdbcx.base.BaseStatementWrapper,tibcosoftwareinc.jdbc.base.BasePreparedStatementPoolable"
jdbc-connections: value="tibcosoftwareinc.jdbcx.base.BaseConnectionWrapper,tibcosoftwareinc.jdbc.base.BaseConnection"
jdbc-prepared-statements: value="tibcosoftwareinc.jdbc.base.BaseStatement,tibcosoftwareinc.jdbc.base.BaseCallableStatement,tibcosoftwareinc.jdbcx.base.BasePreparedStatementWrapper,tibcosoftwareinc.jdbc.base.BasePreparedStatementPoolable"
jdbc-callable-statements: value="tibcosoftwareinc.jdbc.base.BaseCallableStatement,tibcosoftwareinc.jdbcx.base.BaseCallableStatementWrapper,tibcosoftwareinc.jdbc.base.BaseCallableStatementPoolable"

To track jobs through BusinessWorks
The following configuration is provided out of the box in AppDynamics release 3.9.7+ and 4.0+. You do not need to change the app-agent-config.xml file unless your agent is 3.9.6 or earlier, or you retained your existing, pre-4.0 app-agent-config.xml file after upgrading to 4.0.
This should be rare. For a 3.9.6 or earlier agent, or if you retained your existing, pre-4.0 app-agent-config.xml file after upgrading to 4.0, you need to revise the configuration manually as shown here:

1. Locate and open the app-agent-config.xml file for editing.

2. Make these changes in app-agent-config.xml:

a. Add the following to the <fork-config> element:

<!-- special config for tibco -->
<job>
  <match-class type="matches-class">
    <name filter-type="EQUALS" filter-value="com.tibco.pe.core.Job"/>
  </match-class>
  <match-method>
    <name filter-type="EQUALS" filter-value="k"/>
  </match-method>
  <name-config operation="" type="4"/>
  <retention-config type="1" operation="1.getTaskSize()"/>
</job>

b. Uncomment this additional node property to allow detection of incoming JMS messages:

<!-- uncomment the following to enable transaction correlation for jms .receive() call, default value is false -->
<property name="enable-jms-receive-correlation" value="true"/>

c. Add the following entry to the <fork-config> element of app-agent-config.xml:

<excludes filter-type="STARTSWITH" filter-value="com.tibco.plugin.share.jms.impl.JMSReceiver$SessionController"/>

3. Add the attached custom-interceptors.xml file to the <app-agent-install>/conf/ directory.

To split SOAP/HTTP invocations by the SOAP action
Many incoming SOAP actions sent via HTTP requests hit a single URL (for example, URLs starting with something like /BusinessServices/WebGateway), so you need a POJO split rule as shown in the two following screen shots.
Match rule:
Split rule: (see below for the values that are not showing completely in the screen shot).
POJO Method Call values:
Class Name = com.tibco.bw.service.binding.soap.http.SoapHttpTransportApplication
Method Name = processMessage
Method Call Chain = getTransportContext().getSoapAction()

This results in the following XML:

<custom-match-point-definition transaction-entry-point-type="SERVLET">
  <name>get soap action</name>
  <custom-business-transaction-name>get soap action</custom-business-transaction-name>
  <background>false</background>
  <enabled>true</enabled>
  <match-rule>
    <servlet-rule>
      <enabled>true</enabled>
      <priority>50</priority>
      <uri filter-type="STARTSWITH" filter-value="/BusinessServices/WebGateway"/>
      <properties/>
      <generic-method-config>
        <class-name>com.tibco.bw.service.binding.soap.http.SoapHttpTransportApplication</class-name>
        <method-config>
          <name>processMessage</name>
          <param-length>1</param-length>
          <param-index>0</param-index>
          <param-getter>getTransportContext().getSoapAction()</param-getter>
        </method-config>
      </generic-method-config>
    </servlet-rule>
  </match-rule>
</custom-match-point-definition>

If things don't seem to be working
The TIBCO code is obfuscated, and you may be dealing with a different TIBCO BW release, so some of the configuration referencing TIBCO classes may need to change. Contact AppDynamics Support for additional assistance.
Also see: Tibco ActiveMatrix BusinessWorks Service Engine Settings
For JBoss EAP
AppDynamics supports the underlying JVM and framework versions. Our product documentation has specific installation instructions for JBoss (open-source versions), but those instructions do not necessarily work for JBoss EAP. JBoss EAP (Enterprise Application Platform) is Red Hat's branded and supported version of JBoss AS 7. JBoss AS 7 is already one of the more complex agent integrations, requiring modifications to two files in standalone mode: standalone.conf and standalone.sh. JBoss EAP complicates this integration with a different layout, which changes the location, names, and content of these jar files.

Documented Settings
You can find settings in the public product documentation at JBoss and Wildfly Startup Settings for the following:
- Linux environment for JBoss EAP 6.1.1, EAP 6.2.0, and JBoss AS 7.0.x (standalone)
- RHEL JBoss EAP 6.x, JBoss AS 7.0.x, JBoss 8 (Domain Mode)
- JBoss 7.2 (standalone)
- Wildfly 8 (the current steps for JBoss 7.x/EAP 6.x also work for Wildfly 8)

JBoss EAP 6.1 (Standalone)
There is a JBoss EAP 6.1.0 bug that prevents our agent from working. This is fixed in JBoss EAP 6.1.1. See the JBoss EAP 6.1.0 bug reference (969530) under the 6.1.1 release notes: https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Application_Platform/6.1/html-single/6.1.1_Release_Notes/index.html#sect-changes. To get around this issue, consider an upgrade to JBoss EAP 6.1.1. The workaround at Step 2 under Quick Install in JBoss and Wildfly Startup Settings may also work.

Troubleshooting JBoss configuration
The standard recipe for sorting out JBoss configuration is the following:

1. Don't add "-XX:-UseSplitVerifier" unless you need to. It works around the case where your class files don't have a StackMap table, such as when running under a debugger. See http://stackoverflow.com/questions/15253173/how-safe-is-it-to-use-xx-usesplitverifier

2. JBoss EAP 6.x changed classloading; it uses a modular (OSGi-style) classloader.
The option -Djboss.modules.system.pkgs=org.jboss.byteman,com.singularity adds these packages to the OSGi container classpath. org.jboss.byteman is for debugging (see http://byteman.jboss.org/); you probably don't need it.

3. Setting the following logging options can be problematic. For example:
org.jboss.logmanager in -Djboss.modules.system.pkgs
-Djava.util.logging.manager=org.jboss.logmanager.LogManager
-Xbootclasspath/p:/var/jboss-eap-6.1.0/jboss-eap-6.1/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.4.0.Final-redhat-1.jar
...
In theory these seem to be influenced by the application. If you need to set them, you need to get the right modules associated in the bootclasspath, so check the versions; they change. In practice it seems you can often get away without specifying them, saving a world of pain.

In JBoss EAP 6.1.0, this seems to trigger a JBoss bug that cannot be worked around using the bootclasspath: https://bugzilla.redhat.com/show_bug.cgi?id=969530 (also referenced earlier in this article).
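Putting the pieces together, the agent-related lines in bin/standalone.conf might look like the sketch below. The agent install path is an assumption (adjust it to your environment), and you should follow the documented JBoss and Wildfly Startup Settings for your exact version; this only illustrates the shape of the change.

```shell
# Sketch of an agent excerpt for bin/standalone.conf (standalone mode).
# The javaagent path is an example -- point it at your actual install.
JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/appdynamics/appagent/javaagent.jar"
# Expose the agent's packages to JBoss's modular classloader
# (org.jboss.byteman omitted; per the article you rarely need it):
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=com.singularity"
echo "$JAVA_OPTS"
```

Note that -XX:-UseSplitVerifier and the logmanager/bootclasspath options are deliberately left out, per the troubleshooting recipe above.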
Confirming Backend Discovery and Instrumentation
This article describes the backend discovery life cycle. Understanding these details can help you debug instrumentation issues for backends that are not automatically discovered. The phases referred to in this article are the phases in the discovery life cycle; see AppDynamics Auto-Discovery Life Cycle.

Typical examples of backends are database instances and remote services, such as web services, JMS clients and message queues, HTTP backends, and so on. Backends are born from exit points: an exit point call from an instrumented node to an uninstrumented node or other service results in backend discovery. The activity of a supported remote service or database is identified by its type and related properties. Each type of backend has a list of properties associated with it; the properties are a list of key-value pairs. For example, for a JDBC database, the following screen capture shows the properties that can be used to uniquely identify the JDBC database.

Phase One: Was Backend Discovery Successful?
Determine if the exit point was successfully instrumented:

1. Was the backend limit reached? If you see backends but some are missing, check that the backend limit was not reached. There is a limit of 300 per application. If you have hit the backend limit, you may need to revise the backend discovery configuration. See All Other Traffic Backends.

2. If the backend limit was not reached, is your backend one that is discovered by default? To determine this, check the documentation for your app agent type. Use the docs version that corresponds to your agent version:
Latest: Supported Environments and Versions
3.9: Supported Environments and Versions

3. If your backend is not on the supported list, you can configure a custom exit point. See Configure Custom Exit Points.

4.
If you are using a supported backend and you are not seeing what you expect, find the BCT log (Byte Code Transformer log) and review the entries for exit point interceptors starting with the text exit.<your_framework>. For example, exit.jdbc for a JDBC database or exit.jms for a JMS service. See Agent Log Files for details on requesting the agent logs.

a. If you find evidence in the BCT log that the exit point was instrumented, check for exit point exceptions in the agent logs. If you find exceptions for your exit point, contact support for additional assistance. If you do not find any exceptions, move on to phase two, backend registration.

b. If your exit point was not instrumented and you know the exit point that you want to instrument, configure a custom exit point. If you do not know the exit point to instrument, look at the Call Graph to find the method and configure the custom exit point.

Phase Two: Backend Registration
When a backend is successfully discovered, it is assigned a name based on the rules for the exit point type and any custom backend detection configuration (if applicable). Every backend is stored in the backend table in the controller database. The agent detects when the backend is hit and sends a registration request to the controller. Registration is the process by which each object is assigned an ID by the controller. After the ID is assigned, communication between the agent and controller uses the ID to identify exactly which objects are being reported and stored in the controller database. You can review the backend registration log entries in the REST log.

Confirm successful backend registration:
1. Generate and retrieve the REST log. For details on how to do this, see Request Agent Log Files.
2. Look for the backend registration request log entries. In the following request, you can see three backends were discovered and sent to the controller for registration.
<unregistered-backends> Size : 3
<UnresolvedBackendCallInfo [resolutionInfo=NodeResolutionInfo[exitPointType=JMS, properties=[Name:DESTINATION_TYPE, Value:QUEUE, Name:DESTINATION_NAME, Value:OrderQueue, Name:VENDOR, Value:Active MQ]], metaInfo=[], applicationId=0, createdOn=0, displayName=Active MQ-OrderQueue, visualizationProperties=null, applicationComponentNodeId=0, applicationComponentId=0]>
<UnresolvedBackendCallInfo [resolutionInfo=NodeResolutionInfo[exitPointType=WEB_SERVICE, properties=[Name:SERVICE_NAME, Value:OrderService]], metaInfo=[], applicationId=0, createdOn=0, displayName=OrderService, visualizationProperties=null, applicationComponentNodeId=0, applicationComponentId=0]>
<UnresolvedBackendCallInfo [resolutionInfo=NodeResolutionInfo[exitPointType=JDBC, properties=[Name:HOST, Value:LOCALHOST, Name:PORT, Value:3306, Name:SCHEMA, Value:APPDY, Name:MAJOR_VERSION, Value:5.5.16-log, Name:URL, Value:jdbc:mysql://localhost:3306/appdy, Name:VENDOR, Value:MySQL DB]], metaInfo=[], applicationId=0, createdOn=0, displayName=APPDY-MySQL DB, visualizationProperties=null, applicationComponentNodeId=0, applicationComponentId=0]>
<unregistered-backends/>

3. Look for the corresponding backend registration responses. In the following response, you can see that the JMS exit point has id=12 and is named OrderQueue. ID 15 was assigned to the MySQL JDBC backend.

<resolved-backend-calls>
<12>::<NodeResolutionInfo[exitPointType=JMS, properties=[Name:DESTINATION_NAME, Value:OrderQueue, Name:DESTINATION_TYPE, Value:QUEUE, Name:VENDOR, Value:Active MQ]]>
<14>::<NodeResolutionInfo[exitPointType=WEB_SERVICE, properties=[Name:SERVICE_NAME, Value:OrderService]]>
<15>::<NodeResolutionInfo[exitPointType=JDBC, properties=[Name:HOST, Value:LOCALHOST, Name:MAJOR_VERSION, Value:5.5.16-log, Name:PORT, Value:3306, Name:SCHEMA, Value:APPDY, Name:URL, Value:jdbc:mysql://localhost:3306/appdy, Name:VENDOR, Value:MySQL DB]]>
...
<resolved-backend-calls/>

4.
If you do not see a successful registration request and response, look in the Controller logs for exceptions related to backend registration.

Note: If you exceed the backend limit, you see log messages similar to the following:

AD Thread Pool-Global465] 23 Dec 2014 10:36:16,587 INFO BTOverflowCounter - DroppedBackend{key='https://rxservice.company.com/RxService/Account MgmtService', count=35, identifyingProperties={SERVICE_NAME=AccountMgmtService}, exitType=WEB_SERVICE}

Phase Three: Metrics Registration
After successful backend registration, the agent keeps a backend map in memory with the ID, type, and name for each backend. The next time the backend is hit, the agent is ready to report metrics. A similar process of request and response between the agent and controller occurs, and the controller assigns each metric an ID.

Was backend metric registration successful? There are three types of backend metrics:
- Business transaction (BT) metrics: aggregate metrics for the specific BT for the specified backend.
- Tier metrics: aggregate metrics across the tier for the specified backend.
- Backend metrics: overall aggregate metrics for the backend.

Use the following steps to confirm metric registration:
1. Generate and retrieve the REST log. For details on how to do this, see Request Agent Log Files.
2. Look for log entries showing metric registration.
For example, to find metrics related to the MySQL database registered in the previous example (ID=15), a search of the REST log for the string "[UNRESOLVED][15]" results in the following types of log entries:

<metric time-rollup-type="AVERAGE" name="BTM|BTs|BT:80|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Errors per Minute" hole-fill-type="RATE_COUNTER" cluster-rollup-type="COLLECTIVE"/>
<metric time-rollup-type="AVERAGE" name="BTM|BTs|BT:81|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Average Response Time (ms)" hole-fill-type="REGULAR_COUNTER" cluster-rollup-type="INDIVIDUAL"/>
<metric time-rollup-type="AVERAGE" name="BTM|BTs|BT:82|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Calls per Minute" hole-fill-type="RATE_COUNTER" cluster-rollup-type="COLLECTIVE"/>
<metric time-rollup-type="AVERAGE" name="BTM|BTs|BT:80|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Average Response Time (ms)" hole-fill-type="REGULAR_COUNTER" cluster-rollup-type="INDIVIDUAL"/>
<metric time-rollup-type="AVERAGE" name="BTM|Application Summary|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Average Response Time (ms)" hole-fill-type="REGULAR_COUNTER" cluster-rollup-type="INDIVIDUAL"/>
<metric time-rollup-type="AVERAGE" name="BTM|Backends|Component:{[UNRESOLVED][15]}|Average Response Time (ms)" hole-fill-type="REGULAR_COUNTER" cluster-rollup-type="INDIVIDUAL"/>
<metric time-rollup-type="AVERAGE" name="BTM|BTs|BT:80|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Calls per Minute" hole-fill-type="RATE_COUNTER" cluster-rollup-type="COLLECTIVE"/>
<metric time-rollup-type="AVERAGE" name="BTM|Application Summary|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Calls per Minute" hole-fill-type="RATE_COUNTER" cluster-rollup-type="COLLECTIVE"/>
<metric time-rollup-type="AVERAGE" name="BTM|Backends|Component:{[UNRESOLVED][15]}|Calls per Minute" hole-fill-type="RATE_COUNTER" cluster-rollup-type="COLLECTIVE"/>

The metrics with names such as "BTM|BTs|BT:80|...",
"BTM|BTs|BT:81|...," and "BTM|BTs|BT:82|..." are BT backend metrics. You may recognize the BT ID numbers 80, 81, and 82. The metrics with names such as "BTM|Application Summary|..." are tier-level metrics for this backend. The metrics with names such as "BTM|Backends|..." are aggregate backend metrics.

3. Look for the corresponding metric registration response entries. In the following response, you can see the IDs assigned to the metrics. For example, the first line in the example shows that ID=1925 is assigned to the "Errors per Minute" metric for BT 80 on this MySQL backend (UNRESOLVED 15).

<metric id="1925" name="BTM|BTs|BT:80|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Errors per Minute"/>
<metric id="1936" name="BTM|BTs|BT:81|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Average Response Time (ms)"/>
<metric id="1959" name="BTM|BTs|BT:82|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Calls per Minute"/>
<metric id="1967" name="BTM|BTs|BT:80|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Average Response Time (ms)"/>
<metric id="1935" name="BTM|Application Summary|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Average Response Time (ms)"/>
<metric id="2015" name="BTM|Application Summary|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Calls per Minute"/>
<metric id="1937" name="BTM|Backends|Component:{[UNRESOLVED][15]}|Average Response Time (ms)"/>...
<metric id="1991" name="BTM|BTs|BT:80|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Calls per Minute"/>
<metric id="2017" name="BTM|Backends|Component:{[UNRESOLVED][15]}|Calls per Minute"/>

4. If you do not see a successful request and response for the metric, look in the Controller logs for exceptions related to backend metric registration.

Phase Four: Backend Metric Reporting
After successful metric registration, metrics are reported to the controller every minute. Using the metric ID, you can search the REST log for the metric upload.

Was backend metric reporting successful?
Use the following steps to confirm backend metric upload:
1. Generate and retrieve the REST log. For details on how to do this, see Request Agent Log Files.
2. Look for log entries showing metric reporting uploads similar to the following:

<metric id='1936', value[sum=177, count=499, min=0, max=7, current=1]>
<metric id='1959', value[sum=160, count=1, min=160, max=160, current=160]>
<metric id='1967', value[sum=157, count=518, min=0, max=4, current=0]>
<metric id='1935', value[sum=611, count=688, min=0, max=79, current=1]>
<metric id='2015', value[sum=1353, count=1, min=1353, max=1353, current=1353]>
<metric id='1937', value[sum=285, count=1353, min=0, max=2, current=0]>
<metric id='1991', value[sum=136, count=1, min=136, max=136, current=136]>
<metric id='2017', value[sum=1144, count=1, min=1144, max=1144, current=1144]>

3. If you do not see successful backend metric data uploads, look in the Controller logs for related exceptions.
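The log checks in phases two through four boil down to plain text searches over the REST log. A sketch, using two lines taken from the examples above as a stand-in file (real REST log names and locations vary by agent version, so treat the file path as an example):

```shell
# Two sample lines from the article standing in for a real REST log.
cat > /tmp/rest-sample.log <<'EOF'
<metric id="1925" name="BTM|BTs|BT:80|Component:2|Exit Call:JDBC|To:{[UNRESOLVED][15]}|Errors per Minute"/>
<metric id='1936', value[sum=177, count=499, min=0, max=7, current=1]>
EOF

# Phases 2-3: find registration entries for backend ID 15.
# -F treats the pattern literally, so the brackets need no escaping.
grep -F '[UNRESOLVED][15]' /tmp/rest-sample.log

# Phase 4: find the per-minute uploads for a registered metric ID.
grep -F "id='1936'" /tmp/rest-sample.log
```

The same two searches, pointed at your actual REST log, tell you whether a backend registered and whether its metrics are uploading.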
AppDynamics automatically discovers various entities that make up your web applications, such as app servers, databases, and remote services. Other activity in your application, such as errors, exceptions, async activity, and business transactions, is also tracked as AppDynamics entities. If you understand the AppDynamics entity life cycle, you can do some of your own debugging when things don't seem to be showing you exactly what you want to see. The discovery life cycle is essentially the same for all entities and involves several discrete steps that can be identified by log entries. The discovery life cycle contains the following four steps for all entities:

Discovery
The application name, tier name, and node name are passed to the agent as part of the startup process; when the agent starts up, these entities are "discovered". The discovery of a Business Transaction (BT) is about detecting where the business transaction begins and instrumenting its entry point. An exit call from an instrumented node to an uninstrumented node or other service results in backend discovery of databases, message queues, web services, and other remote service layers. Error detection occurs when the agent detects an error or exception in the application.
Related documentation: AppDynamics Concepts, Monitor Business Transactions, Monitor Errors and Exceptions

Registration
Registration is the process by which each entity is assigned a unique ID by the controller. The agent sends a registration request to the controller to register the discovered entity, and the controller assigns a unique ID. This ID is then used to identify the entity in agent-controller communication and in the controller database.

Metric Registration
As load is applied to the application, the agent starts collecting metrics for each entity. Each metric is reported to the controller. The controller registers the metric and assigns an ID, which is used in agent-controller communication.
Metric Reporting
After successful metric registration, the agent periodically reports metrics to the controller.

Metric Rollup
For business transactions, tiers, and nodes, there is an additional step that occurs at the controller itself: metric rollup. App agents do not communicate with each other; they report metrics to the controller. The controller aggregates the metrics from all agents/nodes to present a unified overall picture in the Controller UI.

Verify Each Step in the Life Cycle
Using the agent logs, you can verify whether each step in the life cycle completed successfully.
Contents:
- Who would use this workflow?
- How do I know which POJO to instrument?
- Implementation
- Limitations

Who would use this workflow?
Note: This workflow has largely been replaced by the Live Mode feature.
If you don't see expected activity in your application, you might be missing an entry point at the start of the activity. Use the find-entry-points node property to configure missing entry points for any Business Transactions (BTs) that aren't detected out-of-the-box (OOTB). This property enables additional logging about the call stack of the app's executing code. In many cases the OOTB configuration instruments entry points, so this workflow isn't usually necessary. You would only use these instructions if your code is built within a framework that isn't supported by the out-of-the-box configuration. However, we recommend using the Live Mode feature instead.

How do I know which POJO to instrument?
Three rules to keep in mind:
- Select the POJO at the earliest point possible in the call stack.
- Select a POJO that finishes. You are instrumenting a POJO to measure the time the business transaction takes to execute, so you must select a POJO that finishes. NOTE: Do not pick a method that never ends, such as a run() method that waits on a user request!
- Select a POJO that results in a meaningful name. The entry point method determines the name of the BT.
See: POJO Entry Points.

Implementation
Navigate to node-level agent configuration and set the find-entry-points property to true. For more information on how to edit a node-level property, see Edit a Registered Node Property. Apply the property at either the tier level or the node level. Note: Do not turn on this property at the tier level in a production environment; the find-entry-points property is meant to be used in pre-prod and non-prod environments. Once the property is applied, apply sufficient load to the application and start collecting agent logs from the Controller UI, or view them in the agent installation folder.
Debug logging is not needed for this property. A complete call stack from the instrumentor to the top of the thread is dumped to the BusinessTransactions<X>.log file. Example: BusinessTransactions.2017_11_27__12_54_08.0.log. The BT log mixes the output of find-entry-points with the "normal" BT discovery logging. To distinguish between the two types of call stacks, identify the text after the INFO log-level keyword.

Entire stack trace with eligible entry point prioritized: Based on this output, the user can find eligible candidates and create custom POJO entry points. For example, one can create a custom POJO entry point on class com.xyz.abc.domain.pqr.Health and method run. In the example above, the 0th entry is considered the most eligible entry priority. Based on the output, a custom entry point rule can be created on any type (for example, Servlet, EJB, Spring, or POJO). More information: Custom Match Rules.

Note: Setting the property value to true causes verbose logging until the value is switched back to false, so it is unwise to leave this property set to true.

Limitations
If the application has no known exit calls (no HTTP, web services, JDBC, and so on), then no call stacks are logged when you run find entry points. To work around this limitation, set a custom exit point to force the logging of the associated call stack, then run find entry points again. The log entry at the top of the call stack should be your custom exit point. This works best when you know a method in the business functionality or user request that you are trying to measure. See Configure Custom Exit Points.

Published on 2/13/2015. Updated 2/11/19.
After installing the AppDynamics Database Agent, if you see error messages similar to the following, the issue may be with OS monitoring for your database collector.

15:33:48,718 ERROR [Agent-Scheduler-1] AServerCollector:199 - Error collecting hardware metrics for server 'kpi' com.singularity.ee.agent.dbagent.collector.server.connection.ServerConnectionException: org.jinterop.dcom.common.JIException: Message not found for errorCode: 0xC0000001 at com.singularity.ee.agent.dbagent.collector.server.connection.WMIConnection.<init>(WMIConnection.java:66)

The message indicates that the WMI permissions were not properly set to collect hardware metrics from the OS. Validate your WMI security permissions as described in the documentation: WMI Permissions and Security.
Users troubleshooting scenarios where the AppDynamics *Standalone Machine Agent is not reporting metrics as expected will find causes and solutions here. The following solutions apply to versions 4.4 and higher.

Contents:
- Installation issues
- Host ID for App Server Agent and Machine Agent do not match
- Difficulty with machine hostname resolution
- Machine Agent reporting all metrics as zero

Problem: Installation issues
If both an App Server Agent and the Machine Agent are unzipped into the same directory, important files, such as the log4j.xml file, are overwritten.
Solution: When unzipping the App Agent or Machine Agent zip files, make sure to use different directories.

Problem: Host ID for App Server Agent and Machine Agent do not match
Both the App Server Agent and the Machine Agent use the Java API to get the host ID. The results from the API can be inconsistent, and the same JVM can sometimes return a different value for the same machine each time the Machine Agent is restarted. When the host ID is then registered with the Controller, the App Server Agent and the Machine Agent can be assigned different host IDs even though they are running on the same machine.
Solution: Reset the hostname for the Machine Agent by running the agent with the -Dappdynamics.agent.uniqueHostId JVM parameter. Set the host ID to be the same as the one the App Server Agent is using.

Problem: Difficulty with machine hostname resolution
If the following error is seen in the Machine Agent log file, the cause is hostname resolution, which affects Machine Agent startup and registration.

ERROR XMLConfigManager - Error in Default Host Identifier Resolver resolving host name java.net.UnknownHostException: log-aggregate01: log-aggregate01

Solution:
1. Verify a valid hostname entry in the /etc/hosts file. Example:
127.0.0.1    localhost log-aggregate01
2. Save the changes.
3. Restart the Machine Agent.
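The -Dappdynamics.agent.uniqueHostId fix described above looks like the following on the command line. The install path, jar name, and host ID are examples; substitute the values for your environment, and set the host ID to whatever the App Server Agent registered.

```shell
# Start the Machine Agent with an explicit host ID so it matches the one
# the App Server Agent is using (path, jar name, and host ID are examples).
java -Dappdynamics.agent.uniqueHostId=web-prod-01 \
     -jar /opt/appdynamics/machine-agent/machineagent.jar
```

If the agent runs as a service, add the same -D flag to the service's JVM arguments instead.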
Problem: Machine Agent reporting all metrics as zero
You have verified in the Machine Agent log that the Machine Agent is only collecting zero values.
Solution:
1. Change permissions on the Machine Agent installation folder for the user ID that the Machine Agent was started under: chmod -R 777 <machine-agent-install>.
2. Restart the Machine Agent and verify that zero values are no longer being reported.
3. If restarting does not help, disable Sigar hardware monitoring, located in the JavaHardwareMonitor directory. In the monitor.xml file (located at <machine-agent-install>/monitors/JavaHardwareMonitor/), change the enabled property from true to false:
<monitor>
<name>SigarHardwareMonitor</name>
<type>managed</type>
<enabled>false</enabled>
4. Enable OS-specific hardware monitoring, found in the HardwareMonitor directory. In the monitor.xml file (located at <machine-agent-install>/monitors/HardwareMonitor/), change the enabled property from false to true:
<monitor>
<name>HardwareMonitor</name>
<type>managed</type>
<enabled>true</enabled>
5. Restart the Machine Agent and verify that zero values are no longer being reported.

Last Content Update 3/28/19

__________________
*As of 2020, the term "Standalone Machine Agent" has been discontinued in favor of simply "Machine Agent".
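The monitor.xml edits in steps 3 and 4 above can be scripted with sed. This sketch works on throwaway copies under /tmp so you can see the effect safely; to apply it for real, point MA at your install directory and back up both monitor.xml files first (the sample file contents are minimal stand-ins, not the full monitor.xml).

```shell
# Build throwaway stand-ins for the two monitor.xml files.
MA=/tmp/machine-agent-demo
mkdir -p "$MA/monitors/JavaHardwareMonitor" "$MA/monitors/HardwareMonitor"
printf '<monitor><name>SigarHardwareMonitor</name><type>managed</type><enabled>true</enabled></monitor>\n' \
  > "$MA/monitors/JavaHardwareMonitor/monitor.xml"
printf '<monitor><name>HardwareMonitor</name><type>managed</type><enabled>false</enabled></monitor>\n' \
  > "$MA/monitors/HardwareMonitor/monitor.xml"

# Step 3: disable the Sigar monitor.
sed -i 's|<enabled>true</enabled>|<enabled>false</enabled>|' \
  "$MA/monitors/JavaHardwareMonitor/monitor.xml"
# Step 4: enable the OS-specific hardware monitor.
sed -i 's|<enabled>false</enabled>|<enabled>true</enabled>|' \
  "$MA/monitors/HardwareMonitor/monitor.xml"

grep '<enabled>' "$MA"/monitors/*/monitor.xml
```

Note that GNU sed's -i edits in place; on macOS/BSD sed use -i '' instead.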
Problem: App Agent log is empty, or nothing is reported to the Controller.
Common Solutions:
- If your agent is behind a network firewall or load balancer, open ports to enable communication between the agent and the Controller.
- Avoid installing an App Agent into a directory used by the application server, such as a Tomcat directory. Always install the Java Agent into a directory of its own, such as /usr/local/agentsetup/appserveragent.
- Install the AppDynamics folder as the same user who owns the application process. The AppDynamics folder must have read and write permissions.
- Check for the following error in the application logs while installing the Java Agent. This error means that the Java Agent folder is corrupt; install a fresh Java Agent folder.
Error opening zip file or JAR manifest missing : /usr/local/agentsetup/appserveragent/javaagent.jar
- Confirm that the runtime directory is writable by the Java Agent. See Controller Port Settings.
- Check for network connectivity issues. If the agent is not able to connect to the Controller, the agent disables itself. When the connection is available, it re-registers.
- Validate the application name, tier names, and node names. These are mandatory configuration parameters for an agent.
- Examine logs for errors.
Additional Information:
Troubleshooting Agent Issues - Reporting and Connectivity
How to resolve Java Agents not reporting
Documentation: Troubleshooting Java Agent Issues
Documentation: Connect the Controller and Agents
Documentation: Dynamic Language Agent Proxy
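Basic agent-to-Controller connectivity can be spot-checked from the agent host before digging into logs. The host and port below are placeholders, and the REST endpoint shown is an assumption to verify against your Controller version:

```shell
# Placeholders; substitute your Controller host and port.
nc -vz <controller-host> <controller-port>        # TCP reachability
curl -v "http://<controller-host>:<controller-port>/controller/rest/serverstatus"
```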
Content revised 8/1/18

You may want to configure Error Detection to ignore known exceptions. For example, you may see a recurring exception thrown from a framework you are using. If the exception is well known and insignificant, you might want to eliminate it from the error list.

Contents
What you need to know about each error
Example | Ignoring Error Messages
Example | Excluding Exceptions
Example | Ignoring Exceptions
Related Resources

What you need to know about each error
The first thing you need to know for each message is whether AppDynamics is detecting it as a thrown exception or as a message written to a log file with severity ERROR. If it is an exception, the snapshot's Error Details tab should have a stack trace. If you only see the error message, it's a logger. A logged error message can be disposed of by ignoring logged messages, as in the first example below.

Example | Ignoring Error Messages
This is the error message:
com.appdynamicspilot.persistence.ItemPersistence.com.appdynamicspilot.persistence.ItemPersistence : Critical transaction Error, rolling back changes. Order execution aborted.
Below is the configuration for ignoring the error message:

Example | Excluding Exceptions
The next two examples are thrown exceptions, both from the same place:
Response-xslFile :null; IsActivationCompleteSuccess:false; IsValidated :false; OrderType :QUERY; AccountNumber :null; CustomerQualification :null; CapErrorCode :00602; capErrorType :1; CapErrorCategory :CDE; CapErrorDesc :Account_Data_Discrepency; OperationMode :null; OperationStatus :null; ActivationDate :null
SUBMIT_ACTIVATION_DETAILS : Transaction Id:xx-23341105-1 - Controller:handleRequest() > An Exception is caught Dependent object contains ... [details deleted] CarrierErrorCode :null; The exception message is :com.company.bb.cap.exceptions.CarrierBusinessRulesException: Controller:handleRequest(): Transaction failure response from Carrier.
You can exclude these by specifying com.company.bb.cap.exceptions.CarrierBusinessRulesException as the class and ignoring particular values of CarrierErrorCode (which are buried in the full message).

Is the exception nested?
If it is a true exception, note whether it is nested inside another exception. You'll need to specify the exact sequence in the Error Detection configuration. It will look like this:
java.net.WebException : com.company.bb.cap.exceptions.CarrierBusinessRulesException: Controller:handleRequest(): Transaction failure response from Carrier.

Check the stack trace
Pay close attention to what is in the stack trace. For example, a stack trace similar to the following might show up in the Error Details section of a snapshot:
java.lang.reflect.InvocationTargetException:com.company.eppi.client.exceptions.ClientException: java.lang.reflect.InvocationTargetException.null
at sun.reflect.GeneratedMethodAccessor1249.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
... <details deleted>
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: com.comodo.epki.client.exceptions.ClientException : com.comodo.epki.client.exceptions.ClientException: Not authorized agent
at com.comodo.epki.extra.agent.ws.CcmExtraAgentServer.authorizeAgent(CcmExtraAgentServer.java:210)
at com.comodo.epki.extra.agent.ws.CcmExtraAgentServer.getExtraCommand(CcmExtraAgentServer.java:70)
Don't create the exclusion on the "caused by" exception. Instead, use the first one in the stack trace. In the example above, you would set the Fully Qualified Class Names for Exceptions field to java.lang.reflect.InvocationTargetException:com.company.eppi.client.exceptions.ClientException. You would not put it on com.comodo.epki.client.exceptions.ClientException.
The configuration for this example would look like the screenshot below.

Example | Ignoring Exceptions
This is the exception:
org.springframework.transaction.CannotCreateTransactionException:org.eclipse.persistence.exceptions.DatabaseException:org.apache.tomcat.dbcp.dbcp.SQLNestedException:java.util.NoSuchElementException
Below is the configuration to ignore the exception:

Related Documentation
For more detailed information on error detection, and specifically how to ignore exceptions and log messages, see: Ignore Exceptions and Log Messages as Error Indicators.
Sometimes errors may appear in an app server log but not in AppDynamics. Or you may run your own specific test suites and see that known errors are not being detected by AppDynamics. Here are some things to check:
1. Confirm Logging Framework is Supported
2. Confirm Error Limits Were Not Hit
3. Confirm Configuration for Ignored Exceptions, Errors, and Loggers
4. Missing HTTP Error Codes

1. Confirm Logging Framework is Supported
Your app server may be using an unsupported logging framework. The AppDynamics App Agent for Java supports the following logging frameworks:
- Log4j 2
- java.util.logging
- Simple Logging Facade for Java (SLF4J) (new in 4.0)
- Logback (new in 4.0)
Also see Java Supported Environments for the latest support.
The AppDynamics App Agent for .NET supports the following logging frameworks:
- Log4Net
- NLog
Also see .NET Supported Environments.

Scope of Support
In version 4.0, support was added for SLF4J, for Logback (which implements SLF4J under the covers), and for Log4j 2. The support extends to the following features of these logging libraries:

SLF4J, Logback
We support instrumenting classes that implement the SLF4J interface. Logback uses SLF4J natively, so Logback is supported as well.
Supported methods:
- Logger.error(String)
- Logger.error(Marker, String)
- Logger.error(String, Throwable)
- Logger.error(Marker, String, Throwable)
We do not support SLF4J error calls with passed objects, for example error(java.lang.String, java.lang.Object...).

Log4j 2
We instrument out of the box anything that implements the Log4j 2 Logger interface.
Specifically, we support:
- error(Marker marker, Message msg)
- error(Marker marker, Object message)
- error(Marker marker, String message)
- error(Message msg)
- error(Object message)
- error(String message)
- error(Marker marker, Message msg, Throwable t)
- error(Marker marker, Object message, Throwable t)
- error(Marker marker, String message, Throwable t)
- error(Message msg, Throwable t)
- error(Object message, Throwable t)
- error(String message, Throwable t)
The fatal variants of all of the above are also supported.
Note that we do not support logger.logMessage(), log(), or any calls with Object... params (that is, an Object[] parameter). We do not support the log() and logMessage() methods from ExtendedLogger.

For additional logging support
The solution is to configure a custom logger. See Configure a Custom Logger.

2. Confirm Error Limits Were Not Hit
Agent Error Limit
There is an agent limit of 5000 metrics that can be registered per node, and an agent limit of 500 ADDs (Application Diagnostic Data, which includes async threads, error and exception registration, snapshots, and so on). If this limit is reached and the agent attempts to create metrics beyond this threshold, you see the AGENT_METRIC_REG_LIMIT_REACHED alert in the event list. You can increase this default limit, but doing so might increase overhead. Hitting this limit can be indicative of misconfiguration in your application. Hitting this limit and a similar limit in the Controller can indicate that you have hit the business transaction or backend limits, and you may need to change the default discovery rules.

What is a metric?
A metric is an identifier used to uniquely identify a particular statistic.
For example:
Application Infrastructure Performance|Author|JVM|Memory|Heap|Committed (MB)
Application Infrastructure Performance|Author|JVM|Memory|Heap|Used %
Application Infrastructure Performance|Author|JVM|Process CPU Usage %
All of the above are individual metrics registered from the node, against which the corresponding statistics are collected and reported to the Controller. At any particular point in time, the metric name remains the same but the value changes, and that value is captured and reported.
This concept of a metric is internal to AppDynamics; however, it is helpful to understand how it works because of the self-imposed limits on the number of metrics that can be discovered. The limits help to minimize the AppDynamics footprint and overhead impact on an application. One limit is the maximum number of metrics that the agent creates. Once the limit is reached, the agent does not create new metrics.

Q: What is the impact of exceeding this 5000-metric limit?
A: This limit is per agent. Once the limit is reached, no new metrics are created, so no new activity is tracked. If more endpoints are discovered, they are not tracked.

Q: If that is true, does restarting the agent from the console reset this limit, so that new endpoints are monitored while perhaps not picking up some old defunct ones that were counting toward the 5000 limit?
A: Once a metric is registered, it is always present for that agent, whether or not there is load on that metric. For example, once metrics corresponding to an HTTP backend are registered, it doesn't matter whether there are calls to that backend; those metrics are always counted against the limit. In a case such as this, you could increase the maximum metric limit, or you could delete the backends that are not being used to free up those metrics. You may also need to revise your backend configuration to avoid registering so many backends.
Once you increase the limit (or free up metrics by deleting unneeded backends or components), it is not guaranteed that the new endpoints will become visible, because other statistics may be detected first and use the added metric capacity. For example, if there are async calls in the application but the agent was not able to create metrics for them because the limit was reached, then once there is metric capacity, those async-call-related metrics might be created before the new endpoints are detected.

Solution: Revise Configuration
Verify that you are not exceeding other limits, such as backend limits and BT limits. Hitting the metric limit can be a warning sign of a configuration problem specific to your environment. For example, HTTP backends are discovered automatically using the Port and Host properties. If the configuration were changed to use the entire URL, you might rapidly exceed the backend limits and cause the metric limit to be hit, when the real problem is the HTTP backend configuration.

Solution: Increase Limit
The limit of 5000 suffices much of the time, but it can be increased if you think calls are missing or some functionality is not being captured by the agent. On average, agents register 800 metrics across applications; the lower end is 300, and some applications produce 1500 metrics per agent. If agents need more than 5000 metrics, something else is often wrong, and raising the limit merely obscures the underlying problem. To increase the limit, see the Metrics Limits documentation.
NOTE: Before increasing the metric limit, be sure you have verified that no other limits are being hit.

Solution: Modify Default Error Detection Rules
If there are errors or exceptions that are well known and don't need to be monitored, you can exclude them from detection and free up metric capacity. Review the documentation here: Configure Exceptions and Log Messages to Ignore.
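As a sketch, the agent-side metric limit is typically raised with a JVM system property on the monitored application's startup line. The property name and value below are assumptions; confirm both against the Metrics Limits documentation for your agent version:

```shell
# Assumed property name and illustrative paths; verify against the
# Metrics Limits documentation before using.
java -javaagent:/opt/appdynamics/javaagent.jar \
     -Dappdynamics.agent.maxMetrics=10000 \
     -jar myapp.jar
```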
Controller Metric Limit
There is a similar metric limit at the Controller level. When this limit is reached, the Controller issues the CONTROLLER_ERROR_ADD_REG_LIMIT_REACHED event.
Solutions
The recommended solution is to fine-tune the default error detection rules, for example by excluding the errors you're not interested in. Review the documentation here: Configure Exceptions and Log Messages to Ignore.
Alternatively, increase the default limit (to 4000, for example):
   a. Log in to the admin page: http://<controller ip>:<port>/controller/admin.html
   b. Enter the root password (the default value is changeme).
   c. Change the value of the property 'error.registration.limit' accordingly (see attached screenshot).
Note: Increasing the limit incurs additional overhead, so be sure to verify that you need to monitor all the discovered errors and exceptions.

3. Confirm Configuration for Ignored Exceptions, Errors, and Loggers
If exclude rules are misconfigured, exceptions might be missed. Review the error detection configuration in your application: from the Controller UI, select Configure -> Instrumentation -> Error Detection tab. For more details, see: Monitor Errors and Exceptions.

4. Missing HTTP Error Codes
AppDynamics reports error codes when the sendError method is used to report the error code. However, some implementations of HttpServletResponse send HTTP errors using setStatus. In this case, set the capture-set-status node property (Java Agent only) to true to capture these HTTP errors. For more details, see the node property reference documentation: App Agent Node Properties Reference.
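To make the logger detection described in section 1 concrete, the sketch below uses java.util.logging, one of the supported frameworks. SEVERE is JUL's ERROR-equivalent level, which is what severity-ERROR detection keys on; the logger name and message are illustrative, not taken from any real application:

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class ErrorLogDemo {
    // Logs at SEVERE (JUL's ERROR-equivalent level) and returns the level
    // that was actually recorded, so the behavior can be verified.
    static Level logAndCapture(String message) {
        Logger logger = Logger.getLogger("com.example.orders"); // hypothetical logger name
        final Level[] seen = new Level[1];
        logger.setUseParentHandlers(false); // keep the console quiet for the demo
        Handler capture = new Handler() {
            @Override public void publish(LogRecord r) { seen[0] = r.getLevel(); }
            @Override public void flush() { }
            @Override public void close() { }
        };
        logger.addHandler(capture);
        // A message-plus-Throwable call, analogous in shape to the supported
        // Logger.error(String, Throwable) signature in SLF4J and Log4j 2:
        logger.log(Level.SEVERE, message, new RuntimeException("rollback"));
        logger.removeHandler(capture);
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println(logAndCapture("Order execution aborted")); // prints SEVERE
    }
}
```

A message logged this way is the kind of severity-ERROR event the agent's logger instrumentation watches for; by contrast, as noted above, a varargs call such as error(String, Object...) is not instrumented for SLF4J.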
This article covers some reasons that a configured MBean metric might not show up in the Metric Browser.

Confirm a Persistent Metric is Configured
Not all MBeans are configured as persistent metrics in AppDynamics. First confirm that the information you want is exposed as an MBean and that the MBean attributes have been configured as an AppDynamics JMX metric. Determine whether your app server exposes the information that you want to see, using a tool such as JConsole. Use your app server documentation to find the object name pattern for the MBean that you want to see. If the MBean you want is exposed, you can create a metric for it in AppDynamics; see Configure JMX Metrics from MBeans.

Example: WebLogic Server
The Primary Sessions and Session Replicas metrics from WebLogic Server are not instrumented by default in AppDynamics. Since WebLogic Server exposes the session replication information via JMX, you can configure a JMX metric that AppDynamics can use for monitoring.
1. Use your app server documentation to find the specific object name pattern and the available attributes. For example:
- Primary Sessions: Provides the number of objects that the local server hosts as primaries. From the MBean Attribute Reference: ReplicationRuntimeMBean.PrimaryCount
- Session Replicas: Provides the number of objects that the local server hosts as secondaries. From the MBean Attribute Reference: ReplicationRuntimeMBean.SecondaryCount
2. Locate the "ReplicationRuntimeMBean" in the MBean Browser and create a JMX metric on the attributes "PrimaryCount" and "SecondaryCount".

Confirm that All MBean Domains Were Discovered
If you have created the necessary JMX rule and you still don't see the metric being reported, check the app server startup time. When the app server takes more than two minutes to start, the agent can't discover all the domains.
In such scenarios, you can use the jmx-appserver-mbean-finder-delay-in-seconds node property to delay the discovery of MBeans so that the agent discovers all the domains after the app server has completely started. You might see logs similar to this:
[AD Thread Pool-Global1] 27 Jun 2014 10:24:06,603 WARN WebSpherePMIStatsHandlerVersion2 - Stats is NULL for statDescriptor [threadPoolModule>WebContainer]. No metrics will be reported.
Steps for using the jmx-appserver-mbean-finder-delay-in-seconds node property are here: Can Not See Expected MBeans.

Confirm that MBean Limits Were Not Hit
There are some limits associated with MBean metrics. These limits can be adjusted using two node properties. Agent logs report when limits are being hit. For details on using node properties, see App Agent Node Properties.
MBean Browser Limit: The jmx-max-mbeans-to-load-per-domain node property controls the number of MBeans that are visible for each domain. The default value is 1000.
Metric Browser Limit: The jmx-max-metrics-to-report node property controls the total number of JMX metrics that are reported in the Metric Browser. The default value is 500.

Using Logs for Debugging
The persistent JMX metrics created from MBeans follow the same life cycle as other AppDynamics monitored entities. The following logs are for an example JMX metric "jdbc/cartDS" created for Node-8003 of the ECommerce tier, on the DataSource MBean, using the MaxActive attribute. Searching the agent log for the assigned name "jdbc/cartDS", you can find log entries similar to the following examples.
The MBean details appear in this entry:
[AD Thread Pool-Global1] 03 Oct 2013 14:30:43,646 INFO ManagedObjectFactory - Instantiated JMX Managed object for bean=JDBCConnectionPool, category=JDBC Connection Pools, instance=Catalina:class=javax.sql.DataSource,host=localhost,name="jdbc/cartDS",path=/appdynamicspilot,type=DataSource
This log snippet shows the attribute and the MBean pattern that was used to create the JMX metric. You can search on the metric name that you assigned. You can also search for "JMXMetricRepository", which shows the rule that was added:
[AD Thread Pool-Global1] 03 Oct 2013 19:50:46,247 INFO JMXMetricRepository - Added new JMX Rule [JMXMetricRule [ MBeanQuery [MBeanQuery [ domain [Catalina], mbeanPattern [Catalina:type=DataSource,path=/appdynamicspilot,host=localhost,class=javax.sql.DataSource,name="jdbc/cartDS"], queryLogicalOperator [AND], queryExpressions []]], metricCategory [Individual Nodes], beanName [null], metricPath [null], instanceName [null], instanceIdentifier [null], name [jdbcCart-DataSource], domain [Catalina] Attribute Definitions [ JMX Attribute Definition [mbeanAttributeName [maxActive], metricName [maxActive], metricTimeRollupType [AVERAGE], metricClusterRollupType [INDIVIDUAL], metricAggregatorType [AVERAGE], metricHoleType [null], getterChain [] ] ] ]]
This log snippet shows the metric ID (3050) being received from the Controller:
[AD Thread-Metric Reporter0] 03 Oct 2013 19:52:03,440 DEBUG MetricHandler - Added for metric registration [Metric Name[Server|Component:3|JMX|Individual Nodes:"jdbc/cartDS"|maxActive]]
[AD Thread-Metric Reporter0] 03 Oct 2013 19:52:03,464 DEBUG MetricHandler - Response body for metric registration <request> <metric id="3050" name="Server|Component:3|JMX|Individual Nodes:&quot;jdbc/cartDS&quot;|maxActive"/>
These entries show the metric value being reported to the Controller at one-minute intervals.
[AD Thread-Metric Reporter0] 03 Oct 2013 19:55:53,445 DEBUG MetricHandler - 3050 Server|Component:3|JMX|Individual Nodes:"jdbc/cartDS"|maxActive 100 1049826925 [AD Thread-Metric Reporter0] 03 Oct 2013 19:56:53,445 DEBUG MetricHandler - 3050 Server|Component:3|JMX|Individual Nodes:"jdbc/cartDS"|maxActive 100 1049826925 [AD Thread-Metric Reporter0] 03 Oct 2013 19:57:53,445 DEBUG MetricHandler - 3050 Server|Component:3|JMX|Individual Nodes:"jdbc/cartDS"|maxActive 100 1049826925 [AD Thread-Metric Reporter0] 03 Oct 2013 19:58:53,445 DEBUG MetricHandler - 3050 Server|Component:3|JMX|Individual Nodes:"jdbc/cartDS"|maxActive 100 1049826925
AppDynamics provides out-of-the-box configuration of JMX/PMI metrics based on the MBeans exposed by many commonly used app servers. The MBean Browser is accessed on the Node dashboard from the JMX tab. The browser is used to look at MBean metric values for short-term troubleshooting. For long-term continuous monitoring, you can create a metric based on the MBean statistics exposed by your app server. You do this using the Configure -> Instrumentation -> JMX tab. For details, see Configure JMX Metrics from MBeans.

When the app server starts up, the associated MBean server starts and the MBeans are discovered. The timing of these activities can vary by app server and by configuration. The AppDynamics agent discovers the MBean server associated with running app servers in the application environment and reads the exposed MBeans to get all JMX-related statistics. The path for JMX statistics is embedded in the MBean configuration, which is set internally when the server registers those MBeans with the MBean server during startup. The AppDynamics agent uses MBean-specific APIs to get the MBeans, their object pattern, and their path out of those registered MBeans. If the server does not expose the MBeans, the AppDynamics agent can't see them. When you don't see the MBeans you expect, use the following techniques to troubleshoot the issue.

1. Enable MBeans
The MBeans need to be enabled and exposed by your JVM/app server.
Issue: MBeans Not Exposed
One reason that you might not see MBeans in the AppDynamics MBean Browser is that they are not enabled on your app server, or not exposed by the app server, so the AppDynamics agents are not able to get the data related to those statistics. The JMX agent (also known as the MBean server) needs to be enabled on the JVM or app server, and the specific statistics that you want to see need to be exposed via MBean monitoring by that app server.
Solution: Confirm MBean Availability
To verify that the MBeans are available, use an independent tool such as JConsole. If the MBean is visible in such a tool, move to the next issue and confirm that there is enough time for the agent to discover the MBeans. If the MBeans are not visible in JConsole, the AppDynamics agent cannot get the data either. View the documentation for your server to enable the MBeans and review which JMX statistics are being exposed for use. See Oracle's documentation for JConsole here: JConsole.
Issue: Specific MBean Not Visible
Sometimes an app server does not automatically expose the information that you want to see.
Solution: Enable MBean
Use the app server documentation to locate the statistics that you want to monitor, then use the app server's admin console to enable the statistics. Use a tool such as JConsole to confirm that the MBean statistics that you want are visible.

2. Agent Discovery of MBeans
The AppDynamics agent tries to capture the MBean domains and JMX/PMI statistics within two minutes of app server start or restart, assuming all the domains and MBeans can be discovered within that time. If the MBean server (associated with your app server) is not started in that time frame, or not started at all, the MBeans cannot be discovered.
Issue: App Server Start Time
When an app server starts up, the associated MBean server starts and the MBeans are discovered. The timing of these activities varies by app server and by configuration. If this activity is not completed in the time that the AppDynamics agent expects to discover the MBeans, the MBean Browser will not show them.
Solution: Delay Discovery
You can delay the discovery of MBeans to make sure that the agent discovers all the domains after complete startup of the app server. The default delay for the AppDynamics agent is two minutes. You can increase this time using the jmx-appserver-mbean-finder-delay-in-seconds node property.
To use the node property:
1.
Register the jmx-appserver-mbean-finder-delay-in-seconds node property from the Node Dashboard. Use these steps: Add a Node Property.
2. Enter a value, such as "300". Set the delay to roughly 1.5 times your app server's startup time.
3. Restart the JVM/app server.
Solution: Trigger Rediscovery
You can trigger the rediscovery of MBeans to make sure that the agent discovers all the domains after complete startup of the app server by using the jmx-rediscover-mbean-servers node property.
1. Register the jmx-rediscover-mbean-servers node property from the Node Dashboard. Use these steps: Add a Node Property.
2. Enter the value "true".
3. Restart the JVM/server.

3. Other Troubleshooting
Issue: MBean Limits
There are two limits: per-domain and attribute.
Per-Domain Limit: With some app servers, it is possible to exceed the MBean count limit for a domain. The limit is controlled by the jmx-max-mbeans-to-load-per-domain node property. The default value is 1000.
Attribute Limit: With some app servers, it is possible to exceed the MBean attribute limit. The limit is controlled by the jmx-max-mbean-attributes-to-load node property. The default value is 1000.
Solution: Increase the Limit
1. Register the appropriate limit node property from the Node Dashboard. Use these steps: Add a Node Property.
2. Enter an integer value greater than the default value.
3. Restart the JVM/server.
Issue: Last Resort
If none of the previous techniques have solved your problem, you can get an XML file that shows all the MBeans in each domain. Sometimes the object name patterns change between app server releases, and reviewing the XML file might help you debug special corner cases.
Solution: Generate and Review XML for the MBeans
Setting the discover-mbeans node property to true causes the agent to discover all MBeans in a JVM/app server and generate XML files in the $AGENT_RUNTIME_DIR/conf/nodeDir/discovered-mbeans directory.
For each MBean domain, a corresponding XML file containing all MBeans for that domain is created. For example, for two domains such as "java.lang" and "Catalina", the XML files are named:
Catalina-jmx-config.xml
java.lang-jmx-config.xml
You can examine the XML file for your MBean.
To use the discover-mbeans node property:
1. Register the discover-mbeans node property from the Node Dashboard. Use these steps: Add a Node Property.
2. Enter the value "true".
3. Restart the JVM/server.

Examining Logs for MBean Information
MBean and JMX Metric Logs
The following log entries show the MBean finder being initialized with the two-minute delay described above:
[Thread-0] 03 Aug 2014 00:15:44,189 INFO ServerMBeanManagerVersion2 - Initialized MBean Finder with delay=120 secs
[Thread-0] 03 Aug 2014 00:15:44,191 INFO JMXService - Server JMX metric collection initialized with update interval [60] seconds
The following log entries show that no MBean server was detected by the agent. In this example, the application being monitored did not contain any of these servers.
[AD Thread Pool-Global0] 07 Aug 2014 22:09:52,204 DEBUG BaseMBeanServerReporterVersion2 - No mbean servers exist for domain [jboss.web]. No metrics will be reported
[AD Thread Pool-Global0] 07 Aug 2014 22:09:52,204 DEBUG BaseMBeanServerReporterVersion2 - No mbean servers exist for domain [WebSpherePMI]. No metrics will be reported
[AD Thread Pool-Global0] 07 Aug 2014 22:09:52,204 DEBUG BaseMBeanServerReporterVersion2 - No mbean servers exist for domain [org.apache.cassandra.net]. No metrics will be reported
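The JConsole check described under "Confirm MBean Availability" can also be scripted. The sketch below queries the JVM's platform MBean server; the standard java.lang:type=Memory bean is used as a stand-in, since your app server's object name pattern is not assumed here:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class MBeanCheck {
    // Returns true when an MBean with the given object name is registered
    // on the platform MBean server.
    static boolean isExposed(String objectName) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.isRegistered(new ObjectName(objectName));
    }

    public static void main(String[] args) throws Exception {
        // java.lang:type=Memory is a standard platform MBean, standing in
        // for an app server MBean such as WebLogic's ReplicationRuntime.
        String name = "java.lang:type=Memory";
        System.out.println(name + " registered: " + isExposed(name));

        // Reading an attribute confirms the data an AppDynamics JMX rule
        // on this MBean would collect.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        CompositeData heap = (CompositeData)
                server.getAttribute(new ObjectName(name), "HeapMemoryUsage");
        System.out.println("HeapMemoryUsage.used = " + heap.get("used"));
    }
}
```

If isExposed returns false for your server's object name pattern, the MBean is not registered, and no JMX metric rule will be able to collect it.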
Updated on 9/5/18

Question
My application uses a MariaDB client. It uses c3p0 JDBC drivers with JNDI-bindable DataSources, including DataSources that implement Connection and Statement Pooling, as described by the JDBC 3 spec and JDBC 2 standard extension. How do I capture database calls?

Answer
For MariaDB client 1.4.4
We do not support MariaDB clients out of the box. You will need to set the following node-level JDBC properties to capture the DB queries:
jdbc-callable-statements=org.mariadb.jdbc.AbstractCallableFunctionStatement,org.mariadb.jdbc.AbstractCallableProcedureStatement,org.mariadb.jdbc.internal.util.dao.CloneableCallableStatement
jdbc-prepared-statements=org.mariadb.jdbc.AbstractMariaDbPrepareStatement
jdbc-connections=org.mariadb.jdbc.MariaDbConnection
jdbc-statements=org.mariadb.jdbc.MariaDbStatement
Please refer to the documents below for instructions on how to set node-level properties and for additional details about these properties:
App Agent Node Properties
jdbc-callable-statements

For MariaDB client <1.4.4
Note: This solution dates from January 2015.
Refer to the instructions in Using Node Properties to Detect JDBC Backends and try using the following JDBC node properties and values:
jdbc-callable-statements: value="org.mariadb.jdbc.MySQLCallableStatement"
jdbc-connections: value="org.mariadb.jdbc.MySQLConnection"
jdbc-prepared-statements: value="org.mariadb.jdbc.MySQLPreparedStatement,org.mariadb.jdbc.MySQLServerSidePreparedStatement"
jdbc-statements: value="org.mariadb.jdbc.MySQLStatement"
Question
My application uses a Greenplum database. The calls to the database are JDBC, but we are not discovering the database out of the box. How can I fix this?

Answer
Refer to Using Node Properties to Detect JDBC Backends for more details on using these node properties. The values to use for Greenplum are detailed below.

Greenplum Simple (using greenplum.jar)
Use the following JDBC node properties and values:
jdbc-connections: value="com.ddtek.jdbc.greenplumbase.BaseConnection,com.ddtek.jdbcspygreenplum.SpyConnection,com.ddtek.jdbcx.greenplumbase.ddd"
jdbc-prepared-statements: value="com.ddtek.jdbc.greenplumbase.dddw,com.ddtek.jdbcspygreenplum.SpyPreparedStatement,com.ddtek.jdbcx.greenplumbase.ddn"
jdbc-statements: value="com.ddtek.jdbc.greenplumbase.dde_,com.ddtek.jdbc.greenplumbase.dde,com.ddtek.jdbcspygreenplum.SpyStatement,com.ddtek.jdbcx.greenplumbase.ddu"
jdbc-callable-statements: value="com.ddtek.jdbc.greenplumbase.ddk,com.ddtek.jdbc.greenplumbase.ddm,com.ddtek.jdbcspygreenplum.SpyCallableStatement,com.ddtek.jdbcx.greenplumbase.dda"

Greenplum Deluxe (includes Pivotal Greenplum drivers)
Use the following JDBC node properties and values:
jdbc-connections: value="com.ddtek.jdbc.greenplumbase.BaseConnection,com.ddtek.jdbcspygreenplum.SpyConnection,com.ddtek.jdbcx.greenplumbase.ddd,com.pivotal.jdbc.greenplumbase.BaseConnection,com.pivotal.jdbcspygreenplum.SpyConnection,com.pivotal.jdbcx.greenplumbase.ddf"
jdbc-prepared-statements: value="com.ddtek.jdbc.greenplumbase.dddw,com.ddtek.jdbcspygreenplum.SpyPreparedStatement,com.ddtek.jdbcx.greenplumbase.ddn,com.pivotal.jdbc.greenplumbase.dddk,com.pivotal.jdbcspygreenplum.SpyPreparedStatement,com.pivotal.jdbcx.greenplumbase.ddp"
jdbc-statements: value="com.ddtek.jdbc.greenplumbase.dde_,com.ddtek.jdbc.greenplumbase.dde,com.ddtek.jdbcspygreenplum.SpyStatement,com.ddtek.jdbcx.greenplumbase.ddu,com.pivotal.jdbc.greenplumbase.ddd_,com.pivotal.jdbcspygreenplum.SpyStatement,com.pivotal.jdbcx.greenplumbase.ddw"
jdbc-callable-statements: value="com.ddtek.jdbc.greenplumbase.ddk,com.ddtek.jdbc.greenplumbase.ddm,com.ddtek.jdbcspygreenplum.SpyCallableStatement,com.ddtek.jdbcx.greenplumbase.dda,com.pivotal.jdbc.greenplumbase.dde1,com.pivotal.jdbc.greenplumbase.dde3,com.pivotal.jdbcspygreenplum.SpyCallableStatement,com.pivotal.jdbcx.greenplumbase.ddb,cs.jdbc.driver"
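Because the DataDirect and Pivotal class names above are obfuscated, it is easy to mistype one. Before restarting with new node property values, you can sanity-check each candidate class with plain Java reflection. The sketch below is a hypothetical helper, not an AppDynamics tool; the class names in `main` come from the lists above, and the driver JAR (e.g. greenplum.jar) must be on the classpath for the check to report "implements".

```java
import java.sql.Connection;

public class JdbcClassCheck {

    /** Reports whether className exists and implements the given JDBC interface. */
    static String check(String className, Class<?> iface) {
        try {
            Class<?> c = Class.forName(className);
            return iface.isAssignableFrom(c) ? "implements" : "does not implement";
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // Candidate names taken from the jdbc-connections values above;
        // run with the Greenplum driver JAR on the classpath.
        String[] candidates = {
            "com.ddtek.jdbc.greenplumbase.BaseConnection",
            "com.pivotal.jdbc.greenplumbase.BaseConnection"
        };
        for (String name : candidates) {
            System.out.println(name + ": " + check(name, Connection.class)
                    + " java.sql.Connection");
        }
    }
}
```

The same helper works for the Statement, PreparedStatement, and CallableStatement property values by passing the corresponding java.sql interface.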
Symptom
JDBC calls to a Vertica database are not being seen by AppDynamics. What can be done?

Solution
Use the following node properties and values:
jdbc-prepared-statements = com.vertica.jdbc.SPreparedStatement,com.vertica.jdbc.VerticaPreparedStatement
jdbc-callable-statements = com.vertica.jdbc.SCallableStatement
jdbc-statements = com.vertica.jdbc.SStatement,com.vertica.jdbc.VerticaStatement
jdbc-connections = com.vertica.jdbc.VerticaConnection,com.vertica.jdbc.SConnection,com.vertica.jdbc.SConnectionHandle

For more details, see Using Node Properties to Detect JDBC Backends.
Question
I am using a Teradata database and it's not showing up as a Database Server in my application. How can I fix this?

Answer
Refer to the article Using Node Properties to Detect JDBC Backends and use the following JDBC node properties and values:
jdbc-statements: value="com.teradata.jdbc.TeraStatement"
jdbc-connections: value="com.teradata.jdbc.TeraConnection"
jdbc-prepared-statements: value="com.teradata.jdbc.TeraPreparedStatement"
jdbc-callable-statements: value="com.teradata.jdbc.TeraCallableStatement"
Detect JDBC Databases
You can use one or more of the JDBC agent node properties to get visibility into JDBC servers that are not discovered automatically by AppDynamics app agents.

If there is a JDBC driver for the database, we can often apply our standard instrumentation. This involves getting the driver JAR and decompiling it to find the classes that implement the relevant interfaces. You can apply the instrumentation for the Connection, Statement, CallableStatement, and PreparedStatement interfaces using the following node properties through the Controller UI:
jdbc-prepared-statements
jdbc-callable-statements
jdbc-statements
jdbc-connections

Enable SQL Capture
You can also use these properties to enable SQL capture for any JDBC-compliant data source that is not instrumented by default. For example, for SQLite the values are:
jdbc-statements: value="org.sqlite.Stmt"
jdbc-connections: value="org.sqlite.SQLiteConnection"
jdbc-prepared-statements: value="org.sqlite.PrepStmt"

Set the Node Property Values
Access the Node Dashboard to edit the value of these properties. Use these steps: Edit Registered Node Property.
Tip: Separate multiple class names using a ',' (comma) as a separator in the agent node properties configuration.

Related Topics
JTurbo
Vertica Analytic Database
Teradata
MariaDB
Greenplum
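The decompile-the-driver step described above can be partly automated. The sketch below is a hypothetical helper, not an AppDynamics utility: it walks a driver JAR, loads each class without running its static initializers, and prints the concrete classes that implement a given java.sql interface — these are candidates for the corresponding node property value. The JAR path and the choice of interface are assumptions you adapt to your driver.

```java
import java.lang.reflect.Modifier;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class DriverJarScanner {

    /** Returns names of concrete classes in the JAR that implement the given interface. */
    static List<String> findImplementations(Path jarPath, Class<?> iface) throws Exception {
        List<String> matches = new ArrayList<>();
        try (JarFile jar = new JarFile(jarPath.toFile());
             URLClassLoader loader = new URLClassLoader(
                     new URL[] { jarPath.toUri().toURL() },
                     DriverJarScanner.class.getClassLoader())) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                String entryName = entries.nextElement().getName();
                if (!entryName.endsWith(".class")) continue;
                String className = entryName
                        .replace('/', '.')
                        .substring(0, entryName.length() - ".class".length());
                try {
                    // Load without initializing so driver static blocks don't run.
                    Class<?> c = Class.forName(className, false, loader);
                    if (iface.isAssignableFrom(c) && !c.isInterface()
                            && !Modifier.isAbstract(c.getModifiers())) {
                        matches.add(className);
                    }
                } catch (Throwable ignored) {
                    // Some classes fail to load (missing optional deps); skip them.
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("usage: java DriverJarScanner <driver.jar>");
            return;
        }
        // Repeat for Connection, Statement, and CallableStatement as needed.
        for (String name : findImplementations(Path.of(args[0]),
                java.sql.PreparedStatement.class)) {
            System.out.println("jdbc-prepared-statements candidate: " + name);
        }
    }
}
```

Running it against a driver JAR (for example, `java DriverJarScanner sqlite-jdbc.jar`) lists the candidate classes; join them with commas to build the node property value, as the tip above describes.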
This dashboard delves deep into the performance of individual nodes present in the system. To use this dashboard layout, import the attached Ops-Dashboard.xml file and follow the instructions to rebind similar metrics in another application.
Log in to your Controller UI.
Navigate to the Custom Dashboards list screen.
Import the Ops-Dashboard.xml file.
Rebind the metrics that correspond to your particular application. To do this, edit each displayed widget in the dashboard, select your application, and then confirm or select the metric for that display.
If you need detailed instructions for working with custom dashboard widgets, visit docs.appdynamics.com and view Create Custom Dashboards.
This Ops Management dashboard exposes overall application performance by correlating end-user experience with application backend performance, giving you deep visibility into all aspects of your application from a single dashboard. To use this dashboard layout, import the attached Ops-Management-Dashboard.xml file and follow the instructions to rebind similar metrics in another application.
Log in to your Controller UI.
Navigate to the Custom Dashboards list screen.
Import the Ops-Management-Dashboard.xml file.
Rebind the metrics that correspond to your particular application. To do this, edit each displayed widget in the dashboard, select your application, and then confirm or select the metric for that display.
If you need detailed instructions for working with custom dashboard widgets, visit docs.appdynamics.com and view Create Custom Dashboards.