All TKB Articles in Learn Splunk


You can change the root user password from the AppDynamics Administration Console page.

Note: If you use the "Forgot Password" link, the emailed token expires four hours after it is sent. The token duration is not configurable.

To change the root user password:

1. In a browser, log in to the Administration Console as described in Access the Administration Console. Note: Logging in to the Administration Console requires the root user password; if you do not have it, follow the reset instructions below.
2. Click the gear icon and choose My Settings.
3. Click Edit > Change Password.
4. Type the new password for the root user in the New Password and Repeat New Password fields.
5. Click Save.

To reset the root user password:

If you have lost the AppDynamics root user password for your installation and need to reset it, follow these steps:

1. From the command line, change to the Controller's bin directory. For example, on Linux:

   cd <controller_home>/bin

2. Use the following script to log in to the Controller database:

   On Windows: controller.bat login-db
   On Linux: sh controller.sh login-db

   You should see a MySQL prompt.

3. At the MySQL prompt, enter the following SQL command to get the root user details:

   select * from user where name='root' \G

4. Use the following SQL command to change the password:

   update user set encrypted_password = sha1('<NewPassword>') where name = 'root';

   The hash for the password is upgraded to PBKDF2 the next time you log in.

For information on setting the database root user password, see Controller Data and Backups.

Relevant links: User Management, Controller Data and Backups, Manage Users and Groups Overview
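As a sanity check before running the update statement, you can compute locally the same hash that MySQL's sha1() function would produce; this is a minimal sketch assuming a Linux host with coreutils, and the password value shown is purely illustrative.

```shell
# Compute the SHA-1 hex digest that MySQL's sha1() would return for
# the new password. printf '%s' avoids hashing a trailing newline,
# which would produce a different (wrong) digest.
NEW_PASSWORD='NewPassword'   # illustrative placeholder only
HASH=$(printf '%s' "$NEW_PASSWORD" | sha1sum | cut -d' ' -f1)
echo "$HASH"                 # a 40-character lowercase hex digest
```

The digest printed here should match what `select sha1('<NewPassword>');` returns at the MySQL prompt, which is a quick way to confirm you updated the row with the value you intended.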
SAP Hybris with AppDynamics - Customization and setup

You can incorporate AppDynamics into SAP Hybris to create a seamless customer experience across multiple channels using Browser and Mobile Real-User Monitoring (RUM). You can use the following capabilities:

- Application Performance Management
- Database Monitoring
- Server Monitoring
- Browser RUM
- Application Analytics
- Log Analytics

SAP Hybris is extremely customizable, so you may not be able to implement all of the configurations presented here.

Table of Contents
- AppDynamics Setup: How to install the Java Agent
- Database Monitoring
- Server Monitoring
- Browser Real-User Monitoring
- Application Analytics

AppDynamics Setup: How to install the Java Agent

Execute the following steps to add the Java APM agent to your Hybris environment.

1. Add the -javaagent parameter to your local.properties file

In the Hybris home directory, open the ${HYBRIS_DIR}/config/local.properties file in a text editor and add the -javaagent parameter to the Tomcat general options. Example:

tomcat.generaloptions=<your existing settings> -javaagent:${HYBRIS_BIN_DIR}/appdynamics/javaagent.jar

You can place the agent in the Hybris bin directory, as suggested above, or at any other location on the machine serving SAP Hybris.

2. Configure the Java Agent

Configuring the AppDynamics Java Agent for SAP Hybris is no different than for any other Java application. Follow the instructions in Install the Java Agent.

Note: If the Hybris layer is separated into nodes serving the back office and the storefront, reflect this in your tier and node configuration. For example, you can name your tiers hybris-backoffice and hybris-frontstore.

3. Build and restart the Hybris server

1. Load the ant environment variables: . ./setantenv.sh
2. Run the server target: ant server
3. Restart the Hybris server: ./hybrisserver stop ; ./hybrisserver start

Full example:

[hybris/bin/platform]# . ./setantenv.sh
[hybris/bin/platform]# ant server
[hybris/bin/platform]# ./hybrisserver stop
[hybris/bin/platform]# ./hybrisserver start

The exact commands may depend on your setup and deployment strategy. After restarting the Hybris server, AppDynamics will start to monitor the application. After a few minutes of load on the Hybris system, you will see your Hybris environment represented on the AppDynamics flow map.

Temporary option

Another option, for those who can't run ant server or who want quick results, is to add the following line to hybris/bin/platform/tomcat/conf/wrapper.conf, where <number> is unused in the existing configuration:

wrapper.java.additional.<number>=-javaagent:${HYBRIS_BIN_DIR}/appdynamics/javaagent.jar

Note: This is only a temporary solution and will be overwritten by the next Hybris deployment.

Database Monitoring

If SAP Hybris is connected to a database supported by AppDynamics (see the documentation for a full list of supported databases), follow the instructions for Database Monitoring.

Server Monitoring

To monitor the underlying infrastructure, follow the instructions for Server Monitoring.

Browser Real-User Monitoring

To instrument SAP Hybris for Browser RUM, insert the JavaScript agent file on the server side into an HTML template that is returned to the end user as part of the normal response. You can choose between two procedures:

- If your front-end layer consists of Apache or NGINX web servers, you can use Container-Assisted Injection.
- Otherwise, follow the instructions for Manual Injection.

If the given SAP Hybris installation is based on the yacceleratorstorefront, you can inject adrum.js in the analytics.tag file.
Example (analytics.tag):

<%@ tag body-content="empty" trimDirectiveWhitespaces="true" %>
<%@ taglib prefix="analytics" tagdir="/WEB-INF/tags/shared/analytics" %>
<script>window['adrum-start-time'] = new Date().getTime();</script>
<script type="text/javascript" src="${sharedResourcePath}/js/adrum.js"></script>
<script type="text/javascript" src="${sharedResourcePath}/js/analyticsmediator.js"></script>
<analytics:googleAnalytics/>
<analytics:jirafe/>

Note: Manual injection requires a new build and a restart of the Hybris server.

Adding adrum.js to your Hybris environment allows you to monitor how the end user perceives the performance of your e-commerce platform. Use Session Monitoring to see how users navigate through your online store.

Application Analytics

Install the Agent-Side Components for Application Analytics to add transaction analytics as well as log analytics for the given SAP Hybris environment.
Symptoms

The following error appears in the EUM-processor log, or while querying the accounts table from the EUM database schema:

Failed to query account table: java.sql.SQLException: Can't find file: './eum_db/accounts.frm' (errno: 13)

Diagnosis

The accounts.frm file is missing or does not have the correct user permissions. The file is located at <Controller_Home>/db/data/eum_db/accounts.frm.

Solution

If the file is located in the proper path, the problem is a permissions issue. To resolve the error, enter the following command in a terminal on the Controller host to change the ownership of the file to the correct user and group (for example, the user and group that installed the Controller), then restart the EUM-processor:

chown -R <user>:<group> <Controller_Home>/db/

Example:

chown -R appdyn:appdyn /AppD4214/Controller/db/

If the file is missing, contact AppDynamics support for further troubleshooting, as the error could be caused by database corruption.
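Before changing ownership, it can help to confirm which of the two cases you are in. This is a minimal sketch, assuming a POSIX shell on the Controller host; the CONTROLLER_HOME path is a hypothetical placeholder for your installation directory.

```shell
# Distinguish "file missing" from "permissions issue" for accounts.frm.
CONTROLLER_HOME=/opt/appdynamics/controller   # hypothetical path
FRM="$CONTROLLER_HOME/db/data/eum_db/accounts.frm"
if [ ! -e "$FRM" ]; then
    echo "accounts.frm is missing - contact AppDynamics support"
elif [ ! -r "$FRM" ]; then
    echo "permissions issue - run: chown -R <user>:<group> $CONTROLLER_HOME/db/"
else
    echo "accounts.frm is present and readable"
fi
```

Run this as the same user that runs the EUM-processor, since readability is evaluated for the current user.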
Linux users can use the following steps to determine which of their Java threads is consuming the most CPU. From a terminal:

1. Find the Java process ID:

   ps -ef | grep java

2. Use the Java process ID (PID) to write the lightweight processes to a file named lwp.txt:

   ps -eLo pid,lwp,nlwp,pcpu,etime,args | grep <pid> > lwp.txt

3. Use the PID to generate a Java thread dump:

   kill -3 <pid>

4. From the lwp.txt file created in step 2, choose the lightweight process (LWP) that is consuming the most CPU, and convert the LWP ID from decimal to hexadecimal using the tool of your choice (for example, binaryhexconverter.com). Example: convert LWP DEC 4235 to HEX 108B.

5. Search the thread dump created in step 3 for the hexadecimal LWP ID to find the Java thread that is consuming the most CPU.

6. If the problematic Java thread is part of AppDynamics, contact support for further assistance. If it is an application thread, contact your development/application team to debug the issue further.
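The decimal-to-hexadecimal conversion in step 4 does not require an external website; a minimal sketch using the shell's own printf:

```shell
# Convert an LWP ID from decimal to hexadecimal (step 4).
# Note: %x prints lowercase hex; thread dumps show nid values
# in lowercase too (e.g. nid=0x108b).
LWP=4235
HEX=$(printf '%x' "$LWP")
echo "$HEX"    # 108b

# Then search the thread dump for the matching thread (step 5):
# grep -i "nid=0x$HEX" thread_dump.txt
```

The grep line is commented out because the thread dump file name depends on where your JVM writes its standard output.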
When making a connection using HTTPS, either SSL or TLS encrypts the information being sent to and from the EUM server. The information is encrypted using a cipher or encryption key; the type of cipher depends on the cipher suite being used.

Use Google Chrome and the following steps to determine which cipher suite is used to secure the HTTPS connection that carries the EUM 'adrum' beacons.

1. Open Google Chrome and launch Chrome Developer Tools (ALT+CMD+I) by clicking View > Developer > Developer Tools in the browser toolbar.
2. Load your EUM-instrumented application in the browser.
3. Click the Network tab in Developer Tools and use the search bar to look for the beacon request named 'adrum'.
4. Click the request, then click the Headers tab on the right side of the screen and copy the request URL. Example:

   https://eum.server.com:7002/eumcollector/beacons/browser/v1/EUM-AAB-AUA/adrum

5. Open a new window or tab in your browser and paste the request URL.
6. Open Developer Tools in the new window and click the Security tab. Your cipher information is located under "Connection."
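As an alternative to the browser UI, the negotiated cipher suite can also be checked from the command line with openssl s_client; this is a sketch only — the host and port are placeholders for your EUM collector, and the cipher string shown is an illustrative example, not a guaranteed result.

```shell
# Query the EUM collector's TLS endpoint and inspect the handshake
# summary. Host and port are placeholders for your environment:
#
#   openssl s_client -connect eum.server.com:7002 < /dev/null 2>/dev/null | grep 'Cipher'
#
# The handshake summary contains a line like the one below; this
# pipeline extracts the suite name from such a line.
printf 'Cipher    : ECDHE-RSA-AES128-GCM-SHA256\n' \
    | awk -F': *' '/Cipher/ {print $2}'
```

The awk extraction is useful when scripting checks across several endpoints rather than reading the output by eye.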
AppDynamics Events Service is the on-premises data storage facility for unstructured data generated by Application Analytics, Database Visibility, and End User Monitoring (EUM) deployments.

To configure EUM to push raw data to Events Service, edit the eum.properties file to reflect the following properties:

analytics.enabled=true
analytics.serverScheme=http
analytics.serverHost=events.service.hostname
analytics.port=9080
analytics.accountAccessKey=1a59d1ac-4c35-4df1-9c5d-5fc191003441

Note: EUM will push EUM data to Events Service with or without an explicit analytics license.

How to query EUM data with an analytics license

To query the EUM data from Events Service using the analytics API, follow the directions in the analytics API documentation (version 4.3).

How to query EUM data with no explicit analytics license

When there is no explicit analytics license, you need to change the authentication method outlined in the analytics API documentation. For this authentication, use the EUM account name and EUM license key, which can be found on the license page: click the gear menu in the upper-right corner of the Controller UI, then click License.

Authenticate with curl

One option is to use the following curl syntax for authentication:

curl --user "<eum_account_name>:<eum_license_key>" ...

Authenticate with base64 encoding

Another option is to use the following base64 encoding syntax for authentication:

base64accountname=<eum_account_name>
base64licensekey=<eum_license_key>
base64auth=`echo "$base64accountname:$base64licensekey" | base64 -b 0`
echo $base64auth
QXBwRHluYW1pY3MteHh4eHg6OWJhOWJkM2IteHh4eAo=

curl -H "Authorization: Basic $base64auth" ...
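Two portability notes on the base64 step above: the -b 0 flag is the macOS form (GNU coreutils uses -w 0), and echo appends a newline that ends up inside the encoded credentials, which some servers may reject. A more portable sketch, with placeholder credentials:

```shell
# Build the Basic auth value portably. Account name and key are
# placeholders. printf '%s' avoids encoding a trailing newline,
# which `echo` would otherwise include in the credentials.
EUM_ACCOUNT='acct'   # placeholder
EUM_KEY='key'        # placeholder
AUTH=$(printf '%s' "$EUM_ACCOUNT:$EUM_KEY" | base64)
echo "$AUTH"         # YWNjdDprZXk=

# Then pass it to curl, e.g.:
# curl -H "Authorization: Basic $AUTH" ...
```

Short inputs like this fit on one base64 output line on both GNU and BSD implementations, so no wrap-disabling flag is needed here.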
Types of queries

After choosing the method of authentication, you can make queries to Events Service for the following types of data:

Event Type              Events Service Table Name
Browser Records         browser_records
Mobile Records          mobile_snapshots
Mobile Crash Reports    mobile_crash_reports
Browser Sessions        web_session_records
Mobile Sessions         mobile_session_records

API example

With the chosen authentication method and Events Service endpoint, use the following API call to make your query:

curl --user "<EUM_Account_Name>:<EUM_License_Key>" -H "Content-type: application/vnd.appd.events+json;v=2" http://<events_service_endpoint>:9080/events/query -d 'select * from browser_records'

or

curl -H "Authorization: Basic $base64auth" -H "Content-type: application/vnd.appd.events+json;v=2" http://<events_service_endpoint>:9080/events/query -d 'select * from browser_records'

Note: In some on-premises deployments, the EUM account license key may have changed in a license renewal, which can cause authorization to fail. If the on-premises Events Service node(s) have ad.es.node.http.enabled=true configured in the events-service-api-store.properties file, the following cURL command can be used to find the correct EUM license key value. Update the $ES_NODE variable to use the IP address or hostname of an Events Service node. Example:

$ ES_NODE="192.168.102.171"
$ curl -s "http://${ES_NODE}:9200/appdynamics_accounts_v2/_search" | sed $'s/,/\\\n/g' | grep "eumAccountName" -B 3 | head -2
"accountName":"test-eum-account-XXXXXXXXX-1518596264712"
"accessKey":"52218919-ddbb-XXXX-XXXX-edec188e4b73"

Additional parameters

For additional information, such as start and end parameters, follow the analytics API documentation.
Symptoms

While running the Machine Agent, you may see the following error in the Machine Agent logs:

ERROR RawCollectorUtil - Could not collect raw data
com.fasterxml.jackson.databind.JsonMappingException: No content to map due to end-of-input
 at [Source: ; line: 1, column: 1]
	at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
	at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3747)
	at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3687)
	at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2714)
	at com.appdynamics.sim.agent.extensions.servers.model.RawCollectorUtil.runCollector(RawCollectorUtil.java:101)
	at com.appdynamics.sim.agent.extensions.servers.model.RawCollectorUtil.runCollector(RawCollectorUtil.java:67)
	at com.appdynamics.sim.agent.extensions.servers.model.newlinux.NewLinuxRawCollector.collectRawData(NewLinuxRawCollector.java:62)
	at com.appdynamics.sim.agent.extensions.servers.model.newlinux.NewLinuxRawCollector.collectRawData(NewLinuxRawCollector.java:36)
	at com.appdynamics.sim.agent.extensions.servers.model.Server.collectAndReport(Server.java:43)
	at com.appdynamics.sim.agent.extensions.servers.ServersMonitor.run(ServersMonitor.java:90)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
ERROR RawCollectorUtil - The standard out from the collector script was:
WARN RawCollectorUtil - The error log from the collector script was: ERROR: missing ip

When this occurs, the Machine Agent is not able to collect machine-level metrics.

Diagnosis

This error occurs when the ip command is not in the PATH for the Machine Agent user. Without the correctly configured PATH, the Machine Agent cannot use SIM to collect the network metadata.

To verify the root cause, run the following commands in a shell prompt on the server that runs the Machine Agent:

shell> ip a
shell> which ip
shell> echo $PATH

The root cause is confirmed when "ip a" returns "bash: ip: command not found", "which ip" returns "/usr/bin/which: no ip in ...", and the PATH shown does not include the directory containing the ip utility.

Solution

1. Stop the Machine Agent.
2. Add the directory containing the ip utility to the PATH environment variable for the user who is running the Machine Agent. The ip utility is usually in the /sbin directory; check whether /sbin/ip -V returns a version string, and if it does, add /sbin to the PATH. Reference: ip(8) - Linux man page.
3. Start the Machine Agent.
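The PATH change in step 2 of the solution can be sketched as follows; this assumes a POSIX shell, and the profile file named in the comment depends on which shell the Machine Agent user actually uses.

```shell
# Verify where the ip utility lives, then add /sbin to PATH only if
# it is not already there. The /sbin location is typical, not
# guaranteed; adjust if `ip` lives elsewhere on your distribution.
if [ -x /sbin/ip ]; then
    case ":$PATH:" in
        *:/sbin:*) : ;;                          # already on PATH
        *) PATH="$PATH:/sbin"; export PATH ;;    # append it
    esac
fi

# Persist the change for the Machine Agent user, for example:
# echo 'export PATH="$PATH:/sbin"' >> ~/.bash_profile
```

The case pattern with surrounding colons avoids false matches on path entries that merely contain "/sbin" as a substring (e.g. /usr/sbin).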
A load balancer is a device that distributes network or application traffic across a number of servers. Load balancers are used to increase the capacity and reliability of applications. When properly configured, F5 iRules use a scripting syntax that allows the load balancer to intercept, inspect, transform, and direct inbound or outbound application traffic.

If the application's source cannot be modified and all other forms of injection have failed or are not applicable (e.g., assisted injection through Java rules for a PHP-based application), clients with an F5 load balancer can ask their network security team to configure an iRule that intercepts the application's response and injects the HTML necessary for enabling EUM into the page source. This is similar to the format that manual injection uses, but the HTML is inserted into the web server's response rather than into the original source code.

For instructions on how to build, deploy, and test an iRule, we recommend you consult your F5 support team and network security team, as they are most knowledgeable and best equipped to handle such a request. From an AppDynamics perspective, you must be properly licensed for EUM and have End User Monitoring (EUM) enabled for an application in your AppDynamics Controller UI. Clients who are limited to injection via F5 iRules have seen success using a template similar to the iRule below.
Generic F5 iRule Template (requires further customization)

when HTTP_REQUEST {
    # This is the condition against which requests will be matched
    if {[HTTP::uri] contains "segment/in/uri"} {
        set enableEum 1
    } else {
        set enableEum 0
    }
    # Disable the stream filter for client requests, as we are only interested in the server response
    STREAM::disable
    # LTM does not uncompress response content, so if the server has compression enabled
    # and it cannot be disabled on the server, we can prevent the server from
    # sending a compressed response by removing the compression offerings from the client
    # HTTP::header remove "Accept-Encoding"
}
when HTTP_RESPONSE {
    # Disable the stream filter for all server responses
    STREAM::disable
    # Insert the JavaScript necessary for EUM
    if {($enableEum == 1) && ([HTTP::header "Content-Type"] starts_with "text/html")} {
        STREAM::expression {@</title>@</title>
<script>
window["adrum-app-key"] = "AAB-AA-AUA";
window["adrum-start-time"] = new Date().getTime();
</script>
<script type="text/javascript" src="http://cdn.appdynamics.com/adrum/adrum-latest.js"></script>@}
        # Enable the stream filter for this response only
        STREAM::enable
    }
}

How to use this template

1) Matching condition - The matching condition is in the first "if" statement. Change this segment of the template to match a specific piece of your application. The template matches on the application's URI; however, other properties of the application can be matched instead (e.g., [HTTP::path], [HTTP::host], etc.).

   [HTTP::uri] contains "segment/in/uri"

2) Compression - Does your application use compression? If it does, uncomment the following line in the template:

   HTTP::header remove "Accept-Encoding"

   If there isn't any compression, keep the line commented out:

   # HTTP::header remove "Accept-Encoding"

3) EUM Application Key - What is your EUM application key, as assigned in your Controller UI?
The key can be found by accessing your EUM application configuration via the Controller UI; it is typically eight letters long. Change the example key (AAB-AA-AUA) in the template to your specific application key:

   window["adrum-app-key"] = "AAB-AA-AUA";

4) JavaScript agent location - Where is the JavaScript agent being hosted? The file (adrum.js or adrum-latest.js) either needs to be hosted with your application or can be served from the AppDynamics content distribution network (CDN). If you intend to host the file yourself, update the address between the <script> tags of the iRule template (which points the script src at the AppDynamics CDN):

   http://cdn.appdynamics.com/adrum/adrum-latest.js

5) Other points to consider - In some cases the Stream profile must re-enable response rechunking, as documented in https://support.f5.com/csp/article/K6422. Although the above template is typically successful, every client environment is different; our team therefore recommends that you consult a trained engineer responsible for managing your F5 load balancer. We also highly recommend testing in a sandbox or development environment before deploying any changes to production.

Results

If the iRule is properly configured and matches a condition for your application-specific traffic, the load balancer will inject the EUM-specific source into the response received by the browser, allowing the JavaScript agent to load in the browser, capture EUM data, and send the associated beacons back to the EUM Server.

<script>
window["adrum-app-key"] = "AAB-AA-AUA";
window["adrum-start-time"] = new Date().getTime();
</script>
<script src="https://cdn.appdynamics.com/adrum/adrum-latest.js"></script>
Users can debug EUM-instrumented pages using the JavaScript agent (adrum.js).

1. In the Google Chrome browser, launch Developer Tools (ALT+CMD+I) by clicking View > Developer > Developer Tools in the browser toolbar.

2. Add the string ?ADRUM_debug=true to your webpage's URL and press Enter. Example:

   http://URL/page/?ADRUM_debug=true

   Note: If there is an anchor (#) in the URL, add the parameter string before the anchor. Example:

   http://URL/page/?ADRUM_debug=true#afterAnchor

3. In the browser window, generate load on your web application's virtual pages.

   Note: The easiest way to generate load is to click the various links within the browser window which point to the different virtual pages. This allows the adrum.js JavaScript agent to capture data.

4. Once load has been generated on the application, type the following into the JavaScript console:

   ADRUM.dumpLog()

   For v4.2.x and earlier, try: ADRUM.logMessages.join("\n")

Users can then view their JavaScript logs in the console and use them for debugging purposes if needed.

Related links: Customize the JavaScript agent, Browser real user monitoring
Users can monitor all of the drives on their servers using the Machine Agent and receive alerts when drives have low disk space, based on both a percentage and a hard limit.

Add a condition to check the percentage of disk space in use

1. From the Controller UI, click the "Alert & Respond" tab.
2. Click "Health Rules" in the left navigation bar.
3. Use the drop-down menu to select "Servers."
4. Use the plus sign to create a new health rule. Select type "Node Health - Hardware, JVM, CLR (CPU, heap, disk I/O, etc)."
5. Tell the Controller how many minutes of data to use when evaluating the health rule by entering a number. The following example uses 5 minutes of data; this prevents a temporary process from triggering the alert.
6. The health rule should affect all nodes within the Machine Agent tier. Note: Failure to select "Machine Agent" from the list of selected tiers will result in multiple alerts per server, one for each tier. Make these selections, then click Next to continue.
7. Set up the percentage condition. In the following example, the health rule sends out a critical alert when the drive is over 90% full. Click the blue "Edit Expression" link to add a mathematical expression using declared variables. In this example, the variables SpaceUsed and SpaceAvail have been created and are used in the following formula to compute the percentage of disk space in use:

   ({SpaceUsed} / ({SpaceAvail} + {SpaceUsed})) * 100

Add another condition to also check a hard limit

In this example, the Controller sends an alert when disk space drops below 5 GB (5000000). The alerts are triggered only if all of the conditions have been met, which prevents false alerts; users can select "Any" instead of "All" from the drop-down menu if they so choose.

Users need to create similar health rules for each drive used on their servers. The above example is specifically for the C: drive.
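The health rule expression above can be sanity-checked outside the Controller. This sketch mirrors the formula with sample values; the numbers are illustrative only, and both variables must be in the same unit (the formula itself is unit-free).

```shell
# Mirror the health rule expression:
#   ({SpaceUsed} / ({SpaceAvail} + {SpaceUsed})) * 100
# Sample values are illustrative; both must be in the same unit.
SPACE_USED=90
SPACE_AVAIL=10
PCT=$(awk -v used="$SPACE_USED" -v avail="$SPACE_AVAIL" \
    'BEGIN { printf "%.0f", (used / (avail + used)) * 100 }')
echo "$PCT"    # 90 -> this drive would breach a "over 90% full" condition at the boundary
```

Note that the denominator is SpaceAvail + SpaceUsed (total capacity), so the result is the fraction of the whole drive in use, not a ratio of used to free space.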
Symptoms

In the Controller UI, iOS crashes are not symbolicated by default. To get human-readable crash stack traces, use a platform-specific mapping file that can translate the raw data into human-readable output. iOS users can review the diagnosis and solution below. For Android, see "Upload the ProGuard Mapping File" in Instrument an Android Application - Manual.

Diagnosis

iOS crash snapshots can be symbolicated using the debug symbols (dSYM) file. The crash snapshots are processed only once by the EUM Server, so if the dSYM file is not present at the time of processing, the snapshots will not be symbolicated. Crash report example:

Binary Images:
0xd8000 - 0x12ffff +ECommerce-iOS armv7 <feaae512eb613a75a33dee253fa87bb6> /var/mobile/Containers/Bundle/Application/C64FEBB0-EF8E-4079-BDE0-E35C531F0CA2/ECommerce-iOS.app/ECommerce-iOS

In the above example, the string feaae512eb613a75a33dee253fa87bb6 is the universally unique identifier (UUID) of the application. To symbolicate the crash reports, upload the dSYM file with the corresponding UUID. The dSYM file was generated when the application was built; if you rebuild the application later, it will almost always have a new, different UUID. It is possible to set up your environment to upload the file automatically each time you build.

Solution

For crash reports to be symbolicated, the corresponding dSYM files need to be uploaded to AppDynamics. Users can either set up their environment to upload the file automatically each time they build, or upload the file manually. Because crash snapshots are processed only once by the EUM Processor, crashes that have already been processed will not be symbolicated retroactively, even if the dSYM file is uploaded later; all future reported crashes will be symbolicated.
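To check whether a given dSYM matches a crash report, compare UUIDs. This sketch extracts the UUID from the "Binary Images" line of a crash report; the comparison command shown in the comment, dwarfdump --uuid, is a macOS tool that requires the Xcode command line tools, and the file names are placeholders.

```shell
# Extract the app UUID (the string between < >) from the
# "Binary Images" line of a crash report.
LINE='0xd8000 - 0x12ffff +ECommerce-iOS armv7 <feaae512eb613a75a33dee253fa87bb6> /var/mobile/.../ECommerce-iOS'
UUID=$(printf '%s\n' "$LINE" | sed -n 's/.*<\([0-9a-f]*\)>.*/\1/p')
echo "$UUID"    # feaae512eb613a75a33dee253fa87bb6

# Compare against the dSYM's UUID (macOS, Xcode command line tools):
# dwarfdump --uuid ECommerce-iOS.app.dSYM
```

If the two UUIDs differ, the dSYM belongs to a different build and uploading it will not symbolicate these crashes.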
Use the following recommendations if high CPU usage occurs after attaching the AppDynamics agent:

1. Disable aggressive slow snapshot collection
2. Exclude specific hotspot interceptors
3. Review async instrumentation
4. Disable unwanted instrumentation from transaction detection
5. Turn off the Turbo custom exit point interceptor

1. Disable aggressive slow snapshot collection

If aggressive slow snapshot collection is enabled, the agent retains selected call graph segments that precede the detection of a slow, stalled, or error transaction condition. Disabling aggressive slow snapshot collection will significantly reduce overhead.

1. In the Controller UI, click Configuration in the left navigation bar. Click the double arrows in the top right to expand the menu.
2. Click Call Graph Settings.
3. Uncheck the checkbox next to "Enable Aggressive Slow Snapshot Collection."
4. Click the Save button.

2. Exclude specific hotspot interceptors

Editing the node-level property "exclude-interceptors" to exclude hotspots will reduce CPU usage. To determine whether your agent has hotspots enabled, search the agent.* log files for "Enabled hotspots" and search the ByteCodeTransformer*.log for "Applying method interceptor diag.snapshot.BoundHotspot."

Edit a node property:

1. In the Controller UI, click Tiers and Nodes in the left navigation bar.
2. Double-click the node you'd like to configure, then click Agents > App Server Agent > Configure. This opens the App Server Configuration window.
3. At the node level, click the "Use Custom Configuration" button.
4. Either search for the exclude-interceptors property and double-click it, or click the gray plus sign to create a new agent property if exclude-interceptors does not already exist.
5. Click the Save button after adding the new configuration. A "saved successfully" message appears in the top-right corner of the window.
Name: exclude-interceptors
Description: exclude-interceptors
Type: String
Value: com.singularity.BoundHotspotInterceptor

Note: If class names are already listed in this property, separate them with commas.

3. Review async instrumentation

Too many async interceptors can cause high CPU utilization. You may have unnecessary application framework classes instrumented that are not needed for visibility or monitoring. Get a list of the classes and methods to which the async interceptor is being applied by looking at the ByteCodeTransformer*.log, or search it for "async.handoff.AsyncHandOffExecutionTracker." Exclude the packages one by one to improve CPU utilization. A package or class can be excluded either from fork-config or using a node property.

Fork-config changes (recommended; requires a JVM restart):

1. Navigate to <agent_install_dir>/<ver4.X.x>/conf/app-agent-config.xml.
2. Search for the <fork-config> keyword.
3. Add the packages you want excluded (or a fully qualified class name). Example:

   <excludes filter-type="STARTSWITH" filter-value="<package>"/>

A few examples:

<excludes filter-type="STARTSWITH" filter-value="com.arjuna/"/>
<excludes filter-type="STARTSWITH" filter-value="com.netflix/"/>
<excludes filter-type="STARTSWITH" filter-value="com.bea/,com.weblogic/,weblogic/,com.ibm/,net/sf/,com/mchange"/>

Node property (see "Edit a node property" in step 2):

Name: thread-correlation-classes-exclude
Description: thread-correlation-classes-exclude
Type: String
Value: <fully.qualified.package.ClassName>,<package.name>.*

Note: Excluding packages and classes could result in loss of visibility.

4. Disable unwanted instrumentation from transaction detection

Spring Bean and EJB interceptors are CPU intensive. If you do not need any Spring or EJB entry points, disable their transaction detection:

1. From the Controller UI, click Applications.
2. Click Configuration in the left navigation.
3. Under the Transaction Detection tab, uncheck both of the checkboxes next to Spring Bean and EJB to disable them.
5. Turn off the Turbo custom exit point interceptor

Turbo exit points are special interceptors that handle a high volume of calls. To determine whether these interceptors are causing high CPU usage, search the ByteCodeTransformer*.log for "exit.TurboCustom". If the classes are not adding visibility, exclude them and disable the interceptors by creating node-level properties (see "Edit a node property" in step 2).

Name - exclude-interceptors
Description - exclude-interceptors
Type - String
Value - com.singularity.TurboCustomExitPointInterceptor

Name - disable-ootb-turbo-interceptors
Description - disable-ootb-turbo-interceptors
Type - Boolean
Value - true

If these troubleshooting recommendations do not improve your high CPU usage, contact our support team for further assistance.
The requirements for different components in the AppDynamics platform are based on the performance profile you select. The following information describes the different performance profiles and how to determine the profile size you need.

Network Considerations
System User Account
Operating System Support
Internationalization Support
Network Bandwidth Requirements
More Information

Network Considerations

If your network or the host machine has built-in firewall rules, you will need to adjust them to accommodate the AppDynamics on-premises platform. Specifically, permit network traffic on the ports used by the system. For more information, see Port Settings. For expected bandwidth consumption for the agents, see the requirements documentation for app agents listed under Install App Server Agents.

System User Account

Install all platform components with a single user account, or with accounts that have equivalent permissions on the operating system. The user needs write permissions for the installation directory.

Operating System Support

Linux (64 bit): RHEL 6 and 7; CentOS 6 and 7; Ubuntu 14 and 16
Microsoft Windows (64 bit): Windows Server 2008 R2; Windows Server 2012 and 2012 R2; Windows Server 2016

You can use the following file systems for machines that run Linux: ZFS, EXT4, XFS

Internationalization Support

The Controller and App Agents provide full internationalization support, including support for double- and triple-byte characters. This support provides the following abilities:
Controller UI users can enter double- or triple-byte characters into text fields in the UI
The Controller can accept data that contains double- or triple-byte characters from instrumented applications

Network Bandwidth Requirements

See Administer App Server Agents for information on bandwidth usage in an AppDynamics deployment.
More Information

For requirements that are specific to each product component, see the following pages:
Controller System Requirements
Events Service Requirements
EUM Server Requirements
You can embed a Twitter timeline into a custom AppDynamics dashboard using widgets. To get started, you will need a Twitter account.

Create a custom widget within Twitter:

1. Log in to your Twitter account and navigate to the widgets page.
2. Click the "Create widget" drop-down menu, choose your timeline source (e.g. Search), and customize your widget.
3. Copy the HTML that Twitter has created and paste it into an HTML template file using a text editor. Example:

<html>
<style>html,body{margin:0;padding:0;width:100%;height:100%;}</style>
<body>
<a class="twitter-timeline" href="https://twitter.com/search?q=%40_AppDynamics%20-from%3A%40_AppDynamics%20-filter%3Aretweets" data-widget-id="719433085675249665" width="97%" height="2000">Tweets über @_AppDynamics -from:@_AppDynamics -filter:retweets </a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
</body>
</html>

4. Follow Twitter's documentation about web widgets to make any changes, such as removing the footer and header on your timeline.
5. Upload the HTML file to a public website of your choice (e.g. http://your.name/twitter-widget.html).

Create an iFrame widget in your AppDynamics custom dashboard:

1. In the controller UI, click the pencil button at the top of the AppDynamics dashboard to begin editing, then click the "Add widget" button to open the widget palette.
2. Click "Other Widgets" in the left navigation bar and select iFrame.
3. In the "URL to display" field, paste the URL of your Twitter widget on the public website (e.g. https://your.name/twitter-widget.html).

Note: If your AppDynamics controller is accessible via HTTPS, you must also use HTTPS for your iFrame. Otherwise, your browser will not load the widget.
You can now view a Twitter timeline within your custom AppDynamics dashboard.

Note: It is not possible to embed JavaScript-based widgets directly into an AppDynamics dashboard, and Twitter does not allow its timelines or search queries to be loaded directly inside an iFrame: it sends the X-Frame-Options response header with the value SAMEORIGIN. This is why the widget must first be hosted on a page you control.

Related Links:
Create and manage custom dashboards
Configure widgets for a custom dashboard
AppD University: Sign up for the "Custom Dashboards Best Practices and How-to Walkthroughs" self-paced course
As of 4.2.x, it is not possible to export a CSV file from the controller UI for all of the servers being monitored.

As a workaround, you can capture the raw data and format it using the Google Chrome browser, a text editor (our team recommends TextWrangler or BBEdit, but any text editor with find-and-replace capability is fine), and your computer's terminal. The following instructions apply to any Unix-based operating system, such as Mac OS X.

1. In Google Chrome, access the server monitoring user interface from the controller UI.
2. Launch Chrome Developer Tools (ALT+CMD+I) by clicking View > Developer > Developer Tools in the browser toolbar.
3. Click the Network tab of Developer Tools, then check the box labeled "Preserve log".
4. Refresh the controller UI in the browser, and the Network tab will populate with all the page requests.
5. Once the page is fully loaded, search for a network request in the following format:
https://[CONTROLLER-URL]/controller/sim/v2/user/machines?appIds=&tierIds=&nodeIds=&format=LITE
6. Double-click the request to open a new tab with the desired data in JSON format. Alternatively, you can select the Response tab for this specific request to see the JSON data.
7. Copy the JSON data and paste it into a text editor.
8. Use find-and-replace to reshape the data into a CSV spreadsheet, one substitution at a time:
Find "," and replace with "\n,\n"
Find },{ and replace with }\n,\n{\n
Find {" and replace with {\n"
9. Save the file in the text editor as hosts.txt.
10. Open a terminal window and run the following command:
cat hosts.txt | grep hostId | sed 's/"hostId":"//g' | sed 's/"/,/g' > hosts.csv

The resulting hosts.csv file will contain a column of hostIds from your controller's list of servers.
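If you prefer scripting over manual find-and-replace, the same transformation can be done with a few lines of Python. This sketch assumes the copied response body is a JSON array of machine objects that each carry a hostId field, as implied by the grep/sed pipeline above; if your response wraps the array in an outer object, adjust the parsing accordingly.

```python
import csv
import json

def machines_to_csv(json_path, csv_path):
    """Extract hostIds from a saved machines response into a one-column CSV."""
    with open(json_path) as f:
        machines = json.load(f)  # assumed: a JSON array of machine objects
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hostId"])
        for machine in machines:
            # Skip any entry that lacks a hostId rather than failing outright.
            if "hostId" in machine:
                writer.writerow([machine["hostId"]])
```

Save the JSON response as hosts.json and call machines_to_csv("hosts.json", "hosts.csv") to get the same result as the terminal pipeline.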
To troubleshoot complex issues in web downloads, it can be helpful to have the download process tracked in an HTTP Archive (HAR) file, an archival format for recording HTTP transactions using a browser. The workflow for creating a HAR file differs for each browser. Google's HAR Analyzer tool outlines how to generate HAR files in the different browsers: https://toolbox.googleapps.com/apps/har_analyzer/

Last Updated: 11/9/18
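Once captured, a HAR file is ordinary JSON, so you can also inspect it programmatically rather than only through the analyzer. The sketch below summarizes each recorded request's URL, response status, and total time; it relies only on the standard HAR layout (log.entries, each with request, response, and time fields).

```python
import json

def summarize_har(har_path):
    """Return (url, status, time_ms) tuples for each entry in a HAR file."""
    with open(har_path) as f:
        har = json.load(f)
    summary = []
    for entry in har["log"]["entries"]:
        summary.append((
            entry["request"]["url"],
            entry["response"]["status"],
            entry["time"],  # total elapsed time for the request, in milliseconds
        ))
    return summary
```

Sorting the returned list by the time field is a quick way to spot the slowest transactions in a capture.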
Problem

I want to enable/disable synthetic jobs programmatically in order to automate the process during planned downtimes, so that false alerts are not generated.

Solution

To manage synthetic jobs, use the https://api.eum-appdynamics.com endpoint to make changes to the schedule and either enable or disable jobs. To use this API, authenticate with your EUM account name as the username and your EUM license key as the password. Both can be found on the licensing page in the controller UI: click the gear menu in the upper-right corner of the controller UI, click License, and scroll down to "End User Monitoring".

The APIs below can be used to get and modify the schedules:

Get a list of all the synthetic jobs available to the authenticated user:
GET https://api.eum-appdynamics.com/v1/synthetic/schedule

Get the exact job identified by the schedule ID. To find the schedule ID and description, use the previous request to get a list of all jobs:
GET https://api.eum-appdynamics.com/v1/synthetic/schedule/<schedule_id>

Modify the schedule according to the new schedule object provided in the body:
PUT https://api.eum-appdynamics.com/v1/synthetic/schedule/<schedule_id>

Use the response body from the previous GET of the exact job as the JSON body of the PUT. The field "userEnabled" controls whether the job is enabled or disabled, and produces the same result as enabling/disabling from the UI.
Sample curl implementation
Get all schedules
Get a particular schedule
Modify the given schedule
Create your own script
Enable a particular job
Disable a particular job
synth.py

Get all schedules

curl -XGET --user AppDynamics-a0Q340ABCDEABCDEAV:9ba9bd3b-652d-abcd-abcd-abcdefecd4f0 https://api.eum-appdynamics.com/v1/synthetic/schedule

Output:
{
    "_first": null,
    "_items": [
        {
            "_id": "fcedd1d0-88bc-49c1-9bbe-397227616d1f",
            "appKey": "AD-AAB-WWW-WWW",
            "browserCodes": [
                "Chrome"
            ],
            "captureVisualMetrics": true,
            "created": "2016-11-02T04:07:22.764Z",
            .....

Get a particular schedule

curl -XGET --user AppDynamics-a0Q340ABCDEABCDEAV:9ba9bd3b-652d-abcd-abcd-abcdefecd4f0 https://api.eum-appdynamics.com/v1/synthetic/schedule/fcedd1d0-88bc-49c1-9bbe-397227616d1f 2>/dev/null | python -m json.tool
{
    "_id": "fcedd1d0-88bc-49c1-9bbe-397227616d1f",
    .....
    "userEnabled": false,
    "version": 13
}

Modify the given schedule

curl -XPUT --user AppDynamics-a0Q340ABCDEABCDEAV:9ba9bd3b-652d-abcd-abcd-abcdefecd4f0 -H "Content-Type: application/json" -d '{
    "_id": "fcedd1d0-88bc-49c1-9bbe-397227616d1f",
    .....
    "userEnabled": true,
    "version": 13
}' https://api.eum-appdynamics.com/v1/synthetic/schedule/fcedd1d0-88bc-49c1-9bbe-397227616d1f

Create your own script

Use the following Python implementation as a starting point to create your own script.

osxltmkshi:analytics-agent mayuresh.kshirsagar$ python ~/Desktop/synth.py --help
usage: synth.py [-h] -n <eum_account_name> -l <eum_license_name> [-u <eum_url>] -j <synth_job_name> [-e]

optional arguments:
  -h, --help            show this help message and exit
  -n <eum_account_name>, --eumaccountname <eum_account_name>
                        EUM Account Name
  -l <eum_license_name>, --eumlicensekey <eum_license_name>
                        EUM License Key
  -u <eum_url>, --url <eum_url>
                        EUM Server URL, Defaults to https://api.eum-appdynamics.com
  -j <synth_job_name>, --job <synth_job_name>
                        Job Name
  -e, --enable          Enable Job - If present marks the job enabled.
If absent, marks the job disabled

Enable a particular job

python ~/Desktop/synth.py -n AppDynamics-a0Q340ABCDEABCDEAV -l 9ba9bd3b-652d-abcd-abcd-abcdefecd4f0 -j 90220 -e

Disable a particular job

python ~/Desktop/synth.py -n AppDynamics-a0Q340ABCDEABCDEAV -l 9ba9bd3b-652d-abcd-abcd-abcdefecd4f0 -j 90220

synth.py

# -*- coding: utf-8 -*-
import argparse
import json
import sys

import requests


def find(items, predicate):
    for x in items:
        if predicate(x):
            return x
    return None


def main():
    getallschedules = "/v1/synthetic/schedule"
    scheduleendpoint = "/v1/synthetic/schedule/"

    parser = argparse.ArgumentParser()
    parser.add_argument("-n", "--eumaccountname", help="EUM Account Name", required=True, metavar='<eum_account_name>')
    parser.add_argument("-l", "--eumlicensekey", help="EUM License Key", required=True, metavar='<eum_license_name>')
    parser.add_argument("-u", "--url", help="EUM Server URL, Defaults to https://api.eum-appdynamics.com", default="https://api.eum-appdynamics.com", required=False, metavar='<eum_url>')
    parser.add_argument("-j", "--job", help="Job Name", required=True, metavar='<synth_job_name>')
    parser.add_argument("-e", "--enable", help="Enable Job - If present marks the job enabled. If absent, marks the job disabled", default=False, action="store_true", required=False)
    args = parser.parse_args()

    accountname = args.eumaccountname
    licensekey = args.eumlicensekey
    url = args.url
    jobname = args.job
    enable = args.enable

    # Get all the schedules
    response = requests.get(url + getallschedules, auth=(accountname, licensekey))
    if response.status_code != 200:
        print("Error Occurred. Status Code: " + str(response.status_code) + " Response: " + json.dumps(json.loads(response.text), indent=3))
        sys.exit(1)
    _json = json.loads(response.text)
    _items = _json['_items']
    # Find the schedule whose description matches the requested job name
    _return = find(_items, lambda x: x['description'] == jobname)
    scheduleid = _return['_id']

    # Get the individual schedule matching the description
    response = requests.get(url + scheduleendpoint + scheduleid, auth=(accountname, licensekey))
    if response.status_code != 200:
        print("Error Occurred. Status Code: " + str(response.status_code) + " Response: " + json.dumps(json.loads(response.text), indent=3))
        sys.exit(1)
    _json = json.loads(response.text)
    _json['userEnabled'] = enable
    _json = json.dumps(_json, indent=3)
    print("Request: " + _json)

    # Modify the schedule
    headers = {'Content-type': 'application/json'}
    response = requests.put(url + scheduleendpoint + scheduleid, auth=(accountname, licensekey), headers=headers, data=_json)
    if response.status_code != 200:
        print("Error Occurred. Status Code: " + str(response.status_code) + " Response: " + json.dumps(json.loads(response.text), indent=3))
        sys.exit(1)
    print("Response: " + json.dumps(json.loads(response.text), indent=3))


if __name__ == "__main__":
    main()

Related Links
Create scripts for synthetic jobs
Synthetic scripts FAQ
1. To receive an email notification when a member of your organization files a new support request, first log in to your AppDynamics account.
2. Visit the support page at help.appdynamics.com and click the "Support Portal" link.
3. Click "Organization Requests" to view your team's requests.
4. Click the Follow button to subscribe to new support requests from members of your organization.
5. To unsubscribe from support notifications, click the Unfollow button.
If you have changed your SQL capture settings from filtering parameter values to capturing raw SQL, longer SQL queries are truncated to 999 characters. To change this character limit within the Controller, edit or register a new node property.

1. In the Controller UI, click Tiers and Nodes in the left navigation bar. Then double-click the node you'd like to configure and click Agents > App Server Agent > Configure. This opens the App Server Configuration window.
2. At the node level, click the Use Custom Configuration button. Then either search for the max-length-batch-sql property and double-click it, or click the gray plus sign to create a new agent property if max-length-batch-sql does not already exist.
3. In the Create Agent Property window, provide values for the name, description, type, and value of the new property (see the example below). Click Save to close the window, then click Save again after adding the new configuration. A "saved successfully" message should appear in the top right corner of the window when done correctly.
4. Lastly, exit the App Server Configuration window and click the Reset button. This allows the agent to inherit the new changes.

For additional information, see the documentation on how to add a registered node property.

Example:
Name = max-length-batch-sql
Description = Increases character limit for SQL queries
Type = Integer
Value = 2000
Users with hundreds of EMS queues in an EMS Server instance may wish to group their metrics, using regular expressions, into one node or graph for analysis.

It is not possible to group these metrics in a custom dashboard, because tier- and application-level rollups of the individual EMS queue metrics reported at the node level are not available. As a workaround, AppDynamics supports wildcards in the REST API, so you can manipulate the API request to view custom metric data: in the metric browser, select "Copy REST URL" on one of the target metrics in the relevant metric folder, paste the URL into another browser tab, and change the common string in the URL to an asterisk (*). The response will then include each matching queue, which you can use as a custom metric. See our documentation on the AppDynamics REST API for details.

Related Links:
Extensions and Custom Metrics
Java SDK for Controller Rest API
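As a concrete sketch of that URL manipulation: an AppDynamics metric path is a |-separated string, and replacing one segment with * matches every folder at that level (for example, every EMS queue). The helper below performs the substitution on a copied metric path; the queue and metric names used here are illustrative, not taken from a real Controller.

```python
def wildcard_metric_path(metric_path, segment):
    """Replace every occurrence of one path segment with '*' in a |-separated metric path."""
    return "|".join("*" if part == segment else part for part in metric_path.split("|"))

# Example: match all queues at this level instead of a single one.
path = "Custom Metrics|EMS|OrderQueue|PendingMessageCount"
print(wildcard_metric_path(path, "OrderQueue"))
# -> Custom Metrics|EMS|*|PendingMessageCount
```

You would then paste the modified path back into the copied REST URL (URL-encoding the | characters as needed) before requesting it in the browser.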