All TKB Articles in Learn Splunk


When you get Error and Error/Min metrics for an information point

Whenever there is an unhandled exception and you create an information point on that class/method, the information point collects Error and Error/Min metrics. The code snippet looks like this:

    public void getResult() throws NumberFormatException {
        String name = "abc";
        Integer.parseInt(name);
    }

In the above case, only the information point shows Error and Error/Min metrics; you will not get any Error and Error/Min metrics for a BT, even if you create a POJO custom BT on the same class/method.

When you do not get Error and Error/Min metrics for an information point

Whenever you catch the exception and log it, you get Error and Error/Min metrics for the BT, not for the information point. The code snippet looks like this:

    public void getResult() {
        try {
            String name = "abc";
            Integer.parseInt(name);
        } catch (NumberFormatException e) {
            logger.log(Level.SEVERE, e.getMessage());
        }
    }

In the above case, you get Error and Error/Min metrics for the BT whether it is a POJO custom BT or a default BT, but an information point on the same class/method will not show Error and Error/Min metrics.

The purpose of this article is to show you how to capture thread dumps on different operating systems (Windows and Unix) when the EUM Server Processor becomes unresponsive or hangs for some reason.

How do we know in the first place that the EUM Server Processor is indeed hung or unresponsive?

1. If the OS is Unix, verify that the EUM Server is running using the command ps -ef | grep -i eum. You should see something like the following if the process exists:

ps -ef | grep -i eum
APPD 46167 1 5 Nov18 ? 04:27:34 /opt/appdynamics/eum/eum-processor/../jre/bin/java -server -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=50 -XX:+HeapDumpOnOutOfMemoryError -XX:NewRatio=1 -Xms1024m -Xmx4096m -DEUM_COMPONENT=processor -Dlogback.configurationFile=bin/logback.xml -Dcom.mchange.v2.c3p0.cfg.xml=bin/c3p0.xml -Dorg.owasp.esapi.resources=bin -classpath /opt/appdynamics/eum/eum-processor/lib/* com.appdynamics.eumcloud.EUMProcessorServer

2. On Windows, use Task Manager or Process Explorer to verify whether the EUM Server process exists.

3. Then verify with the netstat commands below that the EUM Server process shows up in the LISTEN state.

On Unix:

netstat -an | grep 7001
tcp6       0      0 :::7001                 :::*                    LISTEN

netstat -an | grep 7002
tcp6       0      0 :::7002                 :::*                    LISTEN

On Windows, use:

netstat -an | findstr 7001
netstat -an | findstr 7002

4. Finally, try pinging the EUM Server using the URLs below, either from a browser or with the curl utility:

http://<EUM_SERVER_HOST>:7001/eumaggregator/ping
http://<EUM_SERVER_HOST>:7001/eumcollector/ping

OR

curl -kv http://<EUM_SERVER_HOST>:7001/eumcollector/ping

For HTTPS/SSL:

https://<EUM_SERVER_HOST>:7002/eumaggregator/ping
https://<EUM_SERVER_HOST>:7002/eumcollector/ping

OR

curl -kv https://<EUM_SERVER_HOST>:7002/eumcollector/ping

5. If you get a "ping" response in the browser window, or an HTTP 200 OK status code with curl, you can connect to your EUM Server successfully and it is not hung. If there is no response, or the browser keeps spinning until the connection is eventually reset, the EUM Server process is not responding to your requests. In that case, capture a set of thread dumps (3 to 5), 30 to 60 seconds apart, so that you can later compare successive dumps and verify whether any EUM Server threads were hung.

If your on-premises EUM Server Processor is running on Windows, the following options are available for capturing thread dumps:

1. Ctrl+Break key combination: On Windows, pressing the Ctrl and Break keys together at the application console (standard input) causes the JVM to print a thread dump to the EUM Server's standard output.
Note: This helps when your EUM Processor server is running in the foreground, but not in the background.

2. jstack utility: jstack is available only in the JDK, not in the JRE, so you have to install a JDK in order to use it. jstack prints the stack traces of all Java threads for a given Java process.

jstack [option] pid

Reference: http://download.oracle.com/javase/6/docs/technotes/tools/share/jstack.html

3. Third-party utility: http://www.latenighthacking.com/projects/2003/sendSignal/

If your on-premises EUM Server Processor is running on Unix, these are some of the available options for capturing thread dumps:

1. kill command: Use the kill command; the output is written to the stdout of the process, which is usually the nohup.out file.

kill -3 <pid>
or
kill -QUIT <pid>

2. jstack utility: jstack is available only in the JDK, not in the JRE, so you have to install a JDK in order to use it. jstack prints the stack traces of all Java threads for a given Java process.

jstack [option] pid
or
jstack <pid> > threaddump1.txt   (redirecting the output to threaddump1.txt)

Reference: http://download.oracle.com/javase/6/docs/technotes/tools/share/jstack.html

Thread dump location:

1. If the thread dump was collected using kill -3 <pid>, the dump is written to stdout as mentioned earlier, for example:
/home/appdynamics/EUM/eum-processor/bin/nohup.out

2. If the thread dump was collected using jstack <pid> > output.txt, output.txt contains the dump.

Along with the collected thread dumps, upload the EUM Processor logs and its properties file to your support case so that the AppDynamics Support team can review them and find the reason for the unresponsiveness.

Log location: /home/appdynamics/EUM/logs/eum-processor.log
Properties location: /home/appdynamics/EUM/eum-processor/bin/eum.properties
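Collecting the recommended series of dumps can be scripted. Below is a minimal Bash sketch, assuming jstack is on the PATH and that the EUM Processor PID is passed as the first argument (both are assumptions for illustration); it captures five dumps 30 seconds apart:

    #!/bin/bash
    EUM_PID=$1   # pass the EUM Processor PID as the first argument

    for i in 1 2 3 4 5; do
        # write each dump to its own timestamped file
        jstack "$EUM_PID" > "threaddump_${i}_$(date +%H%M%S).txt"
        sleep 30
    done

If only a JRE is available, replacing the jstack line with kill -3 "$EUM_PID" should produce equivalent dumps in the process's nohup.out, as described above.
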
Symptoms

Transaction Analytics stops receiving data even though all configuration is in place. When analytics-agent.log is checked, the following stack trace may be captured:

[ERROR] [dw-15387 - POST /v2/sinks/bt] [c.a.a.p.http.AbstractPostReceiver] Request could not be processed as the input queue is full
...
[c.a.a.a.p.e.EventServicePublishStage] Transient error encountered due to the following cause: [Message could not be delivered because the REST resource rejected it]
com.appdynamics.analytics.shared.rest.exceptions.NotAcceptableRestException: For action [EVENT_UPSERT], you have reached the documents limit of [500000] for account [Account_name] and event type group [biz_txn_v.*]
at com.appdynamics.analytics.shared.rest.exceptions.RestExceptionFactory.makeException(RestExceptionFactory.java:47) ~[analytics-shared-rest.jar:na]

Once the daily limit is revised, the data starts showing again.

Diagnosis

This issue occurs because the number of transaction events reported in Transaction Analytics exceeds 500,000 per unit (before version 4.2.9) or 1,000,000 per unit (version 4.2.9 and later). The number of transactions is governed by your license, which sets the daily per-unit limit. Once the daily limit is reached, no further events are stored.

Analytics license basis and entitlements by version

Analytics licenses are based on the following:

Volume of data: Transaction Analytics is measured as a specific number of Business Transaction events.
Data retention time.

Entitlements for v4.2.9 and later

The following entitlements apply to AppDynamics software versions 4.2.9 and later:

AppDynamics for Transaction Analytics (SaaS): Instrument 1,000,000 business transaction events per 24-hour period, with access to the AppDynamics-hosted Events Service and a data maximum of 50 GB per account per day. Data retention is limited to 8 days; additional retention of 30, 60, or 90 days is available as an add-on.

AppDynamics for Transaction Analytics (on-premises): Instrument 1,000,000 business transaction events per 24-hour period (limited to 90 days of data storage). The customer is not entitled to access the AppDynamics-hosted Events Service.

Entitlements up to v4.2.9

The following entitlements apply up to v4.2.9:

AppDynamics for Transaction Analytics (SaaS): Instrument 500,000 business transaction events per 24-hour period, with access to the AppDynamics-hosted Events Service and a data maximum of 50 GB per account per day. Data retention is limited to 8 days; additional retention of 30, 60, or 90 days is available as an add-on.

AppDynamics for Transaction Analytics (on-premises): Instrument 500,000 business transaction events per 24-hour period (limited to 90 days of data storage). The customer is not entitled to access the AppDynamics-hosted Events Service.

Solution

This limit is why you see consistent gaps that fill in only after the daily limit is revised. To resolve the issue, obtain a license with higher unit limits, which allows a higher number of events to be logged, or reduce the volume of event data you report.
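To confirm quickly whether a reporting gap is caused by this limit, you can search the Analytics Agent log for the rejection message quoted above. A minimal sketch, assuming the log lives in the agent's logs directory (the path is an assumption; adjust to your installation):

    # count how many times the daily document limit was hit
    grep -c "reached the documents limit" <analytics-agent-home>/logs/analytics-agent.log

A non-zero count during the gap window points at the license limit rather than at a connectivity problem.
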
Symptoms

While accessing Database Monitoring collector data (or other tabs, such as Queries or Sessions), you may encounter one of the following errors:

Case 1

Error: We aren't able to load data from the event service. If this is a SaaS controller, please open a support ticket. If this is an on-premise installation, please make sure that the event service is running. See the documentation for instructions on starting the event service. For more details on this error, please see the controller server log.

Error: Error occurred while getting wait state information

Case 2

Error: The server encountered an internal error () that prevented it from fulfilling this request

Error: Error occurred while getting wait state information

Diagnosis

Try to reproduce the issue, then look in controller/logs/server.log to see whether the error thrown there is similar to one of the following:

Case 1

[#|2016-11-09T13:03:57.996-0800|SEVERE|glassfish3.1.2|com.sun.jersey.spi.container.ContainerResponse|_ThreadID=185;_ThreadName=Thread-5;|The RuntimeException could not be mapped to a response, re-throwing to the HTTP container com.appdynamics.analytics.shared.rest.exceptions.ClientException: Could not execute request to http://<events-service-host>:9080/v2/events/dbmon-wait-time/search

Root cause: "request timeout"

Case 2

[#|2016-10-25T06:06:40.023-0500|SEVERE|glassfish3.1.2|com.singularity.ee.controller.beans.analytics.client.AccountCreatingAnalyticsClient|_ThreadID=221;_ThreadName=Thread-5;|Could not find account [customer1_6819814d-14f6-46e9-8b2c-53d0770655f9] via lookup. java.lang.reflect.InvocationTargetException

Root cause: RestException(statusCode=401, code=Auth.Unauthorized, message=The supplied auth information is incorrect or not authorized., developerMessage=)

Solutions

When diagnosis indicates Case 1

First confirm whether the Events Service running locally on the Controller host is running or stopped.

NOTE | The Events Service in use may instead be configured on a separate dedicated host. Either way, make sure the Events Service is running and reachable from the Controller. An easy test is:

$> curl http://<events-service-host>:port/healthcheck?pretty=true

If it is stopped, start it as follows:

$> cd Controller/bin
$> ./controller.sh start-events-service

When diagnosis indicates Case 2

This issue usually arises because one of the following Controller properties (through which Database Monitoring is configured) is wrong:

appdynamics.analytics.local.store.controller.key (the Controller key mapped to the Events Service in use): the Controller setting for this property has a bad key.

appdynamics.analytics.local.store.url (the Events Service URL the Controller connects to for DB Monitoring events data): the Controller setting points this property to an incorrect Events Service.

What do I do if I am using the Events Service that accompanies the Controller package?

appdynamics.analytics.local.store.url: set to http://localhost:9080
appdynamics.analytics.local.store.controller.key: leave the property as the default key (pre-populated when the Controller was installed or upgraded)

What do I do if I am using the external Events Service?

appdynamics.analytics.local.store.url: set the same as appdynamics.analytics.server.store.url
appdynamics.analytics.local.store.controller.key: set the same as appdynamics.analytics.server.store.controller.key
ad.accountmanager.key.controller in events-service/conf/events-service-api-store.properties: set the same as appdynamics.analytics.server.store.controller.key
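When checking the Case 2 key mismatch on an external Events Service host, it can help to print the key the Events Service actually has on disk and compare it with the value of appdynamics.analytics.server.store.controller.key shown in the Controller admin console. A minimal sketch, assuming the Events Service home path below (an assumption; adjust to your installation):

    # print the controller key the Events Service is configured with
    grep "^ad.accountmanager.key.controller" \
        <events-service-home>/conf/events-service-api-store.properties

If the printed value differs from the Controller-side property, align them and restart the Events Service.
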
Why am I seeing "EUM Account <EUM_ACCOUNT_NAME> with key <EUM_ACCOUNT_KEY> could not be provisioned in the EUM PROCESSOR"?

Symptoms

When trying to re-provision the on-premise EUM license, the above error message is seen:

./bin/provision-license <path_to_license_file>

Complete stack trace:

Provisioning license from license file Unable to add global account names to accounts table for eum_account:<EUM_ACCOUNT_NAME> java.sql.SQLException: Connections could not be acquired from the underlying database!
        at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:118)
        at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:529)
        at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
        at com.appdynamics.eumcloud.data.sql.SqlDBManager.getConnection(SqlDBManager.java:69)
        at com.appdynamics.eumcloud.OnPremLicenseProvisioner.getGlobalAccountNameFromControllerDb(OnPremLicenseProvisioner.java:127)
        at com.appdynamics.eumcloud.OnPremLicenseProvisioner.provisionFromLicenseFile(OnPremLicenseProvisioner.java:157)
        at com.appdynamics.eumcloud.OnPremLicenseProvisioner.main(OnPremLicenseProvisioner.java:50)
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
        at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
        at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
        at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
        at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
        ... 5 more
AccountRegistrationResult: isValid:false, isAlreadyRegistered:true, description:Account <EUM_ACCOUNT_NAME> already registered, EUM Account <EUM_ACCOUNT_NAME> with key <EUM_ACCOUNT_KEY> could not be provisioned in the EUM PROCESSOR, error : Account <EUM_ACCOUNT_NAME> already registered

Diagnosis

This issue occurs because the EUM account name in the new on-premise license file is already present in the EUM database.

Solution

To resolve this issue, follow these steps (a consolidated sketch of the database steps appears after this list):

1. Make a copy of your license file in <EUEM>/eum-processor/bin.
2. Make sure <EUEM>/eum-processor/eum.properties has your Controller root password as the value of onprem.controllerDbPassword.
3. Navigate to <controller>/bin and enter ./controller.sh login-db
4. Enter use eum_db;
5. Enter delete from accounts;
6. Enter exit
7. From the eum-processor directory, run the provisioning script.
   On Linux: ./bin/provision-license <path_to_license_file>
   On Windows: bin\provision-license.bat <path_to_license_file>
8. Restart the EUM Processor: from <EUEM>/eum-processor, enter ./bin/eum.sh stop and then ./bin/eum.sh start
9. Disable and re-enable EUM under Configuration > Instrumentation > End User Monitoring.

Once this is done, the updated license should be reflected on the License page, and EUM data should start flowing again.
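The database steps above (3-6) can be consolidated into one shell invocation. A minimal sketch, assuming controller.sh login-db wraps the MySQL client and therefore accepts statements on stdin (this piping behavior is an assumption; the interactive steps above are the supported path):

    cd <controller>/bin
    # remove the previously registered EUM accounts, then re-provision
    ./controller.sh login-db <<'SQL'
    use eum_db;
    delete from accounts;
    SQL

    cd <EUEM>/eum-processor
    ./bin/provision-license <path_to_license_file>
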
Symptoms

The EUM Processor is configured for HTTPS and the HTTPS port is specified in the configuration. After starting the EUM Server, the process comes up on the HTTP port but does not listen on the HTTPS port.

Diagnosis

Any process needs a keystore to perform secure communication over HTTPS, so a keystore must be configured and its entries provided in the EUM properties file. If the keystore property is not specified, or the keystore cannot be accessed for any reason, the processor will not start listening on the HTTPS port.

Solution

To ensure that EUM listens on the HTTPS port as well, check the following criteria:

1. A proper keystore with valid certificates has been created for the EUM Processor (see the keytool sketch after this list).
2. The keystore file and password are specified in the eum.properties file. The corresponding properties are:

processorServer.keyStoreFileName=mycustom.keystore
processorServer.keyStorePassword=mypassword

3. The EUM Processor has read and execute privileges on the keystore file.
4. The HTTPS port specified in the eum.properties file is available for the EUM Processor.
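If you need to create a keystore to satisfy point 1, the JDK's keytool can generate one. A minimal sketch, assuming a self-signed certificate is acceptable for testing and that the alias name eum is arbitrary (for production, import a CA-signed certificate instead):

    # generate a keystore containing a self-signed key pair
    keytool -genkeypair -alias eum -keyalg RSA -keysize 2048 \
        -validity 365 -keystore mycustom.keystore

    # verify the keystore contents and that the password works
    keytool -list -keystore mycustom.keystore

Place the resulting file where processorServer.keyStoreFileName points, and set processorServer.keyStorePassword to the password chosen above.
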
Prerequisites

You have the following type(s) of license:

Browser EUM: Lite
Mobile EUM: Lite
Synthetic EUM: Pro

Problem

When you try to create an EUM application for Synthetic Monitoring, you receive a "You do not have a license for Synthetic Monitoring" error. In reviewing the license, you see only the Browser and Mobile licenses. The Synthetic license does not appear because it has not been synced.

Solution

The "You do not have a license for Synthetic Monitoring" error indicates that the Controller has not synced with the EUM Cloud for the Synthetic licenses, so the Controller needs to be force-synced to the EUM Cloud.

Because the EUM App Creation Wizard is throwing this error, you cannot create the app from the wizard itself. Instead, go to an existing application in APM and enable EUM from there; this forces the Controller to sync with the EUM Cloud, and you can then use that application for Synthetic EUM:

1. Go to an existing application in APM.
2. Click Configuration in the sidebar.
3. Under Instrumentation, click the End User Monitoring tab.
4. Check the Enable End User Monitoring checkbox.
5. Click the Save button.

When completed, the Controller will force a sync back to the EUM Cloud, and you will also be able to create applications for Synthetic from the EUM App Creation Wizard. At that point you can either keep using the EUM application you just configured for Synthetic, or create a new one from the wizard. If you don't want to use the application you just configured, return to its configuration panel following the steps above and uncheck the Enable End User Monitoring checkbox.

Symptoms

In Analytics Advanced Search using ADQL, when a metric is created from a search's output, the metric does not pick up later changes to the search query and therefore keeps reporting data for the old query.

For example: a change was made to an Advanced Transaction Analytics saved search that had a saved metric, and the metric continued to return results of the old query. Specifically, going from a broad search:

SELECT count(*) FROM bt_resp_time_4 WHERE Datacenter = 'XYZ' AND CustomerName = 'XYZ' AND PassFail = 'Pass'

to a more exclusive one:

SELECT count(*) FROM bt_resp_time_4 WHERE Datacenter = 'XYZ' AND CustomerName = 'XYZ' AND ApplicationName = 'Web Store' AND BT = 'Login' AND PassFail = 'Pass'

the results went from ~200 requests per minute to ~20 on the advanced query, but the counts for the saved metric stayed the same.

Diagnosis

This behavior is seen because the metric definition, once created, is persisted in the database and cannot be modified merely by changing the associated search query.

Solution

The only way to resolve this is to delete the metrics that use the old search query and re-create them with the same names. This forces the new search query to be persisted in the database under the same name.

Q1. If the mobile app (device) is offline for a long time, does the agent delete older data?
Ans: Yes.

Q2. If the mobile app (device) is offline for a long time, how long does the agent keep older data, and how much older data can the agent hold? If there are any limitations, what are they?
Ans: The beacons are "persisted" on device storage and are retained as long as the app is installed on the device. The agent currently stores up to 200 beacons (the data for one "event") on the device if it cannot send them out. Older beacons are dropped and newer ones are retained.

Q3. If the mobile app (device) has been offline for a long time and the agent is holding a lot of data, how does the agent send the older data to the EUM Processor (EUM Cloud) once the device comes back online?
Ans: The beacons are "batched", then gzipped, and then sent.

Q4. Does the agent send older data all at once?
Ans: All data, old and new, is sent in the "batch" on every attempt.

Q5. Or is older data subdivided and sent over several transmissions?
Ans: No, the agent just re-batches everything on every attempt.

Q6. If the mobile app (device) is offline for a long time, how much data does the agent send?
Ans: Each beacon is roughly 400 bytes, and can be longer depending on the URLs involved. Beacons are ASCII JSON, so they compress very well.

Q7. How often does the agent send?
Ans: The agent attempts to send all known beacons whenever there is network activity, or after about 5 minutes have elapsed since the last failed attempt.

Symptoms

When trying to create a new Analytics search, the dashboard or UI goes blank (white) even though the Analytics license is correct. This issue is typically seen when Analytics is enabled for the first time and a configuration issue prevents the desired Analytics data from appearing.

Diagnosis

The analytics-agent.log shows the following message:

analytics-agent / Connection to [http://<events-service-host>:<events-service-port>/v1]: (unhealthy) The supplied auth information is incorrect or not authorized.

The server.log shows the following message:

com.singularity.ee.controller.ui.services.analytics.AAnalyticsUiService|_ThreadID=83;_ThreadName=Thread-5;|Could not find account [Global_account_name] via lookup

The Events Service logs report healthy. Collect the logs of the related components, i.e. the Events Service, the Controller's server.log, and analytics-agent.log. Once the logs are collected, try to isolate the issue; in this particular case it was due to wrong auth information being exchanged between components.

Solution

To resolve this issue, verify the auth information being passed by following these steps:

1) In the analytics-agent.properties file, set the following property to the correct value:
http.event.endpoint=http://<events-service-host>:<events-service-port>

2) Verify that http.event.accessKey matches the account access key value on the License page.

3) Because the standalone Events Service is recommended for use with Analytics, make sure the following key values match:
   a) ad.accountmanager.key.controller in events-service-api-store.properties
   b) appdynamics.analytics.server.store.key in the Controller administration console (admin.jsp)

4) Verify that the value of appdynamics.analytics.server.store.url in the Controller administration console points to the exact Events Service: http://<events-service-host>:<events-service-port>

Once all of the above configuration is in place, restart the Events Service and the Analytics Agent to bring the changes into effect. When everything is back up and running, the issue should no longer occur.
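To confirm the fix took effect, you can health-check the Events Service and watch the agent log for the unhealthy marker disappearing. A minimal sketch, assuming the healthcheck is exposed on the admin port (9081 in the examples elsewhere in this document) and an assumed agent log path (adjust both to your installation):

    # Events Service should answer with a healthy status report
    curl "http://<events-service-host>:9081/healthcheck?pretty=true"

    # the agent log should no longer report the connection as unhealthy
    tail -n 200 <analytics-agent-home>/logs/analytics-agent.log | grep -i "unhealthy"
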
When enabling EUM for an application from the EUM configuration window, the error "The server encountered an internal error () that prevented it from fulfilling this request" appears.

In this article...
Symptoms: An error message appears in the EUM configuration window
Diagnosis: Analyze the Controller machine's server.log
Solution: Import a new EUM certificate to the Controller trust store

Symptoms: Error in the EUM configuration window

When trying to enable EUM for an application from the EUM configuration window, we get the following error:

Error: The server encountered an internal error () that prevented it from fulfilling this request.

Diagnosis: Analyze the Controller machine's server.log

After analyzing the server.log file on the Controller machine, we can see the exception below while connecting to EUM:

Communication failure with service (https://agg.eum-appdynamics.com/v2/account/xxxxxxxxxxxxxxxxxxxx/license/terms): javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.

This error occurs when the Controller trust store does not have the EUM client certificate, resulting in a failed validation.

Solution: Import a new EUM certificate to the Controller trust store

Follow the steps below to download the EUM certificate and import it into the Controller trust store:

1. Access the following URL in the browser:
https://agg.eum-appdynamics.com/eumaggregator/get-version
For an on-premise EUM Server, access this URL instead:
https://<EUMHost>:7002/eumaggregator/get-version
If you are using an alternate port for HTTPS, change the value accordingly.

2. Click the lock icon in the URL bar to display the certificate details.

3. Export the certificate for the EUM Server and transfer it to the Controller host. To export the certificate from the command line, run the following command to write the certificate to a file:

keytool -J-Dhttps.proxyHost=<proxy_host> -J-Dhttps.proxyPort=<proxy_port> -printcert -rfc -sslserver <eum_host>:<eum_ssl_port> 2>/dev/null > certs.pem

If you are not using a proxy server to connect from the Controller to the EUM Server, you can omit the proxy parameters:

-J-Dhttps.proxyHost=<proxy_host> -J-Dhttps.proxyPort=<proxy_port>

The certs.pem file generated by this command may contain multiple certificates presented by the server (server certificate, proxy certificate, etc.). Save each individual certificate into a separate file, such as file1.pem, file2.pem, and so on. The individual certificates are enclosed as follows:

-----BEGIN CERTIFICATE-----
.....
-----END CERTIFICATE-----

4. Navigate to the <AppDynamicsHome>/appserver/glassfish/domains/domain1/config directory.

5. Use the following keytool command to import the certificate into the Controller trust store:

$JAVA_HOME/bin/keytool -import -trustcacerts -alias <alias> -file <certificate file> -keystore cacerts.jks

Run the command above for each of the certificates you saved in step 3.

6. Restart the app server.
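After importing, it is worth confirming that each certificate actually landed in the trust store before restarting. A minimal sketch, assuming you are still in the domain1/config directory and that <alias> matches what you used during import (keytool will prompt for the trust store password):

    # list a single imported certificate by its alias
    $JAVA_HOME/bin/keytool -list -keystore cacerts.jks -alias <alias>

    # or list everything and filter
    $JAVA_HOME/bin/keytool -list -keystore cacerts.jks | grep -i <alias>
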
Symptoms

If the Events Service is not stopped gracefully before restarting the system, or if the Events Service process is killed abruptly, the next Events Service start will fail. The Events Service log will have the entry below:

[ERROR] [main] [c.a.common.framework.AbstractApp] Severe error occurred while starting application [events-service-api-store]. Shutdown procedure will commence soon java.lang.RuntimeException: Unable to create file [/opt/AppDynamics/Controller/events_service/bin/../events-service-api-store.id] to store the process id because it already exists. Please stop any currently running process and delete the process id file

Diagnosis

A graceful shutdown of the Events Service removes events-service-api-store.id, and the file is created again at the next startup. When the Events Service is shut down abruptly, the file is not removed, so at the next startup the Events Service cannot create it. This causes the startup failure.

Solution

Remove the events-service-api-store.id file located in the <EventsServiceHome> directory, and then start the Events Service process (see the sketch below).

To avoid this scenario, always perform a graceful shutdown of the Events Service.
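A minimal shell sketch of the recovery, assuming an embedded Events Service started via controller.sh, as shown elsewhere in this guide (for a standalone cluster, use your normal start command instead):

    # make sure no Events Service process is still running before removing the file
    ps -ef | grep -i events-service

    rm <EventsServiceHome>/events-service-api-store.id

    cd <Controller_Install_Dir>/bin
    ./controller.sh start-events-service
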
Symptoms

We came across an issue where Database Monitoring was not reporting events data and instead threw the error message "We aren't able to load data from the event service" (shown in an image in the original article). The Controller's server.log and that error message indicate that the Events Service process might have stopped or died, and therefore needs a restart.

Snippet from the Controller's server.log:

0500|SEVERE|glassfish3.1.2|com.singularity.ee.controller.beans.ExceptionHandlingInterceptor|_ThreadID=120;_ThreadName=Thread-5;|Encountered runtime exception com.appdynamics.analytics.shared.rest.exceptions.ClientException: Could not execute request to http://localhost:9080/v2/events/dbmon-wait-time
at com.appdynamics.analytics.shared.rest.client.utils.GenericHttpRequestBuilder.getResponse(GenericHttpRequestBuilder.java:224)
at com.appdynamics.analytics.shared.rest.client.utils.GenericHttpRequestBuilder.executeAndReturnRawResponseString(GenericHttpRequestBuilder.java:238)
at com.appdynamics.analytics.shared.rest.client.eventservice.DefaultEventServiceClient.registerEventType(DefaultEventServiceClient.java:132)

Restarting the Events Service didn't help; the issue persisted with the same SEVERE message in the logs as shown in the snippet above.

<Controller_Install_Dir>/bin/controller.sh start-events-service

Diagnosis

As part of the troubleshooting checklist, we carried out the following steps in sequence to find the root cause of the issue.

1. We could infer that the process existed from the output of the ps -ef | grep -i event-service command:

52676 11537 1 0 03:58 pts/3 00:00:11 /opt/AppDynamics/Controller/jre/bin/java -Xmx6144m -Xms6144m -Xss256k -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintPromotionFailure -verbose:gc -XX:GCLogFileSize=256m -XX:NumberOfGCLogFiles=4 -XX:+UseGCLogFileRotation -Xloggc:/opt/AppDynamics/Controller/events_service/bin/../logs/events-service-api-store-gc.log -DAPPLICATION_HOME=/opt/AppDynamics/Controller/events_service/bin/.. -classpath /opt/AppDynamics/Controller/events_service/bin/../lib/* com.appdynamics.analytics.processor.AnalyticsService -p /opt/AppDynamics/Controller/events_service/conf/events-service-api-store.properties -y /opt/AppDynamics/Controller/events_service/bin/../conf/events-service-api-store.yml
52676 19976 9765 0 04:38 pts/3 00:00:00 grep events-service

2. We then checked the health state of the Events Service, but it didn't respond to any request:

curl http://<event-service-host>:9081/healthcheck?pretty=true

3. We then checked with netstat whether the Events Service host and port were bound correctly, but we did not see the LISTEN state for port 9080; the command returned nothing:

netstat -anp | grep 9080

4. We then realized that the process might have hung or become unresponsive during startup, and so captured five sets of thread dumps to find the hung thread and where exactly it was stuck. We used kill -3 52676 (kill -3 <PID>) to capture the thread dumps. NOTE that the Java thread dump output goes to stdout, so it is written to the nohup.out stdout file of the Events Service.

5. Upon analyzing all the thread dumps, we found that the main thread was hung, as seen below: it was stuck in the native layer (sun.nio.fs.UnixNativeDispatcher.stat0(Native Method)) and made no progress across the successive thread dumps.

Stack trace of the hung thread:

"main" #1 prio=5 os_prio=0 tid=0x00007fdb84011000 nid=0xe280 runnable [0x00007fdb88425000]
java.lang.Thread.State: RUNNABLE
at sun.nio.fs.UnixNativeDispatcher.stat0(Native Method)
at sun.nio.fs.UnixNativeDispatcher.stat(UnixNativeDispatcher.java:286)
at sun.nio.fs.UnixFileAttributes.get(UnixFileAttributes.java:70)
at sun.nio.fs.UnixFileStore.devFor(UnixFileStore.java:55)
at sun.nio.fs.UnixFileStore.<init>(UnixFileStore.java:70)
at sun.nio.fs.LinuxFileStore.<init>(LinuxFileStore.java:48)
at sun.nio.fs.LinuxFileSystem.getFileStore(LinuxFileSystem.java:112)
at sun.nio.fs.UnixFileSystem$FileStoreIterator.readNext(UnixFileSystem.java:213)
at sun.nio.fs.UnixFileSystem$FileStoreIterator.hasNext(UnixFileSystem.java:224)
- locked <0x000000065610cb50> (a sun.nio.fs.UnixFileSystem$FileStoreIterator)
at org.elasticsearch.env.NodeEnvironment.getFileStore(NodeEnvironment.java:267)
at org.elasticsearch.env.NodeEnvironment.access$000(NodeEnvironment.java:62)
at org.elasticsearch.env.NodeEnvironment$NodePath.<init>(NodeEnvironment.java:75)
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:140)
at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:165)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
at org.elasticsearch.node.NodeBuilder.node(NodeBuilder.java:166)
at com.appdynamics.analytics.processor.elasticsearch.node.single.ElasticSearchSingleNode.<init>(ElasticSearchSingleNode.java:49)
at com.appdynamics.analytics.processor.elasticsearch.node.single.ElasticSearchSingleNode$$FastClassByGuice$$7b182632.newInstance()
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
- locked <0x00000006534a90f0> (a java.lang.Class for com.google.inject.internal.InternalInjectorCreator)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.MembersInjectorImpl$1.call(MembersInjectorImpl.java:75)
at com.google.inject.internal.MembersInjectorImpl$1.call(MembersInjectorImpl.java:73)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
at com.google.inject.internal.MembersInjectorImpl.injectAndNotify(MembersInjectorImpl.java:73)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:60)
at com.google.inject.internal.InjectorImpl.injectMembers(InjectorImpl.java:944)
at com.appdynamics.common.framework.Loaders.internalPrepareAndPreStart(Loaders.java:181)
at com.appdynamics.common.framework.Loaders.loadAndInitializeModules(Loaders.java:127)
at com.appdynamics.common.framework.AbstractApp.run(AbstractApp.java:311)
at com.appdynamics.common.framework.AbstractApp.run(AbstractApp.java:59)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:42)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:76)
at io.dropwizard.cli.Cli.run(Cli.java:70)
at io.dropwizard.Application.run(Application.java:72)
at com.appdynamics.common.framework.AbstractApp.callRunServer(AbstractApp.java:267)
at com.appdynamics.common.framework.AbstractApp.runUsingFile(AbstractApp.java:261)
at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:248)
at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:167)
at com.appdynamics.analytics.processor.AnalyticsService.main(AnalyticsService.java:71)

6. Further reading of the stack trace clearly indicated some kind of file system issue, so we checked with the OS admin for any hung NFS mount.

7. The OS admin confirmed that an NFS mount was indeed hung; this was caused by the server being migrated to a new host, which left the NFS mount hanging.

8. Unmounting and remounting with the correct mount point resolves a hung NFS mount.

Solution

The solution was to fix the hung NFS mount. The Events Service process then started up fine, and DB Monitoring began reflecting the events data correctly.
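Steps 6 and 7 above can be approximated from the shell before involving the OS admin, since a hung NFS mount blocks stat() calls indefinitely. A minimal sketch, assuming GNU coreutils timeout is available (the 5-second threshold is arbitrary):

    # list NFS mounts on the host
    mount -t nfs

    # probe each mount point; stat blocks forever on a hung NFS mount,
    # so a timeout strongly suggests that mount is the culprit
    timeout 5 stat /path/to/nfs/mount >/dev/null || echo "/path/to/nfs/mount appears hung"
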
This document lists the steps to migrate data from the Controller's in-built Events Service (Elasticsearch) to a clustered Events Service, with minimal downtime and no data loss.

Assumption: If you are migrating a 4.1 cluster, first convert the single node.

Required tools: curl or an equivalent utility (e.g., Postman)

Node 1: Controller in-built Events Service
Nodes 2, 3, 4, ..., n: new n-node cluster

Install a new cluster

For 4.1, set up a cluster manually as described here. For 4.2, use the Platform Admin utility to install a cluster as described here. Then:

1. Shut down all the nodes.
2. Once an n-node cluster is successfully created, change the following in NODE1's conf/events-service-api-store.properties to match the parameters of the rest of the nodes (NODE2-n):
ad.es.node.minimum_master_nodes
ad.es.event.index.shards
ad.es.event.index.replicas
ad.es.metadata.replicas
ad.es.rolling.maxShardsPerIndex
3. Set the following on NODE2-n to the same values as NODE1:
ad.accountmanager.key.eum
ad.accountmanager.key.controller
ad.accountmanager.key.ops
4. Set the following on all nodes, NODE1-n:
ad.es.node.unicast.hosts=NODE2:9300,NODE3:9300,...NODEn:9300,NODE1:9300
5. Choose two more master nodes besides NODE1, say NODE2 and NODE3, and set the following on the remaining machines (NODE4-n), which will act as slaves:
ad.es.node.master=false
6. Empty the data directory on all the nodes NODE2-n.
7. Start all the nodes NODE1-n. This creates an n-node cluster and replicates the data equally across the nodes of the cluster.
8. Check for sanity:
http://NODE1:9200/_cat/shards?v — you should see all the shards in the STARTED state
http://NODE1:9200/_cat/indices?v — you should see all the indices in the green and open state
9. Once the data is replicated, run the following on any of the master nodes:
curl -XPUT localhost:9200/_cluster/settings -d '{ "transient" :{ "cluster.routing.allocation.exclude._ip" : "<NODE1_IPADDRESS>" } }'
This should execute successfully. A while after running this command, you should see the following when run from any of the master nodes:
curl http://localhost:9200/_cat/allocation?v
shards disk.used disk.avail disk.total disk.percent host            ip            node
0      5.2gb     4.7gb      10gb       52           linux-629i.site 172.16.87.141 NODE1
The number of shards for NODE1 should be 0, which means all the data from NODE1 has now been moved to the other nodes.
10. Shut down nodes NODE1-n.
11. Reconfigure the cluster to be an (n-1)-node cluster. Set the following on nodes NODE2-n:
ad.es.node.unicast.hosts=NODE2:9300,NODE3:9300,....,NODEn:9300
Reconfigure NODE4 to have:
ad.es.node.master=true
12. Start nodes NODE2-n.
13. Check the status of the cluster again:
http://NODE2:9200/_cat/shards?v — you should see all the shards in the STARTED state
http://NODE2:9200/_cat/indices?v — you should see all the indices in the green and open state

Reconfigure the EUM config to point to this cluster

Change the following in the EUM's eum.properties file to point to the new cluster, e.g.:
analytics.serverScheme=http
analytics.serverHost=172.16.87.134
analytics.port=180

Reconfigure the Controller

Log on to admin.jsp and change the following keys to point to the new cluster, e.g.:
appdynamics.analytics.local.store.url=http://172.16.87.134:180
appdynamics.analytics.server.store.url=http://172.16.87.134:180
eum.es.host=http://172.16.87.134:180
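Two follow-up checks can make step 9 and the final cutover less error-prone. A minimal sketch, using the same Elasticsearch REST endpoints as above; clearing the transient exclusion after NODE1 is decommissioned is common Elasticsearch housekeeping, not something mandated by the migration itself:

    # overall cluster state: status should be green with no relocating shards
    curl "http://NODE2:9200/_cluster/health?pretty"

    # optional: clear the allocation exclusion added in step 9,
    # so the setting does not linger in the cluster state
    curl -XPUT "http://NODE2:9200/_cluster/settings" -d '{
      "transient": { "cluster.routing.allocation.exclude._ip": "" }
    }'
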
What do I need to know about updating a MAC address?

AppDynamics licenses for on-premises Controllers are tied to the MAC address of the machine on which the Controller is installed. In rare cases, you may need to move the Controller and its license to a new machine. In such an event, you can update the MAC address associated with your license.

Table of Contents
Who manages the MAC address and where do they find it?
How do I work with MAC address change limits?
Additional Resources

Who manages the MAC address and where do they find it?

All license users for your account's license can view, download, and edit the MAC address for any active licenses. The MAC address is visible to them on the Company Overview screen.

1. Under the Actions column, click the license's corresponding eye icon. The On-Premises License dialog will appear.
2. Click the Edit link next to the MAC address. The Edit MAC Address dialog will open.
3. In the Enter MAC Address dialog box, enter the new MAC address, then click the Save button.
4. Once you've saved the new MAC address, download the new license file and apply it to the Controller.

How do I work with MAC address change limits?

Note that the MAC address can be changed up to 12 times within a 12-month period. If you need to change it more often than that, contact the AppDynamics Licensing Team by emailing licensing-help@appdynamics.com.

Additional Resources
Apply or Update a License File, under Update MAC Address
Accounts Overview, under License Admin, Subscriptions

Sample dashboard with key metrics for a Business Transaction (BT)

This sample dashboard provides key metrics for a Business Transaction, including:

Calls
Response Time
Errors
Slow, Very Slow, and Stall counts

This dashboard is well suited to running as a scheduled report. The default time range is one day, but that can be modified. A sample generated PDF is attached.

How do I use this sample dashboard?

To use this dashboard, edit the attached JSON file in a text editor (these replacements can also be scripted; see the sed sketch at the end of this article):

1. Replace "Homepage" with your target Business Transaction name (17 occurrences).
2. Replace "SERVLET" with your target Business Transaction type (e.g., "WEB", "WEB_SERVICE", "ASP_DOTNET", etc.) (6 occurrences).
3. Replace "ECommerce-Services" with the tier that includes the target Business Transaction (24 occurrences).
4. Replace "ECommerce" with your application name (30 occurrences).

How do I install the modified dashboard?

Install the modified dashboard configuration on your Controller:

1. Log in to your Controller UI (4.1 and higher).
2. Navigate to the Custom Dashboards list screen.
3. Import the JSON file.
4. If there are any errors, rebind the metrics that correspond to your particular application. To do this, edit each displayed widget in the dashboard, select your application, and then confirm or select the metric for that display.

For detailed instructions for working with custom dashboard widgets, see Create and Manage Custom Dashboards and Templates.
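The four find-and-replace edits can be done in one pass with sed. A minimal sketch, assuming the attached file is saved as bt-dashboard.json and the angle-bracket values are your own names (all of these are placeholders; names containing slashes would need a different sed delimiter):

    sed -e 's/Homepage/<YourBTName>/g' \
        -e 's/SERVLET/<YourBTType>/g' \
        -e 's/ECommerce-Services/<YourTierName>/g' \
        -e 's/ECommerce/<YourAppName>/g' \
        bt-dashboard.json > bt-dashboard-custom.json

Note the ordering: the tier replacement must run before the application replacement, since "ECommerce-Services" contains "ECommerce" as a substring.
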
This sample Absolute Layout dashboard provides key metrics for an Application, including Response Time, Errors, and Calls. The key metrics are shown as a chart with baseline, and the aggregate value is shown as a number. Status lights indicate the Response Time and Error health of the target application.

To use this dashboard, edit the attached JSON file in a text editor:

1. Replace "ECommerce" with your application name (21 occurrences).
2. Change the URL of the logo in the top left corner to suit your preference.

Install the modified dashboard configuration on your Controller:

1. Log in to your Controller UI (4.1 and higher).
2. Navigate to the Custom Dashboards list screen.
3. Import the JSON file.
4. If there are any errors, rebind the metrics that correspond to your particular application. To do this, edit each displayed widget in the dashboard, select your application, and then confirm or select the metric for that display.

If you need detailed instructions for working with custom dashboard widgets, please visit docs.appdynamics.com and view Create Custom Dashboards.
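As with the BT dashboard above, the single substitution can be scripted. A minimal sketch, assuming the attached file is saved as app-dashboard.json (a placeholder filename):

    sed 's/ECommerce/<YourAppName>/g' app-dashboard.json > app-dashboard-custom.json
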
What abilities come with the License Admin role?

A License Admin is a special type of user on AppDynamics.com. These users are associated with one or more specific licenses and, through this association, gain the ability to learn about the usage of those licenses. Once designated as a License Admin, these users can also register other new users on AppDynamics.com, granting them access to licenses drawn from those they themselves can access.

Table of Contents
How do I become a License Admin?
How do I delegate the License Admin role to other users at my company?
How do I grant access to an existing user?
How do I add a new user with license access?
Related resources

How do I become a License Admin?

There are two ways to become a License Admin:

1. Through the sales process with AppDynamics. Whenever licenses are purchased from AppDynamics, gathering contacts is part of the sales and provisioning process, and during provisioning AppDynamics registers the first License Admins.
2. Another License Admin at your company grants you the role. Once an individual user is granted this right, the (new) License Admin may also delegate this capability to others using the Account Management Portal.

How do I delegate the License Admin role to other users at my company?

As a License Admin, you have access to the user management functions in the Account Management Portal. To access these functions, sign in with your AppDynamics.com user account at accounts.appdynamics.com/users. Remember that, though it may share the same email, this account is distinct from your Controller account, should you have one.

If you are already logged into AppDynamics.com, you can find the user management link in the left-hand navigation pane of the company context pages:

1. Click your name in the upper right-hand corner and choose Subscriptions from the drop-down. You'll see a listing of the company subscriptions to which you're currently assigned as License Admin.
2. Choose User Management from the left-hand navigation pane.

You can assign this role to others in one of two ways: grant an existing user access to one or more of your licenses, or add a new user with access to one or more of your licenses.

How do I grant access to an existing user?

The user management page displays all of your company's users. From this list, you can browse or search for users. Once you have found the user to whom you want to grant license access, select them and assign the license as follows:

1. Click the user's row to select them.
2. Choose the "Edit User" function (pencil) from the action bar at the top.
3. Click the "License Admin" checkbox to enable it.
4. Click the drop-down to display all the licenses that you may assign.
5. Choose one or more licenses by clicking the checkbox of each.
6. Click the Save button.

The user will be granted access to the selected license(s) and will receive an email informing them of their access.

How do I add a new user with license access?

From the user management list, you can add a new user to the system and assign their license access:

1. From the User Management page, click the "+" (Add User) button in the action bar at the top of the listing.
2. Complete the basic information about this user. You are required to provide the user's email address. The remaining information is optional and can be changed later if needed. Don't worry if you don't know the user's first or last name: each individual will be required to provide that information when they complete their account profile.
3. Choose one or more licenses to assign to the new user. Notice that the License Admin role is automatically checked. You must click the dropdown to select from among the available licenses.
4. Click the Save button to finish adding the user to your company's account.

The new user will be added to your company user list page with a status of Pending. The individual will receive a welcome email with a link that enables them to complete their profile and password. When they complete this step, their account status will change to Active.

See the "How do I manage accounts.appdynamics.com users as an Admin?" article to learn more about the capabilities of AppDynamics Accounts user management.

Related resources
Documentation: Cisco AppDynamics SaaS User Management
Knowledge Base: How do I manage Accounts Management Portal users as an Admin?

Where can I find information on AppDynamics’ integration with VMware Tanzu (formerly PCF)?

The AppDynamics integration with VMware Tanzu Application Service (formerly Pivotal Cloud Foundry, or PCF) lets you easily deploy AppDynamics-monitored applications on the VMware Tanzu platform and gather performance and infrastructure metrics. The Application Monitoring for VMware Tanzu tile is at the center of the AppDynamics integration with VMware Tanzu. With this tile, you have the full benefit of AppDynamics APM for your services deployed on any VMware Tanzu platform, including the ability to correlate these services with business data in real time.

In this article...
Integration tile for AppDynamics Application Monitoring for VMware Tanzu
Resources
Installation and end-to-end workflows
Release Notes and FAQs
Recommended reading
Hands-on training: recorded webinars

Integration tile for AppDynamics Application Monitoring for VMware Tanzu

For performance monitoring of applications running on Cloud Foundry:

APM is provided through AppDynamics support inside the standard Cloud Foundry buildpacks and the Service Broker deployed through the tile.
An Extension Buildpack is provided to help instrument Java and .NET HWC applications.

For detailed information on the integration and Application Monitoring for VMware Tanzu, visit these resources:
AppDynamics Application Performance Monitoring for VMware Tanzu
AppDynamics Service Broker
AppDynamics Extension Buildpack

How does it work?

To simplify APM setup, this tile delivers a service broker to support an AppDynamics marketplace service. The Controller parameters are configured in the tile and automatically published to the marketplace when the tile is installed.

Resources

Installation and end-to-end workflows

Follow the steps outlined in the resources below to install and configure Application Monitoring (APM):
Installing and Configuring AppDynamics
Using AppDynamics

This page provides more information on end-to-end workflows using sample Java, .NET, Python, and PHP applications:
AppDynamics Application Performance Monitoring for VMware Tanzu Workflow

NOTE | Upgrades from AppDynamics v1.x are not supported. If you previously installed AppDynamics v1.x, uninstall it and install the latest version.

Release Notes and FAQs

To stay up to date on the latest functionality in the AppDynamics/VMware Tanzu integration tiles, see the following Release Notes, as well as FAQs about using AppDynamics with VMware Tanzu in the Pivotal documentation:
Release Notes for the AppDynamics Application Performance Monitoring for VMware Tanzu tile
See AppDynamics VMware Tanzu Tile FAQs for troubleshooting tips and answers to common questions about the AppDynamics tile, the Service Broker and buildpacks, and other topics.

Recommended reading

For overviews of VMware application performance and platform monitoring, examples of how the buildpack and Service Broker work, an outline of the Extension Buildpack, and a list of supported environments, check out the blog posts below:
The AppD Approach: Pivotal Cloud Foundry Performance Monitoring
AppDynamics Enhances Pivotal Cloud Foundry Performance Monitoring with New Infrastructure View
Blue-Green Deployment Strategies for PCF Microservices

Hands-on training: recorded webinars

Here on Community, we host the following AppDynamics webinar recordings (original airdates 2018) about using the Pivotal integration.

NOTE | The Platform Monitoring tile mentioned in these sessions has been deprecated and is no longer available. AppDynamics recommends considering the BOSH Prometheus Firehose Exporter as a replacement.

WEBINAR | Monitoring Pivotal Cloud Foundry Applications and Infrastructure
Learn how to monitor polyglot distributed applications deployed to VMware Tanzu in this hands-on training geared toward operations teams and developers. It offers guidance through the complete application lifecycle, including:
How to deploy and configure APM solutions using Cloud Foundry buildpacks for Java, .NET, Node.js, Python, and other languages
How to install and configure the AppDynamics Service Broker tile
How to monitor VMware Tanzu and BOSH infrastructure health and availability using the new monitoring integration introduced for Pivotal 2.x
Along with the recording, you can read a transcript of the Q&A and access additional resources.

WEBINAR | AppD and PCF Updates on .NET Core, Platform Monitoring for Multiple Foundations, Multi-Buildpacks and More
This session covers our multi-buildpack support, plus a demo of how to deploy and monitor the .NET Agent for Linux using the multi-buildpack extension.

ICEfaces is an open-source software development kit that extends JavaServer Faces (JSF) by employing Ajax. It is used to build rich Internet applications (RIAs) in the Java programming language. With ICEfaces, the client-side interaction and Ajax code is written in Java rather than in JavaScript or with plug-ins.

To create a Business Transaction match rule in AppDynamics that effectively matches ICEfaces entry points, follow these steps:

1. In the Transaction Detection tab of the Controller UI (Configuration > Instrumentation > Transaction Detection), create a new custom match rule.
2. Choose Servlet as the entry point type for the rule.
3. For the Transaction Match Criteria, match by URI containing the word "faces".
4. In the Split Transactions Using Payload tab, split by POJO Method Call. Split on the second parameter (at index=1) of the com.sun.faces.application.ViewHandlerImpl.renderView() method, using getViewId(). For example, in the Mojarra JSF implementation that method's signature is renderView(FacesContext context, UIViewRoot viewToRender), so the parameter at index=1 is the UIViewRoot, and calling getViewId() on it yields the JSF view ID on which the transaction is split.