All TKB Articles in Learn Splunk

A Step-by-Step Guide to Running the Standalone On-Premise Controller as a Service in a Linux Environment

When you run a standalone on-premise controller manually, you can follow the steps described in the documentation below:
https://docs.appdynamics.com/appd/onprem/24.x/latest/en/controller-deployment/administer-the-controller/start-or-stop-the-controller

However, there might be situations where you need to run the standalone on-premise controller as a service in a Linux environment. If so, follow the steps below.

1. Change the user to root:
sudo -i
2. Install the library below (optional):
apt install libxml2-utils -y
OR
yum install libxml2 -y
3. Move to the directory below:
cd /opt/appdynamics/platform/product/controller/controller-ha
4. Set up the controller DB password and validate it:
./set_mysql_password_file.sh -p <controller-db-password>
Output:
Checking if db credential is valid...
5. Move to the directory below:
cd /opt/appdynamics/platform/product/controller/controller-ha/init
6. Run the script below:
./install-init.sh -s
Output:
update-rc.d will be used for installing init
installed /etc/sudoers.d/appdynamics
installing /etc/init.d/appdcontroller-db
installing /etc/default/appdcontroller-db
installing /etc/init.d/appdcontroller
installing /etc/default/appdcontroller
7. Run the commands below to enable and start the newly created services:
systemctl enable appdcontroller
systemctl enable appdcontroller-db
systemctl restart appdcontroller
systemctl restart appdcontroller-db
systemctl status appdcontroller
systemctl status appdcontroller-db

Additionally, you might create your own unit file with the start/stop commands to run the standalone on-premise controller as a service in a Linux environment without using our script.
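If you do go the custom unit file route, a minimal sketch might look like the following. This is an illustration only: the service user, paths, and the controller.sh start/stop commands are assumptions based on a default platform layout, so adjust them to match your installation and the start/stop commands documented for your controller version.

[Unit]
Description=AppDynamics standalone on-premise controller (custom unit)
After=network.target

[Service]
Type=forking
# Assumed installation user and paths; adjust to your environment
User=appdynamics
ExecStart=/opt/appdynamics/platform/product/controller/bin/controller.sh start
ExecStop=/opt/appdynamics/platform/product/controller/bin/controller.sh stop
TimeoutStartSec=900
TimeoutStopSec=900

[Install]
WantedBy=multi-user.target

Save it, for example, as /etc/systemd/system/appdcontroller-custom.service, then run systemctl daemon-reload followed by systemctl enable --now appdcontroller-custom.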
We're excited to announce that AppDynamics is transitioning our Support case handling system to Cisco Support Case Manager (SCM), enhancing your support experience with a standardized approach across all Cisco products. This migration is scheduled to take place on June 14th. As the transition date approaches, you will notice banners appearing in both the AppDynamics Admin Portal and on our help website (www.appdynamics.com/support). These banner notifications will keep you informed about the change and notify you once the transition has been completed.

Access 1-Year Historical AppDynamics Support Case Data
On October 3, 2024, 1-year historical AppDynamics support case data will be accessible through Cisco Support Case Manager (SCM). This update will allow users to view all closed cases from Zendesk, dating between June 14, 2023, and June 14, 2024, directly in SCM. Please be aware that access to certain cases may be restricted to the individual who originally opened the case. We apologize for any inconvenience this may cause and want to assure you that Cisco is actively working to address these limitations.

Temporary Work-Around
On June 14, AppDynamics transitioned to Cisco Support Case Manager (SCM) for case creation and management. Since the migration, we have become aware that some customers are experiencing difficulties accessing SCM to create/view cases. We sincerely apologize for any inconvenience this may have caused and want to assure you that Cisco is working diligently to resolve these issues as quickly as possible.

As a temporary workaround, beginning Saturday, August 17th, users who have encountered errors when attempting to open cases will be able to bypass these errors and proceed with case creation. Please note that for cases created using this workaround, only the user who initiates the case will have access to view it in SCM. If you need to share the visibility of these cases with others in your organization, please ensure that they are included in the CC list when creating the case. Please note that visibility is restricted to email communications only for data privacy and security.

If you continue to experience issues with SCM, or if you have any other concerns, please do not hesitate to contact us at appd-support@cisco.com for further assistance.

What does this mean for you?
AppDynamics will notify you once your profile and support cases have been successfully migrated, allowing you to seamlessly access your support cases in SCM. Until the migration is complete, you will continue to have access to your cases through the current AppDynamics case-handling tool. Access to the new SCM platform requires that your profile is migrated to the "Cisco User Identity," a process that will be automatically handled for you. For more information on the "Cisco User Identity" changes, please refer to the communication sent via email and published on the AppDynamics Community located here.

Key points to remember:
- You will still be able to open cases from the portal and website, although the interface will undergo a visual update
- You will need a Cisco.com account to access SCM
- Your open cases and up to 1 year of closed cases will be seamlessly migrated to the new system

Additional Resources
How do I open a case with AppDynamics Support?
How do I manage my support cases?
Table of contents
- Search for a case
- Updating a case
- Upload an attachment to a case
- How can I request to close a case?
- Is there an easy way to manage my cases? Do you have a bot or assistant to help manage cases?
- Case Satisfaction
- Additional Resources

Search for a case
To view an open or migrated case in SCM, navigate to the "Create and Manage Support Cases" view. There, type the case ID number (either the new or the old case ID) into the Search field and press Enter (Figure 1).
Figure 1

Updating a case
Go to the SCM start page, where under "Cases" you pick "My Cases" (Figure 2), and select the case that needs updating. Here you edit your case; make sure to save the changes before exiting.
Figure 2

Upload an attachment to a case
If you need to upload and attach a file to a case, you can do so when opening a new case or by going to an existing case. When opening a new case, you're prompted to upload an attachment once the case has been submitted. For an existing case, navigate to the "My Cases" view as seen in Figure 2. In the right corner press the "Add File" button (Figure 3), upload the file, and save.
Figure 3

How can I request to close a case?
You can close a case yourself in two different ways:
1. Manually
- Go to 'CASE SUMMARY'
- Edit
- Describe how the case was resolved (optional)
- Case status updates to "Close Pending / Customer Requested Closure"
2. With the Support Assistant
- From the Support Assistant type 'close the case (insert case number)'
- The Support Team will close the case

How to reopen a closed case and validity
You can reopen a closed case in two different ways:
1. Manually
- From Support Case Manager
- Check closed cases
- Apply filters
- Select the case
- Click reopen on the top right corner
2. With the Support Assistant
- From the Support Assistant type 'reopen the case (insert case number)'
A case can be reopened only for two weeks after the close date. If a case is outside the two-week window, it is recommended to open a new case.

Case Satisfaction
After migrating to Cisco SCM, at case closure you will be presented with an industry-standard 10-point scale and asked to choose a value reflecting your satisfaction with the support on the case. (Figure 4)
Figure 4

Is there an easy way to manage my cases? Do you have a bot or assistant to help manage cases?
Yes, we have a Support Assistant bot! In the bot's own words:
Hello! I can help you get case, bug, RMA details and connect with Cisco TAC. Simply enter the case number as shown in the examples below and get the latest case summary.
- 612345678 - Cisco TAC case
- 00123456 - Duo support case
- SCS0001234 - ThousandEyes support case
- 1234567 - Umbrella support case
You can converse with me in English language or use commands. Currently, I can't open new cases or answer technical questions.
• my cases
• what is the status of (case number or bug number or rma number or bems number)
You can ask me to perform the following tasks:
• connect with engineer (case number)
• create a virtual space (case number)
• create an internal space
• request an update for (case number)
• update the case (case number)
• add participant (email address)
• raise severity (case number)
• requeue (case number)
• escalate (case number)
• close the case (case number)
• reopen the case (case number)
• update case summary (case number)
• show tac dm schedule
• show cap dm schedule
You can mark a case as a favorite and get automatic notifications when the case summary (Problem Description, Current Status, and Action Plan) gets updated:
• favorite (case number)
• list favorites
• status favorites
You can ask me to connect to support teams:
• connect to duo
I can help you manage cases that are opened from Cisco.com Support Case Manager. Currently, I can't open new cases or answer technical questions. Type "/list commands" to get a list of command requests and find details of supported features using the documentation and demo videos.

Additional Resources
How do I open a case with AppDynamics Support?
AppDynamics Support migration to Cisco SCM
This is a new version of the licensing model that consumes licenses based on vCPUs. It is available for both on-premises and SaaS. Utilization is 1 license unit per CPU core. It does not matter how many agents are running on a server, how many applications/containers these agents are monitoring, how much data these agents are collecting/reporting, or how many transactions these agents are creating: licenses are consumed based on the number of CPUs available on the server/host.

The Basics

What are the minimum versions required for controller and APM/database/server agents to properly count vCPUs?
These are the required AppD agent versions needed to make the customer fully IBL compliant:
- Controller: v21.2+ (for the database agent to default to 4 vCPU instead of 12 vCPU: v23.8+ cSaaS / v23.7+ on-prem)
- Machine Agent: 20.12+
- .NET Agent: 20.12+
- DB Agent: minimum 21.2.0 (latest 21.4.0 recommended; for MySQL/PostgreSQL RDS database IBL support, the minimum is 22.6.0)
For accurate license counting, the machine agent needs to be deployed, or hardware monitoring needs to be enabled in the case of database monitoring. The machine agent version should be greater than 20.12. The machine agent calculates the number of CPUs available on the monitored server/host.

How to migrate from Agent Based Licensing to Infrastructure Based Licensing?
Migration from Agent Based Licensing to Infrastructure Based Licensing is handled by licensing-help@appdynamics.com. On conversion, all license rules are maxed out to the account value by default, keeping app/server scope restrictions as is. For example: LicenseRuleA through LicenseRuleZ with 2 APM units each and accountLevelApm=100 units will, on conversion, be set to LicenseRuleA-Z with 400 units each and accountLevelHBL=400 units. (400 is just an illustrative number; the final conversion is made by the sales team.)

What is the definition of vCPU and how do I verify if it's correct?
In the case of a physical machine, the number of logical cores or processors is considered to be the vCPU count. For planning purposes, you can use the following table to find the CPU core count in case the machine agent is not available/running:
- Bare metal servers: logical CPU cores = # of processors (Windows: Task Manager, PowerShell)
- Virtual machines: logical CPU cores (accounting for hyperthreading)
- Cloud providers: logical CPU cores = vCPU (AWS: EC2 instances; Azure: Azure VMs; GCP: Standard Machine Types)
To check the count directly on a host:
- Windows: Task Manager, System Information, or wmic
- Linux: nproc or lscpu
- macOS: sysctl -a | grep machdep.cpu.*_count OR sysctl -n hw.logicalcpu
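For convenience, here are the same checks as runnable commands (standard OS tooling, one group per platform):

# Linux: logical CPU count
nproc
lscpu | grep '^CPU(s):'

# macOS: logical CPU count
sysctl -n hw.logicalcpu

# Windows (Command Prompt): logical CPU count
wmic cpu get NumberOfLogicalProcessors
echo %NUMBER_OF_PROCESSORS%

The result should match the vCPU count the machine agent reports for that host.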
What are packages?
Each agent that consumes a license will be part of a single package. Packages are provisioned at the account level and distributed within license rules (limited packages are supported by license rules). Packages fall under ENTERPRISE, PREMIUM, and INFRASTRUCTURE:
- SAP Enterprise (SAP_ENTERPRISE): Monitor all your SAP servers, network, and SAP apps and get Business Insights on them using AppDynamics agents. Agents (as seen on the Connected Agents page): APM Any Language (agent-type=sap-agent); Network Visibility (agent-type=netviz); plus everything under AppDynamics Infrastructure Monitoring.
- Enterprise (ENTERPRISE): Monitor all your servers, network, databases, and apps and get Business Insights on them using AppDynamics agents. Agents: Transaction Analytics (agent-type=transaction-analytics); plus everything under AppDynamics Premium.
- Premium (PREMIUM): Monitor all your servers, network, databases, and apps using AppDynamics agents. Agents: APM Any Language (agent-type=apm, java, dot-net, native-sdk, nodejs, php, python, golang-sdk, wmb-agent, native-web-server); Network Visibility (agent-type=netviz); Database Visibility (agent-type=db_agent, db_collector); plus everything under AppDynamics Infrastructure Monitoring.
- AppDynamics Infrastructure Monitoring (INFRA): Monitor all your servers using AppDynamics agents. Agents: Server Visibility (agent-type=sim-machine-agent); Machine Agent (agent-type=machine-agent); Cluster Agent (agent-type=cluster-agent); .NET Machine Agent (agent-type=dot-net-machine-agent).
For other packages, please check https://docs.appdynamics.com/appd/23.x/latest/en/appdynamics-licensing/license-entitlements-and-restrictions

Do I need individual packages for my account?
- Only transaction analytics requires ENTERPRISE. If you do not have the ENTERPRISE package, the transaction analytics agent cannot report even if licenses are available at the PREMIUM package.
- All INFRA agents can report against the PREMIUM or ENTERPRISE packages.
- All PREMIUM agents can report against the ENTERPRISE package.

What happens when my package-level consumption is full? (Redirection)
Valid only if account-level limits are not maxed out:
- If agents report against the INFRA package and the INFRA license pool is full but PREMIUM is free, new license consumption is redirected to PREMIUM.
- If agents report against the PREMIUM package and the PREMIUM license pool is full but ENTERPRISE is free, new license consumption is redirected to ENTERPRISE.
- For transaction analytics agents, if ENTERPRISE is full, the controller cannot switch back to unconsumed PREMIUM or unconsumed INFRA.
This swapping takes place at the license rule level.
Valid only if account-level limits are maxed out:
- The above redirection will not take place if account-level limits are maxed out, even if a few license rule units are unconsumed.

Can I force agents to report against a package?
Yes, you can manage restrictions via license rules -> Server Scope / Application Scope. You will have to provide only ENTERPRISE, or only PREMIUM, within a single license rule. Otherwise, default redirection is respected. One agent can report against one package only.

Which packages are supported under license rules?
As of the 23.12.x controller version, only Premium, Enterprise, Enterprise SAP, and Infrastructure Monitoring packages are supported under license rules. Consumption and redirection work the same as account-level switching.

What happens if I do not have a machine agent to report vCPUs, or I do not have hardware monitoring enabled?
If the machine agent is not running or has been deleted from the server, or the agents are unable to find the number of CPUs, license units are calculated based on the fallback mechanism:
- APM Agent: 4 CPU ~ 4 license units by default
- DB Collector: 4 or 12 CPU ~ 4 or 12 license units by default

Why is my vCPU reported incorrectly?
An inaccurate vCPU count does not mean AppDynamics is consuming the wrong licenses. It means users are not providing AppDynamics with the means to calculate licenses properly.
Most common reasons:
1. Machine agent not installed
   a. Any agent goes into fallback mode if there is no machine agent.
   b. The database agent in fallback mode consumes the default 4 or 12 vCPUs, even if the host has 1 vCPU.
   c. The APM agent in fallback mode consumes the default 4 vCPUs, even if the host has 1 vCPU.
2. Managed database (AWS, GCP, Azure) + machine agent mismatch
   a. A machine agent cannot be installed on managed/cloud database services.
   b. Hardware metrics should be enabled, which reports the vCPU count. If it is not enabled, the default 4 or 12 licenses are consumed as fallback.
3. UniqueHostIdMismatch
   a. There is a mismatch in host mapping, so each uniquehostid is considered a different host even if they reside on the same physical machine.
   b. Example: 2 Java agents + 1 machine agent on the same machine. With a mismatch, the 3 agents show up as individual rows in the host table and each consumes a 4 vCPU license, so 4*3 = 12 vCPU count toward total license consumption, when the expectation is 4, not 12.

Can the two licensing models (agent based and host based) co-exist on the same license?
No. A given license can only be on one of the two models.

If Infrastructure Based Licensing (IBL) is enabled for a customer, can it be reverted to the legacy Agent Based Licensing (ABL) model later?
No, it cannot be reverted.

Can you share a couple of scenarios?
- A 4 vCPU host running 1 sim-agent, 3 app agents, and 1 netviz agent: final consumption is 4 vCPU licenses in total.
- A 4 vCPU host running 1 machine agent, 3 app agents, and 1 netviz agent: final consumption is 4 vCPU licenses in total.
- If the machine agent is not updated or stops reporting after the initial vCPUs were reported: there is a timeout, after which APM agents default to the fallback mechanism.
- If the machine scales vertically (adds vCPUs) while keeping the same hostname: on scaling up or down, accurate vCPUs are reported to the controller within 10 minutes. A temporary spike/dip in license usage is expected if an agent restarts within 5 minutes.

Database agent scenarios:
- If there are DB agents, or DB and machine agents, on a host (identified by unique host ID), then license units used ("vCPU") are capped at 4 (4 or less if, for example, the MA reports fewer vCPUs).
- If there is any agent type other than DB/MA (e.g., an app agent), the capping does not happen and license units are calculated as usual.
- In the fallback case it is 4 LUs for all DBs on the host, plus 4 LUs per any other agent reporting.
- In the non-fallback case (licensing knows the vCPUs), the reported vCPU count is used (if both DB and MA report vCPUs, licensing trusts the MA more).
Examples: 2 vCPU DB + 3 vCPU DB = 3; 2 vCPU DB + 8 vCPU machine agent = 4 max; 5 vCPU DB + MA = 4 max; 100 vCPU DB + MA = 4 max; 2 vCPU DB only = 4; 100 vCPU DB only = 4.
Step-by-Step Guide to Migrating AppDynamics Analytics Data to Harmonized Schemas

Reason to migrate:
This migration pertains exclusively to the analytics data captured and stored on the AppDynamics controller, which has a maximum capacity of 20 schemas. To overcome this limitation, we have reengineered the approach to schema utilization, enabling additional capacity for customers to define their own schemas. Starting with agent version 24.11, this updated approach (called Harmonized) is the sole available option for new installations. For customers who are updating to this version, this document walks through the process; they should plan to migrate as soon as possible. Please note that any metrics migrated will begin reporting from the new location, and all historical analytics data associated with those metrics will be lost.

Clean out old data
1. Log into each SAP system (as these schemas are shared).
2. Disable analytics via t-code /DVD/APPD_CUST, then enter Edit Mode.
3. Uncheck the analytics events API box.
4. Save changes.
5. Click the status button or run t-code /DVD/APPD_STATUS.
6. Click event service.
7. Then click Custom Analytics schema.

As stated before, the maximum number of schemas a controller can have is 20. Here is a list of the ones used going forward related to SAP:
- sap_log_data
- sap_workload_data
- sap_analytics_data_1
- sap_idoc_data
- sap_biq_documents
- sap_hana_data (if a HANA DB is used)
- sap_bw_data (if it is a BW system)
- sap_pi_data (if it is a PI system)
- sap_custom_data_1 (custom)
For the complete list use this Link, which also shows how they will be mapped.

Given that standard schemas will be created, it is important to ensure sufficient capacity for their inclusion. This decision will be evaluated on a case-by-case basis, but there are essentially two strategies to consider.

Option 1: Start fresh by removing all existing schemas. The necessary schemas will be recreated upon restart. Please note that this approach will result in the loss of all historical data stored on the controller, not just the analytics data that is relocated.
1. In t-code /DVD/APPD_STATUS, click the Debug Mode button.
2. Check both boxes and select the desired time duration.
3. Delete the schemas you want, or all of them for a fresh start.
4. Once all changes are made, click the Debug Mode button again to exit debug mode (or wait for the duration to expire).

Option 2: Start by removing only the schemas marked with the status "Not Used".
1. Do this by clicking the trash icon on the same row.
2. Then confirm the deletion (by clicking the 'No' button).

Flipping the switch:
1. After deleting all unused schemas and verifying you have enough room, go back to t-code /DVD/APPD_CUST.
2. Enter change mode.
3. In the Analytics events API settings area, set Version to "Harmonized" and check the box for "Analytics events API is active".

Verify Schemas:
Once all the changes are made and have been running for a few hours, your end result could look like this. You can see which systems are using the different schemas by clicking the corresponding Used button.

Adjusting Dashboards to new schemas
Any dashboard that uses analytics data may have been impacted. You will need to go into each data field and modify the query as follows:
1. Replace the old (legacy) schema name with the new (harmonized) schema name in the FROM part of the query string.
2. Add an extra WHERE condition: AND sapSchema = <old schema name>.
Example query change:
Legacy query:
SELECT * FROM idocs_details WHERE SID = "ED2"
Migrated query:
SELECT * FROM sap_idoc_data WHERE SID = "ED2" AND sapSchema = "idocs_details"

Additional Resources
Troubleshooting
Contents:
- What is the App Agent vs Coordinator?
- App Agent status vs Machine Agent status
- Why is my app agent status 0% on IIS applications?
- What are the options for having 100% app agent status?
- What if I cannot modify IIS settings?

What is the App Agent vs Coordinator?
The AppDynamics.Agent.Coordinator orchestrates when to inject the app agent's DLLs into an application, as well as collecting machine metrics (CPU, memory, performance counters, etc.). The Coordinator does not monitor any application on the server, as this is the responsibility of the app agent. In an environment where the profiler environment variables are defined, any .NET runtime at startup will check whether the application should be profiled and which profiler to inject. As part of the installation process, the MSI package creates the necessary profiler environment variables.
https://learn.microsoft.com/en-us/dotnet/framework/unmanaged-api/profiling/setting-up-a-profiling-environment

Profiler environment variables:
- COR_PROFILER: full framework profiler to be injected into the application
- COR_ENABLE_PROFILING: Boolean value for whether full framework profiling is enabled
- COR_PROFILER_PATH: path to where the full framework profiler resides
- CORECLR_PROFILER: .NET Core profiler to be injected into the application
- CORECLR_ENABLE_PROFILING: Boolean value for whether .NET Core profiling is enabled
- CORECLR_PROFILER_PATH: path to where the .NET Core profiler resides

If the .NET application is full framework, it will write a message to the Event Viewer's Application logs. Sample of a successful instrumentation:
.NET Runtime version 4.0.30319.0 - The profiler was loaded successfully. Profiler CLSID: 'AppDynamics.AgentProfiler'. Process ID (decimal): 110060. Message ID: [0x2507].
When the application does not match an application to be monitored in the Coordinator's config.xml, the agent DLLs are not injected:
.NET Runtime version 4.0.30319.0 - The profiler has requested that the CLR instance not load the profiler into this process. Profiler CLSID: 'AppDynamics.AgentProfiler'. Process ID (decimal): 111500. Message ID: [0x2516].
Both messages are at the Information level. Neither is a cause for alarm; they are informational only.

App Agent status vs Machine Agent status
The AppDynamics.Agent.Coordinator reports to the controller, and one of the metrics it reports is [Availability]. This metric represents the Machine Agent status on the Controller's Tiers & Nodes page. The App Agent status comes from the app agent that is injected into your application. If your application is not running, then neither is the app agent. This leads us to the next point regarding IIS applications.

Why is my app agent status 0% on IIS applications?
The app agent is injected into your application and shares the application's lifecycle. For IIS, this means the app agent's DLLs are injected into the w3wp process at .NET startup; this can only happen when the process starts. However, app pools are managed by IIS, and the default settings do the following:
- App pools are not started by default; traffic must be sent to the application first.
- App pools that have not received any traffic for 20 minutes are terminated.
As mentioned earlier, the app agent shares the application's lifecycle, so you can see how these default settings might affect the app agent status displayed on the controller. Two possible scenarios with the default IIS settings can cause the app agent status to show 0%:
1. The app pool was killed by IIS because there was no activity on the application. On the controller, you will see a downward trend in the app agent status during idle periods.
2. The server was restarted and no traffic is currently being sent to the application. Therefore, no w3wp process has been started, so the controller shows 0% app agent status.

What are the options for having 100% app agent status?
Three settings must be changed to ensure that the app pool is running and remains running regardless of traffic or server restarts:
1. Idle Timeout: https://learn.microsoft.com/en-us/previous-versions/iis/6.0-sdk/ms525537(v=vs.90)
2. Start Mode: https://learn.microsoft.com/en-us/iis/configuration/system.applicationhost/applicationpools/applicationpooldefaults/#:~:text=is%201000.-,startMode,-Optional%20enum%20value
3. IIS Application Initialization (requires IIS 8.0): https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization

The Idle Timeout property is responsible for terminating an app pool that has not received traffic after some time (default is 20 minutes). Setting this property to 0 prevents IIS from terminating the app pool regardless of how long it is idle. Start Mode should be set to AlwaysRunning instead of the default value of OnDemand. IIS Application Initialization requires IIS 8.0; when the server starts, IIS invokes a warm-up request to the specified page to start the app pool. Follow the instructions listed in the link above for the detailed steps.

What if I cannot modify IIS settings?
You can modify the config.xml to monitor the performance counter "Current Application Pool State", which is part of the APP_POOL_WAS category, for your particular app pool, and create a health rule that triggers when the app pool is in a stopped state.
"Current Application Pool State" possible values: Starting, Started, Stopping, Stopped, Unknown
However, you need to be aware of the following:
- An app pool can be assigned to multiple sites and applications. There is no way to scope down to a single application unless each IIS application/site uses a unique app pool.
- There are effectively only three observable states for "Current Application Pool State": Started, Stopped, and Unknown. The in-between states are too quick to capture and report on.
- There is a difference between an app pool and a worker process. An app pool in a started state does not mean your application, and by extension the agent, is running.
- In addition, an app pool in the started state does not mean your application is able to start. For example, .NET runtime errors at startup can prevent the application from starting even though the app pool is started.
I strongly recommend modifying the IIS settings to get a true app agent status and then relying on the "Current Application Pool State" performance counter, but this option is available if your circumstances prevent modification of the IIS settings and the limitations above are not a concern. With the caveats out of the way, let's discuss how to make this change.
Config.xml:
<machine-agent>
    <perf-counters>
        <perf-counter cat="APP_POOL_WAS" name="Current Application Pool State" instance="MY_APP_POOL_NAME" />
    </perf-counters>
</machine-agent>
Then create a new health rule that triggers if the app pool state is not Started.
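For completeness, if you are able to change the three IIS settings discussed above, they can also be applied from an elevated command prompt with appcmd. This is a sketch assuming an app pool named MyAppPool and a site named Default Web Site; verify the exact attribute names against your IIS version's documentation:

REM Disable the idle timeout so the app pool is never shut down for inactivity
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00

REM Start the app pool with IIS instead of waiting for the first request
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /startMode:AlwaysRunning

REM Preload the site so a warm-up request is issued on startup (IIS 8.0+ Application Initialization)
%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/" /preloadEnabled:true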
What does this error mean?
We usually observe the log message below in the application startup logs when the agent is unable to connect to the controller to retrieve the nodeName (in the case of using reuse.nodeName):

Started AppDynamics Java Agent Successfully. [Thread-0] Tue Apr 02 09:46:04 UTC 2019[INFO]: JavaAgent - Started AppDynamics Java Agent Successfully.
2019-04-02 09:46:09,545 ERROR Recursive call to appender Buffer
2019-04-02 09:46:09,547 ERROR Recursive call to appender Buffer

Next steps:
Could you please check if any logs are generated under the /opt/appdynamics-java/ver.xxx.xx/logs/ directory and share them if available? If there are no logs, please add the configuration line below under the instrumentationRules applied to the problematic pod:

customAgentConfig: -Dappdynamics.agent.reuse.nodeName=false -Dappdynamics.agent.nodeName=test

If you are using Cluster Agent version >= 23.11.0, to force re-instrumentation you need to use an additional parameter in the default auto-instrumentation properties:

enableForceReInstrumentation: true

apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  # cluster agent properties
  # ...
  # required to enable auto-instrumentation
  instrumentationMethod: Env
  # default auto-instrumentation properties
  # may be overridden in an instrumentationRule
  containerAppCorrelationMethod: proxy
  nsToInstrumentRegex: default
  defaultAppName: ""
  enableForceReInstrumentation: true   # ADDED
  # ...
  # one or more instrumentationRules
  instrumentationRules:
    - namespaceRegex: default
      customAgentConfig: -Dappdynamics.agent.reuse.nodeName=false -Dappdynamics.agent.nodeName=test   # ADDED
      imageInfo:
        image: "docker.io/appdynamics/java-agent:24.8.1"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always

Afterward, please apply the changes and wait for the cluster agent to implement the new instrumentation. Then collect the agent logs from the /opt/appdynamics-java/ver.xxx.xx/logs/ directory and attach them to the ticket.

How do you collect logs from a Kubernetes pod?
1. Enter the container and pack the agent logs into a tar file:
kubectl exec -it <pod_name> -- bash
cd /opt/appdynamics-java/ver24.x.x.x/logs/
tar -cvf /java-agent-logs.tar test
2. Copy the created tar file:
kubectl cp <some-namespace>/<some-pod>:/java-agent-logs.tar ./java-agent-logs.tar

I hope this article was helpful.
Łukasz Kociuba
A Step-by-Step Guide to Setting Up and Monitoring Redis with AppDynamics on Ubuntu EC2

Monitoring your Redis instance is essential for ensuring optimal performance and identifying potential bottlenecks in real time. In this guide, we'll walk through the process of setting up Redis on an Ubuntu EC2 instance and configuring the Splunk AppDynamics Redis Monitoring Extension to capture key metrics.

Step 1: Setting up Redis on Ubuntu
Prerequisites
- An AWS account with an EC2 instance running Ubuntu.
- SSH access to your EC2 instance.
Installing Redis
1. Update package lists and install Redis:
sudo apt-get update
sudo apt-get install redis-server
2. Verify the installation:
redis-server --version
3. Ensure Redis is running:
sudo systemctl status redis

Step 2: Installing AppDynamics Machine Agent
1. Download the Machine Agent: visit AppDynamics and download the Machine Agent for your environment.
2. Install the Machine Agent: follow the installation steps provided in the AppDynamics Machine Agent documentation: https://docs.appdynamics.com/appd/24.x/24.11/en/infrastructure-visibility/machine-agent/install-the-machine-agent
3. Verify installation: start the Machine Agent and confirm it connects to your AppDynamics Controller.

Step 3: Configuring AppDynamics Redis Monitoring Extension
1. Clone the Redis Monitoring Extension repository:
git clone https://github.com/Appdynamics/redis-monitoring-extension.git
cd redis-monitoring-extension
2. Build the extension:
sudo apt-get install openjdk-8-jdk maven
mvn clean install
3. Locate the .zip file in the target folder and extract it:
unzip target/RedisMonitor-*.zip -d <MachineAgent_Dir>/monitors/
4. Edit the configuration file: navigate to the extracted folder and edit config.yml:
metricPrefix: "Custom Metrics|Redis"
#Add your list of Redis servers here.
servers:
  - name: "localhost"
    host: "localhost"
    port: "6379"
    password: ""
    #encryptedPassword: ""
    useSSL: false
5. Restart the Machine Agent:
<MachineAgent_Dir>/bin/machine-agent

Step 4: Verifying Metrics in AppDynamics
1. Log in to your AppDynamics Controller.
2. Navigate to the Metric Browser.
3. Look for metrics under the path: Custom Metrics|Redis
4. Verify that metrics like used_memory, connected_clients, and keyspace_hits are visible.

Conclusion
By combining the power of Redis with the advanced monitoring capabilities of AppDynamics, you can ensure your application remains scalable and responsive under varying workloads. Whether you're troubleshooting an issue or optimizing performance, this setup gives you full visibility into your Redis instance. If you found this guide helpful, please share and connect with me for more DevOps insights!
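As a postscript, one quick way to validate the whole pipeline is to generate a little Redis activity from the shell and then watch the corresponding metrics move in the Metric Browser. The key name below is just an example:

# Create and read a key so connected_clients and keyspace stats change
redis-cli set test:key "hello"
redis-cli get test:key
# Inspect client metrics directly, for comparison with the controller
redis-cli info clients | grep connected_clients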
Comprehensive Guide to RabbitMQ Setup, Integration with Python, and Monitoring with AppDynamics

Introduction
RabbitMQ is a powerful open-source message broker that supports a variety of messaging protocols, including AMQP. It allows developers to build robust, scalable, and asynchronous messaging systems. However, to ensure optimal performance, monitoring RabbitMQ metrics is crucial. This tutorial walks you through setting up RabbitMQ, integrating it with a Python application, and monitoring its metrics using AppDynamics.

Step 1: Setting Up RabbitMQ
1.1 Install RabbitMQ via Docker
To quickly get RabbitMQ up and running, use the official RabbitMQ Docker image with the management plugin enabled. Run the following command to start RabbitMQ:
docker run -d --hostname my-rabbit --name rabbitmq \
  -e RABBITMQ_DEFAULT_USER=guest \
  -e RABBITMQ_DEFAULT_PASS=guest \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:management
Management Console: accessible at http://localhost:15672
Default credentials: username guest, password guest

1.2 Verify the Setup
Once the container is running, verify the RabbitMQ server by accessing the Management Console in your browser. Alternatively, test the API endpoint:
curl -u guest:guest http://localhost:15672/api/overview
This should return RabbitMQ metrics in JSON format.

Step 2: Writing a Simple RabbitMQ Producer and Consumer in Python
2.1 Install Required Library
Install the pika library for Python, which is used to interact with RabbitMQ:
pip install pika

2.2 Create the Producer Script (send.py)
This script connects to RabbitMQ, declares a queue, and sends a message.

import pika

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare a queue
channel.queue_declare(queue='hello')

# Publish a message
channel.basic_publish(exchange='', routing_key='hello', body='Hello RabbitMQ!')
print(" [x] Sent 'Hello RabbitMQ!'")
connection.close()

2.3 Create the Consumer Script (receive.py)
This script connects to RabbitMQ, consumes messages from the queue, and prints them.

import pika

# Connect to RabbitMQ
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare a queue
channel.queue_declare(queue='hello')

# Define a callback to process messages
def callback(ch, method, properties, body):
    print(f" [x] Received {body}")

channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

2.4 Test the Application
a. Run the consumer in one terminal:
python3 receive.py
b. Send a message from another terminal:
python3 send.py
c. Observe the message output in the consumer terminal:
[x] Sent 'Hello RabbitMQ!'
[x] Received b'Hello RabbitMQ!'

Step 3: Monitoring RabbitMQ with AppDynamics
3.1 Configure RabbitMQ Management Plugin
Ensure that the RabbitMQ Management Plugin is enabled (it is by default in the Docker image). It exposes an HTTP API that provides metrics.

3.2 Create a Custom Monitoring Script
Use a shell script to fetch RabbitMQ metrics and send them to the AppDynamics Machine Agent.
script.sh:

#!/bin/bash

# RabbitMQ Management API credentials
USERNAME="guest"
PASSWORD="guest"
URL="http://localhost:15672/api/overview"

# Fetch metrics from RabbitMQ Management API
RESPONSE=$(curl -s -u $USERNAME:$PASSWORD $URL)
if [[ $? -ne 0 || -z "$RESPONSE" ]]; then
    echo "Error: Unable to fetch RabbitMQ metrics"
    exit 1
fi
MESSAGES=$(echo "$RESPONSE" | jq '.queue_totals.messages // 0')
MESSAGES_READY=$(echo "$RESPONSE" | jq '.queue_totals.messages_ready // 0')
DELIVER_GET=$(echo "$RESPONSE" | jq '.message_stats.deliver_get // 0')

echo "name=Custom Metrics|RabbitMQ|Total Messages, value=$MESSAGES"
echo "name=Custom Metrics|RabbitMQ|Messages Ready, value=$MESSAGES_READY"
echo "name=Custom Metrics|RabbitMQ|Deliver Get, value=$DELIVER_GET"

3.3 Integrate with AppDynamics Machine Agent
1. Place the script: copy script.sh to the Machine Agent monitors directory:
cp script.sh <MachineAgent_Dir>/monitors/RabbitMQMonitor/
2. Create monitor.xml to configure the Machine Agent:
<monitor>
    <name>RabbitMQ</name>
    <type>managed</type>
    <enabled>true</enabled>
    <enable-override os-type="linux">true</enable-override>
    <description>RabbitMQ</description>
    <monitor-configuration>
    </monitor-configuration>
    <monitor-run-task>
        <execution-style>periodic</execution-style>
        <name>Run</name>
        <type>executable</type>
        <task-arguments>
        </task-arguments>
        <executable-task>
            <type>file</type>
            <file>script.sh</file>
        </executable-task>
    </monitor-run-task>
</monitor>
3. Restart the Machine Agent to apply the changes:
cd <MachineAgent_Dir>/bin
./machine-agent &

Step 4: Viewing Metrics in AppDynamics
1. Log in to your AppDynamics Controller.
2. Navigate to Servers > Custom Metrics.
3. Look for metrics under: Custom Metrics|RabbitMQ
You should see metrics like:
- Total Messages
- Messages Ready
- Deliver Get
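One practical note: the script above parses JSON with jq, which is not covered by the article's install steps, so make sure it is present, and run the script by hand once to verify the output format before wiring it into the Machine Agent:

sudo apt-get install -y jq
chmod +x script.sh
./script.sh
# Expected output shape (values will vary):
# name=Custom Metrics|RabbitMQ|Total Messages, value=1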
How to Redirect Smart Agent Temporary Files to Avoid /tmp Space Limitations on Linux

Installing any APM or Machine agent with Smart Agent on your Linux box uses the /tmp directory to copy agent binaries before moving them to the intended directory.

The problem
You can get errors like the ones below:
error message = Error extracting Machine Agent in staging: reading file in zip archive: /tmp/.staging/machine-agent/jre/lib/modules: writing file: write /tmp/.staging/machine-agent/jre/lib/modules: no space left on device
Error creating Machine Agent service: error installing service: error moving service file to destination: rename /tmp/.staging/appdynamics-machine-agent.service /etc/systemd/system/appdynamics-machine-agent.service: invalid cross-device link
These errors are caused by insufficient space in the /tmp folder, or by /tmp being mounted on a separate device (in which case the rename fails across filesystems).

How to fix it
You need to have Smart Agent use another directory on your host instead of /tmp. To do this, go to <Smart-Agent-Home-Directory>; in my case the directory is /opt/appdynamics/appdsmartagent. Commands in order:
cd /opt/appdynamics/appdsmartagent
./smartagentctl stop
export TMPDIR=/opt/appdynamics
./smartagentctl start
Now in your logs you will see:
{"severityText":"INFO","timestamp":"2024-10-04T16:30:29.692Z","name":"native","caller":"machine/task_helper.go:48","body":"downloaded file to ","downloaded file":"/opt/appdynamics/.staging/download/machineagent-bundle-64bit-linux-24.9.0.4408.zip"}
{"severityText":"INFO","timestamp":"2024-10-04T16:30:29.692Z","name":"native","caller":"machine/task_helper.go:161","body":"Extracting zip","package.name":"8a5e85401b3a01ac5dadd6394c235dbf032ffa04;MACHINE_AGENT","src path":"/opt/appdynamics/.staging/download/machineagent-bundle-64bit-linux-24.9.0.4408.zip","dest path":"/opt/appdynamics/.staging/machine-agent"}
This means Smart Agent is now copying everything into the /opt/appdynamics directory.
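A quick way to confirm the root cause and the fix is to compare free space on the old and new temp locations before and after the change:

# Check free space on /tmp and on the new temp location
df -h /tmp /opt/appdynamics

Also keep in mind that export TMPDIR only applies to the shell session that starts the agent; if the Smart Agent is started some other way later (for example, at boot), the variable needs to be set in that environment as well.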
Why do I need to collect the debug-level log file?
The Java agent by default logs entries at the info level. Sometimes debug-level log files are necessary to investigate an issue: debug-level logs contain more detailed entries that can later be used to identify the root cause. There are two ways you can collect the agent log files at the desired logging level:
1. From the AppDynamics controller UI.
2. From the server where the agent was installed.

Collect the Java agent log files from the AppDynamics controller UI
1. Log into the controller UI.
2. Select the problematic app.
3. Open the 'Tiers & Nodes' dashboard.
4. Select the problematic node.
5. Select the 'Agents' tab.
6. Scroll down to the 'Agent Operations' section and click on the 'Request Agent Log Files' button.
7. Set the logging level properties:
   Logger Name: com.singularity
   Logger Level: Debug
   Duration (minutes): at least 5
8. Click on the 'Request Agent Log Files' button to start the log file collection.
If it is a test environment, please make sure to generate load on the app during the log file collection.

Collect the Java agent log files from the server where the agent was installed
1. (optional) Delete the '/<java-agent-home>/<version>/logs/<node-name>/' directory.
2. Edit the '/<java-agent-home>/<version>/conf/logging/log4j2.xml' file. Change the logging level as in the example below:
<!-- to control the logging level of the agent log files, use the level attribute below. value="all|trace|debug|info|warn|error"-->
<AsyncLogger name="com.singularity" level="debug" additivity="false">
    <AppenderRef ref="Default"/>
    <AppenderRef ref="RESTAppender"/>
</AsyncLogger>
3. Apply load on the app (if it is a test environment) for at least 5 minutes.
4. Zip the '/<java-agent-home>/<version>/logs/<node-name>/' directory.
5. Revert the change.

I hope this article was helpful. Feel free to ask in case of any questions.
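As a quick sanity check that the change took effect, you can grep the freshly written logs for debug entries. The log file name below is an assumption (it can vary by agent version), so adjust the path as needed:

# Show the first few DEBUG entries; agent.log is a typical but not guaranteed file name
grep -m 5 "DEBUG" /<java-agent-home>/<version>/logs/<node-name>/agent.log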
If you're working with Cisco AppDynamics Smart Agent and need a simple way to host your installation files, Python offers a built-in HTTP server that gets you up and running in minutes. This lightweight web server is ideal for quickly sharing files within your network without the hassle of configuring a full-blown web server like Apache or Nginx. In this guide, we'll walk through the steps to set up a web server using Python 3.11 to host your Smart Agent installation files.

Why Use Python's HTTP Server?
The http.server module in Python allows you to serve files over HTTP directly from your file system. It's a great tool for:
- Quick file sharing: no installation or configuration of additional software is required.
- Lightweight: perfect for small-scale local development or testing environments.
- Cross-platform: works on any system where Python is installed (Linux, Windows, macOS).

Prerequisites
1. Python 3.11: ensure that you have Python 3.11 installed on your system. You can check by running:
python3.11 --version
If you don't have Python 3.11 installed, you can download it from the official Python website.
2. Cisco AppDynamics Smart Agent installation files: these files should be available in the directory you plan to host. Typically, these will be .deb or .rpm files such as appdsmartagent_<architecture>_<platform>_<version>.deb or appdsmartagent_<architecture>_<platform>_<version>.rpm.

Step-by-Step Guide
1. Organize Your Files
First, create a directory on your system to store the Smart Agent installation files. For this guide, let's assume you create a directory named appd-agent-files.
mkdir ~/appd-agent-files
Next, move the installation files into this directory. These could be .deb or .rpm files depending on your deployment platform.
mv appdsmartagent_* ~/appd-agent-files
2. Start the Python HTTP Server
Navigate to the directory where your installation files are located and run the following command to start the Python HTTP server on port 8000:
cd ~/appd-agent-files
python3.11 -m http.server 8000
This starts an HTTP server that serves the files in the current directory at http://<your-server-ip>:8000. Replace <your-server-ip> with the actual IP address or hostname of the machine running the server.
3. Access the Web Server
Once the server is running, you can access the hosted files by opening a web browser or using a tool like curl or wget to download the files. For example, to download a file named appdsmartagent_x86_64_debian_21.10.deb, you can run:
wget http://<your-server-ip>:8000/appdsmartagent_x86_64_debian_21.10.deb
This downloads the Smart Agent installation file to your local machine. You can also navigate to the host's IP address from a web browser.

Security Considerations
Python's HTTP server is easy to set up but lacks advanced security features like SSL/TLS, user authentication, or access controls. It is best suited for internal or development environments. For production deployments, consider more secure options such as Nginx or Apache. Additionally, always be mindful of your organization's security policy and posture to ensure that using a lightweight solution like this aligns with internal security guidelines.
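One small mitigation worth knowing: http.server can bind to a single address instead of all interfaces, which limits who can reach the files. The address below is an example; substitute an internal interface of the host:

# Serve on one internal interface only instead of all interfaces
python3.11 -m http.server 8000 --bind 10.0.0.5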
Safe Use Cases for Python's HTTP Server
While the Python HTTP server is lightweight and intended for short-term use, it works well in the following scenarios:
- Local development and testing: ideal for quickly sharing files or testing deployments in isolated, controlled environments, such as hosting Cisco AppDynamics Smart Agent files on a local machine or test server.
- Short-term file sharing: suitable for temporary hosting during specific tasks like setup or testing. Simply stop the server with Ctrl + C when done.
- Internal networks: safe to use within secure internal networks where access is restricted and traffic is monitored by tools like Cisco's ThousandEyes or AppDynamics.
Always ensure that using this method fits within your organization's security posture and policies.
Currently, InfraViz doesn't let you deploy custom extensions. If you wish to deploy custom extensions on Kubernetes using machine agents, this article is for you. This can be done in two ways:
1. Creating a new Machine Agent image
2. Creating a new YAML file for the Machine Agent

Creating a new Machine Agent image
If you wish to use this method, which means modifying the Machine Agent image, you need to take a step back and ask yourself:
- Do you need the extension on all nodes? If not, and you deploy InfraViz with defaults after just updating the image with the extension, then on the node where the extension works everything will be fine, but on the others your logs will fill with ERROR/WARN messages, which can potentially lead to the Machine Agent collector script timing out.
- Do you need a Machine Agent on all nodes? If not, then we are okay: we can use the NodeSelector property of InfraViz and simply deploy this new image using InfraViz on the specific node.
In any case, the Dockerfile will look like the one below:

FROM ubuntu:latest

# Install curl and unzip
RUN apt-get update && apt-get install -y curl unzip procps

# Add and unzip the Machine Agent bundle
ADD machineagent-bundle-64bit-linux-23.7.0.3689.zip /tmp/machineagent.zip
RUN unzip /tmp/machineagent.zip -d /opt/appdynamics && rm /tmp/machineagent.zip

# Set environment variable for Machine Agent home
ENV MACHINE_AGENT_HOME /opt/appdynamics

# Add the extension folder and the start-appdynamics script
ADD create-open-file-extension-folder /opt/appdynamics/monitors
ADD start-appdynamics ${MACHINE_AGENT_HOME}

# Make start-appdynamics script executable
RUN chmod 744 ${MACHINE_AGENT_HOME}/start-appdynamics

# Set Java Home environment variable
ENV JAVA_HOME /opt/appdynamics/jre/bin/java

# Run AppDynamics Machine Agent
CMD ["/opt/appdynamics/start-appdynamics"]

NOTE: in the same directory as the Dockerfile:
- You need to have the AppDynamics zip locally. In my case, I have machineagent-bundle-64bit-linux-23.7.0.3689.zip.
- create-open-file-extension-folder is the extension folder being copied to /opt/appdynamics/monitors; it contains my script.sh and monitor.xml files. Remember that for extensions, the Machine Agent looks for folders and files in the monitors directory.
- start-appdynamics is the startup script. Its content is below; you will need to edit it and add your controller configuration:

MA_PROPERTIES="-Dappdynamics.controller.hostName=xxx.saas.appdynamics.com"
MA_PROPERTIES+=" -Dappdynamics.controller.port=443"
MA_PROPERTIES+=" -Dappdynamics.agent.accountName=xxxx"
MA_PROPERTIES+=" -Dappdynamics.agent.accountAccessKey=xx"
MA_PROPERTIES+=" -Dappdynamics.controller.ssl.enabled=true"
MA_PROPERTIES+=" -Dappdynamics.sim.enabled=true"
MA_PROPERTIES+=" -Dappdynamics.docker.enabled=false"
MA_PROPERTIES+=" -Dappdynamics.docker.container.containerIdAsHostId.enabled=true"

# Start Machine Agent
${MACHINE_AGENT_HOME}/jre/bin/java ${MA_PROPERTIES} -jar ${MACHINE_AGENT_HOME}/machineagent.jar

Great. Now all you need to do is build the image and push it to your repository. Once done, update the Image section of InfraViz with this new image.

Creating a new YAML file for the Machine Agent
The second option is using a Deployment and InfraViz together. I have created an infraviz-deployment.yaml file. This is a deployment that I am deploying on a specific node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: machine-agent-extension
  labels:
    app: machine-agent-extension
spec:
  replicas: 1
  selector:
    matchLabels:
      app: machine-agent-extension
  template:
    metadata:
      labels:
        app: machine-agent-extension
    spec:
      initContainers:
        - name: create-open-file-extension-folder
          image: busybox
          command: ['sh', '-c', 'mkdir -p /opt/appdynamics/monitors/open-file-extension && cp /tmp/config/* /opt/appdynamics/monitors/open-file-extension && chmod +x /opt/appdynamics/monitors/open-file-extension/script.sh']
          volumeMounts:
            - name: config-volume
              mountPath: /tmp/config   # Mount ConfigMap here temporarily
            - name: open-file-extension
              mountPath: /opt/appdynamics/monitors/open-file-extension   # Target directory in emptyDir
      containers:
        - name: machine-agent-extension
          image: appdynamics/machine-agent:latest
          ports:
            - containerPort: 9090
          env:
            - name: APPDYNAMICS_CONTROLLER_HOST_NAME
              value: "xxxx.saas.appdynamics.com"
            - name: APPDYNAMICS_CONTROLLER_PORT
              value: "443"
            - name: APPDYNAMICS_AGENT_ACCOUNT_NAME
              value: "xxx"
            - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
              value: "xxx"
            - name: APPDYNAMICS_SIM_ENABLED
              value: "true"
            - name: APPDYNAMICS_CONTROLLER_SSL_ENABLED
              value: "true"
          volumeMounts:
            - name: open-file-extension
              mountPath: /opt/appdynamics/monitors/open-file-extension
      volumes:
        - name: config-volume
          configMap:
            name: open-file-extension-config   # ConfigMap holding script.sh and monitor.xml
        - name: open-file-extension
          emptyDir: {}   # EmptyDir to allow read/write
      nodeSelector:
        kubernetes.io/hostname: "ip-222-222-222-222.us-west-2.compute.internal"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: open-file-extension-config
  namespace: default
data:
  script.sh: |
    #!/bin/bash
    # Get the current open files limit for the process
    open_files_limit=$(ulimit -n) ##Commentlineforcheck
    # Output the open files limit to stdout
    echo "name=Custom Metrics|OpenFilesLimitMonitor|OpenFilesLimit,value=$open_files_limit"
  monitor.xml: |
    <monitor>
      <name>OpenFile</name>
      <type>managed</type>
      <enabled>true</enabled>
      <enable-override os-type="linux">true</enable-override>
      <description>OpenFile</description>
      <monitor-configuration></monitor-configuration>
      <monitor-run-task>
        <execution-style>periodic</execution-style>
        <name>Run</name>
        <type>executable</type>
        <task-arguments></task-arguments>
        <executable-task>
          <type>file</type>
          <file>script.sh</file>
        </executable-task>
      </monitor-run-task>
    </monitor>

Right now I am only monitoring one node though, so we will use InfraViz to monitor the other nodes. Taint the node that I don't want InfraViz to run on, which is the one above:
kubectl taint node ip-222-222-222-222.us-west-2.compute.internal machine-agent=false:NoSchedule
Now I can deploy InfraViz.yaml normally and it won't be deployed on the ip-222 node. You now have a Machine Agent running on all of your nodes: one with the extension, and the rest deployed normally. Please reach out to Support if you have any questions.
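Before opening a ticket, it is usually worth verifying the rollout with a couple of standard kubectl checks (the label and deployment name below match the manifest above):

# Confirm the extension pod landed on the intended node
kubectl get pods -l app=machine-agent-extension -o wide
# Check the Machine Agent output for extension errors
kubectl logs deployment/machine-agent-extension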
Step-by-Step Guide to Deploying AppDynamics Smart Agent Using Ansible on Linux Systems

This article will guide you through installing the AppDynamics Smart Agent on a Linux system using Ansible. It covers downloading, configuring, and starting the Smart Agent in an automated fashion. This setup ensures that the Smart Agent is correctly configured for your environment.

Prerequisites:
1. Ansible installed: make sure Ansible is installed on the machine where you are running the playbook.
2. Sudo privileges: the playbook requires sudo (root) privileges to execute tasks.
3. Download URL: you need a valid download URL for the AppDynamics Smart Agent. You can get this from the AppDynamics download site (replace the provided URL with your own).

Steps to Set Up the Ansible Playbook

1. Directory Structure
Create a directory structure for the Ansible playbook as follows (the files referenced in the steps below):
inventory
playbook.yml
roles/
  smart_agent/
    tasks/
      main.yml

2. Inventory File
Define your target machine (localhost in this case) in the inventory file:
[appd_agents]
localhost ansible_connection=local

3. Ansible Playbook (playbook.yml)
The main playbook references the smart_agent role. Ensure become: true is set to allow privilege escalation for necessary tasks:
---
- hosts: appd_agents
  become: true
  roles:
    - smart_agent

4. Role Tasks (roles/smart_agent/tasks/main.yml)
The tasks in this role download, unarchive, configure, and start the Smart Agent:
- name: Download AppDynamics Smart Agent using curl
  command: >
    curl -L -O -H "Authorization: Bearer <YOUR_AUTH_TOKEN>"
    "https://download.appdynamics.com/download/prox/download-file/appdsmartagent/<version>/appdsmartagent_64_linux_<version>.zip"
  args:
    chdir: /tmp

- name: Unarchive the Smart Agent zip
  unarchive:
    src: /tmp/appdsmartagent_64_linux_<version>.zip
    dest: /opt/appdynamics/
    remote_src: yes

- name: Configure Smart Agent in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: 'ControllerURL\s*=\s*.*'
    replace: 'ControllerURL=https://xxxxx.saas.appdynamics.com'

- name: Set ControllerPort in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: 'ControllerPort\s*=\s*.*'
    replace: 'ControllerPort=443'

- name: Set FMServicePort in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: 'FMServicePort\s*=\s*.*'
    replace: 'FMServicePort=443'

- name: Set AccountAccessKey in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: '^AccountAccessKey\s*=\s*.*'
    replace: 'AccountAccessKey=<YOUR_ACCOUNT_ACCESS_KEY>'

- name: Ensure AccountName is set in the main section of config.ini
  lineinfile:
    path: /opt/appdynamics/config.ini
    regexp: '^AccountName\s*='
    line: 'AccountName=xxxxx'
    insertafter: '^ControllerPort\s*=.*'

- name: Enable SSL in config.ini
  replace:
    path: /opt/appdynamics/config.ini
    regexp: 'EnableSSL\s*=\s*.*'
    replace: 'EnableSSL=true'

- name: Start Smart Agent
  shell: /opt/appdynamics/smartagentctl start --service > /tmp/log.log 2>&1
  become: yes
  register: output

5. Replace the Download URL and Other Controller Parameters
Replace the download URL placeholder with your own Smart Agent download URL from the AppDynamics download site. In the command task for downloading the Smart Agent, replace <YOUR_AUTH_TOKEN> with your AppDynamics authentication token and replace <version> with the appropriate version for your Smart Agent.
For example:

- name: Download AppDynamics Smart Agent using curl
  command: >
    curl -L -O -H "Authorization: Bearer YOUR_AUTH_TOKEN"
    "https://download.appdynamics.com/download/prox/download-file/appdsmartagent/24.8.0.551/appdsmartagent_64_linux_24.8.0.551.zip"
  args:
    chdir: /tmp

Also replace the ControllerURL, ControllerPort, AccountAccessKey, and AccountName values with your own credentials.

6. Running the Playbook

To run the playbook, execute the following command:

ansible-playbook -i inventory playbook.yml

Conclusion

This Ansible playbook simplifies downloading, configuring, and running the AppDynamics Smart Agent on a Linux system. Make sure to replace the download URL and account details with your specific values, and you'll have the agent up and running in no time.
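After the playbook finishes, a quick sanity check is to inspect the startup log captured by the last task; checking the service status is a reasonable extra step if smartagentctl registered a systemd unit (the service name appdsmartagent is an assumption here, based on the service name created on Windows, and is not confirmed for Linux):

tail -n 20 /tmp/log.log
systemctl status appdsmartagent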
Implementing and Managing Auto Instrumentation with AppDynamics Cluster Agent

The Cluster Agent uses the init-container approach to instrument apps based on the rules you define. It can be used to specifically target apps that belong to a namespace, contain a specific label, or match a specific deployment or container name. The Cluster Agent can also be configured to automatically push one of the three APM agents, i.e. the Java, .NET Core, or Node.js APM agent.

Looking at the requirements for the Cluster Agent, there are no details on how much additional resource is required when Cluster Agent auto instrumentation is enabled. This is because of the way the auto instrumentation is done. Technically speaking, a single instance of the Cluster Agent is capable of instrumenting an effectively unlimited number of deployments, but in general, as mentioned in the AppDynamics documentation, the Cluster Agent requires 50 MB of memory and 100 millicores of CPU for every 100 pods.

Steps involved in auto instrumentation:

1. The Cluster Agent is deployed in the environment and begins checking Deployments, StatefulSets, and ReplicaSets that conform to the configured instrumentation rules.
2. The Cluster Agent modifies the matching Deployments/StatefulSets, sets their instrumentation status to pending, and adds the init container and the environment variables required for the agent to connect to the controller.
3. The rollout of these modified Deployments or StatefulSets happens, and the Cluster Agent generally performs two depth checks on the newly created pods to ensure auto instrumentation is complete:
   - The first depth check verifies the agent binary exists in the /opt/appdynamics-java, /opt/appdynamics-nodejs, or /opt/appdynamics-dotnetcore folder.
   - The second depth check verifies that the APM agent node logs exist, for example /opt/appdynamics-java/ver*/logs/. It also performs a controller API check to confirm the node name exists in the UI.
4. If both checks succeed and the rollout is complete, the annotation of the deployment is updated from pending to successful. If not, it is marked failed and the Cluster Agent won't re-target it again.

The init containers are spawned with fixed memory and CPU requests and limits that cannot be modified, i.e. they are hardcoded. They are hardcoded because the init container's lifecycle lasts only until the agent binary is copied to the new pod; in practice the init container exits before the actual container is loaded and started. Also, since the actual container is always bigger than the init container, the scheduler will always schedule pods based on the actual container's requirements, which are expected to exceed the init container's. That said, the rollout strategy does affect the total memory requirement, since the Cluster Agent may create pods that are spawned with additional CPU and memory requirements at pod start. This should, however, be similar to when the app is upgraded, as the same rollout strategy comes into play there as well.
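To check what the Cluster Agent recorded for a given workload after a rollout, inspecting the deployment's annotations is a quick sanity check. A hypothetical sketch with placeholder names (the exact annotation keys vary by Cluster Agent version):

kubectl get deployment <deployment-name> -o jsonpath='{.metadata.annotations}'
kubectl describe deployment <deployment-name> | grep -i appd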
Troubleshooting Agent Registration Failures Due to Unauthorized Errors in Observability Systems

There can be a multitude of reasons why an agent is unable to register with the controller, but if the agent logs an HTTP 401 or Unauthorized error, the network can safely be ruled out as the issue: this response is usually returned by the controller when the request does not fit the criteria the controller allows. Unauthorized errors happen when an agent tries to register with improper credentials, or when the controller has a scope on the license rule that denies the agent registration.

The four distinct parameters which allow the agent to connect to the controller are:

Controller host name
Controller port
Access key
Account name

If the controller is TLS enabled, then the ssl-enabled flag is crucial, and the port typically changes as well. Each agent takes these parameters differently, so please refer to the documentation on how to configure each agent:

Python
Java
Machine agent
.NET

However, some agents are configured differently. For the Cluster Agent, first create the secret in the appdynamics namespace:

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key='<access-key>' --from-literal=api-user='<username@account:password>'

If the above is not done, then the Cluster Agent YAML must have these details populated.

If the configuration on the agent side is not the issue, then this can be an issue with the controller license, i.e. the access key you have used. Please ensure that the access key has valid units of a license, and that no Application or Server scope is blocking registration between the agent and the controller. A quick way to test is to create a new license rule from the default and assign just one unit of license to that rule. Then use its access key and try registering the agent. If it succeeds, the license rule used before was the culprit.

If this still doesn't help, please raise a case with Support and provide screenshots of the agent configuration, the logs from the agent end, and a screenshot of the license rule showing the available units and scopes.
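As an illustration of how these parameters are passed for one agent type, the Java agent accepts them as JVM system properties. These property names are the documented Java agent settings; the agent path and placeholder values below are assumptions for the example:

java -javaagent:/opt/appdynamics/javaagent.jar \
  -Dappdynamics.controller.hostName=<controller-host> \
  -Dappdynamics.controller.port=443 \
  -Dappdynamics.controller.ssl.enabled=true \
  -Dappdynamics.agent.accountName=<account-name> \
  -Dappdynamics.agent.accountAccessKey=<access-key> \
  -jar app.jar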
Resolving Issues with Missing Hardware and Custom Metrics in Server Visibility Agents

SIM machine agents, a.k.a. Server Visibility agents, are used to publish Hardware metrics for the underlying nodes or servers that host applications. One SIM Machine Agent may correlate to multiple APM agents. SIM machine agents may also host custom extensions that use the Machine Agent to piggyback custom metrics to the controller. In these scenarios, the metrics play a pivotal role in Application-to-Infrastructure correlation or custom app monitoring via custom extensions. A loss of metrics from the Machine Agent means an actual loss of monitoring, and can thereby cause real revenue loss if not detected and remediated in time.

The loss of metrics can have various causes, such as exceeding the metric limits of the agent or the controller, loss of connectivity, the Machine Agent process being starved of CPU cycles, or issues with memory optimization of the Machine Agent. This article, however, specifically covers the case where the Machine Agent is running on the server and we see the non-metric properties (total vCPU count, total memory, and other details such as server tags, which are not metrics, i.e. not variables over time) but not the Hardware or custom metrics that the agent can report.

Metric limits being hit

Metric limits exist both on the controller and on the agent side. By default, the MA can publish a maximum of 450 metrics, which may not be sufficient if there are a lot of volumes, networks, or process classes configured for the Machine Agent. Fortunately, the agent-side metric limit can be quickly overridden as described in the docs at https://docs.appdynamics.com/appd/24.x/24.8/en/application-monitoring/administer-app-server-agents/metrics-limits. The controller also has limits on the account, the application, and the total number of custom metrics that can be registered. If you notice metrics not being registered, you need to increase the corresponding limit on the controller.

Issues with the SIM extension/module being initialized

When the MA is started with the SIM flag set to true, the MA first verifies that a license exists. If it does, the MA registers and then makes some API calls: first for the controller server time, and second to check whether the MA is enabled for monitoring in the controller (i.e. not disabled). Once these succeed, it enables the ServerMonitoring extension, which is present by default in every Machine Agent binary. If the ServerMonitoring files are corrupt or have an indentation issue, you may see a warning in the MA logs:

WARN UriConfigProvider - Could not deserialize configuration at file:<MA_HOME>/extensions/ServerMonitoring/conf/ServerMonitoring.yml
com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning for the next token
found character '\t(TAB)' that cannot start any token. (Do not use \t(TAB) for indentation)
.
.
.
at [Source: (byte[])"# WARNING: Before making any changes to this file read the following section carefully
#
# After editing the file, make sure the file follows the yml syntax. Common issues include
# - Using tabs instead of spaces
# - File encoding should be UTF-8
#
# The safest way to edit this file is to copy paste the examples provided and make the
# necessary changes using a plain text editor instead of a WYSIWYG editor.
The above example was captured when the ServerMonitoring.yml file was modified using tabs instead of spaces (tabs are invalid in a YAML file). The general idea is that the ServerMonitoring extension must initialize for any data to be sent by it. Checking these points will give you a better idea of why an incomplete set of metrics was sent to the controller by a Machine Agent.
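A minimal sketch of both remediations described above, assuming the default <MA_HOME> layout (the maxMetrics system property is the documented agent-side limit override; the value 1000 is just an example):

grep -P '\t' <MA_HOME>/extensions/ServerMonitoring/conf/ServerMonitoring.yml    # any match means tabs crept into the YAML
java -Dappdynamics.agent.maxMetrics=1000 -jar <MA_HOME>/machineagent.jar        # start the MA with a higher limit than the 450-metric default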
Step-by-Step Guide to Setting Up AppDynamics Smart Agent on Windows Systems

Go to the AppDynamics Downloads area in Accounts. On the Agent tab, under "Type", select "Agent Management" and then download the AppDynamics Smart Agent for Windows.

On Windows, you require Administrator access to start the Smart Agent; therefore, you cannot start the Smart Agent as a plain process in Windows.

Once you have downloaded the Smart Agent on the Windows box, unzip the content. As of version 24.8.0 of AppDynamics Smart Agent, the unzipped content includes, among other files, the config.ini file and the smartagentctl executable used below.

You need to edit the config.ini file, specifically the below section:

ControllerURL=<Your-AppDynamics-Controller-Url>
ControllerPort=<Your-AppDynamics-Controller-port>
FMServicePort=<Your-AppDynamics-Controller-port>
AgentType=<Let-this-be-null>
AccountAccessKey=<Your-AppDynamics-Controller-accessKey>
AccountName=<Your-AppDynamics-Controller-Account-name>
EnableSSL=<True-If-SSL-Is-Enabled-Else-False>

Once this is edited, open the CMD prompt, go to the directory where the Smart Agent was downloaded, and run:

smartagentctl start --service

Once this is done, the Smart Agent should be installed as a service named "appdsmartagent". You can confirm this from Task Manager. Now, if you go to the AppDynamics Controller UI, under Agent Management -> Smart Agent, you will be able to see the Smart Agent installed.
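You can also confirm the service from the command prompt instead of Task Manager; this uses the standard Windows sc utility against the service name created above (an illustrative extra check, not part of the original procedure):

sc query appdsmartagent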
AppDynamics Cluster Agent allows you to auto-instrument your applications running on Kubernetes. The auto-instrumentation injects APM agents at runtime, modifying your deployment spec with an init container for the AppDynamics APM agent. You can use different strategies to target Kubernetes Deployments, StatefulSets, or DeploymentConfigs. In this article, we will cover instrumenting one deployment by using a label defined at the Deployment level (it works the same for DeploymentConfig and StatefulSet).

Let's take this forward with two sample deployments running in namespace abhi-java-apps-second.

My first deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-app-abhi
  labels:
    app: tomcat-app-abhi-second
  namespace: abhi-java-apps-second
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-app-abhi
  template:
    metadata:
      labels:
        app: tomcat-app-abhi
    spec:
      containers:
        - name: tomcat-app-abhi
          #image: docker.io/abhimanyubajaj98/java-tomcat-sample-app-buildx:latest
          image: docker.io/abhimanyubajaj98/java-application:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: JAVA_TOOL_OPTIONS
              value: -Xmx512m
            #- name: APPDYNAMICS_AGENT_UNIQUE_HOST_ID
            #  value: $(cat /proc/self/cgroup | head -1 | awk -F '/' '{print $NF}' | cut -c 16-27)
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-app-service
  labels:
    app: tomcat-app-abhi
  namespace: abhi-java-apps-second
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: tomcat-app-abhi

My second deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-app-abhi-labelmatch
  labels:
    app: tomcat-app-abhi-second-labelmatch
  namespace: abhi-java-apps-second
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-app-abhi-labelmatch
  template:
    metadata:
      labels:
        app: tomcat-app-abhi-labelmatch
    spec:
      containers:
        - name: tomcat-app-abhi
          #image: docker.io/abhimanyubajaj98/java-tomcat-sample-app-buildx:latest
          image: docker.io/abhimanyubajaj98/java-application:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: JAVA_TOOL_OPTIONS
              value: -Xmx512m
            #- name: APPDYNAMICS_AGENT_UNIQUE_HOST_ID
            #  value: $(cat /proc/self/cgroup | head -1 | awk -F '/' '{print $NF}' | cut -c 16-27)
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-app-service-labelmatch
  labels:
    app: tomcat-app-abhi-labelmatch
  namespace: abhi-java-apps-second
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: tomcat-app-abhi-labelmatch

Now, my use case is to instrument only the deployment tomcat-app-abhi-labelmatch. To do this, I would need to edit my cluster-agent.yaml and add the below spec:

instrumentationRules:
  - namespaceRegex: abhi-java-apps-second
    labelMatch:
      - app: tomcat-app-abhi-second-labelmatch
    tierName: abhiapps
    language: java
    imageInfo:
      image: "docker.io/appdynamics/java-agent:latest"
      agentMountPath: /opt/appdynamics
      imagePullPolicy: Always

Now, after the deployment is done, only the deployment tomcat-app-abhi-labelmatch will have the AppDynamics Java agent.
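To verify the injection after the Cluster Agent reconciles, you can list the init containers of the pod from the instrumented deployment (plain kubectl; the selector below uses the pod template label from the second deployment above):

kubectl -n abhi-java-apps-second get pods -l app=tomcat-app-abhi-labelmatch -o jsonpath='{.items[0].spec.initContainers[*].name}'

The first deployment's pods should show no init containers, while the labelmatch deployment's pods should show the AppDynamics agent init container.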
Simplify Application Performance Troubleshooting with Log Observer Connect for AppDynamics

Log Observer Connect for AppDynamics allows you to access the right logs in Splunk with just one click, all while providing troubleshooting context from AppDynamics. By integrating Splunk's powerful log analytics with AppDynamics, Log Observer Connect lets you perform in-context log troubleshooting, quickly pinpoint issues, and centralize logs across teams for a single source of truth. If you want to streamline your troubleshooting process and keep everything running smoothly, this integration could be a game-changer.

Getting started is as straightforward as taking five steps:

1. Ensure there is an appropriate Splunk service account to access the relevant Splunk-gathered logs. This means creating a new role with the edit_tokens_own and search capabilities, then assigning that role to an existing or new Splunk user that will be used to create the connection within AppDynamics. A new user is recommended to serve as a service account.
2. Configure the Splunk Universal Forwarder to send application metadata.
3. Configure the AppDynamics Agents (currently Java, .NET, and Node.js) to enrich log data by setting the appdynamics.enable.log.metadata.enrichment system property (see the example at the end of this article).
4. Configure Cisco AppDynamics for Log Observer Connect by using the (new) user with the new role created previously.
5. Allow Splunk Cloud Platform IP addresses to be used by Cisco AppDynamics.

*NOTE: These steps are high level, and our Integration Steps documentation should be consulted when completing this integration.

Configuration Integration Demo Video

Additional Resources

Take a self-guided test drive with Log Observer Connect
Log Observer Connect One Pager
Link to Website
Speak to an expert?
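For step 3 above, a minimal sketch of what enabling the enrichment property can look like for a Java service (the agent path, the =true value, and the jar name are assumptions for illustration; consult the Integration Steps documentation for your agent and version):

java -javaagent:/opt/appdynamics/javaagent.jar \
  -Dappdynamics.enable.log.metadata.enrichment=true \
  -jar my-service.jar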