All TKB Articles in Learn Splunk


For Java-based agents, we often run into connectivity issues when the agent is migrating from an on-prem controller to a SaaS controller, when it is redirected from one controller to another, when the network team introduces a tool such as Zscaler, or after a simple Java upgrade. This is much more pronounced for Java agents than for Machine Agents and Database Agents, where the JRE is usually bundled with the product. The JRE is used to start the JVM and, as the name suggests, the JVM is a virtual machine for Java: it ignores whatever certificate stores exist on the system or node and trusts only what the spawned virtual machine has access to. For Java, the key to establishing a TLS connection with any server is that the server's public certificate must be trusted. For ease of consumption I will split this article into two sections, as the JRE is usually in control for MA and DB agents.

For Java agents

The key parameter the JVM uses to establish trust with a server is the truststore passed to the JVM. Ideally this should contain not just the certificates for the controller but for any secured service the JVM will connect to. Quite often developers pass a custom truststore to the JVM. This is the root of the issue in most scenarios: by passing a truststore to the JVM we are putting blinders on it, so the JVM will not trust the JRE default certificates (usually at JRE/lib/security/cacerts) and will only trust the certificates in the passed truststore. A quick check of the JVM args will show whether a custom truststore is being passed; if not, the default JRE cacerts file is the truststore.

If a Java agent is not connecting to the controller, add the public certificate of the controller to the truststore. If no custom truststore is being passed to the JVM, check the JRE's cacerts file and use any tool to open it and confirm that the public certificate offered by the controller is present in the file.

Network Connectivity Issue

Sometimes the certificate is not the issue but network connectivity is. For that we use a neutral tool such as keytool, which exists within each JRE/bin folder, to test connectivity as well as show the entire TLS handshake on the terminal. Simply run the following:

keytool -printcert -sslserver <host>:<port> -J-Djavax.net.debug=ssl

If a proxy is involved, you can add proxy details with extra JVM args such as -J-DproxyHost and -J-DproxyPort. For example:

keytool -printcert -sslserver <insertyourtenant>.saas.appdynamics.com:443 -J-Djavax.net.debug=ssl -J-DproxyHost=10.8.9.234 -J-DproxyPort=443

If the above command fails, the network team can be involved to check network connectivity between the server and the controller.

For Machine and Database Agents

Since the JRE is usually bundled, the chance of global root CAs being missing from the JRE is minimized. However, if you have an on-prem secured controller, the TLS handshake will fail until the public certificate is available to the JVM. The easiest option is to create a JKS file holding the public certificates, name it cacerts.jks, and place it in the conf folder. The agents have logic built in to add whatever certificates you specify in the cacerts.jks file.
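As a rough sketch of that last step (the file name controller.pem and the alias appd-controller are illustrative placeholders, not names mandated by the product), the controller's public certificate can be imported into a new cacerts.jks with keytool and then verified:

# Obtain the controller's public certificate first (controller.pem below is an example name),
# then import it into a new JKS truststore named cacerts.jks:
keytool -importcert -alias appd-controller -file controller.pem -keystore cacerts.jks -storepass changeit

# Confirm the entry landed in the store:
keytool -list -keystore cacerts.jks -storepass changeit

Copy the resulting cacerts.jks into the agent's conf folder.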
Please note the file name must be cacerts.jks; if it is read, the logs will mention it, including the actual certificate details when debug logging is enabled.

TLS handshake capture

If you wish to capture in the logs the actual TLS handshake the JVM establishes with the servers, enable TLS debugging and ensure stdout and stderr are actually being written to a file. Please note that for JVMs run as a service the stdout/stderr may be redirected to /dev/null, so we need to restart the JVM and ensure the redirection is in place. The JVM args are:

-Djavax.net.ssl=debug -Djavax.net.debug=all

For example:

nohup /app/appdynamics/machineagent/bin/machine-agent -Dappdynamics.http.proxyHost=10.8.9.234 -Dappdynamics.http.proxyPort=443 -Djavax.net.ssl=debug -Djavax.net.debug=all > /app/appdynamics/machineagent/bin/nohup.out 2>&1 &
Introduction

In Kubernetes, securing communications between different components is essential for maintaining the integrity and confidentiality of your applications. Certificates play a pivotal role in this security ecosystem, enabling encrypted communication and authentication, and ensuring that data remains tamper-proof. However, as Kubernetes environments grow in complexity, tracking which certificates are used by specific deployments can become challenging. This guide provides a comprehensive, secrets-focused approach to identifying and verifying the certificates used by a specific Kubernetes deployment, using the AppDynamics Cluster Agent as a practical example.

Understanding Kubernetes Certificates

What Are Certificates in Kubernetes?
Certificates in Kubernetes are digital documents used to secure communications between different components, such as API servers, nodes, and deployed applications. These certificates are typically X.509 certificates and are used for encryption, authentication, and integrity checks. In Kubernetes, certificates can be issued by a Certificate Authority (CA) and are often managed automatically, although there are scenarios where manual intervention is required.

What Are Secrets in Kubernetes?
Kubernetes Secrets are objects used to store sensitive information, such as passwords, OAuth tokens, and, importantly, TLS certificates. Storing certificates as secrets allows Kubernetes to manage and distribute them securely to the pods that require them.

Common Certificate Use Cases
Some common uses of certificates in Kubernetes include:
- Securing API Server Communications: Ensuring that communications between the Kubernetes API server and nodes are encrypted.
- Ingress Controllers: Handling SSL/TLS termination for external traffic entering the cluster.
- Mutual TLS (mTLS) Between Services: Encrypting communication between microservices within the cluster.
- Application Routes: Securing external access to applications through routes or services.

Understanding Kubernetes Secrets

Types of Secrets:
- kubernetes.io/tls: Used specifically for storing TLS certificates.
- Opaque: A generic type of secret that can store any key-value pair, including certificates.
- kubernetes.io/dockercfg: Used for storing Docker configuration, particularly for private image registries.
- kubernetes.io/service-account-token: Used to manage service account tokens.

Prerequisites

Before you begin, ensure you have the following:
- kubectl: The command-line tool for interacting with Kubernetes clusters.
- OpenSSL: A tool for working with SSL/TLS certificates.
- Access: Sufficient permissions to access and manage secrets in the Kubernetes cluster.
- AppDynamics Cluster Agent installed: Install the Cluster Agent

Step-by-Step Guide to Identifying Certificates via Secrets

Step 1: List All Secrets in the Namespace
Certificates are stored within secrets. Start by listing all secrets in the relevant namespace to identify potential candidates.

kubectl get secrets -n <namespace>

In our case I have deployed the Cluster Agent in the appdynamics namespace, so I will reference appdynamics in subsequent steps.

Step 2: Filter Secrets Related to Certificates
Kubernetes secrets that store TLS certificates usually have the type kubernetes.io/tls. Filter secrets by this type to narrow down your search.

kubectl get secrets -n appdynamics -o json | jq '.items[] | select(.type=="kubernetes.io/tls") | .metadata.name'

AppDynamics does not create a TLS secret by default, so the above should return nothing.
Step 3: Check Other Secrets
If no kubernetes.io/tls secrets are found, inspect other secrets in the namespace to see if they contain certificates. This command lists all the secrets in the namespace, including those that might contain certificate data.

kubectl get secrets -n appdynamics
NAME                                        TYPE                                  DATA   AGE
appdynamics-cluster-agent-dockercfg-zvglk   kubernetes.io/dockercfg               1      30m
appdynamics-cluster-agent-token-5dt8s       kubernetes.io/service-account-token   4      30m
appdynamics-infraviz-dockercfg-f9ncg        kubernetes.io/dockercfg               1      30m
appdynamics-infraviz-token-5qpfs            kubernetes.io/service-account-token   4      30m
appdynamics-operator-dockercfg-v9k7k        kubernetes.io/dockercfg               1      30m
appdynamics-operator-token-478lb            kubernetes.io/service-account-token   4      30m
builder-dockercfg-wkdnm                     kubernetes.io/dockercfg               1      30m
builder-token-nvjmq                         kubernetes.io/service-account-token   4      30m
cluster-agent-secret                        Opaque                                1      30m
default-dockercfg-6dn5t                     kubernetes.io/dockercfg               1      30m
default-token-m7hkp                         kubernetes.io/service-account-token   4      30m
deployer-dockercfg-25ws4                    kubernetes.io/dockercfg               1      30m
deployer-token-dd9s9                        kubernetes.io/service-account-token   4      30m
sh.helm.release.v1.abhi-cluster-agent.v1    helm.sh/release.v1                    1      30m

Step 4: Inspect the Content of a Secret
To examine the contents of a specific secret, such as appdynamics-cluster-agent-dockercfg-zvglk, use the following command:

kubectl get secret appdynamics-cluster-agent-dockercfg-zvglk -n appdynamics -o yaml

For example, inspecting the service-account token secret produces output like this:

kubectl get secret appdynamics-cluster-agent-token-5dt8s -n appdynamics -o yaml
apiVersion: v1
data:
  ca.crt: xxxxxxxx
  namespace: YXBwZHluYW1pY3M=
  service-ca.crt: xxxxxx==
  token: xxxxxxx
kind: Secret
metadata:
  annotations:
    kubernetes.io/created-by: openshift.io/create-dockercfg-secrets
    kubernetes.io/service-account.name: appdynamics-cluster-agent
    kubernetes.io/service-account.uid: 582c10bc-b3fb-4435-9adb-7c6dfb25c2ff
  creationTimestamp: "2024-09-03T17:42:20Z"
  name: appdynamics-cluster-agent-token-5dt8s
  namespace: appdynamics
  resourceVersion: "84672"
  uid: a7cc1cba-27e4-4b46-a33d-212614af9cad
type: kubernetes.io/service-account-token

Step 5: Decode and Inspect the Certificate
To check the details of the certificate (e.g., ca.crt), decode it, and then use openssl to view its content:

echo "<base64 content of ca.crt>" | base64 --decode | openssl x509 -noout -text

If the certificate is found and properly decoded, the output will resemble:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: ...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: OU=openshift, CN=kube-apiserver-lb-signer
        ...

(A one-step variant that avoids copying the value by hand is sketched after the conclusion.)

Conclusion
This guide has walked you through identifying the certificates used by a specific Kubernetes deployment, focusing on secrets. By following these steps, you can effectively manage and verify the certificates that secure your Kubernetes environments, ensuring the integrity and confidentiality of your applications.
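As a convenience, here is a hedged one-liner that assumes the same namespace and secret name as the example above; it pulls the ca.crt value straight from the secret, decodes it, and prints the issuer and validity dates (the backslash escapes the dot in the key name for kubectl's jsonpath):

kubectl get secret appdynamics-cluster-agent-token-5dt8s -n appdynamics -o jsonpath='{.data.ca\.crt}' | base64 --decode | openssl x509 -noout -issuer -dates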
Step-by-Step Guide to Creating Custom JMX Attributes in the MBean Browser

1. From the MBean Browser page, select an MBean. Inside it you will have many attributes. Choose the attribute from which you want to create a custom attribute, e.g. ImplementationVersion.
2. Now click on Configure JMX Metrics.
3. Click Add -> JMX Config.
4. Provide a name and description and click Save.
5. Now select the JMX Config and add a Rule. Provide a name for the Rule. Metric Path can be any path that you want the attribute to be reported under. Domain name is the MBean name. Object Name Match Pattern is the Object Name from the MBean. For example:
   Metric Path -> Tomcat Test
   Domain -> JMImplementation
   Object Name Match Pattern -> JMImplementation:type=MBeanServerDelegate
6. Under the Define Metric from MBean Attributes section, define the MBean Attribute, Metric Name, and Metric Getter Chain that you want and save it. For example:
   MBean Attribute -> ImplementationVersion
   Metric Name -> ImplementationVersion
   Metric Getter Chain -> toString().split(\\.).[0]
7. Now go to Node Dashboard -> JMX -> JMX Metrics -> View JMX Metrics. Here, under JMX, you will be able to see the custom JMX attribute that you created.
Choosing the Correct Image when the Agent Version is >= 24

When you are deciding which image to use for the Node.js or Cluster Agent, you can select it based on the title of the image. The naming pattern that AppDynamics uses for the Node.js Agent is intuitive. There are three segments in the tag, with each segment separated by a hyphen:
- The first segment refers to the Agent version
- The second segment refers to the major version of Node.js that the image was built for
- The last segment refers to the Linux distribution that it is compatible with

In the latest version of the Node.js Agent, there are only three variants regarding the distribution: Alma, Alma-Arm64, and Alpine. For all Linux distributions, use the Alma version, unless you're working with Alpine. When deciding between Alma and Alma-Arm64, select Alma-Arm64 for systems with an ARM64 CPU architecture, such as AWS Graviton. For AMD64 systems, choose Alma. Also, please refer to the list below for examples of popular supported Linux distributions.

When choosing an image, the tag "nodejs-agent:24.3.0-14-alpine" indicates that the Agent version is 24.3.0, that Node.js 14 was used to build the image, and that it is intended for an Alpine system.

Popular Supported Linux Distributions:
- RHEL
- Debian
- Ubuntu

Choosing the Correct Image when the Agent Version is < 24

When you are deciding which image to use for the Node.js or Cluster Agent, you can select it based on the title of the image. The naming pattern is the same three-segment tag described above: Agent version, major Node.js version, and Linux distribution, separated by hyphens.

In version 23.x of the Node.js Agent, there are only three variants regarding the distribution: Slim, Stretch-Slim, and Alpine. The Slim and Stretch-Slim versions should be used for every Debian-based Linux distribution, and Alpine should be used for Alpine-based distributions. Slim is the smallest Debian-based image optimized for production use, while Stretch-Slim includes a larger set of packages and dependencies from the Debian "stretch" release. When choosing an image, the tag "nodejs-agent:23.10.0-14-alpine" indicates that the Agent version is 23.10.0, that Node.js 14 was used to build the image, and that it is intended for an Alpine system.
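For illustration only, the pulls below show how the tag segments map to a host type. The registry path and exact tag spellings are assumptions derived from the naming pattern above (check the published repository for the tags that actually exist for your version):

# x86-64 (AMD64) host running RHEL, Debian, or Ubuntu -> Alma variant
docker pull appdynamics/nodejs-agent:24.3.0-14-alma

# ARM64 host (e.g. AWS Graviton) -> Alma-Arm64 variant
docker pull appdynamics/nodejs-agent:24.3.0-14-alma-arm64

# Alpine-based host or base image -> Alpine variant
docker pull appdynamics/nodejs-agent:24.3.0-14-alpine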
As of [v23.5], AppDynamics APM Java, Node.js, and .NET Agents have been upgraded for dual output, so your existing instrumentation can be consumed by any third-party OpenTelemetry (OTel) capable backend, such as Splunk Observability Cloud. The application itself doesn't need to be written to support OpenTelemetry or be re-instrumented in any way. Once you have updated your AppDynamics agents, you can enable OTel in the agent and start to push telemetry to your OpenTelemetry collector. To push data to Splunk Observability Cloud we recommend the use of the Splunk Distribution of the OTel Collector.

In this article...
- Re-instrumentation options for OpenTelemetry
- Solutions/Processes
  - Cloud native applications
  - Existing and legacy applications
- What are the steps to instrumenting
- Additional resources

Re-instrumentation options for OpenTelemetry

There are three options for enabling OTel in your application:
1. Manually re-instrument your entire application. While this approach necessitates changes to your code, it provides complete flexibility. Since OpenTelemetry is an industry standard and vendor-independent, this code-level instrumentation only needs to be performed once.
2. Automatically instrument with an open-source OpenTelemetry agent, comparable to an AppDynamics agent. You have the option to utilize the standard open-source agents or the Splunk Distribution agent, which offers enhancements and preconfiguration.
3. Auto-instrument with an AppDynamics agent configured to output OpenTelemetry, allowing you to retain your AppDynamics data and workflows while simultaneously emitting OTel data.

Solutions/Processes

While having multiple options is beneficial, it can sometimes be overwhelming and raise questions about the best choice for specific situations. Although there is no universal solution that fits all scenarios, we aim to provide some high-level guidance and recommendations.

Cloud native applications

For new cloud native applications, it is advantageous to instrument each service natively with OpenTelemetry. As mentioned earlier, OpenTelemetry SDKs and APIs are vendor-neutral, allowing you to send telemetry data to various backends. If you are working with an existing cloud native application, you can also apply auto-instrumentation on a per-service basis. Tools like the OpenTelemetry Operator for Kubernetes simplify this process by enabling instrumentation through annotations when running in Kubernetes environments.

Existing and legacy applications

For existing applications, particularly legacy ones, manual re-instrumentation can be a significant effort. In such cases, you can opt to re-instrument using either an open-source agent or an AppDynamics agent.

If your legacy application is already instrumented with AppDynamics, the simplest approach is to reconfigure it to output OpenTelemetry data instead of switching to an open-source agent. This allows you to retain all your existing configurations, troubleshooting workflows, dashboards, alerts, and notifications while adding OpenTelemetry export capabilities. Additionally, this setup enables AppDynamics agents to correlate data between services instrumented with both OpenTelemetry and AppDynamics, providing comprehensive end-to-end visibility.

AppDynamics Smart Agent management features make it easier to install, manage, configure, and maintain compared to managing open-source agents.

What are the steps to instrumenting?
OpenTelemetry can be enabled with a few simple steps; here is an example for Java:

1. Enable OTel using the following configuration flags:

-Dappdynamics.opentelemetry.enabled=true -Dotel.traces.exporter=otlp

2. Set your collector endpoint if the collector is not running on the same host:

-Dotel.exporter.otlp.traces.endpoint=http://<your collector ip>:4317

3. Pass the additional resource attributes to be included; service.name is a required attribute:

-Dotel.resource.attributes="service.name=myServiceName,service.namespace=myServiceNameSpace"

A combined command-line sketch putting these flags together follows the resource list below. More information and exact steps, also for .NET and Node.js, can be found in the Instrument Applications with AppDynamics for OpenTelemetry™ documentation.

Additional resources:
- Enable OpenTelemetry in the Java Agent
- Enable OpenTelemetry in the .NET Agent
- Enable OpenTelemetry in the NodeJS Agent
- Instrument Ruby Application using OpenTelemetry
- AppDynamics Smart Agent
- Splunk Distribution of the OpenTelemetry Collector
- Splunk Observability Cloud
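To make the flags concrete, here is a minimal sketch of a full Java launch command combining them. The agent path, collector host, and application jar are placeholders for illustration, not values taken from the documentation:

java \
  -javaagent:/opt/appdynamics/javaagent.jar \
  -Dappdynamics.opentelemetry.enabled=true \
  -Dotel.traces.exporter=otlp \
  -Dotel.exporter.otlp.traces.endpoint=http://otel-collector.example.com:4317 \
  -Dotel.resource.attributes="service.name=myServiceName,service.namespace=myServiceNameSpace" \
  -jar my-application.jar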
How to Encrypt and Secure Your Machine Agent AccessKey

Encrypting and securing your credentials is of utmost importance. The Machine Agent lets you configure encryption of your AccessKey. Let's go through the steps.

1. Navigate to the <MA-Home> directory and create a keystore with the command below:

jre/bin/java -jar lib/secure-credential-store-tool-1.3.23.jar generate_ks -filename '/opt/appdynamics/secretKeyStore' -storepass 'MyCredentialStorePassword'

This will create the keystore for you. The output should look like:

Successfully created and initialized new KeyStore file: /opt/appdynamics/secretKeyStore

2. Create an obfuscated password to access this keystore:

jre/bin/java -jar lib/secure-credential-store-tool-1.3.23.jar obfuscate -plaintext 'MyCredentialStorePassword'

The output should look like:

s_-001-12-oRQaGjKDTRs=xxxxxxxxxxxxx=

3. Encrypt your AccessKey:

jre/bin/java -jar lib/secure-credential-store-tool-1.3.23.jar encrypt -filename /opt/appdynamics/secretKeyStore -storepass 'MyCredentialStorePassword' -plaintext 'xxxxxx'

The output should look like:

-001-24-mEZsR+xxxxxxxxx==xxxxxxxxxxxx==

4. Now edit the <MA-Home>/conf/controller-info.xml file: update the accessKey and add a few more parameters for the encryption:

<account-access-key>-001-24-mEZsR+nrScSXlewlZbTQgg==xxxxxxxxx==</account-access-key>
<credential-store-password>s_-001-12-xxxxxxx=xxxxxx=</credential-store-password>
<credential-store-filename>/opt/appdynamics/secretKeyStore</credential-store-filename>
<use-encrypted-credentials>true</use-encrypted-credentials>

Great work! You can deploy your Machine Agent now!
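As a final optional sanity check (just a plain grep, not an AppDynamics-specific tool), you can confirm that all four elements made it into the file before restarting the agent:

# Run from <MA-Home>; prints the encrypted key, store password, store path, and the enable flag
grep -E 'account-access-key|credential-store-password|credential-store-filename|use-encrypted-credentials' conf/controller-info.xml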
Overview

The .NET Agent Ignore Exceptions configuration allows you to ignore specific errors reported for a Business Transaction by adding the fully qualified exception class to the .NET ignore exceptions list. This guide provides a comprehensive overview of how to configure and troubleshoot ignore exceptions for the .NET Agent.

Contents
- Introduction
- Configuration Steps
- Troubleshooting
- Sample Configuration
- Additional Resources

Introduction

Ignoring specific exceptions can help streamline your monitoring process by filtering out non-critical errors. This ensures that only relevant issues are brought to your attention, enhancing the efficiency of your application performance management.

Configuration Steps

Step 1: Identify Exception Details
Navigate to the BT (Business Transaction) snapshot Error Details page to find the exception details.

Step 2: Add Exception to Ignore List
Add the fully qualified exception class to the .NET ignore exceptions list. This configuration is applied at the controller application level and affects all registered Business Transactions. Make sure you select the .NET tab under Error Detection to add the ignore exception rule.

Step 3: Add Specific Exception Messages (Optional)
You can specify an exception message to ignore by defining the class of an exception in the exception chain. Note that the match condition is applied only to the root exception of the chain, not to any nested exceptions.

Troubleshooting Ignore Exception Configuration

Reviewing Agent Logs
The location of the .NET Agent log files varies based on the underlying OS (operating system):
- Windows: %programdata%\appdynamics\DotNetAgent\Logs
- Linux: /tmp/appd/dotnet
- Azure Site Extension: %home%\LogFiles\AppDynamics

Verify Ignore Rules
Check whether the ignore rule configurations from the Controller are downloaded in the AgentLog.txt entry. Look for entries like:

Info ErrorMonitor Setting ignore exceptions to :[System.Net.WebException]
Info ErrorMonitor Setting ignore message patterns to :[SM{ex_type=CONTAINS, ex_pattern='The remote server returned an error: (401) Unauthorized', type=CONTAINS, pattern='The remote server returned an error: (401) Unauthorized', inList=System.String[], regexGroups=[]}]

Locate the Exception Key
Ignore exceptions work based on the key sent by the agent for a specific exception. In AgentLog.txt, find the exception key in an entry like:

Info ErrorProcessor Sending ADDs to register [ApplicationDiagnosticData{key='System.Net.WebException:', name=WebException, diagnosticType=ERROR, configEntities=null, summary='System.Net.WebException'}]

Validating Exception Keys
Validate the exception key (e.g., key='System.Net.WebException:') entry seen in the AgentLog.txt file against the ignore exception configuration in your controller application. Modify/correct the configuration in your controller as needed and verify.

Sample Ignore Exception Configuration Scenario

Let's use the System.AggregateException with an inner exception of SmallBusiness.Common.SmallBusinessException as an example. You want to ignore this exception only when the SmallBusiness.Common.SmallBusinessException has a specific message, such as "This is a known issue."
Here's an example of how the System.AggregateException and SmallBusiness.Common.SmallBusinessException might be used in your application:

try
{
    // Some code that might throw an exception
    throw new System.AggregateException(new SmallBusiness.Common.SmallBusinessException("This is a known issue"));
}
catch (System.AggregateException ex)
{
    // Handle the exception
    Console.WriteLine(ex.Message);
}

Fully Qualified Class Name
When dealing with nested exceptions as above, the fully qualified class name includes both the outer and inner exceptions to uniquely identify the specific error scenario.
- Outer exception: System.AggregateException
- Inner exception: SmallBusiness.Common.SmallBusinessException

In this case, the fully qualified class name is:

System.AggregateException:SmallBusiness.Common.SmallBusinessException

Log Entry in Agent Log
When the exception is thrown, you see an entry in AgentLog.txt like this:

Info ErrorProcessor Sending ADDs to register [ApplicationDiagnosticData{key='System.AggregateException:SmallBusiness.Common.SmallBusinessException:', name=AggregateException : SmallBusinessException, diagnosticType=ERROR, configEntities=null, summary='System.AggregateException caused by SmallBusiness.Common.SmallBusinessException: This is a known issue'}]

Ignore Exception Configuration
The match condition is applied only to the root exception of the chain. Here, when the System.AggregateException is thrown with an inner exception of SmallBusiness.Common.SmallBusinessException and the message "This is a known issue," it will be ignored by the .NET Agent. The match condition will not apply to nested exceptions' messages unless they are the root exception.

Here's how the ignore exception rule would look in the Controller configuration:
- Fully Qualified Class Name: System.AggregateException:SmallBusiness.Common.SmallBusinessException:
- Exception Message: Is Not Empty

Corresponding configuration entries in the agent logs:

Info ErrorMonitor Setting ignore exceptions to :[System.AggregateException:SmallBusiness.Common.SmallBusinessException]
Info ErrorMonitor Setting ignore message patterns to :[SM{ex_type=NOT_EMPTY, ex_pattern='', type=NOT_EMPTY, pattern='', inList=System.String[], regexGroups=[]}, SM{ex_type=NOT_EMPTY, ex_pattern='', type=NOT_EMPTY, pattern='', inList=System.String[], regexGroups=[]}]

Additional Resources
- AppDynamics Errors and Exceptions
- AppDynamics Error Detection
Streamline Troubleshooting with Log Observer Connect: AppDynamics + Splunk Integration

CONTENTS | Introduction | Video | Resources | About the presenter

Video length: 3 min 27 seconds

Log Observer Connect for AppDynamics helps your team quickly identify and resolve issues by integrating full-stack APM with Splunk's log analysis. This integration centralizes log collection in Splunk and allows for contextual analysis within AppDynamics, streamlining troubleshooting and reducing operational costs.

Watch this demo by Leandro, a Cisco AppDynamics Advisory Sales Engineer, to see it in action.

Additional Resources

Learn more about Log Observer Connect in the blog and documentation, including:
- Introducing Log Observer Connect for AppDynamics
- Log Observer Connect Documentation

About the presenter: Leandro de Oliveira e Ferreira

Leandro is an Advisory Sales Engineer at Cisco, having joined the company in 2021. With a decade of experience in the observability space, he has honed expertise in OpenTelemetry, Java, Python, and Kubernetes. Throughout his career, he has been instrumental in guiding clients from various industries through their digital transformation challenges. Before joining Cisco, Leandro held key roles at IBM, CA Technologies, and Broadcom, where he contributed significantly to advancing observability practices across complex environments.
Leverage these resources to set up your free 30-day trial and master AppDynamics

- Watch the video Unlock the Benefits and Product Value of AppDynamics
- Read up on getting started with this Deployment Planning Guide
- Set up your free Cisco U. eLearning account here and discover the AppDynamics Learning Path
- Join the AppDynamics Community: join discussions, ask questions, deep dive into technical knowledge base articles, and learn from other customers
- Access help and support for additional assistance

Watch these video series:

- Watch the Success Tips video series here
- View the videos on Introduction to AppDynamics:
  - Introduction to Monitoring: Learn the essentials of application monitoring with Cisco AppDynamics.
  - Managing Business Transactions: Apply best practices for Business Transactions configuration.
  - Troubleshooting Tools: Explore troubleshooting techniques to identify and resolve issues quickly.

See how customers are using AppDynamics:

- Retail Use Case: See how Carhartt, a leading retailer, transforms their business with enhanced connectivity for staff and superior experiences for customers. See Carhartt story
- Government Success Story: Learn how Indiana's Office of Technology improved time to resolution, lowered costs, and enhanced resilience with end-to-end visibility. See Indiana story
- Hospitality Sector Case Study: Explore how Royal Caribbean created exceptional experiences that guests can count on from booking to boarding by improving performance of business-critical applications. And they reduced mean time to resolution (MTTR) by 50%. See Royal Caribbean story
In today's fast-paced and highly competitive business landscape, organizations rely on robust and efficient enterprise resource planning (ERP) systems to streamline operations, enhance productivity, and drive growth. SAP is one of the leading ERP solutions adopted by enterprises worldwide due to its comprehensive suite of applications that cater to various business needs, including finance, logistics, human resources, and supply chain management. However, the complexity and criticality of SAP environments necessitate continuous monitoring to ensure optimal performance, security, and compliance.

Monitoring an SAP environment involves tracking system health, performance metrics, and user activities to identify potential issues before they escalate into significant problems. This proactive approach not only helps maintain system reliability and efficiency but also safeguards sensitive business data and supports regulatory compliance.

Comprehensive Visibility

SAP environments are inherently complex, comprising multiple interconnected components that collectively support critical business functions. This complexity often results in fragmented visibility, making it challenging for IT teams to monitor the entire ecosystem effectively.

While SAP's native monitoring tools such as Solution Manager, CCMS monitoring, and Focused Run are robust and well-suited for managing SAP-specific components, they come with certain limitations that can be challenging for organizations with heterogeneous IT landscapes. They lack comprehensive visibility into non-SAP components, third-party applications, and external services that interact with the SAP environment. This leads to fragmented monitoring and potential blind spots, such as the inability to trace transactions end-to-end, making it difficult to diagnose performance bottlenecks or errors that span beyond SAP components. SAP Basis administrators use a variety of transaction codes to troubleshoot an issue.

AppDynamics excels in delivering comprehensive visibility across both SAP and non-SAP components, ensuring that every aspect of the system is monitored and optimized. AppDynamics allows for end-to-end tracing of business transactions as they flow through various components of the SAP environment. This means that every user action, from the initial request to the final response, can be tracked across different modules, databases, and external services, all in real time while being baselined. This granular level of visibility helps in pinpointing exactly where performance bottlenecks or errors occur, enabling faster and more accurate troubleshooting. Most of this is done in the background with no user interaction, then laid out in various ways for easy identification of issues.

Daily/Monthly/Quarterly/Yearly Checklists

By default, SAP systems, like all major systems, need to be looked after to prevent issues from stacking up. These checklists cover various areas within the system, typically including hardware resources, processing utilization, job execution, and updates. The checks are performed at least on production systems and generally take about 10 to 15 minutes per system. They ensure the smooth operation of an SAP system and help identify and resolve potential issues before they impact business operations. When problems are detected, it often requires manual troubleshooting and correlation to determine the necessary actions. In such cases, the functional team is involved to coordinate and implement corrective measures.
AppDynamics takes a proactive stance to monitoring these systems, with 35+ dashboards and 350+ metrics/KPIs out of the box. These checks can now be automated, removing the human error factor. AppDynamics supplies default Health Rules on key SAP system metrics and events for faster set-up of alerts, effectively switching the environment to being proactive. This gives your organization the ability to identify and resolve issues much faster, ensuring that the SAP system runs smoothly and efficiently. This process of maintenance helps preserve system stability, performance, and security, minimizing the risk of disruptions to business operations and ensuring your systems operate efficiently and keep up with end-user demand.

DB-Specific Support

SAP provides various tools and functionalities to monitor the databases supporting its environments. While these tools offer valuable insights, they also come with certain boundaries. These native SAP tools often lack comprehensive user experience monitoring capabilities, which are crucial for understanding the end-to-end performance impact on users. While some historical data analysis is available, it may not be as extensive or detailed as some companies need. SAP has been pushing its customers to move to HANA for some time, announcing that support for its ERP systems will end in 2027 unless you move to S/4HANA. This is no small task, and the SAP management tools leave much to be desired.

AppDynamics supports monitoring a wide range of databases commonly used in SAP environments. This ensures comprehensive visibility and performance management across the entire IT landscape. AppDynamics provides several out-of-the-box dashboards dedicated to databases, with 8 specifically for SAP HANA®, greatly reducing the learning curve. This ensures organizations can maintain optimal performance, reliability, and efficiency in their SAP environments.

Final Thoughts

By leveraging the advanced monitoring, alerting, and automation capabilities of AppDynamics, organizations can significantly reduce or even eliminate many of the manual tasks a Basis person needs to do. AppDynamics provides continuous, real-time visibility into system performance, automates diagnostics and reporting, and proactively alerts IT teams to potential issues down to the code level if needed, greatly reducing the time a developer typically gets involved. This not only enhances system reliability and performance but also frees up valuable time for IT staff to focus on strategic initiatives and innovation, rather than routine maintenance tasks.
Having the flexibility to use a different AccessKey for different applications you auto-instrument with the Cluster Agent is essential. This flexibility was introduced with the latest versions of the Cluster Agent. Follow these steps:

1. Un-instrument your application.

2. Create a new secret with the AccessKey using the command below:

kubectl -n appdynamics create secret generic <secret-name> --from-literal=<custom-Controller-key-name>=<key-value>

In this command <key-value> will be your AccessKey. For example:

kubectl -n appdynamics create secret generic abhi-java-apps --from-literal=controller-key=xxxxx-fb91-4dfc-895a-xxxxx

3. I modified my yaml, and in the specific instrumentationRule section I added:

- namespaceRegex: abhi-java-apps
  language: java
  matchString: tomcat-app-abhi-non-root
  appNameLabel: app
  runAsUser: 1001
  runAsGroup: 1001
  customSecretName: abhi-java-apps
  customSecretKey: controller-key
  imageInfo:
    image: "docker.io/appdynamics/java-agent:latest"
    agentMountPath: /opt/appdynamics
    imagePullPolicy: Always

customSecretName is the secret name and customSecretKey is the key for that secret.

4. After this, I re-instrumented my application, and in the Cluster Agent logs I confirmed:

[INFO]: 2024-08-19 14:45:09 - deploymenthandler.go:262 - custom secretName is %s and is %s %!(EXTRA string=abhi-java-apps, string=controller-key)

5. Also, when I exec'd inside the application pod and ran env | grep -i Access, I confirmed that this AccessKey is used:

wso2carb@tomcat-app-abhi-non-root-5d558dddf4-rllzc:/usr/local/tomcat/webapps$ env | grep -i Access
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=xxxxx-fb91-4dfc-895a-xxxxx
JAVA_TOOL_OPTIONS=-Xmx512m -Dappdynamics.agent.accountAccessKey=xxxxx-fb91-4dfc-895a-xxxxx -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar

A quick way to double-check the stored key outside the pod is sketched below.

Additional Resources
Use Custom Access Key
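For completeness, here is a small verification sketch that reads the key back from the Kubernetes secret itself, assuming the same secret and key names as the example above, so you can compare it with what the pod reports:

# Decode the access key stored in the custom secret
kubectl -n appdynamics get secret abhi-java-apps -o jsonpath='{.data.controller-key}' | base64 --decode; echo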
It is important to have visibility into how users interact with your brand through websites or mobile apps to find opportunities for increased user retention, brand loyalty, and business growth. Digital experience monitoring gives you the power to measure every touch point a user has with your website or mobile app.

Why Digital Experience Monitoring (DEM) is important and why you should care

Digital Experience Monitoring (DEM) is essential for understanding and optimizing how users interact with your digital platforms and is a key part of any modern observability strategy. By tracking metrics in real time, DEM provides insights into performance, usability, and overall user satisfaction, enabling you to identify and resolve issues in your apps faster.

Splunk Observability Digital Experience Monitoring Components Highlight

- Performance Monitoring: Tracks load times, page and route delays, uptime, and error rates to ensure fast and reliable performance. Useful to catch errors, or to understand whether a particular browser or location is involved.
- User Behavior Analytics: Analyzes user interactions to understand how users navigate and engage with your website or mobile web app. In this example, our integration automatically captures Core Web Vitals metrics. To learn more about Core Web Vitals, go here: https://developers.google.com/search/docs/appearance/core-web-vitals
- Synthetic Monitoring: Simulates user interactions to identify potential issues before they affect real users. Splunk Observability offers a full suite of synthetic tests; you can import tests from your Chrome browser or write scripts to simulate user navigation. It is possible to create detectors and alarm on important KPIs related to your specific test.
- Real User Monitoring (RUM): Collects data from actual user sessions to provide insights into real-world performance and user experience. The trace view allows you to explore client and server traces, and advanced filtering makes it easy to filter by any dimension you need.
- Digital experience analytics: Allows customers to quantify user happiness and turn those user experience insights into tangible business outcomes.

Cisco's combined Digital Experience Monitoring portfolio provides comprehensive tools for monitoring and optimizing digital experiences. In this example we can see how the platform automatically matches traces from the browser with APM, providing easy navigation and access to end-to-end tracing information. Automatically capture every user interaction and correlate data with session replay, giving you quick access to metrics, logs, traces, and session replay.

Does your company have a digital experience monitoring strategy? Reach out, make a comment here, and we will try to guide you on how Cisco can help.

Learn more:
- Explore Splunk workshops: https://splunk.github.io/observability-workshop/latest/en/index.html
- Explore Splunk Digital Experience Docs: https://docs.splunk.com/observability/en/rum/intro-to-rum.html
- Request a free trial: https://www.splunk.com/en_us/products/observability.html
.NET Core Application Workflow for Agent Business Transaction Detection

Contents
- Who would use this workflow?
- How to check and adjust?
- Resolution

Who would use this workflow?

If you have a .NET Core application and the .NET agent is not detecting any Business Transactions even though the application is under load, you may need to validate the aspdotnet-core-instrumentation node property. The .NET agent, by default, assumes that the application is using the default routing mechanism for its .NET Core version. However, this is not always the case, and in some instances this can prevent the agent from registering Business Transactions.

How to check and adjust?

The simplest way to check is to review the AgentLog file at the locations below, based on the underlying OS:
- For Windows, the default location is %programdata%\appdynamics\DotNetAgent\Logs
- For Linux, the default location is /tmp/appd/dotnet
- For Azure Site Extension, the default location is %home%\LogFiles\AppDynamics

In the AgentLog file there will be a startup log entry that lists the .NET Core version in use as well as the inspected object:

INFO [com.appdynamics.tm.AspDotNetCoreInterceptorFactory] AspNetCore version: 3.1.32.0 (used object type Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.Internal.SocketConnection via Microsoft.AspNetCore.Http.Features.IFeatureCollection [Microsoft.AspNetCore.Http.Features.IFeatureCollection])

Next, the agent writes what the determined routing is:

INFO [com.appdynamics.tm.AspDotNetCoreInterceptorFactory] Determined ASP.NET Core routing being used is Mvc

Here, however, the agent detected the routing as MVC, which was deprecated in .NET Core 3 and is therefore not a viable option. Refer to this AppDynamics Documentation.

Resolution

To resolve this issue, add a node property called aspdotnet-core-instrumentation at the tier level and apply it to all nodes under the tier. Since our application is using 3.1, we have the following options:
- ResourceInvoker
- Endpoint
- HTTP

The different values have their advantages and disadvantages listed here: AppDynamics Docs. If the application routing middleware is heavily customized, HTTP might be the only viable option to ensure the required Business Transactions/entry points are captured. A quick grep for the two log entries mentioned above is sketched below.
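If the agent log is large, a plain grep narrows the search to the two startup entries of interest. This is shown for Linux and assumes the default log directory listed above; the exact AgentLog file name can vary by agent version:

# Show the detected ASP.NET Core version and the routing the agent settled on
grep -E 'AspNetCore version|Determined ASP.NET Core routing' /tmp/appd/dotnet/AgentLog*.txt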
Configuring URL Display in AppDynamics BT/Transaction Snapshots for .NET MVC Web Apps

Issue: The URLs shown in BT/Transaction Snapshots are incomplete.

Goal: Need the full URL to differentiate slow search requests in the system caused by specific user input.
Full URL example: https://host/Search/userInput

Tests: I tested the URL behavior on a .NET MVC web app.

Solutions:

URL display in the URL column: While it's not possible to show the full URL with http://host/, we can display the URL as /Search/userInput.
Reference: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/configure-instrumentation/transaction-detection-rules/custom-match-rules/net-business-transaction-detection/name-mvc-transactions-by-area-controller-and-action#id-.NameMVCTransactionsbyArea,Controller,andActionv23.1-MVCTransactionNaming

Complete URL display in the BT name column: It is possible to display the complete URL https://host/Search/userInput in the BT name.
Reference: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/configure-instrumentation/transaction-detection-rules/uri-based-entry-points

Next Steps:

For the partial URL in the URL column (/Search/userInput):
1. Add App Server Agent Configuration. Set the following .NET Agent Configuration properties to false:
   aspdotnet-core-naming-controlleraction
   aspdotnet-core-naming-controllerarea
2. Restart the AppDynamics.Agent.Coordinator_service and IIS, in that sequence. After that, apply load and check the BT/Snapshot if necessary.

For the complete URL in the BT name (https://host/Search/userInput):
1. Navigate to Configuration > Instrumentation > Transaction Detection in your application.
2. Add a new rule: choose Include, the proper agent type, and Current Entry Point. Fill in the Name field (it will be shown on your BT). Set the Priority higher than the default automatic detection for prioritization.
3. Rule configuration:
   Matching condition: URL is not empty
   Custom Expression: ${HttpRequest.Scheme}://${HttpRequest.Host}${HttpRequest.Path}${HttpRequest.QueryString}
4. Restart the AppDynamics.Agent.Coordinator_service and IIS, in that sequence. After that, apply load and check the BT/Snapshot if necessary.

Additional Information: You can also add the custom expression by modifying the default auto-detection rule instead of adding a new one as I did in the steps above.
https://github.com/Cisco-Observability-TME/ansible-smartagent-install

Introduction

Welcome, intrepid tech explorer, to the ultimate guide on deploying the Cisco AppDynamics Smart Agent across multiple hosts! In this adventure, we'll blend the magic of automation with the precision of Ansible, ensuring your monitoring infrastructure is both robust and elegant. So, buckle up, fire up your terminal, and let's dive into a journey that will turn your deployment woes into a seamless orchestration symphony.

Steps to Deploy Cisco AppDynamics Smart Agent

Step 1: Install Ansible on macOS

Before we embark on this deployment journey, we need our trusty automation tool, Ansible. Follow these steps to install Ansible on your macOS system using Homebrew:

Install Homebrew (if not already installed):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Install Ansible:

brew install ansible

Verify the installation:

ansible --version

You should see output indicating the installed version of Ansible.

Step 2: Prepare Your Files and Directory Structure

The project directory should contain the following files:

├── appdsmartagent_64_linux_24.6.0.2143.deb
├── appdsmartagent_64_linux_24.6.0.2143.rpm
├── inventory-cloud.yaml
├── inventory-local.yaml
├── inventory-multiple.yaml
├── smartagent.yaml
└── variables.yaml

Step 3: Understanding the Files

1. inventory-cloud.yaml, inventory-local.yaml, inventory-multiple.yaml

These inventory files list the hosts where the Smart Agent will be deployed. Each file is structured similarly:

all:
  hosts:
    smartagent-hosts:
      ansible_host: <IP_ADDRESS>
      ansible_username: <USERNAME>
      ansible_password: <PASSWORD>
      ansible_become: yes
      ansible_become_method: sudo
      ansible_become_password: <BECOME_PASSWORD>
      ansible_ssh_common_args: '-o StrictHostKeyChecking=no'

Update these placeholders with your actual host details.

Explanation of smartagent.yaml

This Ansible playbook is designed to deploy the Cisco AppDynamics Smart Agent on multiple hosts. Let's break down each section and task in detail.

Playbook Header

---
- name: Deploy Cisco AppDynamics SmartAgent
  hosts: all
  become: yes
  vars_files:
    - variables.yaml  # Include the variable file

name: Describes the playbook.
hosts: Specifies that the playbook should run on all hosts defined in the inventory.
become: Indicates that the tasks should be run with elevated privileges (sudo).
vars_files: Includes external variables from the variables.yaml file.

Tasks

Ensure required packages are installed (RedHat)

- name: Ensure required packages are installed (RedHat)
  yum:
    name:
      - yum-utils
    state: present
    update_cache: yes
  when: ansible_os_family == "RedHat"

Uses the yum module to install the yum-utils package on RedHat-based systems. The task runs only if the operating system family is RedHat (when: ansible_os_family == "RedHat").

Ensure required packages are installed (Debian)

- name: Ensure required packages are installed (Debian)
  apt:
    name:
      - curl
      - debian-archive-keyring
      - apt-transport-https
      - software-properties-common
    state: present
    update_cache: yes
  when: ansible_os_family == "Debian"

Uses the apt module to install necessary packages on Debian-based systems. This task is conditional based on the operating system family being Debian.

Ensure the directory exists

- name: Ensure the directory exists
  file:
    path: /opt/appdynamics/appdsmartagent
    state: directory
    mode: '0755'

Uses the file module to create the directory /opt/appdynamics/appdsmartagent with the specified permissions.
Check if config.ini exists

- name: Check if config.ini exists
  stat:
    path: /opt/appdynamics/appdsmartagent/config.ini
  register: stat_config

Uses the stat module to check for the existence of config.ini and registers the result in stat_config.

Create default config.ini file if it doesn't exist

- name: Create default config.ini file if it doesn't exist
  copy:
    dest: /opt/appdynamics/appdsmartagent/config.ini
    mode: '0644'
    content: |
      [default]
      AccountAccessKey="{{ smart_agent.account_access_key }}"
      ControllerURL="{{ smart_agent.controller_url }}"
      ControllerPort=443
      AccountName="{{ smart_agent.account_name }}"
      FMServicePort={{ smart_agent.fm_service_port }}
      EnableSSL={{ smart_agent.ssl | ternary('true', 'false') }}
  when: not stat_config.stat.exists

Uses the copy module to create a default config.ini file if it doesn't exist. The content field uses Jinja2 templating to populate the configuration with variables from variables.yaml.

Configure Smart Agent

- name: Configure Smart Agent
  lineinfile:
    path: /opt/appdynamics/appdsmartagent/config.ini
    regexp: '^{{ item.key }}='
    line: "{{ item.key }}={{ item.value }}"
  loop:
    - { key: 'AccountAccessKey', value: "{{ smart_agent.account_access_key }}" }
    - { key: 'ControllerURL', value: "{{ smart_agent.controller_url }}" }
    - { key: 'AccountName', value: "{{ smart_agent.account_name }}" }
    - { key: 'FMServicePort', value: "{{ smart_agent.fm_service_port }}" }
    - { key: 'EnableSSL', value: "{{ smart_agent.ssl | ternary('true', 'false') }}" }

Uses the lineinfile module to ensure specific lines in config.ini are present and correctly configured.

Set the Smart Agent package path (Debian)

- name: Set the Smart Agent package path (Debian)
  set_fact:
    smart_agent_package: "{{ playbook_dir }}/appdsmartagent_64_linux_24.6.0.2143.deb"
  when: ansible_os_family == "Debian"

Uses the set_fact module to define the path to the Smart Agent package for Debian systems.

Set the Smart Agent package path (RedHat)

- name: Set the Smart Agent package path (RedHat)
  set_fact:
    smart_agent_package: "{{ playbook_dir }}/appdsmartagent_64_linux_24.6.0.2143.rpm"
  when: ansible_os_family == "RedHat"

Defines the path to the Smart Agent package for RedHat systems.

Fail if Smart Agent package not found (Debian)

- name: Fail if Smart Agent package not found (Debian)
  fail:
    msg: "Smart Agent package not found for Debian."
  when: ansible_os_family == "Debian" and not (smart_agent_package is defined and smart_agent_package is file)

Uses the fail module to halt execution if the Smart Agent package is not found for Debian systems.

Fail if Smart Agent package not found (RedHat)

- name: Fail if Smart Agent package not found (RedHat)
  fail:
    msg: "Smart Agent package not found for RedHat."
  when: ansible_os_family == "RedHat" and not (smart_agent_package is defined and smart_agent_package is file)

Halts execution if the Smart Agent package is not found for RedHat systems.

Copy Smart Agent package to target (Debian)

- name: Copy Smart Agent package to target (Debian)
  copy:
    src: "{{ smart_agent_package }}"
    dest: "/tmp/{{ smart_agent_package | basename }}"
  when: ansible_os_family == "Debian"

Uses the copy module to transfer the Smart Agent package to the target host for Debian systems.

Install Smart Agent package (Debian)

- name: Install Smart Agent package (Debian)
  command: dpkg -i /tmp/{{ smart_agent_package | basename }}
  when: ansible_os_family == "Debian"

Uses the command module to install the Smart Agent package on Debian systems.
Copy Smart Agent package to target (RedHat)

- name: Copy Smart Agent package to target (RedHat)
  copy:
    src: "{{ smart_agent_package }}"
    dest: "/tmp/{{ smart_agent_package | basename }}"
  when: ansible_os_family == "RedHat"

Transfers the Smart Agent package to the target host for RedHat systems.

Install Smart Agent package (RedHat)

- name: Install Smart Agent package (RedHat)
  yum:
    name: "/tmp/{{ smart_agent_package | basename }}"
    state: present
    disable_gpg_check: yes
  when: ansible_os_family == "RedHat"

Uses the yum module to install the Smart Agent package on RedHat systems.

Restart Smart Agent service

- name: Restart Smart Agent service
  service:
    name: smartagent
    state: restarted

Uses the service module to restart the Smart Agent service to apply the new configuration.

Clean up temporary files

- name: Clean up temporary files
  file:
    path: "/tmp/{{ smart_agent_package | basename }}"
    state: absent

Uses the file module to remove the temporary Smart Agent package files from the target hosts.

3. variables.yaml

This file contains the variables used in the playbook:

smart_agent:
  controller_url: 'tme.saas.appdynamics.com'
  account_name: 'ACCOUNT NAME'
  account_access_key: 'ACCESS KEY'
  fm_service_port: '443'
  ssl: true
  smart_agent_package_debian: 'appdsmartagent_64_linux_24.6.0.2143.deb'
  smart_agent_package_redhat: 'appdsmartagent_64_linux_24.6.0.2143.rpm'

Explaining the Variables File

In Ansible, variables are used to store values that can be reused throughout your playbooks, roles, and tasks. They help make your playbooks more flexible and easier to maintain by allowing you to define values in one place and reference them wherever needed. Variables can be defined in several places, including:

- Playbooks: directly within the playbook file.
- Inventory files: associated with hosts or groups of hosts.
- Variable files: separate YAML files that are included in playbooks.
- Roles: within the defaults and vars directories of a role.
- Command line: passed as extra variables when running the playbook.

Variables can be referenced using the Jinja2 templating syntax, which is denoted by double curly braces {{ }}.

The provided variables file is a YAML file that contains a set of variables used in an Ansible playbook. Here is a breakdown of the variables defined in the file:

smart_agent:
  controller_url: 'tme.saas.appdynamics.com'
  account_name: 'ACCOUNT NAME'
  account_access_key: 'ACCESS CODE HERE'
  fm_service_port: '443'
  ssl: true
  smart_agent_package_debian: 'appdsmartagent_64_linux_24.6.0.2143.deb'
  smart_agent_package_redhat: 'appdsmartagent_64_linux_24.6.0.2143.rpm'

smart_agent: This is a dictionary (or hash) containing several key-value pairs related to the configuration of a "smart agent".
- controller_url: The URL of the controller.
- account_name: The name of the account.
- account_access_key: The access key for the account.
- fm_service_port: The port number for the service.
- ssl: A boolean indicating whether SSL is used.
- smart_agent_package_debian: The filename of the Debian package for the smart agent.
- smart_agent_package_redhat: The filename of the Red Hat package for the smart agent.

Step 4: Execute the Playbook

To deploy the Smart Agent, run the following command from your project directory:

ansible-playbook -i inventory-cloud.yaml smartagent.yaml

Replace inventory-cloud.yaml with the appropriate inventory file for your setup. And there you have it! With these steps, you're now equipped to deploy the Cisco AppDynamics Smart Agent to multiple hosts with ease. Happy deploying!
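Two optional pre-flight checks can save a failed run. Neither is part of the repository's instructions, just standard Ansible usage with the same inventory and playbook file names used above:

# Confirm the playbook parses cleanly before touching any hosts
ansible-playbook -i inventory-cloud.yaml smartagent.yaml --syntax-check

# Confirm Ansible can actually reach and authenticate to every host in the inventory
ansible -i inventory-cloud.yaml all -m ping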
Gone are the days of point-and-click monotony – we're going full CLI commando! Whether you're managing a lone server or herding a flock of hosts, this guide will transform you from a nervous newbie to a confident commander of the AppDynamics realm. So grab your favorite caffeinated beverage, fire up that terminal, and let's turn those command-line frowns upside down!

Installing the Smart Agent CLI

Before we can conquer the world of application monitoring, we need to arm ourselves with the right tools. Let's start by installing the AppDynamics Smart Agent CLI with Python 3.11, our trusty sidekick in this adventure.

Hosting the Smart Agent Package on a Local Web Server

Before we start spreading the Smart Agent love to multiple hosts, let's set up a local web server to host our package. We'll use Python's built-in HTTP server because, let's face it, who doesn't love a bit of Python magic?

Navigate to your Smart Agent package directory:

  cd /path/to/smartagent/package/

Start the Python HTTP server:

  python3 -m http.server 8000

Your package is now available at http://your-control-node-ip:8000/smartagent-package-name.rpm

Verify with:

  curl http://your-control-node-ip:8000/smartagent-package-name.rpm --output /dev/null

Keep this terminal window open – it's the lifeline for our installation process!

1. Verify Python 3.11 Installation

First, let's make sure Python 3.11 is ready and waiting:

  which python3.11

You should see something like /usr/bin/python3.11. If Python 3.11 is playing hide and seek, you'll need to find and install it before proceeding.

2. Install the Smart Agent CLI

Now, let's summon the Smart Agent CLI using the magical incantation below (adjust the RPM filename to match your version):

  sudo APPD_SMARTAGENT_PYTHON3=/usr/bin/python3.11 yum install appdsmartagent_cli_64_linux_24.6.0.2143.rpm

3. Verify the Installation

Let's make sure our new CLI friend is ready to party:

  appd --version

If you see the version number, congratulations! You've just leveled up your AppDynamics game.

Installing Smart Agent on a Single Host

Let's start small and install the Smart Agent on a single host. Baby steps, right? Prepare your configuration file (config.ini):

  [default]
  controller_url: "your-controller-url.saas.appdynamics.com"
  controller_port: 443
  controller_account_name: "your-account-name"
  access_key: "your-access-key"
  enable_ssl: true

Installing Smart Agent on Multiple Hosts or Locally

Feeling confident? Let's scale up and install the Smart Agent across multiple hosts like a boss!

Preparing Your Inventory

Before we unleash our Smart Agent army, we need to create an inventory of our target hosts. Here are a couple of examples to get you started. For a simple target with additional Ansible variables:

  [targets]
  54.221.141.103 ansible_user=ec2-user ansible_ssh_pass=ins3965! ansible_python_interpreter=/usr/bin/python3.11 ansible_ssh_common_args='-o StrictHostKeyChecking=no'

Let's break down this hosts.ini file:

Group [targets]: This is a group name. In this case, the group is named targets. You can use this group name in your playbooks to refer to all the hosts listed under it.

Host 54.221.141.103: This is the IP address of the host that belongs to the targets group.

Host Variables: Several variables are defined for the host 54.221.141.103:

ansible_user=ec2-user: This specifies the SSH user to connect as.
In this case, the user is ec2-user.

ansible_ssh_pass=ins3965!: This specifies the SSH password to use for authentication. The password is ins3965!. Note that using plain-text passwords in inventory files is generally not recommended for security reasons; it's better to use SSH keys or Ansible Vault to encrypt sensitive data.

ansible_python_interpreter=/usr/bin/python3.11: This specifies the path to the Python interpreter on the remote host. Ansible needs Python to be installed on the remote host to execute its modules. Here, it is set to use Python 3.11, located at /usr/bin/python3.11.

ansible_ssh_common_args='-o StrictHostKeyChecking=no': This specifies additional SSH arguments. In this case, -o StrictHostKeyChecking=no disables strict host key checking, which means SSH will automatically add new host keys to the known hosts file and will not prompt the user to confirm the host key. This can be useful in automated environments but poses a security risk, as it makes man-in-the-middle attacks easier.

This hosts.ini file defines a single host (54.221.141.103) in the targets group with specific SSH and Python interpreter settings. Here's a summary of what each setting does:

  Connect to the host using the ec2-user account.
  Use the password ins3965! for SSH authentication.
  Use Python 3.11 located at /usr/bin/python3.11 on the remote host.
  Disable strict host key checking for SSH connections.

For multiple managed nodes:

  [managed_nodes]
  managed1 ansible_host=192.168.33.20 ansible_python_interpreter=/usr/bin/python3
  managed2 ansible_host=192.168.33.30 ansible_python_interpreter=/usr/bin/python3

Save the file as hosts. You can adjust the hostnames, IP addresses, and other parameters to match your environment.

Executing a Local or Multi-Host Installation

We can install on our local host by using the following command:

  sudo ./appd install smartagent -c config.ini -u http://your-control-node-ip:8000/smartagent-package-name.xxx --auto-start -vvvv

Now that we have our targets lined up, let's fire away:

  sudo ./appd install smartagent -c config.ini -u http://your-control-node-ip:8000/smartagent-package-name.xxx -i hosts -q ssh --auto-start -vvvv

Point -i at your hosts file if you're using the multiple managed nodes setup.

Verifying Installation

Let's make sure our Smart Agents are alive and kicking. Check the service status:

  sudo systemctl status appdynamics-smartagent

Then look for new nodes in your AppDynamics controller UI under Infrastructure Visibility.

Troubleshooting

If things go sideways, don't panic! Check the verbose output, verify SSH connectivity, double-check your config file, and peek at those Smart Agent logs. Remember, every IT pro was once a beginner – persistence is key!

There you have it, intrepid AppDynamics adventurer! You've now got the knowledge to install, host, and deploy Smart Agents like a true CLI warrior. Go forth and monitor with confidence, knowing that you've mastered the art of the AppDynamics Smart Agent CLI. May your applications be forever performant and your alerts be always actionable!
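Before kicking off a multi-host install, it can save time to confirm that Ansible can actually reach every host in your inventory with the interpreter you specified. A minimal pre-flight sketch, assuming the hosts file and group names from the examples above:

  # Confirm SSH connectivity and a working Python interpreter on the simple target group
  ansible -i hosts targets -m ping

  # For the multiple managed nodes example, test that group instead
  ansible -i hosts managed_nodes -m ping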
Configuring JVM Options to Increase Metric Limits in Machine Agent

By default, the Machine Agent posts 450 metrics every minute. However, this limit can easily be hit when you configure custom extensions, have a huge number of processes running on your VM, or your OS has increased churn. To fix this, add the following JVM option:

  -Dappdynamics.agent.maxMetrics=1000

This option increases the number of metrics the Machine Agent will publish on every POST to the controller.

On Linux, you can simply start the Machine Agent with this JVM option. For example:

  java -Dappdynamics.agent.maxMetrics=2500 -jar machineagent.jar

On Windows, edit the MachineAgentService.vmoptions file located in the <MA-Home>/bin folder and restart the Machine Agent.
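A quick, non-invasive way to confirm the option actually made it onto the running agent is to look for it in the JVM's process arguments. A minimal sketch for Linux (the grep patterns are only illustrative):

  # Check whether the running Machine Agent JVM was started with the higher metric limit
  ps -ef | grep "[m]achineagent.jar" | grep -o "appdynamics.agent.maxMetrics=[0-9]*"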
This article covers the steps to enable an access/request log in a custom format for a Jetty-based controller, specifically for controllers on version v23.11 or later. To begin, please follow these steps:

  1. Log in to the EC UI
  2. Navigate to Configurations → Controller Settings → Appserver Configurations → JVM Options
  3. Add the following properties in the JVM Config section:

     --module=customrequestlog
     jetty.customrequestlog.formatString=%{client}a - %u %{dd/MMM/yyyy:HH:mm:ss ZZZ|GMT}t "%r" %s %O "%{Referer}i" "%{User-Agent}i"

jetty.customrequestlog.formatString can be modified according to your specific requirements. Here are the format codes you can use to configure the access log format:

  %%  The percent sign.

  %{format}a  Address or hostname. Valid formats are {server, client, local, remote}. The format parameter is optional and defaults to server. server and client are the logical addresses, which can be modified in the request headers, while local and remote are the physical addresses, so they may refer to a proxy between the end user and the server.

  %{format}p  Port. Valid formats are {server, client, local, remote}. The format parameter is optional and defaults to server. server and client are the logical ports, which can be modified in the request headers, while local and remote are the physical ports, so they may refer to a proxy between the end user and the server.

  %{CLF}I  Size of request in bytes, excluding HTTP headers. Optional parameter with value "CLF" to use CLF format, i.e. a '-' rather than a 0 when no bytes are sent.

  %{CLF}O  Size of response in bytes, excluding HTTP headers. Optional parameter with value "CLF" to use CLF format, i.e. a '-' rather than a 0 when no bytes are sent.

  %{CLF}S  Bytes transferred (received and sent). This is the combination of %I and %O. Optional parameter with value "CLF" to use CLF format, i.e. a '-' rather than a 0 when no bytes are sent.

  %{VARNAME}C  The contents of cookie VARNAME in the request sent to the server. Only version 0 cookies are fully supported. The VARNAME parameter is optional; without it, %C logs all cookies from the request.

  %D  The time taken to serve the request, in microseconds.

  %{VARNAME}e  The contents of the environment variable VARNAME.

  %f  Filename.

  %H  The name and version of the request protocol, such as "HTTP/1.1".

  %{VARNAME}i  The contents of the VARNAME: header line(s) in the request sent to the server.

  %k  Number of keepalive requests handled on this connection. Interesting if KeepAlive is being used, so that, for example, a '1' means the first keepalive request after the initial one, '2' the second, etc.; otherwise this is always 0 (indicating the initial request).

  %m  The request method.

  %{VARNAME}o  The contents of the VARNAME: header line(s) in the response.

  %q  The query string (prepended with a ? if a query string exists, otherwise an empty string).

  %r  First line of the request.

  %R  The handler generating the response (if any).

  %s  Response status.

  %{format|timeZone|locale}t  The time that the request was received. Optional parameter in one of the following formats: {format}, {format|timeZone} or {format|timeZone|locale}. Format parameter: the default format is [18/Sep/2011:19:18:28 -0400], where the last number indicates the timezone offset from GMT.
  Must be in a format supported by DateCache. TimeZone parameter: the default timeZone is GMT; must be in a format supported by TimeZone.getTimeZone(String). Locale parameter: the default locale is Locale.getDefault(); must be in a format supported by Locale.forLanguageTag(String).

  %T  The time taken to serve the request, in seconds.

  %{UNIT}T  The time taken to serve the request, in a time unit given by UNIT. Valid units are ms for milliseconds, us for microseconds, and s for seconds. Using s gives the same result as %T without any format; using us gives the same result as %D.

  %{d}u  Remote user if the request was authenticated with servlet authentication. May be bogus if the return status (%s) is 401 (unauthorized). With the optional parameter d, deferred authentication will also be checked; this is equivalent to HttpServletRequest.getRemoteUser().

  %U  The URL path requested, not including any query string.

  %X  Connection status when the response is completed:
      X = Connection aborted before the response completed.
      + = Connection may be kept alive after the response is sent.
      - = Connection will be closed after the response is sent.

  %{VARNAME}^ti  The contents of the VARNAME: trailer line(s) in the request sent to the server.

  %{VARNAME}^to  The contents of the VARNAME: trailer line(s) in the response sent from the server.

Click Save. The access/request log will be created under <controller-home>/appserver/jetty/logs/
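To confirm the new format is actually being written once the controller app server has picked up the new JVM options, you can watch the log directory. This is only a sketch; the exact request log file name depends on your Jetty configuration, so list the directory first and substitute the placeholders for your install:

  # List the Jetty log directory, then follow the newest request log
  ls -lt <controller-home>/appserver/jetty/logs/
  tail -f <controller-home>/appserver/jetty/logs/<request-log-file>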
Step-by-Step Guide to Enabling DEBUG Logs in Machine Agent

Debug logs for Machine Agents are important because they help you understand and fix issues. To enable them, edit the <MA-Home>/conf/logging/log4j.xml file and set level="debug" on the relevant loggers, as shown below:

  <Logger name="com.singularity" level="debug" additivity="false">
      <AppenderRef ref="FileAppender"/>
  </Logger>
  <Logger name="com.appdynamics" level="debug" additivity="false">
      <AppenderRef ref="FileAppender"/>
  </Logger>
  <Logger name="com.singularity.ee.agent.systemagent.components.monitormanager.managed.ManagedMonitorDelegate" level="DEBUG" additivity="false">
      <AppenderRef ref="FileAppender"/>
  </Logger>

Once done, save the file. The Machine Agent re-reads this file every few minutes, so DEBUG logging will take effect automatically. Once complete, logs can be found in the <MA-Home>/logs folder.
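Once the new level has been picked up, a quick way to confirm DEBUG output is flowing is to grep the agent log. This is just a sketch; it assumes the default machine-agent.log file name under <MA-Home>/logs, so adjust the path for your install:

  # Count DEBUG entries in the Machine Agent log; a non-zero count confirms the change took effect
  grep -c " DEBUG " <MA-Home>/logs/machine-agent.log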
Updating Cluster Agent Name During Helm Chart Deployment

For Helm, you will need to add appName under the clusterAgent section of your values file (see example below):

  clusterAgent:
    nsToMonitorRegex: .*
    appName: abhi-apm-correlation

Remember that in Helm the name you set becomes the prefix, and the namespace is appended after it as a suffix. In this example, the resulting name will be abhi-apm-correlation-appdynamics.

Once this is done, please do a re-install.
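For completeness, here is a hedged sketch of how the re-install typically looks with Helm. The release name, chart reference, values file, and namespace below are placeholders; reuse whatever you installed with originally.

  # Apply the updated values (re-installs/upgrades the release with the new appName)
  helm upgrade --install <release-name> <cluster-agent-chart> -f values.yaml -n appdynamics

  # Verify the cluster agent pod restarted with the new configuration
  kubectl get pods -n appdynamics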