All TKB Articles in Learn Splunk


Integrating AppDynamics Cluster Agent with Crio Containers in OpenShift Environments

We can enable correlation between Cluster Agent pods/containers and your APM node. The problem is that on 1.25 the container runtime is CRI-O, while by default the Cluster Agent/Java Agent (CA/JA) looks for Docker. If you check the UNIQUE_HOST_ID of your application right now, you will see the pod name as the UNIQUE_HOST_ID.

1. Edit your cluster-agent.yaml file and add customAgentConfig:

customAgentConfig: "-Dappdynamics.container.id.prefix=crio"

Your yaml will look like the example below:

instrumentationRules:
  - namespaceRegex: abhi-java-apps
    language: java
    appNameLabel: app
    customAgentConfig: "-Dappdynamics.container.id.prefix=crio"

2. Remove the instrumentation from your application and redeploy for the change to take effect. The quickest way is a kubectl delete followed by a kubectl create of your application. Alternatively, set instrumentationMethod: None in cluster-agent.yaml and redeploy.

3. Once your application deployment no longer shows any AppDynamics-related environment variables, add the parameter above and run kubectl apply -f (or kubectl create -f).

Give it 10 minutes or so once your application starts reporting to the controller. The UNIQUE_HOST_ID of the application will then be your container ID.
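To see why the crio prefix matters, it can help to look at how a CRI-O container ID appears in a pod's cgroup path. The sketch below uses a made-up container ID and an illustrative cgroup line; it extracts the ID that follows the "crio-" prefix, which is what the agent needs to find:

```shell
# Illustrative cgroup line from a pod on a CRI-O node (container ID is made up)
line="0::/kubepods.slice/kubepods-burstable.slice/crio-3f4e8a1b2c3d4e5f6a7b8c9d0e1f2a3b.scope"

# Extract the container ID that follows the "crio-" prefix
container_id=$(echo "$line" | sed -n 's/.*crio-\([0-9a-f]*\)\.scope.*/\1/p')
echo "$container_id"
```

With Docker as the runtime, the cgroup path carries a different prefix, which is why the agent must be told to look for crio via -Dappdynamics.container.id.prefix=crio.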
Configuring Multiple Machine Agents on Linux and Windows Platforms

Linux
Machine Agents are started as JVMs, so you can have "n" Machine Agents on a Linux box.
1. Make sure to have an individual download per Machine Agent.
2. Enter the details of each Machine Agent in its controller-info.xml file.
3. Start each Machine Agent with either of the two commands below:

./<MA-Home>/bin/machine-agent &
<MA-Home>/jre/bin/java -jar <MA-Home>/machineagent.jar

Windows
Machine Agents are started as JVMs on Windows too, so you can have "n" Machine Agents on a Windows box.
1. Ensure an individual download per Machine Agent.
2. Enter the details of each Machine Agent in its controller-info.xml file.
3. Start the Machine Agent with:

<MA-Home>/jre/bin/java -jar <MA-Home>/machineagent.jar

Make sure the UNIQUE_HOST_ID of each Machine Agent is different.
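As a sketch of the Linux case (the install directories and host IDs below are hypothetical), running two Machine Agents side by side comes down to two separate installs, each started with its own unique host ID. This loop only prints the start commands, so you can review them before running anything:

```shell
# Hypothetical install directories, one per Machine Agent
for i in 1 2; do
  MA_HOME="/opt/appdynamics/ma$i"
  # -Dappdynamics.agent.uniqueHostId registers each agent as a distinct host
  echo "$MA_HOME/jre/bin/java -Dappdynamics.agent.uniqueHostId=myhost-ma$i -jar $MA_HOME/machineagent.jar"
done
```

Setting the unique host ID on the command line is one way to satisfy the "different UNIQUE_HOST_ID" requirement; you can also set it in each agent's controller-info.xml.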
Ensuring Machine Agent Containers Appear in the AppDynamics UI

Containers with APM agents show up in the Application -> Containers view. In this example, the Machine Agent is running on Amazon Linux 2.

For this to work, your App Agent's UNIQUE_HOST_ID must be the first 12 characters of your container ID. To confirm:
1. Sign in to the AppDynamics UI.
2. Go to the Settings wheel icon on the top right -> AppDynamics Agents -> App Agent, search for this agent, and check its UNIQUE_HOST_ID.

If the UNIQUE_HOST_ID is the first 12 characters of your container ID, pass the JVM argument below and restart your Machine Agent:

-Dappdynamics.docker.container.containerIdAsHostId.enabled=true

You can pass this JVM argument while starting up the Machine Agent:

java -Dappdynamics.docker.container.containerIdAsHostId.enabled=true -jar machineagent.jar

Or you can edit the startup script at <MA-Home>/bin/machine-agent:

JAVA_OPTS="$JAVA_OPTS -Xmx256m -Dappdynamics.docker.container.containerIdAsHostId.enabled=true"

If the Java Agent's UNIQUE_HOST_ID is not the first 12 characters of your container ID, collect the output of the commands below and file a support ticket with AppDynamics:

docker ps
cat /proc/self/cgroup
cat /etc/os-release
docker exec -it <container-name> hostname

Also share the JVM arguments/configuration properties you are passing. Support will be able to replicate the issue and help provide a fix. Ideally, your correlation and complete solution should look like the screenshots attached above.
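The "first 12 characters" rule can be checked by hand. Given a full 64-character container ID (the ID below is made up; docker ps --no-trunc shows the long form), the value the agents compare against is just its first 12 characters, i.e. the short ID that plain docker ps displays:

```shell
# Made-up full 64-character container ID
full_id="3f4e8a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f"

# Take the first 12 characters (the short ID that `docker ps` shows)
short_id=$(printf '%s' "$full_id" | cut -c1-12)
echo "$short_id"
```

If your App Agent's UNIQUE_HOST_ID matches this short form, the containerIdAsHostId flag above is all that is needed.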
AppDynamics Machine Agent monitors 20 process classes by default. A process class groups multiple processes of the same kind, and its membership can change depending on which processes rank as top consumers during the time frame sampled by the Machine Agent. The ideal way to use process monitoring with a Machine Agent is to filter the processes yourself.

How to filter the processes

Modify your ServerMonitoring.yaml file, present at <MA-Home>/extensions/ServerMonitoring/conf/ServerMonitoring.yaml:

processMonitorConfig:
  maxClassIdLength: 50
  processSelectorRegex: ".*java|.*machineagent.jar|.*amazon.*|.*dockerd.*|.*containerd.*"
  minLiveTimeMillisBeforeMonitoring: 60000
  maxNumberMonitoredClasses: 20
  processClassSelectorRegexList:
    machineAgentTasks: '.*java.*machineagent.jar'
    containerd: '.*containerd.*'
    amazon: '.*amazon.*'
    docker: '.*dockerd.*'

In the above example, processSelectorRegex selects which processes will be monitored by the Machine Agent, and processClassSelectorRegexList then groups the selected processes into classes. A screenshot is attached to give you an idea of how it will look.

Note: "Same extension class" example: all Java processes are grouped together. The Machine Agent is itself a Java process, so it is grouped via processClassSelectorRegexList.
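Before restarting the agent, you can dry-run the selector regex against sample process command lines with grep -E (the same extended-regex dialect). The process names below are illustrative:

```shell
processSelectorRegex='.*java|.*machineagent.jar|.*amazon.*|.*dockerd.*|.*containerd.*'

# Sample process command lines (illustrative)
for proc in "/usr/bin/java -jar machineagent.jar" "/usr/bin/dockerd" "/usr/sbin/sshd"; do
  if echo "$proc" | grep -Eq "$processSelectorRegex"; then
    echo "monitored: $proc"
  else
    echo "skipped:   $proc"
  fi
done
```

Here the Java and dockerd processes match the selector while sshd does not, mirroring what the Machine Agent would pick up with this configuration.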
Cisco Identity migration support

We want to remind you that AppDynamics is transitioning user identities to Cisco’s identity (sign-in credentials). The migration began on May 15th and is expected to be completed by June 14th.

To ensure you’re up to date with this change, please check your inbox (and spam folder) for an email from no-reply@portal.appdynamics.com. This email will confirm whether your identity has been migrated. If you find this email, simply reset your password on your next login. Detailed instructions are available in our AppD CCO Migration Community Article. If you haven’t received the email yet, don’t worry – your identity migration is coming soon.

Experiencing Login Issues?
If you're having trouble logging into AppDynamics during this migration period, it’s likely because your identity has been migrated and you need to reset your password. Follow these steps:
- If you received the migration email, reset your password using the link provided.
- If you didn’t receive the email, your migration is still in progress. Check back later.

Still Having Issues?
If resetting your password doesn’t resolve the issue, here’s where you can get help:
- Contact the CCO ID team by sending an email to Web-Help@cisco.com
- You can also find a “Contact Support” link at the bottom of the login screen, which directs you to the same email.

For AppDynamics Product Issues
Open a case at appdynamics.com/support or reach out via email at appd-support@cisco.com.

We appreciate your patience during this transition and are here to help with any issues you might encounter.
To ensure you meet the prerequisites and get platform support, visit the AppDynamics Agent Installer documentation.

Installation Steps:
1. From the Controller UI, select Home > Agent Installer.
2. From the Specify Application to Deploy to dropdown, select an existing application, or select New application and enter its name. Example: here a new application named “Abhi-ZeroAgent-Test” is created.
3. Download and run the Agent Installer using either the express installation or custom installation method.

On the Server:
1. Access the server where you wish to deploy ZFI (Zero Agent). This agent will help you install both the Java and Machine Agents.
2. Copy the provided command and run it on the server where you wish to deploy the Java/Machine Agent. This will create an appd-* folder in the /tmp directory.
3. To deploy the Machine Agent, use the zero-agent.sh file. Run the following command from the same server:

./zero-agent.sh install --application 'Abhi-ZeroAgent-Test' --account 'xxxx' --access-key 'xxx' --service-url 'https://xxxx.saas.appdynamics.com' --enable-sim 'true'

Once done, your Machine Agent will be installed in the /opt/appdynamics/zeroagent/agents/machineagent/ folder.

Making Configuration Changes:
If you wish to make any changes to the Machine Agent configuration, make the changes in the /opt/appdynamics/zeroagent/agents/machineagent/ directory. To restart the Machine Agent with the new properties, run the following command from /opt/appdynamics/zeroagent/bin:

zfictl restart machine

This will restart the Machine Agent with the updated configuration.
Table of Contents
- How do I open a case with AppDynamics Support?
- Case Opening: When there is only one AppDynamics Subscription associated with the User (most cases)
- Case Opening: When there are multiple subscriptions associated with a user
- Case severity
- Video Tutorials
- Additional Resources

How do I open a case with AppDynamics Support?

First, ensure you can access Cisco SCM with a valid Cisco.com account. If you were part of the migration, this should have been done automatically. If you still need to request a Cisco.com account, please refer to the earlier communication about User Identity changes found here. Make your way to the AppDynamics portal at appdynamics.com/support. When you log in to the AppDynamics portal, you will be automatically redirected to Cisco SCM.

Case Opening: When there is only one AppDynamics Subscription associated with the User (most cases)

Navigate to the AppDynamics Portal link to the Support section (see Figure 1) and click the link “Open a new ticket”.

Figure 1

You will be taken directly to the “Describe problem” page, where you will be prompted to enter details of the incident reported (Figure 2). Proceed to select a pre-set sub-technology by clicking the “Manually Select a Technology” button.

Figure 2

Choose the Technology that most closely relates to the issue and click the “Select” button.

Figure 3

After you submit the case, the system asks if you’d like to receive e-mail updates with details of the ticket; choose to opt in or out (Figure 4).

Figure 4

Case Opening: When there are multiple subscriptions associated with a user

Follow the same steps as in the single-subscription procedure above, going to the AppDynamics Portal link to Support (see Figure 1) and clicking the link to “OPEN SUPPORT TICKETS”.
If there are multiple subscriptions associated with your account, choose the correct subscription number from the menu (Figure 5), then click the “Next” button.

Figure 5

SCM detects that you are associated with AppDynamics only and reduces the Technology and Sub-Technology options to AppDynamics choices (Figure 3). After you submit the case, and if you opted in, the system automatically sends an e-mail with details of the ticket, pointing to the newly opened case in SCM for case management (Figure 4).

Case severity

In conjunction with the migration to Cisco SCM, AppDynamics customers will use the Case severity definitions determined by Cisco, shown below.

Severity 1 (S1): Critical impact on the customer’s business operations. Cisco’s hardware, software, or as-a-service product is down.
Severity 2 (S2): Substantial impact on the customer’s business operations. Cisco hardware, software, or as-a-service product is degraded.
Severity 3 (S3): Minimal impact on the customer’s business operations. Cisco hardware, software, or as-a-service product is partially degraded.
Severity 4 (S4): No impact on the customer’s business operations. The customer requests information about features, implementation, or configuration for Cisco’s hardware, software, or as-a-service product.

Note! Post-migration, some functions will be limited. These include:
- All open tickets will be migrated to the new system (you will receive a notification with the new case ID).
- All tickets closed on or after May 14th will be available in the new system (you will receive a notification with the new case ID).

Why am I not getting Support Case notifications/emails?

Often, case notifications are turned off because users miss setting Case Notifications to "On" while opening a case. As the case creator, you have the ability to enable/disable notifications from the Support Case Manager (SCM) user interface.
Enabling notifications ensures that our support engineers' responses are also received via email.

Understanding Case Notifications:
- Case Notifications On: you will receive email updates about the case.
- Case Notifications Off: you will not receive email updates. In this scenario, you need to check the Support Case Manager for updates: go to Support Case Manager, then navigate to the individual case details -> Notes section. Alternatively, use our Cisco Support Assistant bot.

How to Enable Notifications
1. Global configuration: go to SCM -> Settings (gear icon in the top right corner). Note: the global notification settings have Case Notifications turned off by default. We recommend customers set this to On at the global level.
2. Per-case basis: you can also enable notifications on a per-case basis during case creation.

Different Case Opening Paths
- From the Account Portal: opening a case from the Account Portal -> Open new ticket sets case notifications to "On."
- From Support Case Manager: opening a case from the Support Case Manager -> Open a new case respects the global notification settings mentioned above.
- Via appd-support@cisco.com: case notifications will be "Off." This path does not respect global settings from SCM.

Adjusting Notifications After Case Creation
You can turn case notifications on or off at any time after opening the case and before case closure:
1. Navigate to Support Case Manager.
2. Scroll down to Case Notifications and click "Edit."
You can also ask the support engineers to enable/disable case notifications by responding to the case, and they will do it for you.

Video Tutorials
- How to open a support case
- How to navigate and view filed support cases

Additional Resources
- How do I manage my support cases?
- AppDynamics Support migration to Cisco SCM
Guide to Monitoring AWS Athena Performance with AppDynamics

Amazon Athena is a serverless, interactive analytics service that provides a simplified and flexible way to analyze petabytes of data where it lives, and an important tool for analyzing existing data. AppDynamics is a monitoring solution that gives you insight into applications and infrastructure. Let’s dive into our setup.

Prerequisites
- Machine Agent installed on any Linux box
- The Linux box should have permission to fetch CloudWatch metrics

Setting up Amazon Athena
If you already have Athena set up, skip ahead to the Machine Agent section. If not, follow the steps below:
1. Create an S3 bucket where Athena query results will be saved. In this example, the bucket is called “athena-query-result-abhi”.
2. Set the query result location: click “Edit settings” in the Athena console, enter s3://athena-query-result-abhi as the query result location, and save the settings.
3. Enable Amazon Athena to publish query metrics to AWS CloudWatch: edit the WorkGroup your Amazon Athena is part of and select “Publish query metrics to AWS CloudWatch”.

Running the Sample Queries
In the Athena console, run the following queries to create a sample database and table.

Create the database:

CREATE DATABASE sampledb;

Create a sample table with some inline data:

CREATE TABLE sampledb.sampletable AS
SELECT 'value1' AS col1, 'value2' AS col2, 'value3' AS col3
UNION ALL
SELECT 'value4' AS col1, 'value5' AS col2, 'value6' AS col3;

Run a sample query to generate activity:

SELECT * FROM sampledb.sampletable LIMIT 10;

Then execute the following queries to generate sufficient activity and metrics:

SELECT * FROM sampledb.sampletable LIMIT 10;
SELECT col1, COUNT(*) FROM sampledb.sampletable GROUP BY col1;
SELECT COUNT(*) FROM sampledb.sampletable WHERE col2 = 'value2';
SELECT col1, col2 FROM sampledb.sampletable WHERE col3 = 'value3';

Great work, your Athena is all set up.

Machine Agent
Now, let’s work on the Machine Agent side.
1. SSH into the box where your Machine Agent is running.
2. In the Machine Agent home folder, go to the monitors folder and create a directory called Athena. In this example, MA_HOME = /opt/appdynamics/ma:

cd /opt/appdynamics/ma/monitors
mkdir Athena

3. Inside the Athena folder, create a file called script.sh with the content below. NOTE: please edit REGION, and START_TIME/END_TIME if required.

#!/bin/bash

# List of all metrics you want to fetch for Athena
declare -a METRICS=("DPUAllocated" "DPUConsumed" "DPUCount" "EngineExecutionTime" "ProcessedBytes" "QueryPlanningTime" "QueryQueueTime" "ServicePreProcessingTime" "ServiceProcessingTime" "TotalExecutionTime")

# Define the time period (in ISO8601 format)
START_TIME=$(date --date='48 hours ago' --utc +%Y-%m-%dT%H:%M:%SZ)
END_TIME=$(date --utc +%Y-%m-%dT%H:%M:%SZ)

# AWS region
REGION="us-east-1"

# Fetch all workgroups
WORKGROUPS=$(aws athena list-work-groups --region $REGION --query 'WorkGroups[*].Name' --output text)

# Loop through each workgroup and fetch the metrics
for WORKGROUP in $WORKGROUPS; do
  # Loop through each metric and fetch the data
  for METRIC_NAME in "${METRICS[@]}"; do
    # Fetch the metric data using the AWS CLI
    METRIC_VALUE=$(aws cloudwatch get-metric-statistics --region $REGION --namespace AWS/Athena \
      --metric-name $METRIC_NAME \
      --dimensions Name=QueryState,Value=SUCCEEDED Name=QueryType,Value=DML Name=WorkGroup,Value=$WORKGROUP \
      --start-time $START_TIME \
      --end-time $END_TIME \
      --period 300 \
      --statistics Sum \
      --query 'Datapoints | sort_by(@, &Timestamp)[-1].Sum' \
      --output text)

    # If the metric value is empty, set it to 0; otherwise format it as an integer
    if [ -z "$METRIC_VALUE" ]; then
      METRIC_VALUE="0"
    else
      METRIC_VALUE=$(echo $METRIC_VALUE | awk '{if($1+0==$1){print int($1)}else{print "0"}}')
    fi

    # Echo the metric in the specified format
    echo "name=Custom Metrics|Athena|$WORKGROUP|$METRIC_NAME,value=$METRIC_VALUE"
  done
done

4. Create another file called monitor.xml with the below content:
<monitor>
    <name>Athena monitoring</name>
    <type>managed</type>
    <description>Athena monitoring</description>
    <monitor-configuration>
    </monitor-configuration>
    <monitor-run-task>
        <execution-style>periodic</execution-style>
        <name>Run</name>
        <type>executable</type>
        <task-arguments>
        </task-arguments>
        <executable-task>
            <type>file</type>
            <file>script.sh</file>
        </executable-task>
    </monitor-run-task>
</monitor>

Let’s restart your Machine Agent. Once you are done, you will be able to see your Athena metrics in the AppDynamics Machine Agent’s metric browser, as seen below.
Pre-requisites
- Machine Agent installed on any Linux box that has access to the DynamoDB database
- The Linux box should have permission to fetch CloudWatch metrics

Installation
If your Linux box is on EC2, select the IAM Role associated with that EC2 instance and add the “CloudWatchFullAccess” permission. Once done, SSH into the box where your Machine Agent is running.

In the Machine Agent home folder, go to the monitors folder and create a directory called "DynamoDb". In this example, MA_HOME = /opt/appdynamics/ma:

cd /opt/appdynamics/ma/monitors
mkdir DynamoDb

Inside the 'DynamoDb' folder, create a file called script.sh with the content below. NOTE: please edit TABLE_NAME and AWS_REGION to your desired table name and region.

#!/bin/bash

# Define the DynamoDB table name
TABLE_NAME="aws-lambda-standalone-dynamodb"

# Define your AWS region
AWS_REGION="us-west-2" # Change this to your region

# List of all metrics you want to fetch
declare -a METRICS=("ConsumedReadCapacityUnits" "ConsumedWriteCapacityUnits" "ProvisionedReadCapacityUnits" "ProvisionedWriteCapacityUnits" "ReadThrottleEvents" "WriteThrottleEvents" "UserErrors" "SystemErrors" "ConditionalCheckFailedRequests" "SuccessfulRequestLatency" "ReturnedItemCount" "ReturnedBytes" "ReturnedRecordsCount")

# Define the time period (in ISO8601 format)
START_TIME=$(date --date='60 minutes ago' --utc +%Y-%m-%dT%H:%M:%SZ)
END_TIME=$(date --utc +%Y-%m-%dT%H:%M:%SZ)

# Loop through each metric and fetch the data
for METRIC_NAME in "${METRICS[@]}"; do
  # Fetch the metric data using the AWS CLI
  METRIC_VALUE=$(aws cloudwatch get-metric-statistics --namespace AWS/DynamoDB \
    --metric-name $METRIC_NAME \
    --dimensions Name=TableName,Value=$TABLE_NAME \
    --start-time "$START_TIME" \
    --end-time "$END_TIME" \
    --period 3600 \
    --statistics Average \
    --query 'Datapoints[0].Average' \
    --output text \
    --region $AWS_REGION)

  # Check if the metric value is 'None' or empty
  if [ "$METRIC_VALUE" == "None" ] || [ -z "$METRIC_VALUE" ]; then
    METRIC_VALUE="0"
  else
    # Round the metric value to the nearest whole number
    METRIC_VALUE=$(printf "%.0f" "$METRIC_VALUE")
  fi

  # Echo the metric in the specified format
  echo "name=Custom Metrics|DynamoDB|$TABLE_NAME|$METRIC_NAME,value=$METRIC_VALUE"
done

If you have multiple tables, then use the script below instead:

#!/bin/bash

# List of DynamoDB table names
declare -a TABLE_NAMES=("Table1" "Table2" "Table3") # Add your table names here

# Define your AWS region
AWS_REGION="us-west-2" # Change this to your region

# List of all metrics you want to fetch
declare -a METRICS=("ConsumedReadCapacityUnits" "ConsumedWriteCapacityUnits" "ProvisionedReadCapacityUnits" "ProvisionedWriteCapacityUnits" "ReadThrottleEvents" "WriteThrottleEvents" "UserErrors" "SystemErrors" "ConditionalCheckFailedRequests" "SuccessfulRequestLatency" "ReturnedItemCount" "ReturnedBytes" "ReturnedRecordsCount")

# Define the time period (in ISO8601 format)
START_TIME=$(date --date='60 minutes ago' --utc +%Y-%m-%dT%H:%M:%SZ)
END_TIME=$(date --utc +%Y-%m-%dT%H:%M:%SZ)

# Loop through each table
for TABLE_NAME in "${TABLE_NAMES[@]}"; do
  # Loop through each metric and fetch the data for the current table
  for METRIC_NAME in "${METRICS[@]}"; do
    # Fetch the metric data using the AWS CLI
    METRIC_VALUE=$(aws cloudwatch get-metric-statistics --namespace AWS/DynamoDB \
      --metric-name $METRIC_NAME \
      --dimensions Name=TableName,Value=$TABLE_NAME \
      --start-time "$START_TIME" \
      --end-time "$END_TIME" \
      --period 3600 \
      --statistics Average \
      --query 'Datapoints[0].Average' \
      --output text \
      --region $AWS_REGION)

    # Check if the metric value is 'None' or empty
    if [ "$METRIC_VALUE" == "None" ] || [ -z "$METRIC_VALUE" ]; then
      METRIC_VALUE="0"
    else
      # Round the metric value to the nearest whole number
      METRIC_VALUE=$(printf "%.0f" "$METRIC_VALUE")
    fi

    # Echo the metric in the specified format
    echo "name=Custom Metrics|DynamoDB|$TABLE_NAME|$METRIC_NAME,value=$METRIC_VALUE"
  done
done
Create another file called monitor.xml with the below content:

<monitor>
    <name>DynamoDb monitoring</name>
    <type>managed</type>
    <description>DynamoDb monitoring</description>
    <monitor-configuration>
    </monitor-configuration>
    <monitor-run-task>
        <execution-style>periodic</execution-style>
        <name>Run</name>
        <type>executable</type>
        <task-arguments>
        </task-arguments>
        <executable-task>
            <type>file</type>
            <file>script.sh</file>
        </executable-task>
    </monitor-run-task>
</monitor>

Great work!! Now, let’s restart your Machine Agent. Once you are done, you will be able to see your DynamoDB metrics in the AppDynamics Machine Agent’s metric browser.
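Before restarting the agent, it can help to run script.sh by hand and confirm every line it prints matches the name=...,value=... format the Machine Agent's script monitor parses. A minimal sketch of that format (the table name and value below are illustrative):

```shell
# Print one metric line in the format the Machine Agent's script monitor parses:
#   name=<metric path with | separators>,value=<integer>
emit_metric() {
  printf 'name=Custom Metrics|DynamoDB|%s|%s,value=%s\n' "$1" "$2" "$3"
}

emit_metric "aws-lambda-standalone-dynamodb" "UserErrors" 0
```

If a line deviates from this shape (for example, a non-numeric value slips through), the agent will silently drop that metric, which is why the scripts above force empty/None values to 0.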
How EUM Pageview Calculation works

As per the license entitlement page (https://docs.appdynamics.com/appd/22.x/22.12/en/appdynamics-licensing/license-entitlements-and-restrictions):

Pageview: a Pageview is an instance of a base page, virtual page, or iFrame loaded by a web browser. Each base page view, iframe view, and virtual page view is counted as a Pageview. Repeated views of one page are counted as separate Pageviews.

This does not include the Ajax request count by default. Ajax requests are counted only if they are included for analytics tracking: every 5 Ajax requests sent to analytics are counted as 1 Pageview. So when Ajax is configured to be sent to the event service, the Pageview calculation includes Ajax, i.e. 1 Pageview per every 5 Ajax requests sent to analytics.
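As a worked example of that rule (the numbers are illustrative, and the sketch assumes partial groups of 5 are floored rather than rounded up): with 100 page/iframe/virtual-page views and 23 Ajax requests sent to analytics, the Ajax requests contribute 23 / 5 = 4 Pageviews, for 104 total:

```shell
base_views=100        # base + virtual page + iframe views
ajax_to_analytics=23  # Ajax requests sent to analytics

# Every 5 Ajax requests sent to analytics count as 1 Pageview
# (integer division floors the partial group of 3)
pageviews=$(( base_views + ajax_to_analytics / 5 ))
echo "$pageviews"   # prints 104
```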
1. Pre-requisite

We will use Strimzi for deploying a Kafka cluster. Before deploying the Strimzi cluster operator, create a namespace called kafka:

kubectl create namespace kafka

Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.

kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

Follow the deployment of the Strimzi cluster operator:

kubectl get pod -n kafka --watch

You can also follow the operator’s log:

kubectl logs deployment/strimzi-cluster-operator -n kafka -f

2. Create an Apache Kafka cluster

Create a new Kafka custom resource to get a small persistent Apache Kafka cluster with one node each for Apache ZooKeeper and Apache Kafka:

# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka

Wait while Kubernetes starts the required pods, services, and so on:

kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka

Send and receive messages

With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):

kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic

Once everything is set up correctly, you’ll see a prompt where you can type in your messages. If you don't see a command prompt, try pressing enter.
>Hello
>I am Abhi

To receive them, in a different terminal, run:

kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

If everything works as expected, you’ll see the messages you produced in the previous step. If you don't see a command prompt, try pressing enter.

>Hello
>I am Abhi

Our cluster is ready. Now let’s set up the Prometheus Kafka exporter to emit metrics about the Kafka cluster.

3. Prometheus Kafka exporter

We will use the prometheus-community Helm charts. Let’s add the repo:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Create a file called kafka_helm_values.yaml with the service discovery config:

kafkaServer:
  - my-cluster-kafka-bootstrap:9092
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9308"
service:
  type: ClusterIP
  port: 9308
  labels:
    clustername: my-cluster
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9308"

To deploy, run:

helm install --namespace kafka prom-kafka-expo-ns prometheus-community/prometheus-kafka-exporter -f kafka_helm_values.yaml

If you changed the cluster name, please edit the above yaml file accordingly.

To view the metrics at http://localhost:9308/metrics:

export POD_NAME=$(kubectl get pods --namespace kafka -l "app=prometheus-kafka-exporter,release=prom-kafka-expo-ns" -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME
kubectl port-forward $POD_NAME 9308:9308 -n kafka

4. Instrumenting with Cisco Cloud Observability

Make sure you have deployed Cisco Cloud Observability as described here: https://docs.appdynamics.com/observability/cisco-cloud-observability/en/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring

If yes, then edit your kafka_helm_values.yaml file and add the appdynamics.com annotations:

kafkaServer:
  - my-cluster-kafka-bootstrap:9092
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9308"
  appdynamics.com/exporter_type: "kafka"
  appdynamics.com/kafka_cluster_name: "my-cluster"
service:
  type: ClusterIP
  port: 9308
  labels:
    clustername: my-cluster
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9308"
    appdynamics.com/exporter_type: "kafka"
    appdynamics.com/kafka_cluster_name: "my-cluster"

Afterward, run:

helm upgrade --namespace kafka prom-kafka-expo-ns prometheus-community/prometheus-kafka-exporter -f kafka_helm_values.yaml

Wait about 5 minutes and check the logs of your OpenTelemetry Collector.
kubectl -n cco logs appdynamics-collectors-ss-appdynamics-otel-collector-0 | grep kafka

-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.pod.labels: Map({"run":"kafka-consumer"})
-> k8s.pod.name: Str(kafka-consumer)
-> k8s.pod.containernames: Slice(["kafka-consumer"])
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.namespace.name: Str(kafka)
-> k8s.pod.containernames: Slice(["kafka-producer"])
-> k8s.pod.labels: Map({"run":"kafka-producer"})
-> k8s.pod.name: Str(kafka-producer)
logger.kafka.name = org.apache.kafka
logger.kafka.level = WARN
-> k8s.namespace.name: Str(kafka)
-> k8s.service.selector: Map({"strimzi.io/cluster":"my-cluster","strimzi.io/kind":"Kafka","strimzi.io/name":"my-cluster-kafka"})
-> k8s.service.labels: Map({"app.kubernetes.io/instance":"my-cluster","app.kubernetes.io/managed-by":"strimzi-cluster-operator","app.kubernetes.io/name":"kafka","app.kubernetes.io/part-of":"strimzi-my-cluster","strimzi.io/cluster":"my-cluster","strimzi.io/component-type":"kafka","strimzi.io/kind":"Kafka","strimzi.io/name":"my-cluster-kafka"})
-> k8s.service.name: Str(my-cluster-kafka-brokers)
-> k8s.namespace.name: Str(kafka)
-> k8s.secret.labels: Map({"modifiedAt":"1714427393","name":"prom-kafka-expo-ns","owner":"helm","status":"superseded","version":"1"})
-> k8s.namespace.name: Str(kafka)
-> k8s.secret.name: Str(sh.helm.release.v1.prom-kafka-expo-ns.v1)
-> k8s.namespace.name: Str(kafka)

On the UI, scroll to Observe -> Kafka Clusters. You should see the Topic listed. Great work, everyone!
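The exporter endpoint (port-forwarded earlier) serves plain-text Prometheus exposition format, which is what the collector scrapes. A quick sketch of pulling a label and a value out of one such line; the sample line is illustrative of the exporter's kafka_topic_partitions metric family:

```shell
# Illustrative line in Prometheus exposition format from the Kafka exporter
metric='kafka_topic_partitions{topic="my-topic"} 1'

# Extract the topic label and the metric value
topic=$(echo "$metric" | sed -n 's/.*topic="\([^"]*\)".*/\1/p')
value=${metric##* }
echo "$topic $value"
```

Curling http://localhost:9308/metrics while the port-forward is active lets you confirm the topic appears here before checking the collector logs and the UI.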
How do you get Cluster Capacity (pods)?

In the image below we see Cluster Capacity is being met. How can I get Cluster Capacity? Use the command below:

kubectl get nodes -o jsonpath='{.items[*].status.capacity.pods}' | tr -s ' ' '\n' | awk '{sum += $1} END {print sum}'

How it works

1. Check the maximum pods per node. First, determine the maximum number of pods each node can support. This can typically be found in the node's capacity information:

kubectl get nodes -o jsonpath='{.items[*].status.capacity.pods}'

This command prints the maximum number of pods for each node in your cluster.

2. Count the number of nodes. Next, count the total number of nodes in your cluster:

kubectl get nodes --no-headers | wc -l

3. Calculate total pod capacity. Multiply the average (or minimum, if there is a large disparity) maximum pods per node by the total number of nodes to get an estimate of the total pod capacity of the cluster.

If you want a single command that fetches the total pod capacity based on the current node configurations, assuming uniform configuration across nodes, use the command chain shown at the top. In this case:

abhibaj@ABHIBAJ-M-2KXL KubernetesInstallation % kubectl get nodes -o jsonpath='{.items[*].status.capacity.pods}' | tr -s ' ' '\n' | awk '{sum += $1} END {print sum}'
116

The same can be seen on the Cluster Agent page.
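The same arithmetic can be sketched in Python: given the JSON that `kubectl get nodes -o json` returns, sum `.items[*].status.capacity.pods`. The node data below is an illustrative stand-in, not real cluster output.

```python
import json

# Illustrative stand-in for `kubectl get nodes -o json` output
nodes_json = """
{
  "items": [
    {"status": {"capacity": {"pods": "110"}}},
    {"status": {"capacity": {"pods": "6"}}}
  ]
}
"""

def total_pod_capacity(raw: str) -> int:
    """Sum the per-node pod capacity, mirroring the jsonpath + awk pipeline."""
    nodes = json.loads(raw)
    return sum(int(n["status"]["capacity"]["pods"]) for n in nodes["items"])

print(total_pod_capacity(nodes_json))  # 116 for this sample
```

Note that capacity values arrive as strings in the Kubernetes API, which is why the sketch converts each one with int() before summing.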
Detect and score application vulnerabilities

Video Length: 3 min 50 seconds

CONTENTS | Introduction | Video | Resources | About the presenter

In this deep-dive video, Adam Smye-Rumsby explores how Cisco Secure Application detects application vulnerabilities and their potential impact on the business. Organizations need to identify and prioritize security threats based on risk scores derived from understanding business impact. Cisco Secure Application breaks down complex business transactions, like those in a shopping cart system, into understandable data, exposing vulnerabilities and helping you prioritize defense strategies against significant security incidents.

Additional Resources

Protecting what matters most with business risk observability
Learn more about Application Security Monitoring on the product page.

About the presenter

Adam Smye-Rumsby, Solutions Engineer

Adam J. Smye-Rumsby joined AppDynamics as a Senior Sales Engineer in 2018, after nearly 16 years with IBM across a variety of roles, including over five years as a Senior Sales Engineer in the Digital Experience & Collaboration business unit. Since then, he has helped dozens of enterprise and commercial customers improve the maturity of their application monitoring practices.

More recently, Adam has taken on the challenge of developing subject-matter expertise in the application security market. He has contributed to two published books on the use of Java technology and holds patents in AI/ML, Collab, VR, and other technology areas. Reach out to Adam to learn more about how AppDynamics is helping Cisco customers secure their applications in an ever-changing threat landscape.
In this Knowledge Base article, we'll walk you through the process of collecting Prometheus metrics from a Python application, forwarding them to the Cisco Cloud Observability platform using OpenTelemetry, and visualizing them for effective monitoring.

NOTE: Cisco Cloud Observability has been deprecated in favor of Splunk, so if you are not an existing customer already onboarded to Cisco Cloud Observability, this article is not for you.

Setting up the Python Application

Let's start by creating a Python application that generates Prometheus metrics. We'll use the prometheus_client library to create and expose these metrics. If you haven't installed the library, you can do so with:

pip3 install prometheus_client

Now, let's dive into the Python script:

import random
import time

from prometheus_client import start_http_server, Counter, Summary

# Define Prometheus metrics
some_counter = Counter(name="myapp_some_counter_total", documentation="Sample counter")
request_latency = Summary(name="myapp_request_latency_seconds", documentation="Request latency in seconds")

def main() -> None:
    start_http_server(port=9090)
    while True:
        try:
            # Simulate application logic here
            process_request()
            time.sleep(5)  # Sleep for a few seconds between metric updates
        except KeyboardInterrupt:
            break

def process_request():
    # Simulate processing a request and record metrics
    with request_latency.time():
        random_sleep_time = random.uniform(0.1, 0.5)
        time.sleep(random_sleep_time)
        some_counter.inc()

if __name__ == "__main__":
    main()

This Python script sets up a simple HTTP server on port 9090 and generates two Prometheus metrics: myapp_some_counter_total and myapp_request_latency_seconds.

To produce load:

curl -v http://localhost:9090

The logs will look like:

* Trying 127.0.0.1:8081...
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Date: Thu, 07 Dec 2023 15:58:00 GMT
< Server: WSGIServer/0.2 CPython/3.10.12
< Content-Type: text/plain; version=0.0.4; charset=utf-8
< Content-Length: 2527
<
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 371.0
python_gc_objects_collected_total{generation="1"} 33.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 40.0
python_gc_collections_total{generation="1"} 3.0
python_gc_collections_total{generation="2"} 0.0

Deploying OpenTelemetry Collector

To collect and forward metrics to Cisco Cloud Observability, we'll use the OpenTelemetry Collector. This component plays a vital role in gathering metrics from various sources and exporting them to different backends. In this case, we'll configure it to forward metrics to AppDynamics.

Installing OpenTelemetry Collector on Ubuntu

Make sure you're on an Ubuntu machine. If not, adjust the installation instructions accordingly.
Install the OpenTelemetry Collector: https://medium.com/@abhimanyubajaj98/linux-host-monitoring-with-appdynamics-deploying-opentelemetry-collector-via-terraform-a6971f02c0b2

For this tutorial, we will edit /opt/appdynamics/appdynamics.conf and add another variable:

APPD_OTELCOL_EXTRA_CONFIG=--config=file:/opt/appdynamics/config.yaml

Our appdynamics.conf file will look like:

APPD_OTELCOL_CLIENT_ID=<client-id>
APPD_OTELCOL_CLIENT_SECRET=<client-secret>
APPD_OTELCOL_TOKEN_URL=<tenant-url>
APPD_OTELCOL_ENDPOINT_URL=<tenant_endpoint>
APPD_LOGCOL_COLLECTORS_LOGGING_ENABLED=true
APPD_OTELCOL_EXTRA_CONFIG=--config=file:/opt/appdynamics/config.yaml

Configuring OpenTelemetry Collector

Create a config.yaml configuration file with the following content:

extensions:
  oauth2client:
    client_id: "${env:APPD_OTELCOL_CLIENT_ID}"
    client_secret: "${env:APPD_OTELCOL_CLIENT_SECRET}"
    token_url: "${env:APPD_OTELCOL_TOKEN_URL}"
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      scrape_configs:
        - job_name: 'prometheus'
          static_configs:
            - targets: ['localhost:9090']

processors:
  # defaults based on perf testing for k8s nodes
  batch:
    send_batch_size: 1000
    timeout: 10s
    send_batch_max_size: 1000
  memory_limiter:
    check_interval: 5s
    limit_mib: 1536

exporters:
  otlphttp:
    retry_on_failure:
      max_elapsed_time: 180
    metrics_endpoint: "${env:APPD_OTELCOL_ENDPOINT_URL}/v1/metrics"
    traces_endpoint: "${env:APPD_OTELCOL_ENDPOINT_URL}/v1/trace"
    logs_endpoint: "${env:APPD_OTELCOL_ENDPOINT_URL}/v1/logs"
    auth:
      authenticator: oauth2client

service:
  telemetry:
    logs:
      level: debug
  extensions: [zpages, health_check, oauth2client]
  pipelines:
    metrics:
      receivers: [prometheus, otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]

In this configuration file, we have set up the OpenTelemetry Collector to receive metrics from the Prometheus receiver and export them to Cisco Cloud Observability using the OTLP exporter.

Forwarding Metrics to Cisco Cloud Observability

With the OpenTelemetry Collector configured, it will now collect metrics from your Python application and forward them to Cisco Cloud Observability. This seamless integration enables you to monitor your application's performance in real time.

Monitoring Metrics in AppDynamics

You can use UQL to query the metrics. This is a very basic example; you can also create attributes. To learn more about the AppDynamics metric model, check out this AppDynamics Docs page.
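As a quick sanity check before querying anything in the UI, you can scrape the application's /metrics endpoint yourself and pull a counter's value out of the Prometheus text exposition format with nothing but the standard library. This is a rough sketch; the sample payload below is illustrative, not real scrape output.

```python
def read_sample(exposition: str, metric: str) -> float:
    """Return the value of the first sample whose name matches `metric`,
    skipping # HELP / # TYPE comment lines in the text exposition format."""
    for line in exposition.splitlines():
        if line.startswith("#") or not line.strip():
            continue
        name, _, value = line.rpartition(" ")
        if name.split("{")[0] == metric:  # strip any {label="..."} suffix
            return float(value)
    raise KeyError(metric)

# Illustrative payload, shaped like the curl output earlier in this article
sample = """\
# HELP myapp_some_counter_total Sample counter
# TYPE myapp_some_counter_total counter
myapp_some_counter_total 7.0
"""

print(read_sample(sample, "myapp_some_counter_total"))  # 7.0
```

In practice you would feed this the body of `curl http://localhost:9090` and confirm myapp_some_counter_total is increasing before blaming the Collector pipeline.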
Installing and Configuring the Cisco AppDynamics Smart Agent and Machine Agent on Ubuntu Linux

Download the Smart Agent:
Go to download.appdynamics.com
Click the 'Type' dropdown and find AppDynamics Smart Agent for Linux ZIP
You can also curl the download to your Linux box.

The AppDynamics Smart Agent requires pip3 to be present, and the appdsmartagent folder is where we will install the Smart Agent. If you don't have them yet, run the following:

mkdir /opt/appdynamics/appdsmartagent
sudo apt update
sudo apt install -y python3-pip
cd /opt/appdynamics/appdsmartagent

Once you have this set up, curl the ZIP artifact:

curl -L -O -H "Authorization: Bearer xxxxx.xxxx.xxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxx;" "https://download.appdynamics.com/download/prox/download-file/appdsmartagent/24.2.0.1472/appdsmartagent_64_linux_24.2.0.1472.zip"

Unzip the content, in this case:

unzip appdsmartagent_64_linux_24.2.0.1472.zip

Run the install script placed in the appdsmartagent folder:

./install-script.sh

Add the configuration inside /opt/appdynamics/appdsmartagent/config.ini:

vi /opt/appdynamics/appdsmartagent/config.ini

You are required to configure Smart Agents to register with the Controller. Edit the /opt/appdynamics/appdsmartagent/config.ini file for the required Smart Agent configuration. Ensure that you update the following parameters:

ControllerURL: The URL of the Controller with which the Smart Agent will establish its connection.
ControllerPort: The port of the Controller (for example, 443 for SaaS).
FMServicePort: The port on which the Smart Agent connects to the FM service (Agent Management). It is 8090 for an on-premises Controller and 443 for a SaaS Controller.
AccountAccessKey: The account access key on the Controller.
AccountName: The account name on the Controller to which the Smart Agent will report.
An example of the above:

ControllerURL = https://support-controller.saas.appdynamics.com
ControllerPort = 443
FMServicePort = 443
AccountAccessKey = abcd-ahsasasj-asbasas
AccountName = controllerces

Once done, start the Smart Agent:

systemctl start smartagent.service

You can check its status with systemctl status smartagent.service.

The first part is done. Now let's install a Machine Agent.

Once your agent has started, go to controller.com/controller/#/apm/agent-management-install-agents
On the Install Agents page, where it says 'Select Agent Attributes', select 'Machine'
Under 'Select where to Deploy Agents', add the host where you installed the Smart Agent and move it to the left
Click 'Apply' and then click 'Done'

Once this is done, you can install it with the DEFAULT config, or you can set custom agent attributes. If you wish to pass more information, you can get key: value examples from the Cisco AppDynamics Docs: Ansible Configuration for Machine Agent. For example:

controller_account_access_key: "123key"

NOTE: Your controller-host, port, access-key, and accountName are already passed. You can select SIM (Server Visibility) and SSL enabled based on your needs. I am marking JRE Support enabled as well.

Once done, the agent will take 10–15 minutes to show up on your Controller. It will be automatically installed and in a running state. Logs can be found in the /opt/appdynamics/machine-agent folder.
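Because a typo in config.ini is a common reason the Smart Agent fails to register, here is a small sketch that checks the required keys are present. The parser is deliberately simplified (plain `Key = value` lines); the real file may also contain sections and comments.

```python
REQUIRED_KEYS = {"ControllerURL", "ControllerPort", "FMServicePort",
                 "AccountAccessKey", "AccountName"}

def parse_config_ini(text: str) -> dict:
    """Parse simple `Key = value` lines, ignoring blanks, comments, and sections."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";", "[")):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

# The example configuration from this article
example = """
ControllerURL = https://support-controller.saas.appdynamics.com
ControllerPort = 443
FMServicePort = 443
AccountAccessKey = abcd-ahsasasj-asbasas
AccountName = controllerces
"""

missing = REQUIRED_KEYS - parse_config_ini(example).keys()
print(missing)  # set() -> nothing missing
```

Running this against your own config.ini before starting smartagent.service gives a fast signal that all five required parameters were actually set.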
Executive Summary

AppDynamics is joining with Cisco to provide user identity (sign-in credentials) capabilities for all SaaS AppDynamics-based products and services. Users whose passwords are verified by the AppDynamics Identity Platform (not user accounts that sign in using their company's SSO credentials) will be moved to the Cisco Customer Identity platform (id.cisco.com) for verification. This will be a minimal-impact change and should not affect a user's access to Cisco AppDynamics and Observability Platform products.

In this article

Overview
Impact
The following user accounts will be directed to id.cisco.com for their password entry
The following user accounts will NOT be impacted by this change
Understanding the identity change
What do you mean by identity?
How is it changing?
What will be the impact on my user account?
How can I tell if I am affected?
What will I need to do?
How is this better?
Do I have a choice?
Navigating the transition
Could I be blocked from access to AppDynamics?
Why do I need to reset my password?
How do I set a new password for my account?
What if I use a password manager?
Will I still have SSO between various AppDynamics Products and Services?
Why do I see the AppDynamics password page and not the Cisco password page?
I do not see a "Forgot password" link, only a "Reset Password" link. Why?
What if I am a user of AppDynamics Support?
What if I sign in using my company's SSO credentials today?
What if I am only an AppDynamics On-Premise product user?
Cisco Identity migration support
Reset your password
Still having issues?
For AppDynamics Product Issues

Overview

Cisco's Customer Identity platform and AppDynamics' Customer Identity both serve customer identity needs. However, AppDynamics Customer Identity is limited to AppDynamics-based products only.
To consolidate user accounts and provide a seamless experience with other Cisco products, AppDynamics user identities will be moved to the Cisco Customer Identity platform.

Starting in May 2024, users added to AppDynamics SaaS products via the Accounts Management Portal and Controller will have their identities stored and verified using Cisco Customer Identity. This only affects "local" users, that is, user accounts that are not using their company's SSO credentials through a federation with AppDynamics.

These new local users will still receive a welcome email with getting-started instructions as usual. They will still set up their password and provide user profile data. However, that information will be stored and verified by Cisco's Customer Identity platform instead of the AppDynamics Customer Identity platform. When these users enter their email on the AppDynamics login page and click Next, they will enter their password on the id.cisco.com login page instead of the login.appdynamics.com login page.

At the same time, we will move existing user identities in the AppDynamics Identity platform over to the Cisco Identity platform. Cisco will then store and verify the user identity information going forward. These existing users will receive an email when their account has moved, instructing them to set a new password to continue.

If you sign in using your company's SSO credentials through a federation, there will be no change to your access to AppDynamics products.
Impact

The following user accounts will be directed to id.cisco.com for their password entry:
AppDynamics SaaS users who currently sign in through the AppDynamics Identity platform using their email address as a username
Cisco Observability Platform and Cisco Cloud Observability (COP and CCO) users who currently sign in through the AppDynamics Identity platform using their email address as a username
Accounts Management Portal users who currently sign in through the AppDynamics Identity platform using their email address as a username

The following user accounts will NOT be impacted by this change:
AppDynamics On-premises users
AppDynamics SaaS users who currently sign in using a legacy "local" user (a user that does not use an email address as their username)
AppDynamics SaaS users who sign in to their AppDynamics SaaS tenant using their company credentials (SSO) through federation
Cisco Observability Platform and Cisco Cloud Observability (COP and CCO) users who currently sign in using their company credentials through federation
Accounts Management Portal users who currently sign in using their company credentials through federation

Understanding the identity change

What do you mean by identity?

Identity is how your user account is verified for use at a service, like when you sign in to AppDynamics. Typically, this is in the form of a set of credentials like username and password. AppDynamics and Cisco use email addresses as usernames. With this change, users whose email and password have been verified by AppDynamics at login.appdynamics.com or login.fso.cisco.com will now be verified by id.cisco.com instead.

How is it changing?

User identities stored within the AppDynamics Identity platform will be moved to the Cisco Identity platform. Existing users will need to set up a new password within the Cisco Identity system – we will send an email with instructions when this is ready to be completed.
When a user accesses AppDynamics or Cisco Observability Platform, they will be prompted to sign in using their email. After clicking Next, they will be directed to Cisco for password verification, then taken into the requested product.

What will be the impact on my user account?

There are two impacts to your user account:
You will need to set up a new password for your email address.
Your login process will include a stop at id.cisco.com for password entry and verification.

That is really it. Access will remain unchanged. The most important change is where your password lives.

How can I tell if I am affected?

Go to https://login.appdynamics.com, enter your email address as username, then click Next.
If you see a password field, your user account is impacted.
If you enter your email address, click Next, and land on your company sign-in screen, you are not impacted.

You can also check with your company admin to see if your user account is "Authenticated by AppDynamics." If it is, you will be impacted.

What will I need to do?

Not too much – all you need to do is set up a new password with Cisco Customer Identity. Keep an eye out for an email in May 2024 indicating that your account now utilizes Cisco Identity, and follow the instructions to set up a password.

If you already have a Cisco Identity that you use with other Cisco products, then you will just start using that identity (the same password!) when signing in to AppDynamics products.

How is this better?

AppDynamics services, such as Community, University (now Cisco U.), and Support, have either already moved to Cisco equivalents or will be moving this year. All these services require a Cisco account. This move will ensure that you can access AppDynamics SaaS products, Cisco Observability Platform products, and all these services with the same email and password.
You will have single sign-on to all these capabilities, as well as any other Cisco products you use that are part of the Cisco Identity platform.

Do I have a choice?

No. For ongoing security and convenience, these identities will be moving, as Cisco is the new home for all such identities.

Navigating the transition

Could I be blocked from access to AppDynamics?

Yes, it is possible if your account is affiliated with embargoed countries. Cisco enforces global trade compliance, so some user accounts may be blocked to comply with regulations. If your user account uses an email belonging to an embargoed country, your account will be blocked and product access will be lost. If you are not part of an embargoed country and, in the unlikely event, your account is put on hold, you will receive an email from Cisco with instructions to request a release.

Why do I need to reset my password?

Your password will not be moved to the Cisco Identity platform from the AppDynamics Identity platform. Passwords are stored securely at both Cisco and AppDynamics, protected in such a way that they cannot be carried over between systems in a usable form. The first time you try to sign in after being notified that your account is now authenticated by Cisco, simply use the "Forgot password?" flow to set a new one.

How do I set a new password for my account?

There are two ways:
Use https://id.cisco.com directly.
Start by signing in to AppDynamics.

1. Using id.cisco.com directly:
Using your browser, navigate to https://id.cisco.com
Enter your email address and click Next
At the bottom of the password page, you will see the "Forgot password?" link (see figure 1 below)
Follow the instructions and complete the process
Upon completion, you will be logged in to Cisco and be presented with your Cisco profile page.

2.
Start by signing in to AppDynamics:
Using your browser, navigate to an AppDynamics product, like your Cisco Observability Platform tenant or your AppDynamics CSaaS tenant, or even https://accounts.appdynamics.com
Enter your email address and click Next
You will be redirected to the id.cisco.com password page
At the bottom of the password page, you will see the "Forgot password?" link (see figure 1, below)
Follow the instructions and complete the process
Upon completion, you will be logged in to both Cisco and AppDynamics and be presented with the original AppDynamics product you were trying to sign in to.

Figure 1, Cisco password page

What if I use a password manager?

Do you already have a Cisco user account to sign in at id.cisco.com? If yes, then your password manager will work if you have stored your Cisco password, and you will not need to set up a new password.

However, if you do not have a Cisco account, then because password managers typically work based on the sign-in domain, yours will not be able to recall a password for id.cisco.com. You will want to use your password manager's features when you set your new password for id.cisco.com.

Will I still have SSO between various AppDynamics Products and Services?

Yes, you can still move between AppDynamics SaaS tenants, Cisco Observability Platform tenants, and the Accounts Management Portal using SSO. In fact, you will gain SSO into Cisco products and services as well.

Why do I see the AppDynamics password page and not the Cisco password page?

If you still see the AppDynamics password page (see figure 2, below) when you enter your email address at login.appdynamics.com and click Next, it is because we have not yet moved your account to Cisco. You can continue to log in using your existing AppDynamics password until you receive a notification that we have transitioned your account to Cisco.

Figure 2, AppDynamics password page

I do not see a "Forgot password?" link, only a "Reset Password" link. Why?
If you see the "Reset Password" link after entering your email and clicking Next (see figure 2, above), you are still on the AppDynamics sign-in screen. This is because we have not yet moved your account to Cisco. You can continue to sign in using your existing AppDynamics password until you receive a notification that we have transitioned your account to Cisco.

What if I am a user of AppDynamics Support?

It happens that AppDynamics Support is moving to Cisco Support at or around the same time as the user migration. Once the support process changes, your only means of logging in to Support will be through Cisco Identity. This identity change will facilitate your use of Cisco-based support tools later. One change is that your user profile will be required to include a physical address; on signing in to the support system, you will be prompted to add an address to your personal profile. A separate email will be sent to Support users with more information about this change.

What if I sign in using my company's SSO credentials today?

You will continue to use your company's SSO credentials to access AppDynamics products. However, services like University (Cisco U.) and Support will require a Cisco Identity. As part of our migration, we will create accounts for you in Cisco, and they will be waiting for you to use. Simply go to https://id.cisco.com, enter your email, and use the "Forgot password?" link to set a password. If your company admin has federated with Cisco directly, then your company credentials will work there as well.

What if I am only an AppDynamics On-Premises product user?

Your product access to AppDynamics On-Premises will remain unchanged; you will sign in with whatever credentials you use today. However, if you have used AppDynamics University, AppDynamics Community, or AppDynamics Support in the past, you would have had an account that was part of our AppDynamics Identity provider.
This user account is being moved to Cisco and will be impacted by this change.

Are you having trouble logging in?

We want to remind you that AppDynamics is transitioning user identities to Cisco's identity (sign-in credentials). The migration began on May 15th and is expected to be completed by June 14th.

To ensure you're up to date with this change, please check your inbox (and spam folder) for an email from no-reply@portal.appdynamics.com. This email will confirm whether your identity has been migrated. If you find this email, simply reset your password on your next login. Detailed instructions are available in our AppD CCO Migration Community Article. If you haven't received the email yet, don't worry – your identity migration is coming soon.

Experiencing Login Issues?

If you're having trouble logging into AppDynamics during this migration period, it's likely because your identity has been migrated and you need to reset your password. Follow these steps:

Reset Your Password
If you received the migration email, reset your password using the link provided.
If you didn't receive the email, your migration is still in progress. Check back later.

Still Having Issues?
If resetting your password doesn't resolve the issue, here's where you can get help:
You can submit an inquiry to the CCO ID team by going to web-help.cisco.com.
You can also find a "Contact Support" link at the bottom of the login screen, which directs you to the same email.

For AppDynamics Product Issues
Open a case at appdynamics.com/support or reach out via email at appd-support@cisco.com.

We appreciate your patience during this transition and are here to help with any issues you might encounter.
We’ll explore how to deploy a robust application monitoring solution using AWS ECS (Elastic Container Service) and Cisco AppDynamics. This integration allows businesses to leverage the scalability of AWS and the comprehensive monitoring capabilities of AppDynamics, ensuring applications perform optimally in a cloud environment. What is AWS ECS? AWS ECS (Elastic Container Service) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS eliminates the need to install, operate, and scale your own cluster management infrastructure, making it easier to schedule and run Docker containers on AWS. What is AppDynamics? AppDynamics is a powerful application performance management (APM) and IT operations analytics (ITOA) tool that helps monitor, analyze, and optimize complex software environments. It provides real-time visibility and insights into the performance of applications, enabling organizations to quickly detect, diagnose, and resolve issues to improve user experience and operational efficiency. Application Image The application we are deploying is packaged in a Docker image,  abhimanyubajaj98/java-tomcat-wit-otel-app-buildx , which contains a Java-based web application running on Apache Tomcat. This image is enhanced with the OpenTelemetry Java agent to facilitate detailed performance monitoring and telemetry. Configuration Overview Our setup involves several AWS resources managed through Terraform, a popular infrastructure as code (IaC) tool, ensuring our infrastructure is reproducible and maintainable. Below is a high-level overview of our configuration: ECS Cluster AWS ECS Cluster: Acts as the hosting environment for our Docker containers. Task Definitions: Specifies the Docker containers to be deployed, their CPU and memory allocations, and essential configurations such as environment variables and network mode. 
IAM Roles and Policies

IAM Roles and Policies: Ensure proper permissions for ECS tasks to interact with other AWS services, such as retrieving Docker images from ECR and sending logs to CloudWatch.

Container Setup

Machine Agent Container: Hosts the AppDynamics Machine Agent, configured to monitor the underlying EC2 instances and collect machine metrics.
Java Application Container: Runs the main Java application with OpenTelemetry instrumentation, configured to send telemetry data to AppDynamics.
OpenTelemetry Collector Container: Aggregates and forwards telemetry data to the AppDynamics controller.

Security and Network

Network Mode: Uses host networking to ensure that containers can communicate efficiently and leverage the networking capabilities of the host EC2 instance.
Security Groups: Configured to allow appropriate inbound and outbound traffic necessary for operation and monitoring.

Detailed Steps and Configuration

ECS Cluster Creation: Define an ECS cluster using Terraform to serve as the runtime environment for the containers.
Task Definitions: Specify containers that need to be run as part of the ECS service. Include detailed settings for:
Image versions
CPU and memory requirements
Environment variables for configuration
Volume mounts for persistent or shared data
IAM Configuration: Set up IAM roles and attach policies that grant necessary permissions for ECS tasks, including logging to CloudWatch and accessing ECR for image retrieval.
Logging and Monitoring: Configure CloudWatch for logging and set up AppDynamics for advanced monitoring, linking it with OpenTelemetry for comprehensive observability.
Deployment and Management: Use Terraform to manage deployment and updates to the infrastructure, ensuring consistency and reproducibility.
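In the Terraform below, the container definitions are produced with jsonencode from native maps. The same idea can be sketched in Python, building one container definition and rendering it to the JSON that ECS expects; the helper name and the trimmed set of fields here are illustrative, not part of the Terraform module.

```python
import json

def container_definition(name: str, image: str, cpu: int, memory: int,
                         env: dict, essential: bool = True) -> dict:
    """Build one ECS container definition as a plain dict, converting
    env vars into the name/value list the ECS schema expects."""
    return {
        "name": name,
        "image": image,
        "cpu": cpu,
        "memory": memory,
        "essential": essential,
        "environment": [{"name": k, "value": v} for k, v in env.items()],
    }

defs = [
    container_definition(
        name="machine-agent-container",
        image="docker.io/appdynamics/machine-agent:root-latest",
        cpu=256,
        memory=512,
        env={"APPDYNAMICS_CONTROLLER_PORT": "443"},
    )
]

# Equivalent of Terraform's jsonencode([...]) for container_definitions
rendered = json.dumps(defs, indent=2)
print(rendered)
```

Rendering and inspecting the JSON this way before `terraform apply` catches structural mistakes (for example, a missing comma or a mistyped environment entry) that otherwise only surface when ECS rejects the task definition.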
provider "aws" {
  region = "us-east-1" # Change to your preferred AWS region
}

resource "aws_ecs_cluster" "ecs_cluster" {
  name = "ecs_cluster_with_agents"
  tags = {
    owner = "Abhi Bajaj"
  }
}

resource "aws_ecs_task_definition" "container_tasks" {
  family       = "container_tasks"
  network_mode = "host"

  container_definitions = jsonencode([
    {
      "name" : "machine-agent-container",
      "user" : "0",
      "privileged" : true,
      "image" : "docker.io/appdynamics/machine-agent:root-latest",
      "cpu" : 256,
      "memory" : 512,
      "essential" : true,
      "environment" : [
        { "name" : "APPDYNAMICS_CONTROLLER_HOST_NAME", "value" : "xxx.saas.appdynamics.com" },
        { "name" : "APPDYNAMICS_CONTROLLER_PORT", "value" : "443" },
        { "name" : "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY", "value" : "xxx" },
        { "name" : "APPDYNAMICS_AGENT_ACCOUNT_NAME", "value" : "xxx" },
        { "name" : "APPDYNAMICS_AGENT_UNIQUE_HOST_ID", "value" : "machine_agent_ecs" },
        { "name" : "APPDYNAMICS_CONTROLLER_SSL_ENABLED", "value" : "true" },
        { "name" : "APPDYNAMICS_SIM_ENABLED", "value" : "true" },
        { "name" : "APPDYNAMICS_DOCKER_ENABLED", "value" : "true" }
      ],
      "mountPoints" : [
        { "containerPath" : "/hostroot/proc", "sourceVolume" : "proc", "readOnly" : true },
        { "containerPath" : "/hostroot/sys", "sourceVolume" : "sys", "readOnly" : false },
        { "containerPath" : "/hostroot/etc", "sourceVolume" : "etc", "readOnly" : false },
        { "containerPath" : "/var/run/docker.sock", "sourceVolume" : "docker_sock", "readOnly" : false }
        // Add more mount points as needed
      ]
    },
    {
      "name" : "ecs_with_otel_java_app",
      "image" : "abhimanyubajaj98/java-tomcat-wit-otel-app-buildx",
      "cpu" : 512,
      "memory" : 1024,
      "privileged" : true,
      "essential" : true,
      "environment" : [
        { "name" : "JAVA_TOOL_OPTIONS", "value" : "-Dotel.resource.attributes=service.name=ECS_otel_abhi,service.namespace=ECS_otel_abhi" }
      ]
    },
    {
      "name" : "OpenTelemetryCollector",
      "image" : "appdynamics/appdynamics-cloud-otel-collector",
      "privileged" : true,
      "memory" : 1024,
      "cpu" : 512,
      "portMappings" : [
        { "containerPort" : 13133, "hostPort" : 13133 },
        { "containerPort" : 4317, "hostPort" : 4317 },
        { "containerPort" : 4318, "hostPort" : 4318 }
      ],
      "environment" : [
        { "name" : "APPD_OTELCOL_CLIENT_ID", "value" : "xxx" },
        { "name" : "APPD_OTELCOL_CLIENT_SECRET", "value" : "xxxx" },
        { "name" : "APPD_OTELCOL_TOKEN_URL", "value" : "https://xxx-pdx-p01-c4.observe.appdynamics.com/auth/4f8da76d-01a8-4df6-85cd-3a111fba946e/default/oauth2/token" },
        { "name" : "APPD_OTELCOL_ENDPOINT_URL", "value" : "https://xxx-pdx-p01-c4.observe.appdynamics.com/data" }
      ],
      "mountPoints" : [
        { "containerPath" : "/hostroot/etc", "sourceVolume" : "etc", "readOnly" : true },
        { "containerPath" : "/hostroot/sys", "sourceVolume" : "sys", "readOnly" : false }
      ]
    }
  ])

  volume {
    name      = "proc"
    host_path = "/proc"
  }
  volume {
    name      = "sys"
    host_path = "/sys"
  }
  volume {
    name      = "etc"
    host_path = "/etc"
  }
  volume {
    name      = "docker_sock"
    host_path = "/var/run/docker.sock"
  }
}

resource "aws_ecs_service" "container_services" {
  name            = "container-services"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.container_tasks.arn
  desired_count   = 1
}

##############################################################################################################

resource "aws_launch_template" "ecs_launch_template" {
  name          = "alma"
  image_id      = "ami-xxxxx" # Amazon ECS-Optimized Amazon Linux 2 (AL2) x86_64 AMI
  instance_type = "t2.medium"

  # user_data already runs as root, so no "sudo su" is needed here.
  user_data = base64encode(<<EOF
#!/bin/bash
echo ECS_CLUSTER=ecs_cluster_with_agents >> /etc/ecs/ecs.config
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb
dpkg -i amazon-ssm-agent.deb
systemctl enable amazon-ssm-agent
EOF
  )

  vpc_security_group_ids = ["sg-xxxx"]

  iam_instance_profile {
    name = aws_iam_instance_profile.dev-resources-iam-profile.name
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name  = "ECS_with_Agents"
      Owner = "abhibaj@cisco.com"
    }
  }
}

resource "aws_autoscaling_group" "auto_scaling_group" {
  name                      = "ecs_asg"
  availability_zones        = ["us-east-1a", "us-east-1b"]
  desired_capacity          = 1
  min_size                  = 1
  max_size                  = 10
  health_check_grace_period = 300
  health_check_type         = "EC2"

  launch_template {
    id = aws_launch_template.ecs_launch_template.id
  }
}

resource "aws_ecs_capacity_provider" "provider" {
  name = "alma"
  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.auto_scaling_group.arn
    managed_scaling {
      status                    = "ENABLED"
      target_capacity           = 100
      minimum_scaling_step_size = 1
      maximum_scaling_step_size = 100
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "providers" {
  cluster_name       = aws_ecs_cluster.ecs_cluster.name
  capacity_providers = [aws_ecs_capacity_provider.provider.name]
}

#############################################

resource "aws_iam_instance_profile" "dev-resources-iam-profile" {
  name = "ec2_profile_for_services_otel"
  role = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role" "dev-resources-iam-role" {
  name        = "role_for_services_ec2_otel"
  description = "The role for the developer resources on EC2"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
EOF
  tags = {
    Owner = "abhibaj"
  }
}

resource "aws_iam_role_policy_attachment" "dev-resources-ssm-policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecr_read_only_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs_full_access_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonECS_FullAccess"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs_ecs_task_execution_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs-instance-role-attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
  role       = aws_iam_role.dev-resources-iam-role.name
}

Note: the original file set a container-level "uid" key, a container-level "network_mode", and a "ports" list; ECS container definitions use "user", "portMappings", and inherit the network mode from the task, so those have been corrected above, and a duplicated APPDYNAMICS_AGENT_ACCOUNT_NAME entry has been removed.

To deploy:
Edit the environment variables substituted with "xxx". Once done, run:

terraform init
terraform apply -auto-approve

Once the service is running, the cluster, the machine agent, and the OpenTelemetry data will appear in your AppDynamics UI.
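Editing every "xxx" by hand is error-prone. One option is to keep markers in a template file and substitute them before running Terraform. A minimal sketch — the file names, marker names, and values below are invented for illustration, not part of the original article:

```shell
#!/bin/sh
# Values you would normally pull from a secrets store (made-up examples).
CONTROLLER_HOST="mycompany.saas.appdynamics.com"
ACCESS_KEY="example-access-key"

# Stand-in for the real main.tf template; only two placeholder lines shown.
cat > /tmp/main.tf.tpl <<'EOF'
"value" : "__CONTROLLER_HOST__"
"value" : "__ACCESS_KEY__"
EOF

# Substitute the markers, producing the file Terraform will consume.
sed -e "s/__CONTROLLER_HOST__/${CONTROLLER_HOST}/" \
    -e "s/__ACCESS_KEY__/${ACCESS_KEY}/" \
    /tmp/main.tf.tpl > /tmp/main.tf

cat /tmp/main.tf
# Then: terraform init && terraform apply -auto-approve
```

This keeps credentials out of version control while the committed template stays reviewable.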
The integration of OpenTelemetry Java agents into your application’s Docker containers represents a significant leap towards enhanced observability and monitoring capabilities. This guide details how to embed the OpenTelemetry Java Agent into your Dockerfile for a Java application, deploy it on Kubernetes, and monitor its traces using Cisco AppDynamics, thereby providing a robust solution for real-time application performance monitoring.

Pre-requisites
Ensure you have the following set up and ready:
A Kubernetes cluster
Docker and Kubernetes command-line tools, docker and kubectl, installed and configured
Access to an AppDynamics account for monitoring

1. Preparing Your Dockerfile for Observability
The Dockerfile outlined integrates the OpenTelemetry Java agent into a Tomcat server to enable automated instrumentation of your Java application. (Note: the inner quotes around the -Dotel.resource.attributes value in the original would break the ENV statement, so they have been removed.)

FROM tomcat:latest
RUN apt-get update -y && apt-get -y install wget
RUN apt-get install -y curl
ADD sample.war /usr/local/tomcat/webapps/
ADD https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar /tmp
ENV JAVA_OPTS="-javaagent:/tmp/opentelemetry-javaagent.jar -Dappdynamics.opentelemetry.enabled=true -Dotel.resource.attributes=service.name=tomcatOtelJavaK8s,service.namespace=tomcatOtelJavaK8s"
ENV OTEL_EXPORTER_OTLP_ENDPOINT=http://appdynamics-collectors-ds-appdynamics-otel-collector.cco.svc.cluster.local:4318
CMD ["catalina.sh","run"]

Base Image: Start with tomcat:latest as the base image for deploying a Java web application.
Installing Utilities: Update the package list and install necessary utilities like wget and curl for downloading the OpenTelemetry Java agent.
Adding Your Application: Use the ADD command to place your .war file in the webapps directory of Tomcat.
Integrating OpenTelemetry: Download the latest OpenTelemetry Java agent using the ADD command and set JAVA_OPTS to include the path to the downloaded agent, enabling specific OpenTelemetry configurations.
Environment Variables: Define OTEL_EXPORTER_OTLP_ENDPOINT to specify the endpoint of the AppDynamics OTel Collector (or your custom OTel collector), which will process and forward your telemetry data to AppDynamics.

2. Deploying Your Application on Kubernetes
Your deployment YAML file configures Kubernetes to deploy your containerized application, exposing it through a service for external access.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app-with-otel-agent
  labels:
    app: java-app-with-otel-agent
  namespace: appd-cloud-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-app-with-otel-agent
  template:
    metadata:
      labels:
        app: java-app-with-otel-agent
    spec:
      containers:
        - name: java-app-with-otel-agent
          image: docker.io/abhimanyubajaj98/java-app-with-otel-agent
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: java-app-with-otel-agent
  labels:
    app: java-app-with-otel-agent
  namespace: appd-cloud-apps
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: java-app-with-otel-agent

Deployment Configuration: Define a deployment in Kubernetes to manage your application’s replicas, ensuring it’s set to match your application’s requirements.
Service Exposure: Create a Kubernetes service to expose your application on a specified port, allowing traffic to reach your application.

3. Setting Up the AppDynamics Otel Collector
To monitor your application’s traces in Cisco AppDynamics, deploy the AppDynamics Otel Collector within your Kubernetes cluster. This collector processes traces from your application and sends them to AppDynamics.
Collector Configuration: Use the official documentation to deploy the AppDynamics Otel Collector, ensuring it’s correctly configured to receive telemetry data from your application.
https://docs.appdynamics.com/observability/cisco-cloud-observability/en/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring

Service Discovery: Ensure your application’s deployment is configured to send traces to the collector service, typically through environment variables or configuration files.

4. Monitoring Traces in AppDynamics
To produce a load for our sample app, exec into the pod and run:

curl -v http://localhost:8080/sample/

With your application deployed and the Otel Collector set up, you can now monitor your application’s performance in AppDynamics.
Accessing AppDynamics: Log into your AppDynamics dashboard.
Viewing Traces: Navigate to the tracing or application monitoring section to view the traces sent from your Kubernetes-deployed application, allowing you to monitor requests, response times, and error rates.

Conclusion
Integrating the OpenTelemetry Java agent into your Java application’s Dockerfile and deploying it on Kubernetes offers a seamless path to observability. By leveraging Cisco AppDynamics in conjunction with this setup, you gain powerful insights into your application’s performance, helping you diagnose and resolve issues more efficiently. This guide serves as a starting point for developers looking to enhance their application’s observability in a Kubernetes environment.
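As an addendum to the Service Discovery step above: instead of baking OTEL_EXPORTER_OTLP_ENDPOINT into the image, you can set it per environment in the Deployment, since a container-level env entry overrides an image ENV. A sketch — the collector service DNS name is the one from the Dockerfile earlier; the rest mirrors the deployment manifest:

```yaml
# Fragment of the Deployment's container spec; setting the endpoint here
# overrides the value baked into the image at build time.
containers:
  - name: java-app-with-otel-agent
    image: docker.io/abhimanyubajaj98/java-app-with-otel-agent
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://appdynamics-collectors-ds-appdynamics-otel-collector.cco.svc.cluster.local:4318"
```

This lets the same image point at different collectors in dev, staging, and production without a rebuild.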
What do I need to know about default ASP.NET Core hosting modules and how they affect AppDynamics APM .NET Agent configuration?

In this article...
Who would use this workflow?
What is a Hosting Module?
What are the defaults?
Implementation
InProcess hosting module
OutOfProcess hosting module

Who would use this workflow?
ASP.NET Core has two hosting modules that change how the AppDynamics APM .NET Agent needs to be configured. The default hosting module differs across .NET Core versions. This discussion is largely centered around using Windows.

What is a Hosting Module?
The ASP.NET Core Module (ANCM) is a native IIS module that plugs into the IIS pipeline, allowing ASP.NET Core applications to work with IIS. Run ASP.NET Core apps with IIS by either:
Hosting an ASP.NET Core app inside of the IIS worker process (w3wp.exe), called the in-process hosting model.
Forwarding web requests to a backend ASP.NET Core app running the Kestrel server, called the out-of-process hosting model.
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module

What are the defaults?
.NET Core 1.0 - 2.1 use the OutOfProcess hosting model; it was the only model available before in-process hosting was introduced.
In .NET Core 2.2, in-process hosting was introduced and became the default for IIS deployments: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module?view=aspnetcore-2.2#hosting-models-1
Versions of .NET Core later than the writing of this article use the InProcess hosting module by default. Please validate the default by visiting https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module.
Note: It is not recommended to use versions of .NET Core that are not LTS. The above text is informational only and it is strongly recommended to keep your version of .NET Core up to date with the latest LTS version.
https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core

Implementation with InProcess hosting
If your application is using the InProcess hosting module, configure the agent using the IIS section in config.xml:

<IIS>
  <applications>
    <application path="/" site="ASP NET Core">
      <tier name="My InProcess ASP.NET Core Site"/>
    </application>
  </applications>
</IIS>

For more information regarding IIS configuration for the agent, please visit: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/name-net-tiers

Implementation with OutOfProcess hosting
If your application is using the OutOfProcess hosting module, configure the agent using the standalone-application section in config.xml. With OutOfProcess, there should be a dotnet.exe or yourapp.exe process running.

If yourapp.exe is present:

<standalone-application executable="yourapp.exe">
  <tier name="My OutOfProcess NET Core App" />
</standalone-application>

If dotnet.exe is present:

<standalone-application executable="dotnet.exe" command-line="yourapp.dll">
  <tier name="My OutOfProcess NET Core App" />
</standalone-application>

For more information regarding standalone-application configuration for the agent, please visit: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/configure-the-net-agent-for-windows-services-and-standalone-applications

The easiest way to confirm your hosting module is to open Task Manager and check whether a dotnet.exe or yourapp.exe process is running on the machine. Send a couple of requests to the site first so the application is actually running.
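If you want to pin the hosting model explicitly, so you know in advance which agent configuration above applies, ASP.NET Core exposes a standard setting for it. A sketch — the AspNetCoreHostingModel property and the web.config hostingModel attribute are standard ASP.NET Core settings, but the surrounding file contents are illustrative only:

```xml
<!-- In the project file (.csproj): -->
<PropertyGroup>
  <AspNetCoreHostingModel>OutOfProcess</AspNetCoreHostingModel>
</PropertyGroup>

<!-- Or in the published web.config, on the ASP.NET Core Module handler
     (use "inprocess" to force the in-process model instead): -->
<aspNetCore processPath="dotnet"
            arguments=".\yourapp.dll"
            hostingModel="outofprocess" />
```

Setting the model explicitly avoids surprises when upgrading across .NET Core versions whose defaults differ.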
If you don’t have an application ready, we’ll use the included sample Tomcat application image in our task definition file.

In this article…
Sample Tomcat application
ECS Permissions
ADOT Role | ADOTTaskRole
Build your own image
Additional resources

Sample Tomcat application
In the following task definition, you will need to edit all the sections marked “XXXXX”. The first container definition is the application image; the second configures the AppDynamics Java Agent sidecar. (Note: the health-check command in the original read "exit1", which has been corrected to "exit 1".)

{
  "family": "aws-opensource-otel",
  "containerDefinitions": [
    {
      "name": "aws-otel-emitter",
      "image": "docker.io/abhimanyubajaj98/tomcat-app-buildx:latest",
      "cpu": 0,
      "portMappings": [
        {
          "name": "aws-otel-emitter",
          "containerPort": 8080,
          "hostPort": 8080,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "environment": [
        { "name": "APPDYNAMICS_AGENT_ACCOUNT_NAME", "value": "XXXXX" },
        { "name": "APPDYNAMICS_AGENT_TIER_NAME", "value": "abhi-tomcat-ecs" },
        { "name": "APPDYNAMICS_CONTROLLER_PORT", "value": "443" },
        { "name": "JAVA_TOOL_OPTIONS", "value": "-javaagent:/opt/appdynamics/javaagent.jar" },
        { "name": "APPDYNAMICS_AGENT_APPLICATION_NAME", "value": "abhi-ecs-fargate" },
        { "name": "APPDYNAMICS_CONTROLLER_HOST_NAME", "value": "XXXXX.saas.appdynamics.com" },
        { "name": "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX", "value": "abhi-tomcat-ecs" },
        { "name": "APPDYNAMICS_CONTROLLER_SSL_ENABLED", "value": "true" },
        { "name": "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY", "value": "XXXXX" },
        { "name": "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME", "value": "true" }
      ],
      "mountPoints": [],
      "volumesFrom": [
        { "sourceContainer": "appdynamics-java-agent" }
      ],
      "dependsOn": [
        { "containerName": "appdynamics-java-agent", "condition": "START" }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/ecs-aws-otel-java-tomcat-app",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "healthCheck": {
        "command": [ "CMD-SHELL", "curl -f http://localhost:8080/sample || exit 1" ],
        "interval": 300,
        "timeout": 60,
        "retries": 10,
        "startPeriod": 300
      }
    },
    {
      "name": "appdynamics-java-agent",
      "image": "docker.io/abhimanyubajaj98/java-agent-ecs",
      "cpu": 0,
      "portMappings": [],
      "essential": false,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/java-agent-ecs",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "taskRoleArn": "arn:aws:iam::778192218178:role/ADOTRole",
  "executionRoleArn": "arn:aws:iam::778192218178:role/ADOTTaskRole",
  "networkMode": "bridge",
  "requiresCompatibilities": [ "EC2" ],
  "cpu": "256",
  "memory": "512"
}

ECS Permissions
Your ECS task should have the appropriate permissions. For the example here, I created a task role, ADOTRole, and a task execution role, ADOTTaskRole.

ADOTRole Permission Policy
The permission policy for ADOTRole looks as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:PutRetentionPolicy",
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords",
        "xray:GetSamplingRules",
        "xray:GetSamplingTargets",
        "xray:GetSamplingStatisticSummaries",
        "cloudwatch:PutMetricData",
        "ec2:DescribeVolumes",
        "ec2:DescribeTags",
        "ssm:GetParameters"
      ],
      "Resource": "*"
    }
  ]
}

ADOTTaskRole Permission Policy
The permission policy for ADOTTaskRole is identical in this example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:PutRetentionPolicy",
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords",
        "xray:GetSamplingRules",
        "xray:GetSamplingTargets",
        "xray:GetSamplingStatisticSummaries",
        "cloudwatch:PutMetricData",
        "ec2:DescribeVolumes",
        "ec2:DescribeTags",
        "ssm:GetParameters"
      ],
      "Resource": "*"
    }
  ]
}

Build your own image
Going back to the template, you can build your own image as well. The Dockerfile for the image can be found here, along with the task definition file:
https://github.com/Abhimanyu9988/ecs-java-agent

Additional resources
To understand more about the AppDynamics Java Agent, see the following in the Documentation portal:
https://docs.appdynamics.com/appd/22.x/22.12/en/application-monitoring/install-app-server-agents/jav...
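One detail the permission policies above don't cover: both roles also need a trust policy so that ECS tasks are allowed to assume them. The article doesn't show one; a typical trust policy for ECS task and task execution roles (the ecs-tasks.amazonaws.com principal is the standard AWS value; the rest is a generic sketch) looks like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Without this trust relationship, task launch fails with an "unable to assume role" error even when the permission policies are correct.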