Hello Everyone,
I have the Cluster Agent and Operator installed successfully and am trying to auto-instrument the Java agent. The Cluster Agent picks up the rules and identifies the pod for instrumentation.
However, when it starts creating the new ReplicaSet, the new pod crashes with the error below:
➜ kubectl logs --previous --tail 100 -p pega-web-d8487455c-5btrc
[Pega Docker ASCII-art startup banner] v3.1.790
..
..
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
Picked up JAVA_TOOL_OPTIONS: -Dappdynamics.agent.accountAccessKey=640299c1-a74f-47fc-96df-63e1e7188146 -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar
Error opening zip file or JAR manifest missing : /opt/appdynamics-java/javaagent.jar
Error occurred during initialization of VM
agent library failed to init: instrument
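If I read that correctly, the JVM is being asked to load /opt/appdynamics-java/javaagent.jar via -javaagent, but the file is missing or empty, i.e. the Cluster Agent's copy of the agent into the pod never completed. A check like the one below should show whether the jar is actually there, assuming the container stays up long enough to exec into (which it may not, given the crash):
kubectl -n sandbox exec pega-web-d8487455c-5btrc -c pega-pega-web-tomcat -- ls -l /opt/appdynamics-java/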
Cluster Agent logs:
[DEBUG]: 2023-11-23 00:48:09 - logpublisher.go:135 - Finished with all the requests for publishing logs
[DEBUG]: 2023-11-23 00:48:09 - logpublisher.go:85 - Time taken for publishing logs: 187.696677ms
[ERROR]: 2023-11-23 00:48:09 - executor.go:73 - Command basename `find /opt/appdynamics-java/ -maxdepth 1 -type d -name '*ver*'` returned an error when exec on pod pega-web-d8487455c-5btrc. unable to upgrade connection: container not found ("pega-pega-web-tomcat")
[WARNING]: 2023-11-23 00:48:09 - executor.go:78 - Issues getting exit code of command 'basename `find /opt/appdynamics-java/ -maxdepth 1 -type d -name '*ver*'`' in container pega-pega-web-tomcat in pod ccre-sandbox/pega-web-d8487455c-5btrc
[ERROR]: 2023-11-23 00:48:09 - javaappmetadatahelper.go:45 - Failed to get version folder name in container: pega-pega-web-tomcat, pod: ccre-sandbox/pega-web-d8487455c-5btrc, verFolderName: '', err: failed to find exit code
[WARNING]: 2023-11-23 00:48:09 - podhandler.go:149 - Unable to find node name in pod ccre-sandbox/pega-web-d8487455c-5btrc, container pega-pega-web-tomcat
[DEBUG]: 2023-11-23 00:48:09 - podhandler.go:75 - Pod ccre-sandbox/pega-web-d8487455c-5btrc is in Pending state with annotations to be u
kubectl exec throws the same error:
➜ kubectl -n sandbox get pod
NAME READY STATUS RESTARTS AGE
pega-backgroundprocessing-59f58bb79d-xm2bt 1/1 Running 3 (3h34m ago) 9h
pega-batch-8f565b465-khvgd 0/1 Init:0/1 0 2s
pega-batch-9ff8965fd-k4jgb 1/1 Running 3 (3h34m ago) 9h
pega-web-55c7459cb9-xrhl9 0/1 Init:0/1 0 1s
pega-web-57497b54d6-q28sr 1/1 Running 16 (3h33m ago) 9h
➜ kubectl -n sandbox get pod
NAME READY STATUS RESTARTS AGE
pega-backgroundprocessing-59f58bb79d-xm2bt 1/1 Running 3 (3h34m ago) 9h
pega-batch-5d6c474c85-rkmxx 0/1 ContainerCreating 0 2s
pega-batch-8f565b465-khvgd 0/1 Terminating 0 7s
pega-batch-9ff8965fd-k4jgb 1/1 Running 3 (3h34m ago) 9h
pega-web-55c7459cb9-xrhl9 0/1 Terminating 0 6s
pega-web-57497b54d6-q28sr 1/1 Running 16 (3h33m ago) 9h
pega-web-d8487455c-5btrc 0/1 Pending 0 1s
➜ kubectl exec -it pega-batch-5d6c474c85-rkmxx -n sandbox bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("pega-pega-batch-tomcat")
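(Side note: the non-deprecated form is kubectl exec -it pega-batch-5d6c474c85-rkmxx -n sandbox -- bash, but the syntax is not the problem here; "container not found" just reflects the pod still being in Init/ContainerCreating at that moment, as in the listing above.)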
Also, the workload pod runs as a custom (non-root) user rather than as root; I am not sure whether that causes a permission issue when copying the Java agent binary. I assume it should not.
I would appreciate it if you could point me in the right direction to troubleshoot this issue.
PS: We are able to instrument the Java agent successfully using the init-container method, and it works fine.
Document reference: https://docs.appdynamics.com/appd/23.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-t...
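For comparison, the init-container approach from the PS follows roughly this pattern (a minimal sketch rather than our exact manifest; the image tag, volume name, and the source path inside the agent image are assumptions to check against your setup):
spec:
  volumes:
    - name: appd-agent-repo            # scratch volume shared with the app container (name is illustrative)
      emptyDir: {}
  initContainers:
    - name: appd-agent-attach
      image: docker.io/appdynamics/java-agent:latest   # assumption: pin the version you actually use
      command: ["cp", "-r", "/opt/appdynamics/.", "/opt/appdynamics-java/"]   # source path inside the agent image is an assumption
      volumeMounts:
        - name: appd-agent-repo
          mountPath: /opt/appdynamics-java
  containers:
    - name: pega-pega-web-tomcat
      image: <your Pega image>
      env:
        - name: JAVA_TOOL_OPTIONS      # plus the usual -Dappdynamics.* controller/account settings
          value: "-javaagent:/opt/appdynamics-java/javaagent.jar"
      volumeMounts:
        - name: appd-agent-repo
          mountPath: /opt/appdynamics-java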
Solution in my case: Since I was using Argo CD for deployment, it was overwriting the changes the AppD Cluster Agent made as part of its sync, hence the instrumented pods kept getting terminated. I also had to include the settings below in my instrumentation rules, because my container runs as non-root, for AppD to work:
runAsUser: 9001
runAsGroup: 9001
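These go under the matching instrumentation rule in the Cluster Agent spec. A rough sketch is below; everything except runAsUser/runAsGroup (apiVersion, names, regex) is illustrative and should be checked against your own clusteragent YAML and the AppDynamics docs:
apiVersion: cluster.appdynamics.com/v1alpha1   # assumption: match the apiVersion of your installed Clusteragent CRD
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  instrumentationMethod: Env
  instrumentationRules:
    - namespaceRegex: sandbox     # illustrative; match your target namespace
      language: java
      runAsUser: 9001             # match the non-root UID the workload container runs as
      runAsGroup: 9001            # match its GID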
Hi @Simon.Rajanpaul,
Thank you so much for coming back many months later and sharing a solution. I love to see it!