Splunk AppDynamics

AppD metrics were incorrect compared to Server Access logs

CommunityUser
Splunk Employee

Hello,

We ran a load test last week. There are 4 lines in the attached graph, two from each of 2 different JVMs. One line of each pair is incoming requests and the other is outgoing requests to RISE, an internal DataPower installation; they should be in roughly a 1:1 ratio. According to Tomcat's access log, there were about 900-1100 incoming calls per minute on each server, which is what I expect based on the load we were generating. In addition, the DataPower administrator saw about 60k total requests, which sounds about right (2 servers × ~1,000 requests per minute × ~30 minutes).
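For reference, here is a minimal sketch (not from the original post) of how the Tomcat access log counts can be tallied per minute so they line up against AppD's Calls per Minute metric. It assumes the default Tomcat access log valve pattern (timestamp in square brackets, e.g. [12/Mar/2019:10:15:03 +0000]); the file path is a placeholder.

import re
from collections import Counter

# Matches the bracketed timestamp in the default Tomcat access log format,
# capturing everything up to the minute, e.g. "12/Mar/2019:10:15".
TIMESTAMP = re.compile(r'\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}):\d{2} [+-]\d{4}\]')

per_minute = Counter()
with open('localhost_access_log.txt') as log:  # placeholder path
    for line in log:
        match = TIMESTAMP.search(line)
        if match:
            per_minute[match.group(1)] += 1

print('total requests:', sum(per_minute.values()))
for minute, count in sorted(per_minute.items()):
    print(minute, count)

Summing the per-minute counts across both servers should land near the expected total of 2 servers × ~1,000 requests per minute × ~30 minutes ≈ 60,000, which matches the figure the DataPower side reported.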

However, the numbers in AppD are quite different: the incoming calls per minute are too low and the outbound calls (to DataPower) are too high. At first this caused the developers quite a bit of consternation, because they couldn't understand why there were so many backend calls, but we believe the AppD numbers are incorrect. Can anyone suggest a root cause for this discrepancy?

Please see the attached metric browser screenshot for reference; any responses are highly appreciated.

Thank you,

Santhosh.


Rajesh_Putta
Communicator

Hi Santhosh,

We see that this is a duplicate of https://community.appdynamics.com/t5/Java-Java-Agent-Installation-JVM/AppD-Metric-browser-results-di... . Can we close this one and track it as part of the other one?

Thanks

Rajesh


CommunityUser
Splunk Employee

Hi Rajesh,

Sure, you can close this one; I'll work on the other one.

Thanks,

Santhosh.


Rajesh_Putta
Communicator

Thanks, Santhosh.

Regards

Rajesh
