Splunk AppDynamics

UEM monitoring to identify slowness problem

CommunityUser
Splunk Employee

Hi Team,

While analysing a response slowness issue for an application across two locations, I'm a bit puzzled about how to pinpoint the problem area.

I have an application running in the Mumbai and Chennai locations. In real time, the response value for Chennai is normal, while Mumbai users experience significant slowness.

So I tried to navigate through End User Monitoring to see how things are going there, but overall the average response time shown for Chennai is larger than Mumbai's, and I'm not able to pin down in EUM the slowness that Mumbai users are experiencing.

For example, is it possible to identify the cause of the slowness via UEM (whether the issue is on the network side or the application end)? How can we use this monitoring data to find the problem?


Could you please suggest a way to work through this? Thanks.


Chitra_Lal
Contributor

Hi Soundarajan2,

Don't look only at the Average Response Time (it's an aggregated figure). Instead, check the End User Response Time (EURT) for the pages under both locations to identify which pages are taking longer for end users, because this metric shows the average interval between the time a user initiates a request and the completion of the page load in the user's browser. It also breaks down which part took how much time, so if a page performs slowly you can find out where it got stuck. Refer to the image below, which gives you a high-level idea of how these timings relate and which part acts where.

[image.png: high-level breakdown of the Browser RUM timing metrics]

So if everything on the UI side looks fine, the metric you need to check is Server Connection Time (End User Experience|App|Server Connection Time (ms)), which shows the interval between the time a user initiates a request and the start of fetching the response document from the server or application task, along with the Application Server Time, which shows how much time was spent executing the request on the app server. Refer to the doc link below for details on the Browser RUM timing metrics:

https://docs.appdynamics.com/display/PRO44/Browser+RUM+Metrics

Broadly, the EUM metrics will show you which part took how much time. So if you have server-side correlation enabled, you can also see whether the extra time was spent on the browser side, the network side, or the server. With correlation enabled you can additionally open the related APM snapshot to see what is taking longer.
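If you want to compare the two locations side by side outside the UI, a minimal sketch using the AppDynamics Metrics REST API is below. The controller host, credentials, application name and the geo-qualified metric paths are placeholders (assumptions on my part); copy the exact paths from the Metric Browser of your browser app before using anything like this.

    # Sketch: compare Browser RUM metrics for two locations via the
    # AppDynamics Metrics REST API (metric-data endpoint).
    # CONTROLLER, APP, AUTH and the geo metric paths are placeholders --
    # take the real paths from the Metric Browser of your EUM application.
    import requests

    CONTROLLER = "https://mycontroller.example.com:8090"   # placeholder
    APP = "MyBrowserApp"                                    # placeholder EUM app name
    AUTH = ("user@customer1", "password")                   # placeholder credentials

    def avg_metric(metric_path, mins=60):
        """Return the rolled-up value of a metric over the last `mins` minutes."""
        resp = requests.get(
            f"{CONTROLLER}/controller/rest/applications/{APP}/metric-data",
            params={
                "metric-path": metric_path,
                "time-range-type": "BEFORE_NOW",
                "duration-in-mins": mins,
                "rollup": "true",
                "output": "JSON",
            },
            auth=AUTH,
        )
        resp.raise_for_status()
        data = resp.json()
        if data and data[0].get("metricValues"):
            return data[0]["metricValues"][0]["value"]
        return None

    # Hypothetical geo-qualified paths, purely for illustration.
    for city in ("Mumbai", "Chennai"):
        eurt = avg_metric(f"End User Experience|{city}|End User Response Time (ms)")
        conn = avg_metric(f"End User Experience|{city}|Server Connection Time (ms)")
        print(f"{city}: EURT={eurt} ms, Server Connection Time={conn} ms")

Comparing EURT against Server Connection Time per location is usually enough to tell whether the extra time in Mumbai sits in front of the server or behind it.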

Hope this info helps. Do let me know in case you have queries. 

Thank You,

Chitra


CommunityUser
Splunk Employee

Hi Chitra,

Thanks for your support!

I've attached a doc with a few doubts about the previous reply.

Could you please help with those?


Regards,

Soundarajan


Chitra_Lal
Contributor

Hi Soundarajan,

1) The end user response time (EURT) for a page request is calculated up to the completion of the page's onLoad event, and only then are the beacons for the page request sent. At present, from the EUM perspective, we capture EURT via the end-to-end call from the page, i.e. the average interval between the time a user initiates a request and the completion of the page load in the user's browser.

Ideally it works out as: Server Time = EURT - total network time - DOM Building Time - On Load Time

https://docs.appdynamics.com/display/PRO44/Browser+RUM+Metrics#BrowserRUMMetrics-BrowserRUMTimingMet...

So in your example, 2267 ms (EURT) is what the user experiences (inclusive of the components above), and 0 ms was the Application Server Time (also called Server Time in the UI), i.e. the processing time for the request on the application server.
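Just to make the arithmetic concrete, here is a minimal sketch of that breakdown. The 2267 ms EURT and 0 ms server time come from your example; the DOM-building and onLoad figures are made-up numbers purely for illustration.

    # Minimal sketch of the EURT breakdown discussed above.
    # EURT (2267 ms) and Server Time (0 ms) come from the example in this thread;
    # the remaining component values are hypothetical, just to show the arithmetic.
    eurt_ms = 2267          # End User Response Time reported in the UI
    server_time_ms = 0      # Application Server Time (Server Time in the UI)
    dom_building_ms = 900   # hypothetical DOM Building Time
    onload_ms = 600         # hypothetical On Load Time

    # Rearranging: total network time = EURT - Server Time - DOM Building - On Load
    network_time_ms = eurt_ms - server_time_ms - dom_building_ms - onload_ms
    print(f"Implied total network time: {network_time_ms} ms")   # 767 ms with these numbers

With a 0 ms server time, whatever remains after the browser-side components is time spent on the network, which is exactly the kind of split you want when comparing Mumbai and Chennai.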

2) For the wrong-location issue, yes, you first need to check the network routing, and then additionally you can try setting up the locations via an IP-mapping file as described here:

https://docs.appdynamics.com/display/PRO44/Install+and+Host+a+Custom+Geo+Server+for+Browser+RUM#Inst...
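As a quick sanity check on the routing side before customising the geo server, something along these lines can tell you whether the client IPs you see on the Mumbai snapshots actually fall inside the subnet you expect for that office. All the subnets and the sample IP below are placeholders for your own network ranges.

    # Sketch: check whether observed client IPs fall in the subnets you expect
    # per office, before customising geo resolution. The subnets and sample IP
    # are placeholders, not real values from this thread.
    import ipaddress

    EXPECTED_SUBNETS = {
        "Mumbai": ipaddress.ip_network("10.10.0.0/16"),    # placeholder range
        "Chennai": ipaddress.ip_network("10.20.0.0/16"),   # placeholder range
    }

    def expected_location(client_ip):
        """Return the office whose subnet contains client_ip, or None."""
        ip = ipaddress.ip_address(client_ip)
        for office, subnet in EXPECTED_SUBNETS.items():
            if ip in subnet:
                return office
        return None

    print(expected_location("10.10.42.7"))   # -> "Mumbai" with the placeholder ranges

If the IPs reaching the EUM Cloud/geo server don't match the office ranges (for example because traffic egresses through a proxy in another city), the IP-mapping file in the doc above is the right fix.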

Thanks,

Chitra
