AppD Archive

AppDynamics causing race condition error when modifying WCF HTTP headers (.NET client)

CommunityUser
Splunk Employee

Hi there,

We have encountered, and can replicate on our production servers, what seems to be a race condition inside WCF's WebRequestHttpOutput type, in the method PrepareHttpSend(Message message).

We believe this would be fairly uncommon, and only experienced by applications with high message throughput combined with code that adds HTTP headers. We have seen it particularly when using AppDynamics, since both our application and the agent add headers (in the agent's case, for tracing purposes).

Because our app code also adds headers, when we enable AppDynamics we start getting NullReferenceExceptions on almost all of our WCF calls (see the stack trace below), which basically makes using AD impossible.

Is this a known issue to the AppDynamics team? What do other WCF AD users do?

I see others have also run into similar issues with WCF and headers, see this blog post:

http://dotnet.dzone.com/news/race-condition-when-modifying
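
For reference, here is a minimal sketch of the kind of client-side code that adds HTTP headers per call (the inspector and header names are placeholders, not our actual code). The point is that the header collection involved is not thread-safe, so two components writing to it around the same send can collide:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Sketch of a client message inspector that adds an HTTP header to each outgoing
// message (it would be attached to the endpoint via a custom IEndpointBehavior,
// omitted here). HttpRequestMessageProperty.Headers is a WebHeaderCollection,
// which is not thread-safe, so anything else writing to the same collection
// around the time PrepareHttpSend reads it can collide.
public class CustomHeaderInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        HttpRequestMessageProperty httpProp;
        object existing;
        if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out existing))
        {
            // Reuse the property already attached to this message (e.g. by an agent).
            httpProp = (HttpRequestMessageProperty)existing;
        }
        else
        {
            httpProp = new HttpRequestMessageProperty();
            request.Properties.Add(HttpRequestMessageProperty.Name, httpProp);
        }

        // Placeholder header name and value.
        httpProp.Headers["X-Custom-Auth"] = "placeholder-token";
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}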

2015-04-02 03:56:52.5725 [137]  System.NullReferenceException: Object reference not set to an instance of an object.

Server stack trace: 
   at System.Collections.Specialized.NameObjectCollectionBase.BaseGetKey(Int32 index)
   at System.ServiceModel.Channels.HttpOutput.WebRequestHttpOutput.PrepareHttpSend(Message message)
   at System.ServiceModel.Channels.HttpOutput.Send(TimeSpan timeout)
   at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.SendRequest(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Arun_Dasetty
Super Champion

Hi,

While the .NET support team gets a chance to look into this issue (they are the right team to judge it), can you confirm which AD .NET agent version you are using? If it is older than 3.9.4, can you please check the behavior with the 3.9.5 or later .NET agent and also attach the .NET agent logs for the .NET team's reference.

Let us know how it behaves with the 3.9.5 or later .NET agent in place while the .NET support team looks into this issue.

Regards,

Arun


CommunityUser
Splunk Employee

We are using version 4.0.3.5 of the App server agent. We only recently started using AD and I don't have any logging set up yet.

Just to re-state the issue: we've previously seen a problem where end-to-end tracing that uses some sort of correlation id in an HTTP header can trigger a bug inside WCF (the System.ServiceModel namespace). The bug looks like a race condition when modifying headers for a given message. We add our own Authorization headers when calling a web service, and we think the end-to-end monitoring software, when adding its correlation ids, can trigger this problem in relatively rare cases when both are trying to modify the headers at the same time.

This basically looks like non-thread-safe code on Microsoft's part, and I'm now trying to work out whether we can modify the way we add our custom HTTP headers so as not to conflict with AD adding its own.

However, since so many big enterprise companies use WCF, we assume the AD .NET team must have seen this issue before and may have suggestions on how we can mitigate or fix it, through either code changes or AD configuration changes.
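
One stop-gap we are considering in the meantime (just a sketch, not something AppDynamics has recommended) is retrying affected calls, since the exception appears to be thrown while the outgoing message is still being prepared:

using System;

// Hypothetical stop-gap, not an AppDynamics recommendation: retry a proxy call
// when the transient NullReferenceException from PrepareHttpSend surfaces.
// This only masks the symptom; the underlying header race remains.
public static class WcfCallRetry
{
    public static TResult Invoke<TResult>(Func<TResult> call, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return call();
            }
            catch (NullReferenceException)
            {
                if (attempt >= maxAttempts)
                {
                    throw;
                }
                // The exception surfaces while the request is still being prepared,
                // so a retry should be safe for idempotent operations.
            }
        }
    }
}

// Usage (proxy and operation names are placeholders):
// var customer = WcfCallRetry.Invoke(() => client.GetCustomer(id));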

Below is the stack trace of the error we get when we hit this issue. Note that the exception is thrown while accessing a member of a collection (the headers) by index; it looks like we are trying to access an index which doesn't exist (see the sketch after the stack trace).

2015-04-14 20:15:30.1693 [452]  System.NullReferenceException: Object reference not set to an instance of an object.

Server stack trace: 
   at System.Collections.Specialized.NameObjectCollectionBase.BaseGetKey(Int32 index)
   at System.ServiceModel.Channels.HttpOutput.WebRequestHttpOutput.PrepareHttpSend(Message message)
   at System.ServiceModel.Channels.HttpOutput.Send(TimeSpan timeout)
   at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.SendRequest(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]: 
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
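
For what it's worth, the same failure mode can be provoked outside WCF and AppDynamics entirely. The sketch below is an assumption about what the two header-writers are effectively doing, not code from either product: one task keeps adding headers while another reads keys by index. The exact exception depends on timing, but it shows the collection is not safe for concurrent modification.

using System;
using System.Net;
using System.Threading.Tasks;

// Standalone illustration of the collection race. WebHeaderCollection derives
// from NameObjectCollectionBase and is not thread-safe, so the reading task
// eventually fails; the exact exception varies with timing.
class HeaderRaceDemo
{
    static void Main()
    {
        var headers = new WebHeaderCollection();

        var writer = Task.Run(() =>
        {
            for (int i = 0; i < 20000; i++)
            {
                headers.Add("X-Header-" + i, "value-" + i);
            }
        });

        var reader = Task.Run(() =>
        {
            for (int pass = 0; pass < 1000; pass++)
            {
                // Mimics PrepareHttpSend walking the header collection by index.
                for (int j = 0; j < headers.Count; j++)
                {
                    var key = headers.GetKey(j);
                }
            }
        });

        Task.WaitAll(writer, reader);
    }
}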