When I invoke the C# SDK example search() program to retrieve the same test data I submitted, I get some of my results printed to the command window, but then an exception is thrown:
Unhandled Exception: System.Net.WebException: The request was aborted: The connection was closed unexpectedly
I've learned the following about my error:
(1) The Splunk log file shows that the search job being run is completing successfully.
Point (1) is borne out by the fact that the following code in search()'s Program.cs completes successfully when I comment out the parts of Program.cs that follow, where the data is actually accessed and written to the console:
while (!job.IsDone)
{
    Thread.Sleep(1000);
}
// ...
(2) The timeout error occurs within the following block of code in Program.cs:
// ...
using (var stream = job.Results(outArgs)) {
    using (var rr = new ResultsReaderXml(stream)) {
        foreach (var @event in rr) {
            System.Console.WriteLine("EVENT:");
            foreach (string key in @event.Keys) {
                System.Console.WriteLine(" " + key + " -> " + @event[key]);
            }
        }
    }
}
The index being searched here has 43 events in it (all of which were put there via invocations of the submit() example program). The preceding loop is able to write about 26 of those entries to the screen before the "connection closed unexpectedly" exception is thrown. It would seem that the error has something to do with a timeout on the underlying stream supplied to the ResultsReaderXml. I've attempted to change a variety of timeout settings on that stream before passing it to the constructor, but the behavior does not change.
I'll just add that I've done nothing here beyond installing Splunk with all default configurations, downloading the C# SDK and building it, and then attempting to run the search() program. It's somewhat surprising that I'm encountering this error, as the index being searched is trivially small. Is the C# SDK still under development? Thanks.
-Andy
This is likely due to a problem with the C# SDK and .NET. Thank you folks very much for investigating, reporting, and providing detailed information. We will be working on a fix. In the meantime, you can apply the following workaround.
Replace the following line in the search example included in the SDK:
using (var stream = job.Results(outArgs))
with the two lines below:
var response = job.Service.Get(job.Path + "/results", outArgs);
using (var stream = response.Content)
More details of the bug:
The Job API is designed to return a .NET stream for applications to consume. The stream object is obtained through an HttpWebResponse. After the stream object is returned, the HttpWebResponse object is subject to garbage collection. If that happens before the stream is fully read by the application, the application will fail.
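To illustrate, here is a rough sketch of the workaround applied to the example's read loop (variable names taken from the snippet earlier in this thread); because the response returned by Service.Get stays in scope, the underlying HttpWebResponse remains reachable while the stream is read:
// Sketch only: the same loop as the example, reading from response.Content.
var response = job.Service.Get(job.Path + "/results", outArgs);
using (var stream = response.Content) {
    using (var rr = new ResultsReaderXml(stream)) {
        foreach (var @event in rr) {
            System.Console.WriteLine("EVENT:");
            foreach (string key in @event.Keys) {
                System.Console.WriteLine(" " + key + " -> " + @event[key]);
            }
        }
    }
}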
Again thank you very much and we apologize for this.
This is definitely an incorrect implementation of the Service class. It should return a ResponseMessage instead of a Stream.
ResponseMessage also should not implement a finalizer; that is the root cause of this error. The finalizer executes at an unpredictable time and closes the web response and the underlying stream inside it. Instead, ResponseMessage should implement the IDisposable interface with a Dispose method like this:
public void Dispose()
{
    if (response != null)
    {
        response.Close();
        response = null;
    }
    if (content != null)
    {
        content.Dispose();
        content = null;
    }
}
Service methods such as Export should then return a ResponseMessage instead of a Stream, so a call to the method looks like this:
using (var response = service.Export(search, searchArgs))
{
    MultiResultsReaderXml resultsReaderXml = new MultiResultsReaderXml(response.Content);
    foreach (var result in resultsReaderXml)
    {
        foreach (var evnt in result)
        {
            // consume the event
        }
    }
}
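For illustration only, a minimal ResponseMessage along those lines might look like the sketch below; the field and property names are assumed from the snippets in this thread, not taken from the actual SDK source:
using System;
using System.IO;
using System.Net;

// Sketch only, not the SDK class: no finalizer, cleanup happens solely in Dispose.
public class ResponseMessage : IDisposable
{
    private int status;
    private Stream content;
    private HttpWebResponse response; // kept alive for as long as Content is being read

    public ResponseMessage(int status, Stream content, HttpWebResponse response)
    {
        this.status = status;
        this.content = content;
        this.response = response;
    }

    public int Status { get { return this.status; } }
    public Stream Content { get { return this.content; } }

    public void Dispose()
    {
        if (response != null)
        {
            response.Close();
            response = null;
        }
        if (content != null)
        {
            content.Dispose();
            content = null;
        }
    }
}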
Thanks,
Sergei
Grant, regarding the connection closed issue when creating a job (i.e., service.GetJobs(searchArgs).Create(query, jobArgs)), could you try the following:
Replace the following line:
var job = jobs.Create((string)cli.Opts["search"]);
with:
var query = ((string)cli.Opts["search"]);
var args = new Args("search", query);
var path = "/services/search/jobs";
// POST directly to the jobs endpoint, bypassing JobCollection.Create (which refreshes the whole collection).
var createResponse = service.Post(path, args);
/* assert(createResponse.Status == 201); */
var streamReader = new StreamReader(createResponse.Content);
var doc = new XmlDocument();
doc.LoadXml(streamReader.ReadToEnd());
// Extract the new job's search id (sid) and bind a Job object to it.
var sid = doc.SelectSingleNode("/response/sid").InnerText;
var job = new Job(service, path + "/" + sid);
Without this, all jobs will be loaded into a collection. With the change, only the newly created job will be loaded.
Please let me know how it goes.
Thanks.
We fixed this problem by passing a reference to the WebRequest itself (inside the Send() method in HttpService.cs) to the ResponseMessage instance created at the end. Basically, keep the web request alive as long as the ResponseMessage is being used.
HOWEVER, oddly enough, this only removes the disconnects in Debug builds (not just while debugging, but any .exe built in debug), and not in Release builds. I can't figure out why.
[In ResponseMessage.cs]
private HttpWebRequest request;
public ResponseMessage(int status, Stream content, HttpWebResponse response, HttpWebRequest request)
{
    this.request = request;
    this.status = status;
    this.content = content;
    this.response = response;
}
[In HttpService.cs, Send() method]
public virtual ResponseMessage Send(string path, RequestMessage request)
{
    (...)
    ResponseMessage returnResponse =
        new ResponseMessage(status, input, response, webRequest);
}
Hope this helps!
The reason that this fix does not work could be that the ResponseMessage itself was collected by the CLR garbage collector, so its finalizer still closed the response. Refer to Sergei's earlier post.
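One way to test that theory (just a sketch, not an SDK change) is to hold the response in a local variable and keep it explicitly reachable until the stream has been fully read, for example with GC.KeepAlive:
// Sketch only: 'response' stays reachable, so its finalizer cannot close the stream mid-read.
var response = job.Service.Get(job.Path + "/results", outArgs);
using (var reader = new ResultsReaderXml(response.Content))
{
    foreach (var @event in reader)
    {
        // consume the event here
    }
}
GC.KeepAlive(response); // prevents collection (and finalization) of 'response' before this point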
I tried this fix in a debug build (as per the fix) and I am still losing the stream before it is completely read. It only works with the workaround from ywu's earlier post.
Checking the admin page, I see 65 jobs under the app.
Now I'm seeing a slightly different error, which happens within ~3 seconds of calling Jobs.Create()
System.IO.IOException: Unable to read data from the transport connection: A blocking operation was interrupted by a call to WSACancelBlockingCall. ---> System.Net.Sockets.SocketException: A blocking operation was interrupted by a call to WSACancelBlockingCall
The stream is working with this workaround, thanks ywu.
Do you know how many jobs are retrieved from the Splunk server? How long does it take for the call to fail with the exception?
Yeah, I'm getting the collection of jobs (with searchArgs that limit by a specific app) and then trying to use that collection with Create():
JobCollection splunkJobs = service.GetJobs(searchArgs);
Job job = splunkJobs.Create(searchQuery, jobArgs);
Grant, I see that the JobCollection.Create function is on the call stack. This function can be optimized for the case where there are lots of jobs on the Splunk server, which is why I asked the question. If this is indeed your case, we can try a workaround that replaces Jobs.Create.
I think that this is unlikely to be due to the same problem. Do you have a lot of jobs on the Splunk server when this happens?
at System.Xml.XmlDocument.Load(Stream inStream)
at Splunk.ResourceCollection`1.Refresh() in c:\Users\Andy\Documents\GitHub\splunk-sdk-csharp\SplunkSDK\ResourceCollection.cs:line 497
at Splunk.ResourceCollection`1.Get(Object key) in c:\Users\Andy\Documents\GitHub\splunk-sdk-csharp\SplunkSDK\ResourceCollection.cs:line 315
at Splunk.JobCollection.Create(String query, Args args) in c:\Users\Andy\Documents\GitHub\splunk-sdk-csharp\SplunkSDK\JobCollection.cs:line 94
System.Net.WebException: The request was aborted: The connection was closed unexpectedly.
at System.Net.ConnectStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Xml.XmlTextReaderImpl.ReadData()
at System.Xml.XmlTextReaderImpl.ParseText(Int32& startPos, Int32& endPos, Int32& outOrChars)
at System.Xml.XmlTextReaderImpl.ParseText()
at System.Xml.XmlTextReaderImpl.ParseElementContent()
at System.Xml.XmlLoader.LoadNode(Boolean skipOverWhitespace)
at System.Xml.XmlLoader.LoadDocSequence(XmlDocument parentDoc)
at System.Xml.XmlDocument.Load(XmlReader reader)
Could you post or send me the exception call stack?
I'm seeing the same connection closed issue when creating a job (i.e., service.GetJobs(searchArgs).Create(query, jobArgs)).
Do you have a workaround for this also?
Thanks for the response. I will give this a try.
Update: When I compile the search code as a .NET 3.5 .dll (for use as a CLR SQL Server stored procedure), it works reliably (so far) without the connection closed issue. If I set the .NET target to 4.0 in the search project, then the search console application works as it comes out of the box with the C# SDK. As a .NET 3.5 console application, I've had intermittent success with calling System.Net.ServicePointManager.SetTcpKeepAlive(true, 100000, 5) within Job.Results(Args args), but it doesn't always work (i.e. the "connection closed unexpectedly" error returns).
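For reference, the keep-alive experiment amounts to a single call like the following before the results request is issued (this is only a mitigation attempt, not a fix):
// Experimental: enable TCP keep-alive probes on new connections.
// This only intermittently avoided the "connection closed unexpectedly" error.
System.Net.ServicePointManager.SetTcpKeepAlive(true, 100000, 5);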
I'm also seeing something similar. The weird thing is that everything works great when running under the debugger, but it fails if I run it from the .EXE.
I have the same issue, getting the "The connection was closed unexpectedly" exception. I've tried different search criteria to reduce the result size, changing the timeout on the HttpWebRequest, etc., but still no luck. I'm beginning to wonder if the C# SDK implementation still needs some fine-tuning.