This isn't at the SDK level, but in the underlying JSON libraries that we don't control, and it isn't rare. Here's the general approach, so you don't have to deal with giant chunks of data being sent over the pipe. It also lets you resume more straightforwardly if the connection gets dropped for some reason.
When you call getResults on the Job, you can pass two arguments: count and offset. offset is the number of records at the beginning to skip, and count is the number of records to return after skipping. So you call getResults with offset 0 and count 100, parse those, then call again with offset 100 and count 100, then offset 200 and count 100, and so on.
Here's some code, starting from your example where you've defined jss and waited for it to complete:
int nEventsPerRequest = 100;
JobResultsArgs oparg = new JobResultsArgs(); // This has convenience methods for getting results
oparg.setOutputMode(JobResultsArgs.OutputMode.JSON); // match the JSON results reader below
oparg.setCount(nEventsPerRequest);
for (int offset = 0; offset < jss.getResultCount(); offset += nEventsPerRequest) {
    oparg.setOffset(offset); // advance through the results one page at a time
    InputStream res = jss.getResults(oparg);
    ResultsReaderJson resultsReader = new ResultsReaderJson(res);
    // ...process the results in this batch...
}
I haven't tested that, so it might have fencepost errors, but that will take care of having too much data in the stream.
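To show the resume-after-disconnect benefit concretely, here's a self-contained sketch of the same pagination pattern with the SDK calls stubbed out. The names (fetchBatch, fetchAll, ResumablePager) are hypothetical, and fetchBatch stands in for the jss.getResults call; the point is that the offset only advances after a batch is fully processed, so a failed request can simply be retried from the last committed offset.

    import java.util.ArrayList;
    import java.util.List;

    public class ResumablePager {
        // Hypothetical stand-in for jss.getResults(oparg): returns records
        // [offset, offset + count) from a source of `total` items.
        static List<Integer> fetchBatch(int offset, int count, int total) {
            List<Integer> batch = new ArrayList<>();
            for (int i = offset; i < Math.min(offset + count, total); i++) {
                batch.add(i);
            }
            return batch;
        }

        // Pull all records one page at a time. Because `offset` is only
        // advanced after a batch succeeds, a dropped connection can be
        // retried from the same offset without losing or duplicating data.
        static List<Integer> fetchAll(int total, int pageSize, int maxRetries) {
            List<Integer> all = new ArrayList<>();
            int offset = 0;
            while (offset < total) {
                int attempts = 0;
                while (true) {
                    try {
                        List<Integer> batch = fetchBatch(offset, pageSize, total);
                        all.addAll(batch);
                        offset += batch.size(); // commit progress only on success
                        break;
                    } catch (RuntimeException e) { // stand-in for an I/O failure
                        if (++attempts > maxRetries) throw e;
                    }
                }
            }
            return all;
        }

        public static void main(String[] args) {
            List<Integer> result = fetchAll(250, 100, 3);
            System.out.println(result.size());
        }
    }

The last page is naturally short (50 records for a total of 250 with pages of 100), which is why the loop commits batch.size() rather than the page size.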