Hi,
I am having an issue where some sourcetypes are not getting data into Splunk from Log Analytics (LA). On checking the logs, I can see the following:
2021-03-11 13:07:39,577 ERROR pid=13031 tid=MainThread file=base_modinput.py:log_error:307 | OMSInputName="MyInput" status="400" step="Post Query" response="{"error":{"message":"Response size too large","code":"ResponseSizeError","correlationId":"XXX","innererror":{"code":"ResponseSizeError","message":"Maximum response size of 67108864 bytes exceeded. Actual response Size is 73664723 bytes."}}}"
2021-03-11 13:07:39,577 ERROR pid=13031 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events
self.collect_events(ew)
File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events
input_module.collect_events(self, ew)
File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 86, in collect_events
for i in range(len(data["tables"][0]["rows"])):
UnboundLocalError: local variable 'data' referenced before assignment
2021-03-11 13:08:18,608 ERROR pid=13216 tid=MainThread file=base_modinput.py:log_error:307 | OMSInputName="MyInput2" status="400" step="Post Query" response="{"error":{"message":"Response size too large","code":"ResponseSizeError","correlationId":"XXX","innererror":{"code":"ResponseSizeError","message":"Maximum response size of 67108864 bytes exceeded. Actual response Size is 73136457 bytes."}}}"
2021-03-11 13:08:18,608 ERROR pid=13216 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events
self.collect_events(ew)
File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events
input_module.collect_events(self, ew)
File "$splunkhome$/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 86, in collect_events
for i in range(len(data["tables"][0]["rows"])):
UnboundLocalError: local variable 'data' referenced before assignment
Am I hitting a limit? If so, is there any way to overcome it?
Any suggestions appreciated... @jkat54
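For what it's worth, 67,108,864 bytes is exactly 64 MiB, so this looks like a hard cap on the query response size. The UnboundLocalError in the traceback also suggests the add-on only assigns `data` when the POST succeeds, so on a 400 it falls through to the parsing loop with `data` never defined. A minimal defensive sketch (illustrative names and structure, not the TA's actual code):

```python
def parse_rows(status_code, body):
    """Return the list of result rows, or [] when the query failed.

    `status_code` and `body` stand in for the HTTP status and parsed
    JSON of a Log Analytics query response (hypothetical signature,
    not the add-on's real API).
    """
    if status_code != 200:
        # Bail out on errors such as ResponseSizeError instead of
        # falling through to an undefined `data` variable.
        return []
    data = body  # in the real add-on this would come from response.json()
    tables = data.get("tables", [])
    if not tables:
        return []
    return tables[0].get("rows", [])
```

With a guard like this, a 400 from the API would surface only the logged error, not the secondary UnboundLocalError.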
Here goes another input, but it was stopped by a different limit:
"innererror":{"code":"ResponseSizeError","message":"Maximum response size of 100000000 bytes exceeded. Actual response Size is 103052502 bytes."
Now the error says the limit is 100,000,000 bytes (~100 MB) and it has still been exceeded. Can anyone explain?
I am facing the same issue.
response="{"error":{"message":"Response size too large","code":"ResponseSizeError","correlationId":"XXX","innererror":
Hi, I'm getting the exact same issue. It started with one workspace a couple of weeks ago; now five of my six workspaces are failing with this error.
I believe there is a 67,108,864-byte (64 MiB) limit on the response size.
I've trimmed the query time range back from 300 seconds down to 120 seconds and still get the same error.
We're using generic "search *" queries to pull all the tables. I'm meeting with DevOps this afternoon to go over it.
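Since the cap appears to be per response, one workaround (assuming you can control the query, either in the input config or a custom script) is to replace a single `search *` over the whole interval with narrower per-table queries, and/or to slice the time range so each response stays under the limit. A hedged sketch of the slicing idea; `split_window` is a hypothetical helper, not part of the TA:

```python
from datetime import datetime, timedelta

def split_window(start, end, slices):
    """Split [start, end) into `slices` equal sub-intervals.

    The idea: issue one Log Analytics query per sub-interval so each
    response stays under the size cap, instead of one query over the
    whole interval. Illustrative only.
    """
    step = (end - start) / slices
    return [(start + i * step, start + (i + 1) * step) for i in range(slices)]

# Example: carve a 300-second collection interval into 60-second windows.
windows = split_window(datetime(2021, 3, 11, 13, 0),
                       datetime(2021, 3, 11, 13, 5), 5)
```

Note this only helps if the data volume is roughly uniform over time; a single busy minute can still blow the cap, in which case narrowing the query itself (specific tables or projected columns instead of `search *`) is the other lever.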
Exactly... reducing the interval doesn't help, as it's some kind of response-size limit being hit. @JimboSlice, do comment here if you are able to trace anything out... much appreciated!
We have sent this on to our Azure support team to get to the bottom of it. Yesterday, when running queries in the management console for the Log Analytics workspace (LAWS), this same error came up, so we think something has changed on the Azure side.
Right, I noticed the same while querying LA directly. I will reach out to our counterparts and keep you posted.
Oh yeah, I sure will. I think if DevOps confirm that nothing extra has been added to the LAWS (new tables, or bulk loads into the tables), then we will open a support ticket with Azure.
There don't seem to be any buffer settings under the hood, and from what I've read elsewhere there is a 67 MB limit on the Azure side. DevOps originally told me this a few weeks ago, and I've also seen it mentioned on a Power BI forum. But the fact that this is suddenly happening to all of us at the same time arouses suspicion.
Azure logs are the worst logs on the planet; constant issues (Event Hubs too).
Hi @JimboSlice,
By any chance, do you have any update from the Azure side?
I'm also having the same error. Were you able to fix it? @jkat54, can you please help us here?
Hi @raja_mta, I was able to make it work by increasing the interval for one of the inputs (the interval is set to 300 seconds for most of my inputs). However, for a few weeks now I have been facing the same issue again with another input, and increasing the interval is not helping this time. I will let you know if I manage to make it work some other way.
Hey first off, nice work everyone!
I'd love to help, but I no longer have a lab to test this in. It certainly does sound like something changed on their end, but without a lab I can't develop a solution.
You can use the "contact the developer" button on Splunkbase to email me, and we can discuss your options if you like.
Hi @jkat54,
I sent you an email about this. Can you please respond?