I installed the app and followed the instructions on how to create an app registration and grant it the Log Analytics API permissions. I created an input with all the proper credentials and IDs, but I don't see any data flowing in.
I tried the query index=_internal log_level=err OR log_level=warn loganalytics*
but I get zero results.
I tried the query index=_internal loganalytics*
and I got some logs with the following errors:
09-04-2018 15:39:29.147 -0400 ERROR ExecProcessor - message from "python /Data/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py" ERROR HTTP 503 Service Unavailable -- KV Store initialization failed. Please contact your system administrator.
2018-09-04 15:39:29,119 ERROR pid=5241 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events
self.collect_events(ew)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events
input_module.collect_events(self, ew)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 36, in collect_events
if helper.get_check_point('last_date'):
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 518, in get_check_point
self._init_ckpt()
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 509, in _init_ckpt
scheme=dscheme, host=dhost, port=dport)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/solnlib/modular_input/checkpointer.py", line 166, in __init__
scheme, host, port, **context)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/solnlib/utils.py", line 167, in wrapper
raise last_ex
HTTPError: HTTP 503 Service Unavailable -- KV Store initialization failed. Please contact your system administrator.
How do I figure out what's wrong?
hi @fredshino,
Did the answer below solve your problem? If so, please resolve this post by approving it. Also, if you're feeling generous, give out an upvote to the user that helped ya. Our users love them upvotes. 🙂
If you still need help, keep us updated so that someone else can solve your problem.
Your kvstore is failing and this app uses it to store checkpoints.
9 times out of 10, when I see this error it’s an expired server.pem.
Try this command:
/Data/splunk/bin/splunk cmd openssl x509 -noout -dates -in /Data/splunk/etc/auth/server.pem
If it prompts for a passphrase, the default is password.
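The output should look something like this (the dates here are just an example):
notBefore=Sep  1 12:00:00 2015 GMT
notAfter=Aug 31 12:00:00 2018 GMT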
If the end date is in the past, the easiest fix is to rename server.pem to server.pem.old and restart Splunk. On startup, Splunk will generate a new server.pem, and that typically fixes KV store startup issues.
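If it comes to that, here's a rough sketch of those steps, assuming Splunk is installed under /Data/splunk as in your traceback:
# move the expired cert aside; Splunk regenerates server.pem on startup
mv /Data/splunk/etc/auth/server.pem /Data/splunk/etc/auth/server.pem.old
/Data/splunk/bin/splunk restart
Then re-run the openssl check above to confirm the new notBefore/notAfter dates.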
Of course, it could also be something else, like mongod failing to bind to the KV store port.
The best way to find out more is to check the _internal index for sourcetype=mongo*.
You were right, the cert was expired. I fixed it but now I'm getting a different error:
2018-09-06 09:34:26,051 DEBUG pid=20177 tid=MainThread file=connectionpool.py:_new_conn:809 | Starting new HTTPS connection (1): api.loganalytics.io
2018-09-06 09:34:26,122 ERROR pid=20177 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events
self.collect_events(ew)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events
input_module.collect_events(self, ew)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 72, in collect_events
response = requests.post(uri,json=search_params,headers=headers)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/api.py", line 110, in post
return request('post', url, data=data, json=json, **kwargs)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/Data/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/adapters.py", line 473, in send
raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer'))
Any insights?
One step closer now!
Save that fix for the future; you'll see it again, I promise.
Your new error is “connection reset by peer”. That can be many things. I’ve seen it happen most often when I’m hitting an API endpoint that doesn’t exist. If that’s the case for you, then it might be something like the wrong workspace ID or some other setting in the inputs config.
Several of those get added to the final URL that the app pulls the data from. If any are wrong, you might see this error.
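One quick way to separate a network problem from a bad ID is to hit the endpoint directly from the Splunk host with curl. This is just a sketch: the exact path the app builds may differ, and the workspace ID below is a placeholder.
# hit the Log Analytics query API directly from the Splunk host
# (no auth token, so even a 401/403 response proves the endpoint is reachable)
curl -v -X POST "https://api.loganalytics.io/v1/workspaces/<your-workspace-id>/query" \
    -H "Content-Type: application/json" \
    -d '{"query": "Heartbeat | take 1"}'
If that dies with a connection reset or a timeout instead of returning an HTTP status, the problem is in the network path (firewall, proxy, DNS) rather than in the app's configuration.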
Can you share your inputs.conf?
Is there a way to privately share that with you? I don't see a private message option here.
My information security department would kill me if I shared all the IDs in public.
You can email me and we can set up a WebEx. Don't share it publicly... not sure what I was thinking even asking.
Thank you so much for your prompt support! I found the issue: our firewall was blocking the outbound calls to api.loganalytics.io. After opening it up, logs started flowing!
Thanks again!
Cheers,
I was 1 for 2!