All Apps and Add-ons

TA for MS Log Analytics - Why is it failing to establish a new connection?

Loves-to-Learn Lots

Hello.

After configuring an input for the Log Analytics TA as follows ...

Name = AZURESQL
Interval = 300
Index = XXX
Resource Group = XXX
Workspace ID = XXX
Subscription ID = XXX
Tenant ID = XXX
Application ID = XXX
Application Key = XXX
Log Analytics Query = AzureDiagnostics | where TimeGenerated > ago(5m) | where ResourceProvider == 'MICROSOFT.SQL' | where ResourceGroup contains 'XXX' | where Category == 'SQLSecurityAuditEvents' | where action_name_s !contains 'TRANSACTION' or action_name_s != 'AUDIT SESSION CHANGED' | project TimeGenerated, SubscriptionId, ResourceGroup, LogicalServerName_s, Resource, OperationName, server_instance_name_s, database_name_s, action_name_s, client_ip_s, host_name_s, server_principal_name_s, statement_s
Start Date = 07/10/2020 09:00:00
Event Delay / Lag Time = 15

I received the following error message:

07-29-2020 10:17:40.828 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py" ERROR HTTPSConnectionPool(host='api.loganalytics.io', port=443): Max retries exceeded with url: /v1/workspaces/a49c6f91-5bf9-472f-bd14-746fd02d78f0/query (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f903746c890>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))

What is the best way to resolve this?

Regards,
Max

1 Solution

SplunkTrust

This is from your error message: 'api.loganalytics.io', port=443

So you need to open routes and firewall ports from your server to api.loganalytics.io on port 443. Note that the error also says 'Temporary failure in name resolution', so the server must be able to resolve that hostname in DNS.
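Since the last part of the error ('Temporary failure in name resolution') points at DNS specifically, a quick check run on the heavy forwarder can confirm whether the hostname resolves at all. This is a minimal sketch, not part of the TA:

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if DNS can resolve *host* from this machine."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# On the heavy forwarder you would check the real endpoint:
# can_resolve("api.loganalytics.io")
```

If this returns False on the forwarder, the problem is DNS configuration (resolv.conf, internal DNS, or a required proxy) rather than a blocked port.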



Champion

The issue is more likely related to a proxy. If your organization uses a proxy, add the proxy details to the Splunk user's ~/.bashrc file.
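For context on why the ~/.bashrc approach works: Python's standard library (and the requests library the TA's script uses, per the traceback) generally picks up proxy settings from environment variables such as HTTPS_PROXY. A minimal sketch, with a placeholder proxy URL:

```python
import os
import urllib.request

# Placeholder proxy URL -- substitute your organization's proxy.
# Exporting this in the Splunk user's ~/.bashrc has the same effect
# for processes started from that shell environment:
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

# Python reads proxy settings from the environment:
proxies = urllib.request.getproxies()
print(proxies.get("https"))
```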


Loves-to-Learn Lots

Thank you for providing the caveat.  I will set the lag time to 5 minutes and see what happens.  Hopefully, we will not miss events.  Our goal is to set up alerts as close to real time as possible.

As for the proxy, we have direct Internet access to the https://api.loganalytics.io endpoint.  Are there any other things we should look at?

 


SplunkTrust

I'm closing this thread as answered.  Please open a new thread for additional questions.



Loves-to-Learn Lots

We are now able to perform an "nslookup" of api.loganalytics.io from the Heavy Forwarder, and we are no longer getting any error messages when searching "index=_internal log_level=err* OR log_level=warn* loganalytics*".  Thanks!

But when I perform a search against the index, I am not seeing any data being ingested.  How can I troubleshoot this issue?  We do have other data sources being ingested using this HF.

 


SplunkTrust

Please try a command that tests port connectivity.

nslookup only tests DNS resolution, and ping only tests ICMP.

You need to test TCP connectivity on the port itself; telnet and curl are both great for this.
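The same check can also be scripted. This is a small Python sketch (not part of the TA) equivalent to the telnet/nc test:

```python
import socket

def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the heavy forwarder you would run:
# check_port("api.loganalytics.io", 443)
```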

 


Loves-to-Learn Lots

As suggested, I performed two tests to check the port ...

telnet api.loganalytics.io 443
Trying xx.xx.xxx.xx...
Connected to xx.xx.cloudapp.azure.com.
Escape character is '^]'.
Connection closed by foreign host.

nc -vz api.loganalytics.io 443
Connection to api.loganalytics.io 443 port [tcp/https] succeeded!

Based on the results, the Heavy Forwarder is able to communicate with api.loganalytics.io on port 443.


SplunkTrust

OK, try a much simpler query and see if it works.

The API only supports a certain version of the Log Analytics query language, so some commands won't work.
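For reference, the endpoint in the original error is the Log Analytics v1 query API (POST to /v1/workspaces/<id>/query with the query in a JSON body). A hedged sketch of how such a request is shaped; build_query_request is an illustrative helper, not a function from the TA, and "XXX" is a placeholder workspace ID:

```python
import json

def build_query_request(workspace_id, kusto_query):
    """Illustrative helper: URL and JSON body for the Log Analytics v1 query API."""
    url = "https://api.loganalytics.io/v1/workspaces/%s/query" % workspace_id
    body = json.dumps({"query": kusto_query}).encode("utf-8")
    return url, body

# Start with the simplest possible query to isolate language-support issues:
url, body = build_query_request("XXX", "AzureMetrics | take 10")
```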


Loves-to-Learn Lots

Thanks.  I was able to run the following query ...

AzureMetrics | take 10

Now, there are results in the index.  Can you provide either a list of supported commands or the latest version of the Log Analytics query language that is compatible with the TA?  This will be helpful as we start developing queries for extracting Azure log data for Azure SQL and Cosmos DB.

 


SplunkTrust

Sorry, but I don't even have a development environment for this.  I rely on the community to make suggestions and, if possible, edits.


Loves-to-Learn Lots

Regarding the Log Analytics query, we broke it apart and were able to run the following query without issue ...

AzureDiagnostics | where ResourceProvider == 'MICROSOFT.SQL' | where ResourceGroup contains 'XXXXX' | where Category == 'SQLSecurityAuditEvents' | take 10

But when we add the following clause ...

| where TimeGenerated > ago(5m)

We do not get any results back.  Is it possible that the ">" character needs special handling by Splunk before the query is sent to Log Analytics?

 


SplunkTrust

Possibly.  You can try "&gt;" instead of ">".
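Another thing worth checking: if the query ends up in a URL query string rather than a JSON body, the ">" character needs percent-encoding rather than HTML-escaping. A quick illustration with Python's standard library:

```python
from urllib.parse import quote

# How the problematic clause looks after URL percent-encoding:
clause = "TimeGenerated > ago(5m)"
encoded = quote(clause)
print(encoded)  # TimeGenerated%20%3E%20ago%285m%29
```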



Loves-to-Learn Lots

Thanks.  Actually, we used the following Log Analytics query and it works as expected ...

AzureDiagnostics | where ResourceProvider == 'MICROSOFT.SQL' | where ResourceGroup contains 'XXXXXXX' | where Category == 'SQLSecurityAuditEvents'

When configuring the input, we set Event Delay / Lag Time = 15 (the default).  Based on the field's description, this parameter represents the number of minutes to look into the past.  Events flow into Log Analytics in 5-minute intervals, and Interval = 300 represents the input's run interval in seconds.  Would it be possible to ingest data from Log Analytics with a lag time of less than 5 minutes?

 

 


SplunkTrust

"Would it be possible to ingest data from Log Analytics with a lag time of less than 5 minutes?"

Yes but due to the way azure doesn't guarantee timeliness of the data, you will end up missing events.

We built the lag in to overcome the issue. At least 15 minutes of lag is recommended.
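To illustrate the trade-off, here is a sketch of one plausible windowing scheme (not necessarily exactly what the TA implements): each run queries a window that ends "lag" minutes in the past, so events that arrive in Log Analytics late still fall inside a window that has not been queried yet.

```python
from datetime import datetime, timedelta, timezone

INTERVAL = timedelta(seconds=300)  # Interval = 300 from the input config
LAG = timedelta(minutes=15)        # recommended Event Delay / Lag Time

def query_window(now):
    """One plausible scheme: query [now - LAG - INTERVAL, now - LAG]."""
    end = now - LAG
    start = end - INTERVAL
    return start, end

now = datetime(2020, 7, 29, 10, 0, tzinfo=timezone.utc)
start, end = query_window(now)
```

Shrinking LAG below Azure's ingestion delay means a window can be queried before all of its events have actually landed in Log Analytics, and those late arrivals are then never picked up.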
