All Posts


Hi @bedrocho  Do you have any props/transforms on the search head which could be overwriting the sourcetype at search time? What is the output on the SH for $SPLUNK_HOME/bin/splunk btool props test02_health and $SPLUNK_HOME/bin/splunk btool props test_health? Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards Will
Ultimately, this worked just fine, but may I ask you to explain the logic behind how it works? Why did we strip the time in both of the new tokens we created?
Hi @msatish  Are you able to test connectivity from your SingulrAI collector within your organisation to the Splunk instance on that URL/port using something like netcat or curl? Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @dtapia  The `stats` command doesn't allow a span when splitting by _time. Instead, you should either use timechart (which I believe would work in this case - just replace "stats" with "timechart"), or use the `bin` command (`| bin _time span=1min`) BEFORE the `stats` command; then you can use | stats count ... sum ... etc. BY _time (without the span) and the results will be in 1-minute blocks. Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards Will
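For reference, here is a minimal sketch of both approaches, reusing the base search and one of the eval lines from the question; treat it as a starting point rather than a drop-in replacement.

Option with bin before stats:

index=transactions tipo_transaccion="Retiro de Efectivo" (emisor="VISA" AND tipo_cuenta="Crédito")
| eval is_authorized=if(codigo_respuesta=="00" OR codigo_respuesta=="000", 1, 0)
| bin _time span=1min
| stats count AS total_txn sum(is_authorized) AS authorized_txn BY _time

Option with timechart:

index=transactions tipo_transaccion="Retiro de Efectivo" (emisor="VISA" AND tipo_cuenta="Crédito")
| eval is_authorized=if(codigo_respuesta=="00" OR codigo_respuesta=="000", 1, 0)
| timechart span=1min count AS total_txn sum(is_authorized) AS authorized_txn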
Hi @Devika_20  I think an Azure AD-protected URL expects OAuth 2.0 / OpenID Connect authentication rather than Basic auth; it's worth reviewing the docs for these URLs.

Regarding the 200 status: unauthenticated requests to Azure AD-protected resources usually trigger a redirection (HTTP 302) to the Microsoft login page (login.microsoftonline.com). Invoke-WebRequest might receive this 302 redirect and, by default, try to follow it. However, the resulting login page requires interactive user input (or specific OAuth flows), which your script isn't performing. It's also possible that the initial response before the redirect, or the response at the redirect URL itself if redirects aren't followed correctly, is interpreted as a 200 OK by Invoke-WebRequest, especially if -UseBasicParsing simplifies how responses are handled. The server isn't explicitly rejecting the credentials with a 401 because it wasn't even trying to process them via Basic Auth; it was trying to initiate the standard Azure AD login flow.

The other thing to check is the logic around the 200 status and WWW-Authenticate headers. I believe the check if ($response.StatusCode -eq 200 -and $response.Headers["WWW-Authenticate"]) inside the try block is incorrect. A 200 OK response signifies success, and the WWW-Authenticate header is typically sent with a 401 Unauthorized response to tell the client how to authenticate, not upon success, so this condition would likely never be true in a standard scenario.

Ultimately, the correct way to achieve this would be using OAuth. You would need to modify your script to authenticate using an OAuth 2.0 flow appropriate for a non-interactive script. The Client Credentials Flow is the standard and most secure method for service-to-service or script-based authentication against Azure AD.

Steps:

Azure AD App Registration: Register an application in your Azure AD tenant. Under "API permissions," grant this application the necessary permissions to access your target URL/API (e.g., if it's a custom API, grant the application permissions defined by that API). Make sure to grant admin consent if required. Under "Certificates & secrets," create a new client secret. Copy this secret value immediately, as you won't be able to see it again. Note down the Application (client) ID and the Directory (tenant) ID.

Modify the PowerShell script: Replace the Basic Auth logic with the OAuth 2.0 Client Credentials Flow. The following template would be a good starting point:

# --- Configuration ---
$clientId = "YOUR_APP_REGISTRATION_CLIENT_ID"
$clientSecret = "YOUR_APP_REGISTRATION_CLIENT_SECRET" # Consider using Azure Key Vault or secure storage
$tenantId = "YOUR_AZURE_AD_TENANT_ID"
$targetUrl = "<TARGET_URL>" # URL to monitor

# Define the scope. Often 'https://resource.example.com/.default' or 'api://<api_client_id>/.default'
# For Microsoft Graph API it might be 'https://graph.microsoft.com/.default'
# Check the documentation for your specific targetUrl API or Azure service
$scope = "YOUR_API_SCOPE/.default" # e.g., "api://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/.default" or resource URI + "/.default"

# --- Get Access Token ---
$tokenRequestBody = @{
    Grant_Type    = "client_credentials"
    Scope         = $scope
    Client_Id     = $clientId
    Client_Secret = $clientSecret
}
$tokenEndpoint = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"

try {
    Write-Verbose "Requesting Access Token from $tokenEndpoint"
    $tokenResponse = Invoke-RestMethod -Method Post -Uri $tokenEndpoint -Body $tokenRequestBody -ContentType 'application/x-www-form-urlencoded'
    $accessToken = $tokenResponse.access_token
    Write-Verbose "Successfully obtained Access Token."
} catch {
    Write-Host "Error obtaining Access Token: $($_.Exception.Message)"
    # Log detailed error for Splunk
    Write-Host "OAuth Token Request Failed. Status Code: $($_.Exception.Response.StatusCode). Response Body: $($_.Exception.Response.GetResponseStream() | Foreach-Object { New-Object System.IO.StreamReader($_) } | Foreach-Object { $_.ReadToEnd() })"
    exit 1 # Exit because we cannot proceed without a token
}

# --- Send Request with Bearer Token ---
$headers = @{ Authorization = "Bearer $accessToken" }

try {
    Write-Verbose "Sending request to $targetUrl"
    # -MaximumRedirection 0 prevents following redirects which might mask the initial auth status
    $response = Invoke-WebRequest -Uri $targetUrl -Headers $headers -Method Get -UseBasicParsing -ErrorAction Stop -MaximumRedirection 0

    # Successful request (usually 2xx) - log success for Splunk
    Write-Host "Response Code: $($response.StatusCode)"
    Write-Host "Monitoring check successful for $targetUrl"
} catch {
    # Handle HTTP errors (like 401 Unauthorized, 403 Forbidden, etc.)
    $statusCode = $_.Exception.Response.StatusCode
    $statusDescription = $_.Exception.Response.StatusDescription

    # Log failure for Splunk
    Write-Host "Response Code: $statusCode"
    Write-Host "Status Description: $statusDescription"
    Write-Host "Monitoring check failed for $targetUrl. Error: $($_.Exception.Message)"

    # You can add specific logging/alerting for critical codes like 401/403
    if ($statusCode -eq [System.Net.HttpStatusCode]::Unauthorized -or $statusCode -eq [System.Net.HttpStatusCode]::Forbidden) {
        Write-Host "CRITICAL: Authentication or Authorization failed ($statusCode)."
        # Add specific Splunk logging for auth failure here
    }

    # Optional: Log response body for debugging, be careful with sensitive data
    # $errorResponseBody = $_.Exception.Response.GetResponseStream() | Foreach-Object { New-Object System.IO.StreamReader($_) } | Foreach-Object { $_.ReadToEnd() }
    # Write-Host "Error Response Body: $errorResponseBody"
}

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hello, @kiran_panchavat, @PickleRick, @livehybrid  Thank you for your responses; they were very helpful. However, do you happen to know why I am getting ack set to false?
We are using the following PowerShell script to monitor Azure AD authentication-enabled URLs in Splunk. However, when incorrect credentials are entered, a 200 response code is returned instead of the expected failure response (e.g., 401 Unauthorized). Has anyone encountered this issue? Please help us rectify this and ensure that incorrect credentials are flagged with the appropriate response code.

# Prompt User for Credentials
$credential = Get-Credential

# Define Target URL
$targetUrl = "<TARGET_URL>"  # URL to monitor

# Convert Credentials to Base64 for Authorization Header
$username = $credential.UserName
$password = $credential.GetNetworkCredential().Password
$authValue = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$username`:$password"))
$headers = @{ Authorization = "Basic $authValue" }

# Send Request with Authorization
try {
    $response = Invoke-WebRequest -Uri $targetUrl -Headers $headers -Method Get -UseBasicParsing -ErrorAction Stop

    # Check if the server actually challenges for authentication
    if ($response.StatusCode -eq 200 -and $response.Headers["WWW-Authenticate"]) {
        Write-Host "Authentication failed: Invalid credentials provided."
    } else {
        Write-Host "Response Code: $($response.StatusCode)"
    }
} catch {
    if ($_.Exception.Response.StatusCode -eq 401) {
        Write-Host "Authentication failed: Invalid credentials provided."
    } else {
        Write-Host "Request failed with error: $($_.Exception.Message)"
    }
}
Hi @bedrocho , let me understand: do you have two flows from two hosts and want to assign a different sourcetype to each one, or do both flows contain both sourcetypes? In the first case you can assign the correct sourcetype in the inputs.conf of each host and the issue is solved. If instead the events are mixed, you should find a rule (a regex) to drive the sourcetype assignment, using something like this:

props.conf

[test_health]
TRANSFORMS-sourcetype_overriding = sourcetype_overriding1,sourcetype_overriding2

transforms.conf

[sourcetype_overriding1]
REGEX = <regex1>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::test01_health

[sourcetype_overriding2]
REGEX = <regex2>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::test02_health

Ciao.
Giuseppe
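Once this is deployed and the instance restarted, a quick sanity check along these lines (the index name is a placeholder - use whichever index the flows land in) can confirm which sourcetype each host's events ended up with:

index=your_index sourcetype IN (test_health, test01_health, test02_health)
| stats count BY host sourcetype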
Hi @Poojitha , when you create the alert, use the $result.OWNER_EMAIL$ token in the "Send to" field, remembering to separate the alert results (trigger one alert for each result) in the alert options. Ciao. Giuseppe
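As a sketch building on the search from the question (ACCOUNT_ID is a placeholder for however the account field is actually named, and the comma delimiter in mvjoin is an assumption - use whatever separator your mail server expects), you could keep one result row per account and rebuild a single recipient string for that row:

<logic>
| eval email_list = split(OWNER_EMAIL, ";")
| stats values(email_list) AS email_list values(ENVIRONMENT) AS ENVIRONMENT values(category) AS EVENT_CATEGORY values(EVENT_TYPE) AS EVENT_TYPE values(REGION) AS Region values(AFFECTED_RESOURCE_ARNS) AS AFFECTED_RESOURCE_ARNS BY ACCOUNT_ID
``` one row per account: rebuild a single comma-separated recipient string so the per-result token is usable in the To field ```
| eval OWNER_EMAIL = mvjoin(email_list, ",")

With the alert set to trigger for each result, $result.OWNER_EMAIL$ then expands to that row's recipients and the email body contains only that row.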
Hi @dtapia , as you can read at https://docs.splunk.com/Documentation/Splunk/9.4.1/SearchReference/Stats you cannot use the span option in the stats command; it's possible to use it only in the tstats or timechart commands, but not in stats. In this case, you have to add the bin (or bucket) command before the stats command, something like this:

index=transactions tipo_transaccion="Retiro de Efectivo" (emisor="VISA" AND tipo_cuenta="Crédito")
| eval is_authorized=if(codigo_respuesta=="00" OR codigo_respuesta=="000", 1, 0)
| eval is_declined=if(is_authorized==0 AND (codigo_respuesta!="91" AND codigo_respuesta!="68" AND codigo_respuesta!="timeout"), 1, 0)
| eval is_timeout=if(codigo_respuesta=="91" OR codigo_respuesta=="68" OR codigo_respuesta=="timeout", 1, 0)
| bin span=1m _time
| stats count as total_txn, sum(is_authorized) as authorized_txn, sum(is_declined) as declined_txn, sum(is_timeout) as timeout_txn, sum(eval(is_authorized*importe)) as authorized_amount, sum(eval(is_declined*importe)) as declined_amount, sum(eval(is_timeout*importe)) as timeout_amount by _time

Ciao.
Giuseppe
Help: when I try to run the following, I get: Error in 'stats' command: The argument 'span=1min' is invalid.

index=transactions tipo_transaccion="Retiro de Efectivo" (emisor="VISA" AND tipo_cuenta="Crédito")
| eval is_authorized=if(codigo_respuesta=="00" OR codigo_respuesta=="000", 1, 0)
| eval is_declined=if(is_authorized==0 AND (codigo_respuesta!="91" AND codigo_respuesta!="68" AND codigo_respuesta!="timeout"), 1, 0)
| eval is_timeout=if(codigo_respuesta=="91" OR codigo_respuesta=="68" OR codigo_respuesta=="timeout", 1, 0)
| stats count as total_txn, sum(is_authorized) as authorized_txn, sum(is_declined) as declined_txn, sum(is_timeout) as timeout_txn, sum(eval(is_authorized*importe)) as authorized_amount, sum(eval(is_declined*importe)) as declined_amount, sum(eval(is_timeout*importe)) as timeout_amount by _time span="1min"

Please, your support; I really don't know what is causing the error.

Regards
Dtapia
We created a Splunk token and added it in the SingulrAI environment, along with the Splunk endpoint details (site URL and Splunk management port), to send logs. However, Singulr AI was unable to pick up Splunk logs due to connectivity or network timeout issues. Singulr AI support mentioned they are seeing connectivity / network timeout issues with the provided Splunk domain + port from the Singulr collector (deployed in our organization's environment). What could be the reason?
@sylee It seems like the issue is with the JDBC drivers.
@sylee I noticed that the screenshot contains a visible username and password. I highly recommend not sharing sensitive information in public or community channels. For further assistance, please raise a support ticket with Splunk so they can investigate this securely.
Thanks for your help. I tested whether the JDBC driver itself has a problem: a custom collector successfully retrieves the schema list using the same JDBC driver. Is there really a problem with the JDBC driver?
@sylee This error confirms that the issue is with the Tibero JDBC driver (tibero6-jdbc-14.jar). The AbstractMethodError on getSchemas indicates the driver does not implement a method from the JDBC interface version that DB Connect is calling, which points to a version mismatch. Ensure that the JDBC driver for your database is correctly installed and compatible with Splunk DB Connect.
Hi All, I have a lookup that contains a set of email IDs and associated accounts. Example:

Account ID    OWNER_EMAIL
34234234      test1@gmail.com; test2@gmail.com
123234234     test3@gmail.com;test4@gmail.com

<logic>
| eval email_list = split(OWNER_EMAIL, ";")
| stats values(email_list) as email_list values(ENVIRONMENT) as ENVIRONMENT values(category) as EVENT_CATEGORY values(EVENT_TYPE) as EVENT_TYPE values(REGION) as Region values(AFFECTED_RESOURCE_ARNS) as AFFECTED_RESOURCE_ARNS

I have configured $result.email_list$ in the alert action's email "To" setting. The email is sent successfully, but all of the results together are sent to every recipient.

Result:

Account ID   Email_list                         Environment   Category    Type    Region   Arns       Description
34234234     test1@gmail.com; test2@gmail.com   Development   test_cat1   Event1  global   testarn1   testdescr1
123234234    test3@gmail.com;test4@gmail.com    Production    test_cat2   Event2  global   testarn2   testdescr2

When the alert is triggered, a separate email should go to test1@gmail.com; test2@gmail.com (both of them in the To field) with the email body containing only the first row, and another email should go to test3@gmail.com;test4@gmail.com (both of them in the To field) with the email body containing only the second row. Please help how to achieve this.

Regards,
PNV
Thanks for your support. First of all, here is the error message in INFO mode; I am going to change the mode from INFO to DEBUG, then find and share the error message.

File path: $SPLUNK_HOME$/var/.../splunk_app_db_connect_server.conf

00 [dw-188176 - GET /api/connections/new_mdm/metadata/schemas?catalog=] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request 0455f2250e391159
java.lang.AbstractMethodError: Method com/tmax/tibero/jdbc/TbDatabaseMetaData.getSchemas(Ljava/lang/String;Ljava/lang/String;)Ljava/sql/ResultSet; is abstract
Thanks @kiran_panchavat  I was pretty sure it's technically possible, but I'd be surprised if I were the first person trying to use Splunk to check an environment for applicable CVEs. So, kind of hoping I am not reinventing the wheel.