All Posts

Thank you @kiran_panchavat, that at least gives me something to investigate further, but it is also confusing. Health Check is complaining: "One or more defined connections require the corresponding JDBC driver." However, those JDBC drivers come from the Splunk_JDBC_mysql add-on app, which I checked, and it is running the latest version. Confusing.
@dbray_sd  Did you perform the health check after upgrading to the latest version of DB Connect? https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/CheckInstallationHealth   
Tried the same approach, but nothing is coming under "Statistics". When I am not checking any condition I get the record below. If you relate my question to it, you can see that under the 5-KEY inboundSsoType a deep link is coming in the response, so I just want to replace the "5-KEY" string with that deep link. Below is the JSON against which I am trying to check the condition:

message: {
    backendCalls: [ ... ]
    deviceInfo: { ... }
    elapsedTime: 210
    exceptionList: [ ... ]
    incomingRequest: {
        deepLink: https://member.uhc.com
        hsidSSOParameters: { ... }
        inboundSsoType: 5-KEY
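Since nothing shows up under Statistics, the JSON fields may simply not be extracted at search time. A minimal SPL sketch that pulls them out explicitly with spath before the eval (the base search and field paths are taken from the query and JSON above; adjust if your extraction differs):

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| spath path=message.incomingRequest.inboundSsoType output=inboundSsoType
| spath path=message.incomingRequest.deepLink output=deepLink
| spath path=message.ssoAttributes.EEID output=EEID
| eval ssoType=if(inboundSsoType=="5-KEY", deepLink, inboundSsoType)
| stats distinct_count(EEID) as Count by ssoType

The explicit spath/output names avoid the dotted field names entirely, which sidesteps the quoting issues discussed further down in this thread.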
+1 on that question. The Splunk architectural component is called a Deployment Server, not a deployment manager, and it doesn't quarantine anything. Quarantine can happen in various other situations, but they have nothing to do with the DS. So what is quarantined in your setup, and where?
I am using StatsD to send metrics to a receiver, but I am encountering an issue where timing metrics (|ms) are not being captured, even though counter metrics (|c) work fine in Splunk Observability Cloud.

Example of a working metric. The following command works and is processed correctly by the StatsD receiver:

echo "test_Latency:42|c|#key:val" | nc -u -w1 localhost 8127

Example of a non-working metric. However, this command does not result in any output or processing:

echo "test_Latency:0.082231|ms" | nc -u -w1 localhost 8127

Current StatsD configuration. Here is the configuration I am using for the receiver, following the doc at https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver:

receivers:
  statsd:
    endpoint: "localhost:8127"
    aggregation_interval: 30s
    enable_metric_type: true
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "gauge"
      - statsd_type: "timing"
        observer_type: "histogram"
        histogram:
          max_size: 100
      - statsd_type: "distribution"
        observer_type: "summary"
        summary:
          percentiles: [0, 10, 50, 90, 95, 100]

Why are timing metrics (|ms) not being captured while counters (|c) are working? Can you please help check this? The statsdreceiver README says it supports timer metrics: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/statsdreceiver/README.md#timer Any help or suggestions would be greatly appreciated. Thank you.
After upgrading Splunk to 9.4.0 and Splunk DB Connect to 3.18.1, all inputs have the error: "Checkpoint not found. The input in rising mode is expected to contain a checkpoint." None of them are pulling in data. Looking over the logs, I see:

2025-01-10 12:16:00.298 +0000 Trace-Id=1d3654ac-86c1-445f-97c6-6919b3f6eb8c [Scheduled-Job-Executor-116] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader
com.splunk.dbx.server.exception.ReadCheckpointFailException: Error(s) occur when reading checkpoint.
    at com.splunk.dbx.server.dbinput.task.DbInputCheckpointManager.load(DbInputCheckpointManager.java:71)
    at com.splunk.dbx.server.dbinput.task.DbInputTask.loadCheckpoint(DbInputTask.java:133)
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.executeQuery(DbInputRecordReader.java:82)
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:55)
    at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:140)
    at org.easybatch.core.job.BatchJob.call(BatchJob.java:97)
    at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.runTask(InputServiceImpl.java:321)
    at com.splunk.dbx.server.api.resource.InputResource.lambda$runInput$1(InputResource.java:183)
    at com.splunk.dbx.logging.MdcTaskDecorator.run(MdcTaskDecorator.java:23)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)

I'm unable to edit the config and update the checkpoint value. Even though Execute Query works, when I try to save the update it gives: "Error(s) occur when reading checkpoint." Has anybody else successfully upgraded to 9.4.0 and 3.18.1?
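As a quick way to scope how many inputs are affected, a hedged SPL sketch over the internal logs; it relies only on the exception name shown above, and the Trace-Id extraction assumes the same log format as the sample line:

index=_internal "ReadCheckpointFailException"
| rex "Trace-Id=(?<trace_id>[0-9a-f-]+)"
| stats count latest(_time) as last_seen by source trace_id
| convert ctime(last_seen)

Grouping by source and Trace-Id at least shows whether every input fails on each run or only some of them.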
I might have an update for this one, as I was after the same thing as the original question suggests and I did not want to use REST for it. You might want to try the following and see if it works for you.

index=_audit sourcetype=audittrail (action=edit_roles_grantable OR action=edit_role) (TERM(object) OR TERM(role)) (operation=create OR operation=edit OR action=edit_role) info=granted

Basically this search finds two types of logs within the _audit index. The first is "edit_roles_grantable", which should be logged any time someone edits a role (creating counts as editing too). The second is "edit_role", which also shows what was changed (this part is not perfect: I was able to see which capability was changed, but I could not find changes to which indexes the role can search). Anyway, you can play around with the search and get what you need in some cases.
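If you want a quick summary on top of that, a sketch that just groups the same results (user, action and object are the usual audittrail fields, but verify them against your own events):

index=_audit sourcetype=audittrail (action=edit_roles_grantable OR action=edit_role) (TERM(object) OR TERM(role)) (operation=create OR operation=edit OR action=edit_role) info=granted
| stats count latest(_time) as last_change by user action object
| convert ctime(last_change)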
What do you mean by "Linux UF will get quarantined by the deployment manager:8089"?
It seems that your target is an SCP environment. Are you using the Universal Forwarder package from SCP? Based on those server names you have something other than the AWS Victoria Experience in use, or otherwise you have the wrong outputs.conf in use.
How about those configuration files? Was this a connection to the management port (8089)? Are you trying to use self-signed certificates for all the needed ports (web, mgmt, S2S, etc.)?
Hello, I ran the following command:

curl -vk https://host:port

and received this:

*   Trying host:port...
* Connected to host (host) port port (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=*.example.com
*  start date: Feb 19 15:15:40 2024 GMT
*  expire date: Jan 19 14:02:43 2025 GMT
*  issuer: C=*; ST=*; L=*; O=SSL Corporation; CN=
*  SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway.
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/1.1
> Host: host:port
> User-Agent: curl/7.81.0
> Accept: */*
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Sun, 6 Jan 2025 08:30:21 GMT
< Content-Type: text/xml; charset=UTF-8
< X-Content-Type-Options: nosniff
< Content-Length: 1994
< Connection: Keep-Alive
< X-Frame-Options: SAMEORIGIN
< Server: Splunkd
<
* TLSv1.2 (IN), TLS header, Supplemental data (23):
<?xml version="1.0" encoding="UTF-8"?>
<!--This is to override browser formatting; see server.conf[httpServer] to disable. . . . -->
<?xml-stylesheet type="text/xml" href="/static/atom.xsl"?>
  <title>splunkd</title>
  <updated>2025-01-06T09:30:21+01:00</updated>
  <generator build="d8bb32809498" version="9.3.2"/>
  <author>
    <name>Splunk</name>
  </author>
  <entry>
    <title>services</title>
    <updated>1970-01-01T01:00:00+01:00</updated>
    <link href="/services" rel="alternate"/>
  </entry>
  <entry>
    <title>servicesNS</title>
    <updated>1970-01-01T01:00:00+01:00</updated>
    <link href="/servicesNS" rel="alternate"/>
  </entry>
  <entry>
    <title>static</title>
    <updated>1970-01-01T01:00:00+01:00</updated>
    <link href="/static" rel="alternate"/>
  </entry>
</feed>
* Connection #0 to host host left intact

For security reasons some fields have been removed/changed.
Hi @emlin_charly  First one worked. Thanks
Hi Luke, I am facing the same issue: for many application URLs I am getting an empty response_code value (and request_time=""). Is there a way to fix this? The app is not supported by Splunk. I have investigated thoroughly: the ports are open and there is no connectivity or network issue causing this. Since the app is returning an empty value for response_code, can you suggest or help with a fix? Also, as per the earlier suggestion, modifying the value of "timeout" in "def __init__(self, timeout=30):" in the Python script "web_ping.py" did not resolve the issue in any way. Could you please help resolve this?
Wait. The if() function does not work like your typical programmatic if statement. Normally in programming the if syntax is something like: if (something) then (do something) else (do something else). But in Splunk it's not about _doing_ something. It's a function which yields values. Notice that the if() is a right-value to an assignment in an eval statement. So with

| eval a=if(condition,b,c)

you're telling Splunk to assign the value of b or c (depending on the result of the condition) to the field a. There is nothing else you can "do" here. You're just returning the value b or c from the if() function. At its core it's very similar to the ternary operator used in the C programming language:

a = (condition) ? b : c;

This is all about _returning a value_ which might turn out to be one or the other. From your syntax I suspect you might be trying to do something normally done with the case() function: return a value if a specific condition from a given set of conditions is met. So if instead of if() you did

| eval ssoType = case(message.incomingRequest.inboundSsoType == "5-KEY", message.incomingRequest.deepLink, message.incomingRequest.inboundSsoType == "HYBRID", message.incomingRequest.inboundSsoType)

your ssoType will get assigned the value of the message.incomingRequest.deepLink field if the inboundSsoType equals "5-KEY". And if it doesn't, but the inboundSsoType equals "HYBRID" (technically in this case they can't both be true of course, but it's worth remembering that case() returns the value for the first condition it matches), then ssoType will get assigned the value of the message.incomingRequest.inboundSsoType field (effectively the "HYBRID" string, since we're matching on it). Is this what you're trying to do?
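One detail on top of that: in eval, field names that contain dots have to be wrapped in single quotes, otherwise the dot is read as the string concatenation operator. A sketch of the full search with case() under that assumption (field paths and the stats line are taken from the question; this presumes those JSON fields are already extracted at search time):

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| eval ssoType = case('message.incomingRequest.inboundSsoType' == "5-KEY", 'message.incomingRequest.deepLink',
                      'message.incomingRequest.inboundSsoType' == "HYBRID", 'message.incomingRequest.inboundSsoType')
| stats distinct_count("message.ssoAttributes.EEID") as Count by ssoType, "message.backendCalls{}.responseCode"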
The network is not an issue, because when I try it from the CLI the result does show up.
I am trying to set the message.incomingRequest.deepLink value if message.incomingRequest.inboundSsoType == "5-KEY"; if message.incomingRequest.inboundSsoType == "HYBRID" then set message.incomingRequest.inboundSsoType itself. Under "Count by" I am adding ssoType so that whatever results fall under the same ssoType variable end up in my count.

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| eval ssoType = if(message.incomingRequest.inboundSsoType == "5-KEY", message.incomingRequest.deepLink, message.incomingRequest.inboundSsoType== "HYBRID", message.incomingRequest.inboundSsoType)
| stats distinct_count("message.ssoAttributes.EEID") as Count by ssoType, "message.backendCalls{}.responseCode"
Please provide sample data so we can help you with the search query.
If the owner of a scheduled and embedded report is deleted in Splunk, the report is not executed anymore until it is reassigned to a user who has all the required capabilities to run and embed the report. And I'm pretty sure that the embedding link must be recreated. To prevent this behavior, you could create a technical (local) user.
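If it helps, a sketch for listing scheduled reports together with their owners via the REST endpoint, so orphaned ones can be spotted (run it with a role that can see all apps; these are the standard saved/searches fields):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1
| table title eai:acl.app eai:acl.owner cron_schedule next_scheduled_time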
Your syntax is indeed wrong. The if() function requires two or three parameters:
1. A conditional expression evaluating to a boolean value.
2. A value to be assigned if the expression from p.1 evaluates to true.
3. Optionally, a value to be assigned if p.1 yields false (if not provided, an empty value will be assigned).
You have a condition in p.1, but your p.2 is also a condition (which, when evaluated, will yield a boolean value), not a normal value. Splunk doesn't let you assign a boolean value to a field; it can only be used for conditional statements. It's not clear what you're trying to do. If it's supposed to be an additional condition for your if(), you must create a composite condition as the first parameter of the if() function. If you're trying to assign two different fields using a single eval statement and a single if() function, you can't do that.
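For reference, a minimal sketch of both valid shapes with hypothetical placeholder fields (status, value_a and value_b are not from your data), the second showing a composite condition as the single first parameter:

| eval result = if(status == "A", value_a, "fallback")
| eval result = if(status == "A" OR status == "B", value_a, value_b)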
Trying to check and set values conditionally, but the query below is giving an error.

Error: Error in 'eval' command: Fields cannot be assigned a boolean result. Instead, try if([bool expr], [expr], [expr]). The search job has failed due to an error. You may be able view the job in the

Query:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| eval ssoType = if(message.incomingRequest.inboundSsoType == "5-KEY", message.incomingRequest.deepLink, message.incomingRequest.inboundSsoType== "HYBRID", message.incomingRequest.inboundSsoType)
| stats distinct_count("message.ssoAttributes.EEID") as Count by ssoType, "message.backendCalls{}.responseCode"