All Posts

Hello, I ran the following command: curl -vk https://host:port and received this : *   Trying host:port... * Connected to host (host) port port (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * TLSv1.0 (OUT), TLS header, Certificate Status (22): * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS header, Certificate Status (22): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS header, Certificate Status (22): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS header, Certificate Status (22): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS header, Certificate Status (22): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS header, Certificate Status (22): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS header, Finished (20): * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.2 (OUT), TLS header, Certificate Status (22): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS header, Finished (20): * TLSv1.2 (IN), TLS header, Certificate Status (22): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 * ALPN, server did not agree to a protocol * Server certificate: *  subject: CN=*.example.com *  start date: Feb 19 15:15:40 2024 GMT *  expire date: Jan 19 14:02:43 2025 GMT *  issuer: C=*; ST=*; L=*; O=SSL Corporation; CN= *  SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway. 
* TLSv1.2 (OUT), TLS header, Supplemental data (23): > GET / HTTP/1.1 > Host: host:port > User-Agent: curl/7.81.0 > Accept: */* > * TLSv1.2 (IN), TLS header, Supplemental data (23): * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Sun, 6 Jan 2025 08:30:21 GMT < Content-Type: text/xml; charset=UTF-8 < X-Content-Type-Options: nosniff < Content-Length: 1994 < Connection: Keep-Alive < X-Frame-Options: SAMEORIGIN < Server: Splunkd < * TLSv1.2 (IN), TLS header, Supplemental data (23): <?xml version="1.0" encoding="UTF-8"?> <!--This is to override browser formatting; see server.conf[httpServer] to disable. . . . [long run of padding dots trimmed] . . 
.--> <?xml-stylesheet type="text/xml" href="/static/atom.xsl"?>   <title>splunkd</title>   <updated>2025-01-06T09:30:21+01:00</updated>   <generator build="d8bb32809498" version="9.3.2"/>   <author>     <name>Splunk</name>   </author>   <entry>     <title>services</title>     <updated>1970-01-01T01:00:00+01:00</updated>     <link href="/services" rel="alternate"/>   </entry>   <entry>     <title>servicesNS</title>     <updated>1970-01-01T01:00:00+01:00</updated>     <link href="/servicesNS" rel="alternate"/>   </entry>   <entry>     <title>static</title>          <updated>1970-01-01T01:00:00+01:00</updated>     <link href="/static" rel="alternate"/>   </entry> </feed> * Connection #0 to host host left intact   For security reasons some fields have been removed/changed.
Hi @emlin_charly  First one worked. Thanks
Hi Luke, I am facing the same issue. For many application URLs I am getting an empty response_code value (for example, request_time=""). Is there a way to fix this? The app is not supported by Splunk, and I have investigated thoroughly: the ports are open and there is no connectivity or network issue causing it. Since the app is returning an empty value for response_code, can you suggest or help with a fix? Also, as per the earlier suggestion, modifying the timeout value in the "def __init__(self, timeout=30):" line of the Python script "web_ping.py" did not resolve the issue either. Could you please help resolve this?
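For what it's worth, an empty response_code is consistent with failures that never produce an HTTP status at all, in which case no timeout value can help. Below is a minimal Python sketch of that distinction; it is not the actual web_ping.py code, and the probe() function and result fields are hypothetical illustrations:

```python
# Sketch (assumed behavior, not the Website Monitoring app's code):
# a status code only exists when the server actually answers. Any
# DNS, TLS, refused-connection, or timeout failure leaves it empty,
# so raising the timeout only helps when the server is merely slow.

import urllib.request
import urllib.error

def probe(url, timeout=30):
    """Return a dict loosely resembling a web-ping result."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"response_code": resp.status, "error": None}
    except urllib.error.HTTPError as e:
        # The server answered with an error status -- still a code.
        return {"response_code": e.code, "error": None}
    except (urllib.error.URLError, TimeoutError) as e:
        # No HTTP exchange happened: there is no status code to record.
        return {"response_code": "", "error": str(e)}
```

If the field is empty, the useful next step is usually to capture the underlying error (refused, reset, TLS failure) rather than tuning the timeout.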
Wait. The if() function does not work as your typical programmatic if statement. Normally in programming the if syntax is kinda like this - if (something) then (do something) else (do something else). But in Splunk it's not about _doing_ something. It's a function which yields values. Notice that the if() is a right-value to an assignment in an eval statement. So with

| eval a=if(condition,b,c)

you're telling Splunk to assign a value of b or c (depending on the result of the condition) to the field a. There is nothing else you can "do" here. You're just returning the value b or c from the if() function. At its core it's very similar to the ternary operator in the C programming language:

a = (condition) ? b : c;

This is all about _returning a value_ which might turn out to be one or the other. From your syntax I suspect you might be trying to do something normally done with the case() function - return a value if a specific condition from a given set of conditions is met. So if instead of if() you did

| eval ssoType = case(message.incomingRequest.inboundSsoType == "5-KEY", message.incomingRequest.deepLink, message.incomingRequest.inboundSsoType== "HYBRID", message.incomingRequest.inboundSsoType)

your ssoType will get assigned the value of the message.incomingRequest.deepLink field if the inboundSsoType equals "5-KEY". And if it doesn't, but the inboundSsoType equals "HYBRID" (technically in this case they can't both be true of course, but it's worth remembering that case() returns the value for the first condition it matches), then ssoType will get assigned the value of the message.incomingRequest.inboundSsoType field (effectively the "HYBRID" string, since we're matching on this). Is this what you're trying to do?
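The value-returning semantics described above can be mimicked in Python. The spl_if() and spl_case() helpers below are illustrative sketches only, not faithful SPL (null handling and lazy evaluation differ in real Splunk):

```python
# Rough Python analogues of SPL's value-returning if() and case().
# These exist only to illustrate the "return a value, don't do a
# thing" semantics discussed above.

def spl_if(condition, true_value, false_value=None):
    """Like SPL if(): yields one of two values, never executes actions."""
    return true_value if condition else false_value

def spl_case(*pairs):
    """Like SPL case(): (cond1, val1, cond2, val2, ...).
    Returns the value for the FIRST condition that is true."""
    for cond, value in zip(pairs[::2], pairs[1::2]):
        if cond:
            return value
    return None  # SPL yields null when nothing matches

# Mimicking the ssoType eval from the thread (sample values made up):
inbound_sso_type = "5-KEY"
deep_link = "/some/deep/link"

sso_type = spl_case(
    inbound_sso_type == "5-KEY", deep_link,
    inbound_sso_type == "HYBRID", inbound_sso_type,
)
print(sso_type)  # -> /some/deep/link
```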
The network is not the issue, because when I try from the CLI the result shows up.
I am trying to set ssoType to message.incomingRequest.deepLink when message.incomingRequest.inboundSsoType == "5-KEY", and to message.incomingRequest.inboundSsoType itself when it equals "HYBRID". Then under "count by" I am adding ssoType, so that whatever results fall under the same ssoType value are included in my count.

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" | eval ssoType = if(message.incomingRequest.inboundSsoType == "5-KEY", message.incomingRequest.deepLink, message.incomingRequest.inboundSsoType== "HYBRID", message.incomingRequest.inboundSsoType) | stats distinct_count("message.ssoAttributes.EEID") as Count by ssoType, "message.backendCalls{}.responseCode"
Please provide sample data to help you with the search query
If the owner of a scheduled and embedded report is deleted in Splunk, the report is no longer executed until it is reassigned to a user who has all the capabilities required to run and embed the report. And I'm pretty sure that the link for embedding must be recreated.

To prevent this behavior, you could create a technical (local) user.
Your syntax is indeed wrong. The if() function requires two or three parameters:
1. A conditional expression evaluating to a boolean value
2. A value to be assigned if the expression from p.1 evaluates to true
3. Optionally, a value to be assigned if p.1 yields false (if not provided, an empty value will be assigned).

You have a condition in p.1, but your p.2 is also a condition (which, when evaluated, will yield a boolean value), not a normal value. Splunk doesn't let you assign a boolean value to a field; booleans can only be used in conditional statements. It's not clear what you're trying to do. If it's supposed to be an additional condition for your if(), you must create a composite condition as your first parameter in the if() function. If you're trying to assign two different fields using a single eval statement and a single if() function - you can't do that.
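To illustrate the composite-condition fix in Python terms (the pick() function and the field values below are made up for the example; in SPL you would join the two tests with AND inside a single first parameter):

```python
# Why the original query errors: the middle argument of if() must be a
# VALUE, but a comparison like x == "HYBRID" is a boolean. The fix is
# to fold both tests into one condition.

def pick(sso_type, hybrid_flag):
    # WRONG shape (what triggers "Fields cannot be assigned a boolean
    # result" in SPL):
    #   if(sso_type == "5-KEY", hybrid_flag == "HYBRID", ...)
    #
    # RIGHT shape: one composite condition, then plain values.
    if sso_type == "5-KEY" and hybrid_flag == "HYBRID":
        return "both-matched"
    return "no-match"

print(pick("5-KEY", "HYBRID"))  # -> both-matched
```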
Trying to check and set values conditionally, but the query below is giving an error.

Error: Error in 'eval' command: Fields cannot be assigned a boolean result. Instead, try if([bool expr], [expr], [expr]). The search job has failed due to an error. You may be able to view the job in the

Query: index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" | eval ssoType = if(message.incomingRequest.inboundSsoType == "5-KEY", message.incomingRequest.deepLink, message.incomingRequest.inboundSsoType== "HYBRID", message.incomingRequest.inboundSsoType) | stats distinct_count("message.ssoAttributes.EEID") as Count by ssoType, "message.backendCalls{}.responseCode"
@MrBLeu  If SSL is being used, ... do an openssl test like:

openssl s_client -connect xx.xx.xx.xx:9997 -cert <cert_file> -CAfile <ca_file>

You can get <ca_file> from running this:

/opt/splunk/bin/splunk cmd btool server list sslConfig | grep sslRootCAPath

You can get <cert_file> from running this:

/opt/splunk/bin/splunk cmd btool outputs list tcpout

You are looking for the clientCert setting. If you have multiple entries for clientCert, such as one under [tcpout] and one under [tcpout:<group>], pick the latter, since it is at the more specific level. You'll be able to see whether the SSL handshake completes properly with the currently configured settings.
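The two btool lookups above can be scripted. Here is a minimal Python sketch; the sample stanza below is made up for illustration, so run the btool commands from the post and parse their real output the same way:

```python
# Pull a `key = value` setting out of btool-style output so it can be
# fed to the openssl test. The sample text is an assumed example, not
# real output from any particular deployment.

import re

def btool_setting(btool_output, key):
    """Return the value of `key = value` from btool output, or None."""
    m = re.search(rf"^\s*{re.escape(key)}\s*=\s*(\S+)",
                  btool_output, re.MULTILINE)
    return m.group(1) if m else None

sample = """\
[tcpout:primary_indexers]
clientCert = /opt/splunk/etc/auth/client.pem
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem
"""

print(btool_setting(sample, "clientCert"))     # -> /opt/splunk/etc/auth/client.pem
print(btool_setting(sample, "sslRootCAPath"))  # -> /opt/splunk/etc/auth/cacert.pem
```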
Hi @MrBLeu, first check whether your UFs send data, and check which indexers are the receivers. Then check all the connections from the UFs to the indexers; maybe there are some closed connections. Then, are you using an SSL certificate? If yes, check the validity and the password of your certificate, and that the certificate is used on both UFs and IDXs. Ciao, Giuseppe
Hi @MrBLeu, from your description I see that you configured your UF to send logs (using outputs.conf), and I suppose that you configured the indexer to receive logs. If not, go to [Settings > Forwarding and Receiving > Receiving] and configure the receiving port that the UF will use in outputs.conf. Then, did your connection ever work? If never, check the connection from the UF to the IDX on the receiving port (by default 9997) using telnet:

telnet <ip_IDX> 9997

Ciao. Giuseppe
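If telnet isn't installed on the UF host, the same reachability test can be sketched in a few lines of Python (the host below is a placeholder):

```python
# Minimal TCP reachability check, equivalent in spirit to
# `telnet <ip_IDX> 9997`: succeed if a connection opens in time.

import socket

def port_open(host, port, timeout=5):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical indexer hostname):
# print(port_open("idx.example.com", 9997))
```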
Hello @bishida, I have a follow-up query. Suppose I have a use case where multiple instances are running, generating application logs, and using the Splunk Universal Forwarder for log forwarding. Since these instances can autoscale at any time, I am leveraging the Splunk Cloud Platform to collect and centralize the application logs, ensuring that no logs are lost. Additionally, I am utilizing the Splunk Observability Cloud to visualize the data by connecting it to the Splunk Cloud Platform. My question is: Where are the actual application logs stored? Are they stored on the Splunk Cloud Platform, the Splunk Observability Cloud, or both? Thank you!
@splunkville It could be many things - usually I see this when queues are blocked. Could you please paste the ERROR here.  I hope this helps, if any reply helps you, you could add your upvote/karma points to that reply, thanks.
@MrBLeu  Did you check your resource usage? What about network connections? Check the _internal logs on your server. I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
@MrBLeu  Hey, the servers configured in outputs.conf are not performing well. There could be many reasons:
- From the remote server, make sure you can reach the port on the indexer (telnet or similar).
- Review the splunkd logs on the Windows server, grepping for the indexer IP.
- Make sure the indexer is listening on 9997: ss -l | grep 9997
- Check the logs on the universal forwarder: $SPLUNK_HOME/var/log/splunk/splunkd.log
- Network issue from the universal forwarder to the indexer.
- Indexers are overwhelmed with incoming events or busy serving requests from the search head.
- Check that all servers (indexers) in the forwarder's outputs.conf are healthy (CPU and memory utilization).
- Check whether you have deployed outputs.conf to the indexers by mistake; generally, indexers don't have outputs.conf.

I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
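The first and last checks above can be combined into a small script: parse the server list out of the forwarder's outputs.conf and test TCP reachability of each indexer. A Python sketch (the stanza text below is an illustrative example, not your config):

```python
# Parse `server = host1:port1, host2:port2` lines from outputs.conf
# text and check whether each indexer port accepts TCP connections.

import re
import socket

def parse_servers(outputs_conf_text):
    """Extract (host, port) pairs from server= lines."""
    servers = []
    for m in re.finditer(r"^\s*server\s*=\s*(.+)$",
                         outputs_conf_text, re.MULTILINE):
        for entry in m.group(1).split(","):
            host, _, port = entry.strip().rpartition(":")
            servers.append((host, int(port)))
    return servers

def reachable(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

sample = """\
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
"""

for host, port in parse_servers(sample):
    print(host, port, "-> checking with reachable(host, port)")
```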
@MikeMakai  Hi Mike, I recently integrated an FTD appliance with Splunk. Previously, the customer was using a Cisco ASA, and last week they upgraded to FTD. We didn't make any changes to the Splunk setup and are still using the Cisco ASA add-on. Interestingly, the logs are being parsed correctly. Have you tried using the Cisco ASA add-on? Additionally, when you run a TCP dump on the destination side (Splunk), how are the logs appearing from the FTD device? Are they coming through as expected? It seems the cisco:ftd:syslog sourcetype isn't parsing them properly. I've attached a screenshot for your reference. I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply.
01-09-2025 17:01:37.725 -0500 WARN  TcpOutputProc [4940 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=sbdcrib.splunkcloud.com inside output group default-autolb-group from host_src=CRBCITDHCP-01 has been blocked for blocked_seconds=1800. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-09-2025 17:30:30.169 -0500 INFO  PeriodicHealthReporter - feature="TCPOutAutoLB-0" color=red indicator="s2s_connections" due_to_threshold_value=70 measured_value=100 reason="More than 70% of forwarding destinations have failed.  Ensure your hosts and ports in outputs.conf are correct.  Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct." node_type=indicator node_path=splunkd.data_forwarding.splunk-2-splunk_forwarding.tcpoutautolb-0.s2s_connections
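Both splunkd messages above are key=value formatted, which makes bulk triage easy. A minimal parser sketch (the regex is a simplification and may miss exotic values):

```python
# Extract key=value pairs (quoted or bare) from splunkd log lines like
# the WARN/INFO messages above, so fields such as host_dest and
# blocked_seconds can be filtered on programmatically.

import re

def kv_pairs(line):
    """Return a dict of key=value pairs from a splunkd log line."""
    pairs = {}
    for key, quoted, bare in re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', line):
        # Bare values can drag along sentence punctuation (e.g. "1800."),
        # so strip trailing periods and commas.
        pairs[key] = quoted if quoted else bare.rstrip(".,")
    return pairs

# Shortened from the WARN line quoted above:
warn = ("Forwarding to host_dest=sbdcrib.splunkcloud.com inside output "
        "group default-autolb-group has been blocked for blocked_seconds=1800.")
print(kv_pairs(warn)["blocked_seconds"])  # -> 1800
```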