Trying to check and set values conditionally, but the query below is giving an error.

Error: Error in 'eval' command: Fields cannot be assigned a boolean result. Instead, try if([bool expr], [expr], [expr]). The search job has failed due to an error. You may be able view the job in the

Query:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| eval ssoType = if(message.incomingRequest.inboundSsoType == "5-KEY", message.incomingRequest.deepLink, message.incomingRequest.inboundSsoType== "HYBRID", message.incomingRequest.inboundSsoType)
| stats distinct_count("message.ssoAttributes.EEID") as Count by ssoType, "message.backendCalls{}.responseCode"
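[Editor's note: the error comes from passing four arguments to if(), which takes exactly three; with an even argument count eval ends up treating a boolean expression as the assigned value. A hedged sketch of what the eval step likely intended, using case() instead — the branch values are assumptions based on the original query, and note that dotted JSON field names must be wrapped in single quotes inside eval:]

```
| eval ssoType = case(
    'message.incomingRequest.inboundSsoType' == "5-KEY", 'message.incomingRequest.deepLink',
    'message.incomingRequest.inboundSsoType' == "HYBRID", 'message.incomingRequest.inboundSsoType',
    true(), 'message.incomingRequest.inboundSsoType')
```

case() pairs each condition with a value and the true() clause acts as the default, which avoids the argument-count trap that if() has with more than two branches.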
@MrBLeu If SSL is being used, ... do an openssl test like:

openssl s_client -connect xx.xx.xx.xx:9997 -cert <cert_file> -CAfile <ca_file>

You can get <ca_file> from running this:

/opt/splunk/bin/splunk cmd btool server list sslConfig | grep sslRootCAPath

You can get <cert_file> from running this:

/opt/splunk/bin/splunk cmd btool outputs list tcpout

You are looking for the clientCert setting. If you have multiple entries for clientCert, such as one under [tcpout] and one under [tcpout:<group>], pick the latter, which is at the more specific level. You'll be able to see whether the SSL handshake completes properly with the settings currently configured.
Hi @MrBLeu, at first, check whether your UFs send data or not, and check which receivers the Indexers have configured. Then check all the connections from the UFs to the Indexers; maybe there are some closed connections. Then, are you using an SSL certificate? If yes, check the validity and the password of your certificate, and that the certificate is used both on the UFs and the IDXs. Ciao, Giuseppe
Hi @MrBLeu, from your description I see that you configured your UF to send logs (using outputs.conf), and I suppose that you configured the Indexer to receive logs. If not, go to [Settings > Forwarding and Receiving > Forwarding] and configure the receiving port to use in the UF's outputs.conf. Then, did your connection ever work? If never, check the connection using telnet from the UF to the IDX on the receiving port (by default 9997): telnet <ip_IDX> 9997 Ciao, Giuseppe
Hello @bishida, I have a follow-up query. Suppose I have a use case where multiple instances are running, generating application logs, and using the Splunk Universal Forwarder for log forwarding. Since these instances can autoscale at any time, I am leveraging the Splunk Cloud Platform to collect and centralize the application logs, ensuring that no logs are lost. Additionally, I am utilizing the Splunk Observability Cloud to visualize the data by connecting it to the Splunk Cloud Platform. My question is: Where are the actual application logs stored? Are they stored on the Splunk Cloud Platform, the Splunk Observability Cloud, or both? Thank you!
@splunkville It could be many things - usually I see this when queues are blocked. Could you please paste the ERROR here. I hope this helps, if any reply helps you, you could add your upvote/karma points to that reply, thanks.
@MrBLeu Did you check your resource usage? What about network connections? Check the _internal logs on your server. I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
@MrBLeu Hey, the servers configured in outputs.conf are not performing well. There could be many reasons:
- From the remote server, make sure you can reach the port on the indexer (telnet or something similar).
- Review the splunkd logs on the Windows server, grepping for the indexer IP.
- Make sure it's listening on 9997: ss -l | grep 9997
- Check the logs on the Universal Forwarder: $SPLUNK_HOME/var/log/splunk/splunkd.log
- Network issue from the Universal Forwarder to the Indexer.
- Indexers are overwhelmed with incoming events or busy serving requests from the search head.
- Check that all servers (indexers) in the forwarder's outputs.conf are healthy (CPU and memory utilization).
- Check if you have deployed outputs.conf to the indexers by mistake; generally indexers don't have outputs.conf.
I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply, thanks.
@MikeMakai Hi Mike, I recently integrated an FTD appliance with Splunk. Previously, the customer was using a Cisco ASA, and last week they upgraded to FTD. We didn't make any changes to the Splunk setup and are still using the Cisco ASA add-on. Interestingly, the logs are being parsed correctly. Have you tried using the Cisco ASA add-on? Additionally, when you run a TCP dump on the destination side (Splunk), how are the logs appearing from the FTD device? Are they coming through as expected? It seems the cisco:ftd:syslog sourcetype isn't parsing them properly. I've attached a screenshot for your reference. I hope this helps; if any reply helps you, you could add your upvote/karma points to that reply.
01-09-2025 17:01:37.725 -0500 WARN TcpOutputProc [4940 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=sbdcrib.splunkcloud.com inside output group default-autolb-group from host_src=CRBCITDHCP-01 has been blocked for blocked_seconds=1800. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-09-2025 17:30:30.169 -0500 INFO PeriodicHealthReporter - feature="TCPOutAutoLB-0" color=red indicator="s2s_connections" due_to_threshold_value=70 measured_value=100 reason="More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct." node_type=indicator node_path=splunkd.data_forwarding.splunk-2-splunk_forwarding.tcpoutautolb-0.s2s_connections
transaction is not a safe command to use if you have large data volumes, as it will silently ignore data when it hits limits. You are using a long span of 5m, so it will potentially have to hold lots of data in memory. Secondly, if you do use transaction and are wanting to group by host, you need to supply host as a field in the transaction command. With all search debugging tasks, first find a small dataset that contains the condition you are trying to catch and then just use the transaction command to see what transactions you get - if you can post some comments or anonymised data that demonstrates what you are having trouble with, that would help. Note that it is generally possible to use stats as a replacement for transaction, but in this case, may not be applicable.
Please keep in mind that the Splunk docs no longer specify support for Windows 10/11, only Server versions specifically. Something may have impacted the install extraction process.
This simply means that your input is sending an SNMP get or walk request but is not getting a response. There might be a multitude of possible reasons - wrong host, wrong community, wrong protocol version, firewall rules...
I have two log messages "%ROUTING-LDP-5-NSR_SYNC_START" and "%ROUTING-LDP-5-NBR_CHANGE" which usually accompany each other whenever there is a peer flapping. So "%ROUTING-LDP-5-NBR_CHANGE" is followed by "%ROUTING-LDP-5-NSR_SYNC_START" almost every time.
I am trying to find the output where a device only produces "%ROUTING-LDP-5-NSR_SYNC_START" without "%ROUTING-LDP-5-NBR_CHANGE". I am using transaction but have not been able to figure it out.
index = test ("%ROUTING-LDP-5-NSR_SYNC_START" OR "%ROUTING-LDP-5-NBR_CHANGE")
| transaction maxspan=5m startswith="%ROUTING-LDP-5-NSR_SYNC_START" endswith="%ROUTING-LDP-5-NBR_CHANGE"
| search eventcount=1 startswith="%ROUTING-LDP-5-NSR_SYNC_START"
| stats count by host
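[Editor's note: as the reply above points out, stats can often replace transaction and avoids its silent memory limits. A possible stats-based sketch, under the assumption that each event carries a host field and that a 5-minute time bucket is an acceptable approximation of the transaction window — the index and message strings are taken from the question:]

```
index=test ("%ROUTING-LDP-5-NSR_SYNC_START" OR "%ROUTING-LDP-5-NBR_CHANGE")
| bin _time span=5m
| stats count(eval(searchmatch("%ROUTING-LDP-5-NSR_SYNC_START"))) as sync_start
        count(eval(searchmatch("%ROUTING-LDP-5-NBR_CHANGE"))) as nbr_change
        by _time host
| where sync_start > 0 AND nbr_change = 0
| stats count by host
```

Unlike transaction, fixed time buckets cannot pair events that straddle a bucket boundary, so results near bucket edges may differ from a true maxspan=5m transaction.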
Quite often the OS openssl doesn't work correctly, as there can be version conflicts, missing libraries, etc. if your PATH and LD_LIBRARY_PATH are incorrectly set. For that reason I always use Splunk's openssl version. Basically that means that you can reach it, but for some reason it cannot get any real answer; the read and response themselves are OK (errno=0). You could also try curl -vk https://host:port to see if that gives more information. I think that you have some issues with the TLS settings in your configuration. Could you tell us exactly what you have tried to achieve and what you have done? Also add all those *.conf files inside </> blocks with masked **** passwords etc. Have you looked at these instructions: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf? This presentation is an excellent bootcamp for using TLS with Splunk.
Hi @JohnEGones, although it is an expensive solution, you can use a sandbox and a data diode for sending threat intel files into the air-gapped network. Download the files using an internet-connected system; after sandbox scanning, you can send the file to the protected network through a strictly configured one-way data diode. Lastly, you can update the ES threat list by serving this file with a simple web server.