Splunk UF is not sending logs to Splunk. splunkd.log is full of errors and warnings like the ones below.
Telnet connections to the DS and the indexers succeed on ports 8089 and 9997 respectively. It is a Windows server and the service is up and running.
03-28-2022 04:50:44.070 +1100 ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.
03-28-2022 04:50:44.070 +1100 WARN TcpOutputProc - Applying quarantine to ip=*8888* port=9997 _numberOfFailures=2
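For reference, one quick check on the UF itself is the forwarder CLI, which lists which receivers the UF currently treats as active (run from the UF's bin directory; the path below assumes the default install location):

cd "C:\Program Files\SplunkUniversalForwarder\bin"
.\splunk list forward-server

A receiver that keeps getting quarantined, such as the indexer on port 9997 above, would normally show up under "Configured but inactive forwards" rather than "Active forwards".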
Hi,
I'm also facing a similar issue. We have installed Splunk UF version 9.0.2 on Windows servers and connectivity towards the DS looks good. The firewall team and NSG also confirmed the rules and routing are in place. Still, we are not able to see logs in the Splunk console. We are getting errors stating "existing connection forcibly closed by remote host" and "TCP output processor stopped processing the flow and blocked for seconds". Can you help us here with your inputs?
Hi @Gayatri,
Is the Windows server visible on the `Clients` tab of the DS? If yes, can you query the internal logs for that Windows server? You can use this query:
index=_internal host=$windows-server-hostname$
If the logs are not there, consider restarting the Splunk UF on the Windows server. If you have already restarted it, did you enable forwarding during the Splunk UF installation?
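If the internal logs do arrive, a slightly more targeted version of that search (the component values below are the ones the UF uses for its forwarding output, and the field names are the usual splunkd extractions) could be:

index=_internal host=$windows-server-hostname$ sourcetype=splunkd (component=TcpOutputProc OR component=TcpOutputFd)
| stats count by log_level component

This quickly shows whether the forwarder is logging only the quarantine/read errors or something else as well.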
Yes, we enabled forwarding during the UF installation, phoning home to the DS is working, and apps are pushed to the UF successfully. We also tested connectivity and it is successful. Still, we are not able to see the logs in the Splunk console. We are not sure whether there could be an issue at the firewall or network level. Can you assist us here?
The question was not whether the deployment client part of the UF can connect to the DS, because that happens on a different port and uses a different mechanism. The question is whether you're getting any of the forwarder's internal logs into the _internal index. I suspect you don't.
In that case you have to check the logs on the receiving side (probably the indexer(s)) for connections from this UF.
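A sketch of what that check could look like on the receiving Splunk side, assuming the receivers are Splunk indexers (TcpInputProc is the component that logs inbound forwarder connections; replace the placeholder with the UF's host name or IP):

index=_internal sourcetype=splunkd component=TcpInputProc (ERROR OR WARN) *&lt;uf-host-or-ip&gt;*

Any "connection closed", certificate, or queue-related messages there usually explain why the sender sees the connection being forcibly closed by the remote host.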
Yes, we are not receiving logs in the _internal index. Here we are collecting logs from Windows servers and forwarding them to the Splunk console via Cribl workers. At the source end we have validated the connections towards the Cribl workers and they are working, but we are still not receiving logs at the Cribl end. As connectivity is unidirectional from the Windows servers towards the Cribl workers, we validated that direction and it is fine; we will also validate connectivity from the Cribl workers towards the Windows servers. If that does not connect, is it a connectivity issue? If yes, what would be our next action?
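For what it's worth, the unidirectional path from a Windows server towards a Cribl worker can be re-tested quickly with PowerShell's Test-NetConnection (the host name and port below are placeholders for your Cribl worker listener):

Test-NetConnection -ComputerName cribl-worker.example.com -Port 9997

TcpTestSucceeded : True only proves the TCP handshake completes from that server; it does not prove the receiver accepts the data, so a successful test with no data arriving still points at the receiving side or something in between.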
We also found the error below in the splunkd.log file. Can you confirm whether it is a permission issue that is preventing the Splunk UF from collecting security logs from the Windows servers? If so, would reinstalling with admin privileges sort this out?
ERROR: ExeProcessor: message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" - WinEventCommonChannel - Did not bind to the closest domain controller, a further domain controller has been bound.
If you are not receiving any logs from this particular endpoint, it's the other side where you should look for answers. It should have more information in its logs about why it closed the connection (there is also the possibility that both sides report the other side as responsible for closing the connection, which would mean that some form of IPS or other network-level tool is interfering with connectivity).
Also, it's not about your receivers connecting to the Windows UF (because there is no such connectivity); it's about the logs on the receiver's side.
BTW, adding Cribl to the mix complicates things. It might be a Cribl issue, not a UF one.
Your error has nothing to do with sending the events. It might affect collecting the Windows event logs, but it has nothing to do with sending the collected logs. If it causes issues, create a separate thread for it, as it's unrelated to the main problem at hand: connectivity to the downstream receivers.
Check whether the receiving TCP port was not set up, or was set up with the wrong port, in the inputs.conf file on the receiving side. Adding or correcting the TCP entry and restarting Splunk should fix it, for example as shown below.
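On a Splunk receiver the listening port is typically a splunktcp stanza in inputs.conf (9997 is just the conventional default; in this thread the receiver is a Cribl worker, which has its own listener configuration instead):

[splunktcp://9997]
disabled = 0

After adding or correcting it, restart the receiving Splunk instance.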
Hope this helps !!! Good Luck
If you're using SSL, make sure you have the right certificate installed on both ends of the connection.
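One way to sanity-check the certificate the receiver presents (assuming openssl is available on a machine that can reach it; replace the host with your indexer or Cribl worker) is:

openssl s_client -connect indexer.example.com:9997 -showcerts

Look at the certificate chain and the "Verify return code" at the end; a failure there usually lines up with the handshake errors you see in splunkd.log.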
Check the indexer's logs to see if they offer more information about the problem.