I recently installed a Splunk Universal Forwarder on a Windows machine, aiming to forward logs from there to our Splunk Enterprise server instance.
Everything went well during the installation, but for some reason I can't see any logs in Splunk. After some researching here and there, I found that the forwarder actually establishes a TCP connection with Splunk on the configured receiving port (9998 in my case) and that is how logs are forwarded (as opposed to syslog, which is one-way communication).
So I ran netstat on the forwarder to identify connections, and I found that the forwarder does indeed have a connection to the Splunk server on 9998, but it closes every few seconds, forcing the forwarder to create a new socket again and again.
My questions are:
1) Could this be the reason why I am not seeing any logs in Splunk? Could there be some attribute negotiated over TCP that does not satisfy both sides? Is port 9998 the only port needed for this communication, or is another port opened in the meantime?
For context:
1) While installing the forwarder, I chose to send all logs to the receiving end.
2) Port 9998, as you can assume, is open on the firewalls, since the TCP connection is established.
3) On the Splunk server side, receiving port 9998 is enabled.
I am sure it is something very simple, but if you could give me a hint on how to move forward, I would be grateful.
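For reference, the forwarder-side destination described above lives in outputs.conf; a minimal sketch (the group name and IP below are placeholders, not from this thread):

```ini
# C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = x.x.x.x:9998
```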
Has it ever been able to send logs to your indexers? If not, verify that your universal forwarder can reach the indexer:
telnet splunkservername 9997
If that is successful, rule out slow or problematic DNS resolution by configuring your hosts file to map that server name to its IP. Report back after ruling these out. If you try both of these and there are still issues, chances are the problem is on the indexer and not on your client sending data. You can also try setting connection_host = none in your inputs.conf for the 9997 stanza, making sure the indexer isn't trying to resolve the name of the forwarder.
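A sketch of the indexer-side stanza that last suggestion refers to (port 9997 as used in this answer; the original poster's setup uses 9998):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf on the indexer
[splunktcp://9997]
connection_host = none
```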
Many thanks for your suggestion. Much appreciated.
The way I finally solved this is very simple.
1) I downloaded the Splunk_TA_Windows from here: https://apps.splunk.com/app/742/
2) I extracted it
3) I put the extracted folder into the C:\Program Files\SplunkUniversalForwarder\etc\apps folder
4) Then I just enabled the logs that I was interested in by playing with the inputs.conf file under C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_Windows\default
5) Finally, from the CLI, while in the SplunkUniversalForwarder\bin directory, I ran splunk.exe restart.
It worked like a charm!
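Step 4 above can be sketched roughly as follows (the exact stanza names vary between versions of the TA; also, overriding settings in a local directory rather than editing default is generally recommended, so that app upgrades don't overwrite your changes):

```ini
# C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_Windows\local\inputs.conf
[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0
```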
Some questions to start troubleshooting.
Thanks for getting back to me.
17:32:48.860 +0000 INFO loader - Limiting REST HTTP server to 682 threads
17:32:49.282 +0000 INFO TcpOutputProc - Connected to idx=x.x.x.x:9998
17:41:48.551 +0000 INFO TcpOutputProc - Detected idx=x.x.x.x:9998 shutting down
17:41:48.551 +0000 INFO TcpOutputProc - Will close stream to current indexer x.x.x.x:9998
17:41:48.551 +0000 INFO TcpOutputProc - Closing stream for idx=x.x.x.x:9998
17:42:08.536 +0000 WARN TcpOutputProc - Raw connection to ip=x.x.x.x:9998 timed out
17:42:08.536 +0000 INFO TcpOutputProc - Ping connection to idx=x.x.x.x:9998 timed out. continuing connections
17:42:08.927 +0000 WARN TcpOutputProc - Cooked connection to ip=x.x.x.x:9998 timed out
17:42:21.912 +0000 INFO TcpOutputProc - Connected to idx=x.x.x.x:9998
17:45:18.573 +0000 INFO TcpOutputProc - Detected idx=x.x.x.x:9998 shutting down
17:45:18.573 +0000 INFO TcpOutputProc - Closing stream for idx=x.x.x.x:9998
No I haven't done that 😞
I mean that I configured the Splunk receiving port. The TCP port should be open though, since I have an established connection (I hope I am right about that).
Is there anything that I could do to overcome this?
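As an aside (a sketch, not advice from this thread): the telnet test suggested earlier can also be done with a short Python script run from the forwarder host, which is handy on Windows machines where the telnet client is not installed. The host name and port below are placeholders:

```python
import socket

def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be opened within timeout."""
    try:
        # create_connection handles DNS resolution and the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers refused connections, timeouts, and resolution failures
        return False

# Hypothetical host name: replace with your indexer and receiving port (9998 here).
# if not check_port("splunkservername", 9998):
#     print("indexer unreachable - check firewall/DNS before debugging Splunk itself")
```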
I suggest you raise the logging level for TcpOutputProc. You can do this using the web interface or by editing "log.cfg" directly (in the latter case, the changes will persist even when you restart Splunk). More details available here. Could you send some logs once you have changed this configuration?
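The log.cfg change suggested above looks roughly like this (a sketch; the category name matches the TcpOutputProc entries in the log excerpt earlier in the thread):

```ini
# $SPLUNK_HOME/etc/log.cfg on the forwarder
# raise TcpOutputProc from the default INFO to DEBUG
category.TcpOutputProc=DEBUG
```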
In the past I have seen similar connection problems caused by SSL misconfigurations in Splunk-to-Splunk connections (e.g. the forwarder uses SSL while the indexer does not, or vice versa). However, I must say that this does not look like your case.
Could you please try running the following commands and send the resulting output?
On the Universal Forwarder:
On the Indexer: