I have a number of systems set up with a Splunk forwarder. The forwarders are sending data, and our main Splunk instance is happily indexing it. But today the person who runs the firewall that sits in front of these systems asked me why Splunk would be trying to establish TCP connections with these systems. These are being denied. Splunk tries twice on TCP port 80, then twice on 443, then twice again on 8089.
Why is it doing that, and what is it trying to accomplish? More importantly, should we be granting the Splunk indexer access to these machines on those ports, or is it unimportant?
As no one has answered this question, I have opened Case #126600 with Splunk Support.
You don't mention which operating system you are running. The port 8089 connection will definitely be Splunk: 8089 is the default management (splunkd REST) port that every Splunk instance listens on. But 80 and 443 are just standard HTTP and HTTPS; that could be anything on the machine making the call.
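For reference, the management port is configurable rather than hard-wired. A sketch of where it lives on a default install (the stanza below is illustrative, not taken from the poster's system):

```ini
# $SPLUNK_HOME/etc/system/local/web.conf (illustrative)
[settings]
# splunkd management (REST) port; 8089 is the default
mgmtHostPort = 127.0.0.1:8089
```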
If you are running on a Linux machine, a simple socket-listing command run as root (or via su) will give you a snapshot of the current local and remote endpoints for processes with open network sockets.
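One such command is netstat (an assumption on my part; lsof or ss would work equally well). For example, filtering for the three ports in question:

```shell
# List TCP sockets with their owning process (-p needs root):
# -a all sockets, -n numeric addresses, -t TCP only, -p show PID/program.
# Keep only lines mentioning ports 80, 443, or 8089; grep exits nonzero
# when nothing matches, so "|| true" keeps that from looking like an error.
netstat -antp 2>/dev/null | grep -E ':(80|443|8089)\s' || true
```

If splunkd shows up as the owner of a connection on one of those ports, that settles the question of which process is making the calls.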
Do you have an enterprise deployment? If so, where are you running the license server?
The operating system is Red Hat Linux. I don't see any open connections on any of those ports. But there wouldn't be, since the firewall is denying the indexer's attempts to open connections to those ports on the forwarders. We set it up to allow the forwarders to contact the indexer, and all of that is working fine.
Also, these attempts on the three ports come in groups, and because of the 8089 connection I assumed Splunk was behind all three.
Yes, enterprise. License server and indexer are on the same Linux platform.
Linux is Fedora 13 (Linux 220.127.116.11-69.fc13.i686.PAE)
Splunk is 4.3.1
I opened Case #126600 with Splunk Support and, with the help of Rajpal Bal, finally got to the bottom of this. It turns out the culprit was the app known as splunk_monitoring, which checks which forwarders are reachable via those ports.
Because we have the deployment monitor on our machine and it already keeps track of the forwarders (based on which ones are actually active), I removed the splunk_monitoring app, and the firewall stopped logging these events. Done!
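For anyone in the same spot, removing a Splunk app comes down to deleting its directory under $SPLUNK_HOME/etc/apps and restarting splunkd. A minimal sketch, assuming the standard install layout (demonstrated here against a scratch directory rather than a live install):

```shell
# Use the real install path if SPLUNK_HOME is set; fall back to a
# scratch directory for demonstration purposes.
SPLUNK_HOME=${SPLUNK_HOME:-/tmp/splunk_demo}

# Simulate the app being present (skip this on a real system).
mkdir -p "$SPLUNK_HOME/etc/apps/splunk_monitoring"

# Removing the app is removing its directory...
rm -rf "$SPLUNK_HOME/etc/apps/splunk_monitoring"

# ...followed by a restart so splunkd picks up the change:
#   "$SPLUNK_HOME/bin/splunk" restart
```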