Hi,
I set up a universal forwarder in a Docker container: 172.17.0.3
I configured it to forward data to 172.17.0.10:9997 (my VirtualBox VM's IP).
I first enabled port 9997 for listening on my Splunk Enterprise instance through the web interface.
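For reference, the setup is roughly equivalent to this (a sketch assuming default install paths, not the exact commands I ran):
# on the Splunk Enterprise VM: enable receiving on 9997 (same effect as the web setting)
/opt/splunk/bin/splunk enable listen 9997
# on the UF container: point the forwarder at the VM
/opt/splunkforwarder/bin/splunk add forward-server 172.17.0.10:9997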
The connection between the VM and Docker is a bridge on the docker0 interface.
I can ping the VM from my container and vice versa.
I checked the connection between the UF and the VM using ss -ant | grep "9997"
and I got :
LISTEN 0 128 0.0.0.0:9997 0.0.0.0:*
As I'm new to networking, I'm clueless about how to make the connection work.
Thank you all for your help
Update:
It was indeed a network misconfiguration.
I haven't found a solution to this problem, so I worked around it by running a Splunk Enterprise instance in a Docker container instead of a VM, and now I can see my container's logs in Splunk Enterprise.
Traceroute showed me that sending packets from the container to the VM doesn't work; in the other direction, it works correctly.
(I tried this after flushing all iptables rules and verifying that no firewalld service was active.)
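Roughly what I ran for those checks (a sketch from memory; exact tool availability may differ):
# from the UF container: trace the route to the VM (this direction never completed)
traceroute 172.17.0.10
# on the VM: flush all iptables rules and confirm firewalld isn't running
iptables -F
systemctl status firewalld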
Thank you all for your help !
Hi @splunkatt
I was looking at your outputs.conf.
Did you forget to append the port to the server entry? You seem to be missing that:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server=172.17.0.10:9997
[tcpout-server://172.17.0.10:9997]
Also, when you say "I first enabled port 9997 for listening on my Splunk Enterprise instance through the web interface", I hope you configured it under Settings >> Forwarding and receiving >> Receive data >> 9997.
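For reference, that web setting should end up as a splunktcp input on the indexer, roughly like this (a sketch; the exact file the UI writes to may differ):
# e.g. /opt/splunk/etc/apps/search/local/inputs.conf (or .../system/local/inputs.conf)
[splunktcp://9997]
disabled = 0
You can then confirm the port is really open with ss -ant | grep 9997 on the indexer.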
Hi @splunkatt ,
1. Check on the instance (search head) whether it is receiving the events. You can check the notifications at the top.
You might have forgotten to create the index.
2. Check the splunkd.log file on the UF.
Also run the command below on the UF to verify the monitored files:
C:\Program Files\SplunkUniversalForwarder\bin>splunk list monitor
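(On a Linux UF like yours, the equivalent would be roughly this, assuming the default install path:)
/opt/splunkforwarder/bin/splunk list monitor
# splunkd.log lives here on a default Linux install
tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log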
Thank you
How do you know that the UF is not connecting to your server?
First things first - try to connect to your indexer (splunk server) on port 9997 from your docker container using "normal" means (like telnet or netcat).
Check your _internal index for events from the UF.
Check /opt/splunkforwarder/var/log/splunk/splunkd.log in the container.
How did you "configure it to forward data"? Verify it with
/opt/splunkforwarder/bin/splunk btool outputs list --debug
in the container
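For the connectivity test, something along these lines should do (a sketch, assuming netcat is available in the container and the default UF install path):
# raw TCP test from the container to the indexer
nc -vz 172.17.0.10 9997
# recent forwarder-side errors/warnings
tail -n 50 /opt/splunkforwarder/var/log/splunk/splunkd.log
On the indexer, a search like index=_internal host=<uf_hostname> (hostname is a placeholder) should return events once the UF actually connects.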
Hi PickleRick, dhirendra761,
Thank you for these quick answers and your help !
Telnet can't connect when I type:
telnet 172.17.0.10 9997
on the docker container.
/opt/splunkforwarder/etc/system/local/outputs.conf:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server=172.17.0.10
[tcpout-server://172.17.0.10:9997]
splunkd.log on the container prints these warnings:
03-28-2022 13:10:54.135 +0200 WARN AutoLoadBalancedConnectionStrategy [439 TcpOutEloop] - Cooked connection to ip=172.17.0.10:9997 timed out
TcpOutputProc [438 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=172.17.0.10 inside output group default-autolb-group from host_src=b2e058553000 has been blocked for blocked_seconds=5600. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
command "splunk list monitor" print log files and directories. seems like there is nothing wrong, there.
I'm sorry if there are missing informations
@splunkatt Go through this document and check whether you are missing something.
https://www.learnsplunk.com/splunk-forwarder-not-sending-data.html
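One useful check along those lines is capturing traffic on the receiving side, e.g. on the VM (a generic sketch, not necessarily the doc's exact command):
# watch for incoming connections on the receiving port
tcpdump -ni any port 9997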
I checked the document.
The tcpdump steps got me this output:
14:37:55.532330 IP 172.17.0.3.49854 > debian.9997: Flags [S], seq 1200675435, win 64240, options [mss 1460,sackOK,TS val 3005873483 ecr 0,nop,wscale 10], length 0
14:37:56.553739 IP 172.17.0.3.49854 > debian.9997: Flags [S], seq 1200675435, win 64240, options [mss 1460,sackOK,TS val 3005874504 ecr 0,nop,wscale 10], length 0
14:37:58.569660 IP 172.17.0.3.49854 > debian.9997: Flags [S], seq 1200675435, win 64240, options [mss 1460,sackOK,TS val 3005876520 ecr 0,nop,wscale 10], length 0
14:38:02.793699 IP 172.17.0.3.49854 > debian.9997: Flags [S], seq 1200675435, win 64240, options [mss 1460,sackOK,TS val 3005880744 ecr 0,nop,wscale 10], length 0
14:38:10.985799 IP 172.17.0.3.49854 > debian.9997: Flags [S], seq 1200675435, win 64240, options [mss 1460,sackOK,TS val 3005888936 ecr 0,nop,wscale 10], length 0
14:38:15.454976 IP 172.17.0.3.49856 > debian.9997: Flags [S], seq 447694644, win 64240, options [mss 1460,sackOK,TS val 3005893405 ecr 0,nop,wscale 10], length 0
14:38:16.458322 IP 172.17.0.3.49856 > debian.9997: Flags [S], seq 447694644, win 64240, options [mss 1460,sackOK,TS val 3005894408 ecr 0,nop,wscale 10], length 0
14:38:18.473564 IP 172.17.0.3.49856 > debian.9997: Flags [S], seq 447694644, win 64240, options [mss 1460,sackOK,TS val 3005896424 ecr 0,nop,wscale 10], length 0
14:38:22.505569 IP 172.17.0.3.49856 > debian.9997: Flags [S], seq 447694644, win 64240, options [mss 1460,sackOK,TS val 3005900456 ecr 0,nop,wscale 10], length 0
14:38:30.697704 IP 172.17.0.3.49856 > debian.9997: Flags [S], seq 447694644, win 64240, options [mss 1460,sackOK,TS val 3005908648 ecr 0,nop,wscale 10], length 0
I suppose that a length of 0 for TCP packets isn't normal?
It is the length of the payload, so since it's the initial SYN packet, it's normal for it to be zero.
Is this the tcpdump of the destination server?
If so, either something is blocking the traffic on the destination server (a firewall?) or you're having problems with rp_filter, which drops the traffic.
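A quick way to check both on the VM is roughly this (a sketch; your actual rule set will vary):
# current netfilter rules with packet counters
iptables -L -n -v
# reverse path filtering (1 = strict, which can drop asymmetric traffic)
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter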
Yes, it's the tcpdump from the VM, where the Splunk Enterprise instance is.
Do VirtualBox or Docker block TCP packets by default?
When I rerun the same tcpdump, it prints this when I stop it:
10 packets captured
10 packets received by filter
0 packets dropped by kernel
It all depends on your configuration, so it's hard to say what your rules are. Usually, unless you explicitly opened the ports, modern distributions allow only administrative traffic (like SSH) and block other packets, so you might need to adjust your firewall.
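If it does turn out to be the firewall, opening the port is usually something like one of these (a sketch; use whichever front-end your distribution actually runs):
# plain iptables
iptables -I INPUT -p tcp --dport 9997 -j ACCEPT
# or firewalld
firewall-cmd --add-port=9997/tcp --permanent && firewall-cmd --reload
# or ufw
ufw allow 9997/tcp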