Getting Data In

Universal Forwarder - Configured but inactive forwarders.

JordanPeterson
Path Finder

I have a fresh install of 7.0.x in our QA environment to test with. The indexer/search head/deployment server runs on a RHEL 7 box, and one Universal Forwarder runs on a Windows Server 2012 R2 box. I have configured the indexer to listen on port 9997, and splunk display listen reports that it is doing so. The forwarder is pointed at the indexer on that same port, but when I run the list forward-server command I get the following:

Active forwards:
None
Configured but inactive forwards:
indexer.domain.com:9997

Here indexer.domain.com matches the output of splunk show default-hostname.

When I run lsof -i TCP:9997 on my indexer I get back the following:

COMMAND   PID   USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
splunkd 86629 splunk  111u  IPv4 2544734      0t0  TCP *:palace-6 (LISTEN)
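Note that lsof has resolved the port number to its /etc/services name, so *:palace-6 above is simply TCP 9997. To see the numeric port directly (a quick sanity check; ss comes from the iproute2 package):

```shell
# -n: don't resolve hostnames; -P: don't resolve port names,
# so the listener shows up as *:9997 instead of *:palace-6
lsof -nP -iTCP:9997 -sTCP:LISTEN

# Equivalent check with ss (-t TCP, -l listening sockets, -n numeric)
ss -tln | grep 9997
```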

When I run splunk btool inputs list splunktcp --debug I get back the following:

/opt/splunk/etc/system/default/inputs.conf      [splunktcp]
/opt/splunk/etc/system/default/inputs.conf      _rcvbuf = 1572864
/opt/splunk/etc/system/default/inputs.conf      acceptFrom = *
/opt/splunk/etc/system/default/inputs.conf      connection_host = ip
/opt/splunk/etc/system/local/inputs.conf        host = indexer.domain.com
/opt/splunk/etc/system/default/inputs.conf      index = default
/opt/splunk/etc/system/default/inputs.conf      route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue
/opt/splunk/etc/apps/launcher/local/inputs.conf [splunktcp://9997]
/opt/splunk/etc/system/default/inputs.conf      _rcvbuf = 1572864
/opt/splunk/etc/apps/launcher/local/inputs.conf connection_host = ip
/opt/splunk/etc/apps/launcher/local/inputs.conf disabled = 0
/opt/splunk/etc/system/local/inputs.conf        host = indexer.domain.com
/opt/splunk/etc/system/default/inputs.conf      index = default

From my point of view everything is configured correctly. The firewall ports are still open from when we decommissioned our 6.5 QA machines.

When I check the splunkd.log on the indexer I can see these events post configuring the listener:

01-24-2018 17:11:04.311 -0600 INFO  TcpInputConfig - IPv4 port 9997 is reserved for splunk 2 splunk
01-24-2018 17:11:04.311 -0600 INFO  TcpInputConfig - IPv4 port 9997 will negotiate s2s protocol level 3
01-24-2018 17:11:04.312 -0600 INFO  TcpInputProc - Creating fwd data Acceptor for IPv4 port 9997 with Non-SSL

You can see the contents of my inputs.conf from the btool output above. The content of my outputs.conf from my forwarder looks like this:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = indexer.domain.com:9997

[tcpout-server://indexer.domain.com:9997]

The splunkd.log on my forwarder contains a lot of the following:

01-24-2018 17:59:06.807 -0600 WARN  TcpOutputProc - Cooked connection to ip=10.2.1.12:9997 timed out
01-24-2018 17:59:07.136 -0600 INFO  DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

The log shows the right IP and port, but I don't understand why the connection is timing out. The firewall should be fine: it hasn't changed since we upgraded this environment from 6.5 to 7.0, and we are using the same ports.
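A quick way to separate a Splunk problem from a network problem is a raw TCP probe from the forwarder side toward the indexer. A sketch, assuming a Linux shell with bash's /dev/tcp available (indexer.domain.com is the hostname from this thread; on the Windows forwarder itself, PowerShell's Test-NetConnection indexer.domain.com -Port 9997 does the same job):

```shell
# Attempt a plain TCP connection to the indexer's receiving port.
# Success means the network path is open and the timeout lies elsewhere;
# a hang here points at a firewall between the two hosts.
if timeout 5 bash -c 'cat < /dev/null > /dev/tcp/indexer.domain.com/9997'; then
    echo "port 9997 reachable"
else
    echo "port 9997 blocked or unreachable"
fi
```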

Any thoughts, comments, or advice is greatly appreciated.

Thank you.

1 Solution

JordanPeterson
Path Finder

I figured it out. On my fifth check of iptables I caught that our ACCEPT rules for ports 8089 and 9997 were below our reject-all rule.

Moving these two lines above the reject-all fixed it:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8089 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 9997 -j ACCEPT
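For anyone hitting the same thing: iptables evaluates rules top-down and stops at the first match, so an ACCEPT that sits below a catch-all REJECT never fires. A sketch of checking and fixing the order on a live system (the chain name RH-Firewall-1-INPUT is from this setup, and the insert position 10 is only a placeholder; read the real reject-all position off your own listing first):

```shell
# List the chain with rule indexes to find the reject-all line
iptables -L RH-Firewall-1-INPUT -n --line-numbers

# Insert the ACCEPT rules ABOVE the reject-all (position 10 is an example)
iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -m tcp -p tcp --dport 8089 -j ACCEPT
iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -m tcp -p tcp --dport 9997 -j ACCEPT

# On RHEL 7 with the iptables service, persist the running rules
service iptables save
```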


rajaguru2790
Explorer

Universal Forwarder - Configured but inactive forwarders

Please help me on this.



rajaguru2790
Explorer

I am also getting the same error: Universal Forwarder - Configured but inactive forwarders.


rajaguru2790
Explorer

Please explain this step by step for Linux, as I am new to Linux.

Please explain how to do the above using iptables.


gcusello
SplunkTrust
SplunkTrust

Hi JordanPeterson,
first of all, did you check the local firewall (iptables) on the indexer and any network firewall in between? You can test this from the forwarder using telnet.

If you have already performed this test, verify the following:

  • on the indexer, reception from forwarders is configured on port 9997: in one inputs.conf ($SPLUNK_HOME/etc/system/local/inputs.conf or $SPLUNK_HOME/etc/apps/search) you have [splunktcp://9997] with connection_host = ip
  • on the forwarder, $SPLUNK_HOME/etc/system/local/deploymentclient.conf contains [target-broker:deploymentServer] with targetUri = deploymentserver.splunk.mycompany.com:8089
  • on the forwarder you have the outputs.conf shown above.

Do you have logs when you run this search: index=_internal host=your_host ?

Bye.
Giuseppe
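On the forwarder side, the same information is visible without the search bar; a quick grep over splunkd.log surfaces the connection and handshake errors (a sketch; the path assumes a default Linux Universal Forwarder install, so adjust $SPLUNK_HOME for your layout):

```shell
# Show recent output/deployment-client trouble from the forwarder's own log
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunkforwarder}
grep -E 'TcpOutputProc|DeploymentClient' "$SPLUNK_HOME/var/log/splunk/splunkd.log" | tail -n 20
```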


jitendragupta
Path Finder

I am trying to forward data to my cloud-based Splunk, but when I run splunk list forward-server, the IP address shows as "Configured but inactive". Please help.
