Hi,
We recently had to deploy a heavy forwarder into our Splunk architecture.
Previously, the flow went from the source directly to the indexer (source -> IDX). Now we have to deploy it like this: source -> HF -> IDX.
We have deployed and configured the heavy forwarder with a forwarder license and the following configuration:
--Outputs.conf--
[tcpout]
defaultGroup = SplunkIdx
[tcpout: SplunkIdx]
Server = < SplunkIdxIPAddress>:9997
[tcpout-server://< SplunkIdxIPAddress>:9997>]
The data input on the heavy forwarder was configured to receive on TCP port 514.
On the indexer, receiving has been configured on port 9997.
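For reference, receiving on the indexer was enabled along these lines (reconstructed from memory, so the exact stanza may differ slightly):
[splunktcp://9997]
disabled = 0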
All of the firewall rules have been provisioned accordingly. We tested the connectivity using netstat and telnet, and everything looks fine.
One more thing to note:
Last time they were using index=emailsource.
Now, in the data input configuration on the heavy forwarder, we have used the same settings the indexer was using before. The index name is also the same (emailsource).
However, when we point the source to the heavy forwarder, we cannot see any logs/events coming in.
We checked the splunkd logs within the time frame and cannot see any error messages at all.
Any possible cause as to why no events are coming in?
Thanks
Hi @francisbebita
This is poorly designed, as you are not able to isolate the source of your problem since your HF is acting both as a syslog server and as a forwarder. The best solution is to fix the architecture: use a syslog server (syslog-ng or rsyslog) to receive the data and write it to files, then have your HF (or preferably a UF) read those files and forward the data to your indexers. In such a scenario both the syslog server and the HF/UF run on the same machine, so you don't have to add any extra instances.
This not only simplifies the design but also separates two important building blocks of your architecture: syslog and Splunk. By doing so you'll be able to easily find where your problem lies, whether it's syslog or Splunk that isn't working. In the long run it will also be easier to tell whether the source stopped sending logs or Splunk stopped receiving them, simply by checking the syslog files you're using as inputs.
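If it helps, here is a minimal sketch of that layout, assuming rsyslog and an illustrative destination of /var/log/remote (all file names and paths are placeholders to adapt to your environment):
# /etc/rsyslog.d/10-splunk-feed.conf (hypothetical file name)
# listen on TCP 514 and write one file per sending host
module(load="imtcp")
input(type="imtcp" port="514")
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%.log")
*.* action(type="omfile" dynaFile="PerHostFile")
Then on the UF/HF, a monitor input in inputs.conf reads those files (again, just a sketch):
[monitor:///var/log/remote]
index = emailsource
sourcetype = syslog
disabled = 0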
Cheers,
David
Hi David,
Thank you for the insightful feedback.
We didn't know this was a more practical way of deploying a heavy forwarder.
We will propose this design, implement it, and use it in future designs.
This community really is so helpful, and I'm glad I'm a part of it.
Thanks again.
You're welcome, Francis. Feel free to reach out if you need help with the designs, and please accept the answer if it was relevant!
Cheers.
Hi All,
Just an update for this issue.
We have isolated the issue to the F5 configuration.
The heavy forwarder can now send logs to both the indexers and a third-party syslog server.
Thank you so much for all the inputs and the help.
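For anyone finding this thread later: sending data from a HF to a third-party syslog destination in addition to the indexers is done with a syslog output group in outputs.conf, roughly like this (group names and addresses are placeholders, and routing only a subset of the data to the syslog group additionally needs props.conf/transforms.conf with _SYSLOG_ROUTING):
[tcpout]
defaultGroup = SplunkIdx

[tcpout:SplunkIdx]
server = <SplunkIdxIPAddress>:9997

[syslog:third_party_syslog]
server = <syslogServerIP>:514
type = tcp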
Just a thought - who is the Splunk HF running as?
If you have configured a TCP input on port 514, your HF will need to be running as root (not ideal) or you need to have made specific changes to allow the "splunk" user to open low ports.
If you haven't done this, is it possible that Splunk is failing to open port 514 and that what's responding on that port is the local syslog service, which will not pass the events to Splunk? (Although that's actually a better way to do it! See: https://www.splunk.com/blog/2016/03/11/using-syslog-ng-with-splunk.html)
Try running this on your HF (as root, or a privileged user):
netstat -lpen|grep 514
You should see something like:
tcp 0 0 0.0.0.0:514 0.0.0.0:* 600 22529 8207/splunkd
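If splunkd turns out to be unable to bind 514 as the "splunk" user, one common workaround (just a sketch, the port numbers are examples) is to listen on an unprivileged port and redirect the privileged one to it:
# redirect inbound TCP 514 to 5514 so splunkd doesn't need root
iptables -t nat -A PREROUTING -p tcp --dport 514 -j REDIRECT --to-ports 5514
# inputs.conf on the HF then listens on the high port
[tcp://5514]
index = emailsource
disabled = 0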
It must be outputs.conf, NOT Output.conf.
@woodcock Apologies for the inconsistencies and typos when submitting this question. The file is actually outputs.conf. I will be more vigilant next time.
Are you receiving Splunk-internal events though? What does splunkd.log say?
@skalliger I am only seeing a warning like the one below:
WARN TcpInputConfig - reverse dns lookups appear to be excessively slow, this may impact receiving from network inputs. 10.524853 % time is greater than configured rdnsMaxDutyCycle=10 %. Currently lookup: host::10.xx.xx.xx
So far, there are no other warning or error messages in splunkd.log.
I will also try to work on the DNS resolution and see if that is the reason.
Would there be a chance that there are other things to consider aside from outputs.conf and inputs.conf? By the way, the requirement is just to pump all the logs coming into the HF down to the IDX, with no consideration as to which logs are brought in and passed out.
You could try setting connection_host = none if you're having reverse lookup problems and see if that helps.
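On the HF that would look something like this (a sketch based on the inputs.conf posted elsewhere in this thread):
[tcp://514]
index = emailsource
connection_host = none
disabled = 0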
In your outputs.conf it looks like you have a space in "tcpout: SplunkIdx" - you don't want that!
Try this:
[tcpout]
defaultGroup = SplunkIdx
[tcpout:SplunkIdx]
server = SplunkIdxIPAddress:9997
You also don't need [tcpout-server://< SplunkIdxIPAddress>:9997>].
@nickhillscpl Apologies for the typo, but I double-checked and there is no space between "tcpout:" and "SplunkIdx". I will take note of this syntax in the future, however. Thanks for the warning.
Do you see the HF's internal logs in your indexer (search index=_internal source=*splunkd.log host=<HF name or address>)? This will confirm whether the connection from the HF to the indexer is functioning.
Please share the inputs.conf settings on the HF.
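A couple of searches along these lines can help confirm the HF-to-indexer connection (placeholders for the HF host/IP, and field names such as sourceHost may vary by version):
index=_internal source=*splunkd.log* host=<HF name or address>
index=_internal source=*metrics.log* group=tcpin_connections sourceHost=<HF IP address>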
@richgalloway I searched for events using the search you provided. The indexer/search head gives some output like the one below:
WARN TcpInputConfig - reverse dns lookups appear to be excessively slow, this may impact receiving from network inputs. 10.524853 % time is greater than configured rdnsMaxDutyCycle=10 %. Currently lookup: host::10.xx.xx.xx
For the inputs.conf,
[default]
host = <hostname of HF>
[tcp://514]
index = emailsource
disabled = 0
The cited log message doesn't tell us if events are being received from the HF.
I suggest you resolve the DNS problem first, however.
@richgalloway I will go ahead and have this error cleaned up first also. Thank you so much.
Sorry about the error on the post:
--Outputs.conf--
[tcpout]
defaultGroup = SplunkIdx
[tcpout: SplunkIdx]
Server = < SplunkIdxIPAddress>:9997
[tcpout-server://< SplunkIdxIPAddress>:9997>]
I would like to add that the heavy forwarder will not be indexing any data.
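Leaving local indexing off is the default once a forwarder license is applied, but if you want to make it explicit, outputs.conf supports a setting along these lines (a sketch; verify against your version's outputs.conf.spec):
[tcpout]
defaultGroup = SplunkIdx
indexAndForward = false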