Hello All,
I have a heavy forwarder (HF) with an HTTP Event Collector (HEC) token, and data is ingested through that token. We want a second HF to take over if the first one goes down, so that the data stream is not interrupted. What are the possible options?
Can the same token be used on both HFs, and how can we load balance between them?
If that is possible, how do we create the same token and the corresponding input on the new HF?
Most people manage their HFs with a Deployment Server.
I would add that Splunk itself does not provide load balancing across HEC endpoints. If you have a system that can provide an automatic load-balanced group in front of your HECs, you can deploy the same inputs to both servers as described above.
Additionally, you should expose input queue depth metrics to the load balancer, so that when one member of the group starts backing up it can be removed from rotation until its input queue depth recovers.
See https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Inputsconf#FIFO_.28First_In.2C_First_Out_qu... for information about balancing load between the input queues.
Token information is stored in inputs.conf (and the forwarding targets in outputs.conf). Depending on your installation, those files may live in different locations. You "install" the same token on both HFs by copying the relevant stanza to both systems and restarting Splunk on each HF.
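As a sketch of what that stanza might look like (the app name, token GUID, index, and sourcetype below are placeholders; use the values from your existing HF), the same file would be placed on both forwarders:

```
# $SPLUNK_HOME/etc/apps/<your_hec_app>/local/inputs.conf
# Copy this file verbatim to both HFs, then restart Splunk on each.

# Enable the HEC listener itself
[http]
disabled = 0
port = 8088

# The token stanza -- the GUID must be identical on both HFs
[http://my_hec_input]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = main
sourcetype = _json
```

Because the token value is just a string in this stanza, both HFs will accept the same token once the stanza is identical on both and Splunk has been restarted.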
You can locate the correct file on the HF by using:

`splunk btool inputs list --debug | grep <your token>`
Please note that you need a load balancer in front of your Splunk HFs in order to fail over the HTTP requests transparently, unless your senders are smart enough to switch endpoints themselves.
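As one possible illustration of that load balancer, here is a minimal sketch using HAProxy in TCP passthrough mode (the hostnames are hypothetical; HTTPS terminates on the HFs themselves in this setup):

```
# haproxy.cfg sketch -- TCP passthrough for HEC on port 8088
frontend hec_front
    bind *:8088
    mode tcp
    default_backend hec_back

backend hec_back
    mode tcp
    balance roundrobin
    # 'check' probes each HF; a forwarder that stops accepting
    # connections is removed from rotation until it recovers
    server hf1 hf1.example.com:8088 check
    server hf2 hf2.example.com:8088 check
```

Any equivalent appliance (F5, nginx stream module, a cloud load balancer) works the same way: senders target the load balancer's address, and it distributes HEC traffic across the healthy HFs.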
Hope it helps
Thank you @ololdach for your reply. So having a load balancer in front of the two forwarders will solve the issue. How will the URI match, since the two HFs will have two different URIs? Which one should be used as the URI, and where is this set up?
Hi vrmandadi, I can only answer your question about how to make two heavy forwarders accept the same HEC token. There are other documents on the web describing how to set up a load balancer, and this article has a lot of information about your scenario: http://dev.splunk.com/view/event-collector/SP-CAAAE73
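On the URI question: the senders never use either HF's own address. They use a single address that belongs to the load balancer, and the load balancer forwards each request to one of the HFs. A hypothetical example, assuming the load balancer is reachable under the DNS name `hec.example.com`:

```
# Senders post to the load balancer's address, not to an individual HF.
# hec.example.com is a placeholder DNS name pointing at the LB.
https://hec.example.com:8088/services/collector/event
```

The mapping from that one name to the two HF addresses is configured on the load balancer itself (DNS plus the LB's backend/pool definition), not in Splunk.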