Hi,
I have the following outputs.conf on my Splunk Heavy Forwarder:
[tcpout]
defaultGroup = indx1

[tcpout:indx1]
server=1.1.1.1:9997

[tcpout:indx2]
server=2.2.2.2:9997
As expected, the heavy forwarder forwards all data to indx1.
Then I manually stopped the indexer indx1, expecting that Splunk would start sending data to indx2 since indx1 was no longer available. That did not happen: all forwarding was blocked by the heavy forwarder while it waited for indx1 to come back online. Logs below.
05-31-2021 18:14:45.039 +0000 WARN TcpOutputProc [16471 indexerPipe] - The TCP output processor has paused the data flow. Forwarding to host_dest=1.1.1.1 inside output group indx1 from host_src=splunk-hf has been blocked for blocked_seconds=480. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
Is this the expected behavior? Should the heavy forwarder not switch to indx2 once indx1 is down?
Thanks,
Termcap
Hi @termcap
It seems there is no out-of-the-box Splunk solution.
How to configure outputs.conf to forward data in a... - Splunk Community
---------------------------------
An upvote would be appreciated if it helps!
Hi @termcap
'defaultGroup' is not required if your HF version is above 4.2. The following setting works for your case and load-balances across both indexers.
[tcpout:indx1and2]
server=1.1.1.1:9997,2.2.2.2:9997
You can refer to this link for detailed info: outputs.conf - Splunk Documentation
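For illustration only (not from the original posts), a minimal sketch of what the full stanza structure could look like with this load-balanced approach, reusing the group name indx1and2 and the two indexer addresses from this thread; autoLBFrequency is an optional setting shown here as an example, controlling how often (in seconds) the forwarder rotates between the listed indexers:
[tcpout]
# optional on recent versions, as noted above
defaultGroup = indx1and2

[tcpout:indx1and2]
server = 1.1.1.1:9997, 2.2.2.2:9997
# optional: how often (in seconds) the forwarder switches between indexers
autoLBFrequency = 30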
------------------------------------------------------------
An upvote would be appreciated if it helps!
Hi @venkatasri,
Thanks for the alternative outputs configuration. I was wondering whether it is possible to do what I originally tried in my outputs configuration:
Keep sending to indx1, switch to indx2 only if indx1 goes down, and then switch back once indx1 comes back online (not load balancing).
Thanks,
Termcap