I have 4 Splunk Indexer nodes which are managed by a Splunk Master node. I create all indexes on the Master node and push them to all peer indexer nodes. I also have a Splunk Search Head from which I search data across all 4 indexers. All of this works fine.
I have 2 types of log sources:
a) one where Forwarders are installed, which send data to all 4 indexers, and
b) another where servers send events/logs to a Splunk TCP port (we can't install forwarders on these servers, so we want to forward logs over a TCP port).
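For the type (a) sources, the forwarders distribute data across the indexers roughly like this (a sketch of my `outputs.conf` on a universal forwarder; the group name and hostnames are placeholders):

```ini
# outputs.conf on a universal forwarder (hostnames/ports are examples)
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# The forwarder auto-load-balances across this list, so if one
# indexer is down it automatically switches to another.
server = indexer1.example.com:9997, indexer2.example.com:9997, indexer3.example.com:9997, indexer4.example.com:9997
```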
For the type (b) servers, if I open a TCP port on indexer node1 and point the servers at it, I am able to receive the data on indexer node1 and can also search it from the Search Head. But the problems I see with this are:
a) If indexer node1 is down, the servers cannot send their logs, so any logs generated during the downtime are lost.
b) Similarly, while indexer node1 is down, the Search Head cannot search those logs, since they are not replicated to the remaining 3 indexer nodes.
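For context, the raw TCP input I opened on indexer node1 looks roughly like this (a sketch of my `inputs.conf`; the port, sourcetype, and index names are examples):

```ini
# inputs.conf on indexer node1 (port/sourcetype/index are examples)
[tcp://5140]
# Raw TCP input for servers that cannot run a forwarder
sourcetype = syslog
index = main
```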
How do I handle these scenarios? Is there a way to configure a load-balanced TCP port that replicates the logs to all 4 indexer nodes? On the servers I can map only one TCP port, so which of the 4 indexers should I map it to? Or can I map the Master node's TCP port? I tried mapping the Master node's TCP port, but it did not work.
In the cases where we use Splunk forwarders, we are not using a load balancer for this, and the Master node is still able to get the data replicated across all 4 indexer nodes.
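My understanding of the cluster replication setup is roughly this (a sketch of what I believe the `server.conf` clustering stanza on the Master node looks like; the factor values are examples):

```ini
# server.conf on the cluster master (values are examples)
[clustering]
mode = master
# Each bucket of indexed data is copied to 3 peers in total
replication_factor = 3
# 2 of those copies are kept searchable
search_factor = 2
```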
And even if we configure a load balancer, a TCP request will hit the LB and be routed to only 1 of the 4 nodes, so only that node will store the data and it will not be replicated to the remaining 3 nodes. How do we manage data replication here?