We are trying to design a solution with two layers of forwarding. At the first layer, universal forwarders collect the logs and forward them to two different data centers; the second layer in each data center is an intermediate (heavy) forwarder. Whichever heavy forwarder has the data then sends it on to the indexer layer.
Is my understanding correct, and is this doable?
aab5272,
You do not need an intermediate forwarding layer to avoid the situation you describe. Instead, create a tcpout stanza like the one below in your outputs.conf file:
[tcpout]
defaultGroup = lb

[tcpout:lb]
# One indexer in each data center; the forwarder auto-load-balances
# between the servers in this list
server = 1.2.3.4:4433, 1.2.3.5:4433
If the indexer in data center #1 goes down, all of the data will be sent to the indexer in data center #2, and vice versa.
You can add as many servers as you want here. Best practice is to configure this to send to all of your indexers; then, if one data center goes down, the other will still receive the data.
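For illustration, a minimal outputs.conf sketch that load-balances across all indexers in both data centers. The addresses are hypothetical; useACK and autoLBFrequency are standard outputs.conf settings, so tune them to your environment:

[tcpout]
defaultGroup = lb

[tcpout:lb]
# Two indexers per data center (hypothetical IPs); the forwarder
# auto-load-balances across every server in this list
server = 1.2.3.4:4433, 1.2.3.5:4433, 1.2.4.4:4433, 1.2.4.5:4433
# Require indexer acknowledgment so unacknowledged data is re-sent
useACK = true
# Rotate to another indexer in the list every 30 seconds (the default)
autoLBFrequency = 30

With useACK enabled, the forwarder keeps data in its queue until an indexer confirms receipt, which is the built-in way to guard against data loss without an intermediate layer.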
You can control searchability and the number of copies of your data by setting up multisite indexer clustering. This allows you to replicate data across sites and ensures the data remains searchable should one data center go down; a rough configuration sketch follows below.
The problem you create without an indexer cluster is that if data center one goes down, the data on the indexers located there is not searchable, producing incomplete search results.
With a cluster, you can get to all of the data you have indexed across both data centers.
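As a rough sketch, a two-site cluster is configured in server.conf. The hostname, secret, and factors below are placeholders to adapt; mode = master and mode = slave are the classic setting values for the cluster manager and peers:

# server.conf on the cluster master (manager node)
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
# One copy at the originating site and two copies total,
# i.e. roughly one copy per data center
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2
pass4SymmKey = <your_secret>

# server.conf on each indexer (peer), with site set per data center
[general]
site = site1

[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = <your_secret>

With a site_search_factor that covers both sites, searches can be served entirely from the surviving data center if the other goes down.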
aab5272,
The first question is often: why? Is it to avoid additional firewall rules, for data protection, etc.? If you want to send to two different data centers from the same forwarder, you do not need a heavy forwarder as an intermediate. If you have a business case for placing a heavy forwarder between the forwarding nodes and the indexing layer, you can, but there are often other solutions with better performance.
What end result are you trying to accomplish, and what constraints make you think you need a heavy forwarder in place at each data center?
Thanks for the response.
Let's say the forwarders at the second layer are intermediate forwarders. This is to guard against data loss: if one data center goes down, the other takes the data from the universal forwarders and sends it on to the indexers.
Does that make it clearer?
If one data center went down, would the downed data center still produce logs? If so, the logs will sit on the file system until the data center is brought back up, and will then be sent to Splunk without missing a beat.
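For file-based inputs this works because the universal forwarder tracks how far it has read in each monitored file. A minimal inputs.conf sketch, assuming a hypothetical log path and metadata:

[monitor:///var/log/myapp/*.log]
# Hypothetical path, index, and sourcetype; the forwarder records its
# read position per file, so an outage only delays delivery rather
# than dropping events, as long as the files are not rotated away.
index = main
sourcetype = myapp:log
disabled = false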
I don't think you need an intermediate forwarding layer. You can install universal forwarders that send directly to your indexer layer.
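For example, on each universal forwarder you could point the output at indexers in both data centers directly (hypothetical hostnames; this CLI updates outputs.conf under the hood):

$SPLUNK_HOME/bin/splunk add forward-server idx1.dc1.example.com:9997
$SPLUNK_HOME/bin/splunk add forward-server idx1.dc2.example.com:9997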