Good day Splunkers,
We have two sites/DCs, where one is production and the other a standby DR. In our current architecture, we have intermediate forwarders that forward the logs to Splunk Cloud. All universal forwarders send metrics/logs to these intermediate forwarders. We also have a single deployment server. The architecture is as follows:
UF -> IF -> SH (Splunk cloud)
The intermediate forwarders are Heavy Forwarders: they do some indexing and some data transformation, such as anonymizing data. The search head is in the cloud.
We have been asked to move from the current production-DR setup to a multi-site (active-active) setup. The requirement is for both DCs to be active and serving customers at the same time.
What is your recommendation in terms of setting up the forwarding layer? Is it okay to provision two more intermediate forwarders in the other DC and have all universal forwarders send to all intermediate forwarders across the two DCs? Is there a best practice you can point me towards?
Furthermore, do we need more deployment servers?
Extra info: The network team is about to complete a network migration to Cisco ACI.
Hi @alec_stan ,
it's surely useful to have at least one or two HFs in the secondary site so you have HA at every layer of your infrastructure; the exact number depends on the traffic they have to manage.
About the DS, you can continue with only one: it isn't mandatory to have redundant infrastructure for this role because, if the primary site fails, the only limitation is that you cannot update your forwarders for a limited time.
The case for a second DS is related to the number of forwarders to manage or to having a segregated network; it isn't related to HA.
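For context on why one DS is usually enough: each forwarder references a single DS in its deploymentclient.conf, so a second DS mainly helps with scale or network segregation rather than HA. A minimal sketch, assuming a placeholder hostname ds.example.com and the default management port:
# deploymentclient.conf on each UF/HF (hostname is a placeholder)
[deployment-client]
[target-broker:deploymentServer]
targetUri = ds.example.com:8089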
About the configuration of the forwarding layer, you have to configure all the forwarders to send their logs to all the HFs in auto load-balancing mode; Splunk will then manage the data distribution and failover.
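As an illustration, a minimal outputs.conf sketch for the UFs, assuming two HFs per DC with placeholder hostnames and the standard receiving port 9997 (auto load balancing is the default behaviour; autoLBFrequency controls how often each UF switches target):
# outputs.conf deployed to every UF (hostnames are placeholders)
[tcpout]
defaultGroup = intermediate_forwarders
[tcpout:intermediate_forwarders]
server = hf1-dc1.example.com:9997, hf2-dc1.example.com:9997, hf1-dc2.example.com:9997, hf2-dc2.example.com:9997
# switch to another HF in the list every 30 seconds (the default)
autoLBFrequency = 30
With a list like this, if one HF or even a whole DC becomes unreachable, the UFs simply keep load balancing across the remaining entries.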
Ciao.
Giuseppe
Hi @gcusello
Thank you for the quick response.
That means we do not need any form of clustering. In our current setup, we have two intermediate forwarders; they do not store any copy of the data, and there is no clustering.
From what you are saying, we should deploy two new intermediate forwarders on the other site and configure all universal forwarders to point to the four intermediate forwarders (two in DC1, two in DC2).
Thanks again.
Hi @alec_stan ,
Splunk doesn't have any kind of clustering at the forwarder level; you just have to configure your DS to deploy the same configuration to all the HFs.
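As a sketch of what that could look like on the DS (class names, app name, and hostname patterns are only examples, not your real ones), a serverclass.conf stanza that pushes the same app to the HFs in both DCs:
# serverclass.conf on the deployment server (names/patterns are illustrative)
[serverClass:intermediate_forwarders]
whitelist.0 = hf*-dc1.example.com
whitelist.1 = hf*-dc2.example.com
[serverClass:intermediate_forwarders:app:hf_outputs_and_props]
restartSplunkd = true
stateOnClient = enabled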
Hi @gcusello
Great thanks.
Hi @alec_stan ,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉