Deployment Architecture

HF and DS for DR

Vnarunart
Explorer

As a Splunk newcomer, I need guidance on using Splunk effectively to send logs to a Disaster Recovery (DR) environment where I have one Heavy Forwarder (HF) and one Deployment Server (DS) on-premises.

What steps should I take with my HF and DS to ensure smooth log ingestion into the DR Splunk Cloud instance?

I have considered replicating the VMs (HF and DS) as a possible solution, but I am still not sure about the best approach. Please advise on the following:

- Are there any specific licensing requirements or restrictions for replicating Splunk instances?
- What are the potential performance implications of replicating a Splunk VM, especially considering the data volume and real-time or near real-time requirements?
- Are there any recommended best practices or configurations for replicating HF and DS VMs to a DR environment?

Thanks for your help.


gcusello
SplunkTrust

Hi @Vnarunart,

this is a question for a certified Splunk Architect or a Splunk PS, not for the Community.

Anyway, as @PickleRick also said, there are no problems with the DS because your infrastructure continues to run without it; the only issue in a DR event is that you cannot perform a forwarder configuration update. So you don't need to put the DS in the DR site, or you can use a passive DS.

It's different for the HF, because you should analyze which data flows pass through the HF and then configure a load balancer to send the traffic to the DR HF as well (a minimal example is sketched below). But, as I said, this is an architectural analysis and it's difficult to perform in the Community.
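For illustration only, a minimal outputs.conf sketch on the universal forwarders, assuming a primary HF and a DR HF; the hostnames hf-prod.example.com and hf-dr.example.com are placeholders:

    [tcpout]
    defaultGroup = hf_group

    [tcpout:hf_group]
    # Splunk's built-in auto load balancing rotates traffic across the
    # listed HFs; if one becomes unreachable, forwarders fail over to
    # the other automatically.
    server = hf-prod.example.com:9997, hf-dr.example.com:9997
    autoLBFrequency = 30

Note this is active/active; if the DR HF must stay passive until a failover, put a load balancer or DNS failover in front instead, as @PickleRick describes below.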

Ciao.

Giuseppe


PickleRick
SplunkTrust

For HA DS setups you usually use DNS-based load-balancing/fail-over. It's relatively easy because - again - you usually don't care much about the DS's state.
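As a sketch of that approach (ds.example.com is a placeholder DNS alias, not a real host), each forwarder's deploymentclient.conf points at the alias rather than a specific machine:

    [target-broker:deploymentServer]
    # Point at a DNS alias instead of a concrete host. In a DR event,
    # repoint the alias to the standby DS; clients reconnect on their
    # next phone-home with no config change on the endpoints.
    targetUri = ds.example.com:8089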

With HFs which are _only_ a parsing tier, it's also relatively trivial - you just set up multiple HFs and LB your traffic to them on the source UFs (or an HTTP LB if you're using HEC; see the sketch below). It gets way more problematic if you want to _pull_ some data with modular inputs on the HF (like DB Connect, some add-ons for cloud services, and such) - that's where it gets tricky. I think there was even a .conf presentation about HF replication but I can't find it at the moment.
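If HEC is in play, here is one hedged sketch of the sender side, assuming Splunk 8.1 or later where forwarders can emit over HTTP via the httpout stanza in outputs.conf; the VIP hostname and token below are placeholders:

    [httpout]
    # Send via HEC to a load-balancer VIP fronting both HFs; the LB
    # does the health checks and fails over to the DR HF as needed.
    httpEventCollectorToken = 00000000-0000-0000-0000-000000000000
    uri = https://hec-vip.example.com:8088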


PickleRick
SplunkTrust

Your license only measures how much data you're ingesting daily (or how much compute power you use on the indexing and search tiers, but that's a relatively uncommon scenario). Splunk doesn't care how many additional components you have. In some specific scenarios (like a detached environment) you might need a no-ingest license for forwarders.

The question is what you are doing on the HFs - are you running any modular inputs on them, or are they just a parsing layer before the indexers? With modular inputs the critical item is the input's state, because what you don't want is to re-ingest all the data from the start after a failover.
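To make the state problem concrete: checkpoints for many modular inputs live under $SPLUNK_HOME/var/lib/splunk/modinputs (some apps, DB Connect among them, keep state elsewhere, e.g. in the KV store). A naive replication sketch might look like the following - treat the host as a placeholder and assume the same $SPLUNK_HOME on both machines:

    # Hypothetical sketch: copy modular-input checkpoint state to the
    # standby HF so a failover does not re-ingest from the beginning.
    # Inputs that checkpoint into the KV store need separate handling.
    rsync -a "$SPLUNK_HOME/var/lib/splunk/modinputs/" \
        hf-dr.example.com:"$SPLUNK_HOME/var/lib/splunk/modinputs/"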

The deployment server is a bit easier since the DS serves mostly "static" content. There are a few HA installation scenarios covered by the Core Services Implementation course - either a parent/child setup or sibling replication (sketched below). And with relatively recent Splunk releases you can also create a clustered DS setup: https://docs.splunk.com/Documentation/Splunk/latest/Updating/Implementascalabledeploymentserversolut...
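A bare-bones sketch of the sibling-replication idea (hostname and paths are placeholders; the clustered-DS option in the linked docs is the more current approach):

    # Keep the standby DS's app repository and server classes in sync
    # with the primary; this is the "static" content the DS serves out.
    rsync -a --delete "$SPLUNK_HOME/etc/deployment-apps/" \
        ds-dr.example.com:"$SPLUNK_HOME/etc/deployment-apps/"
    rsync -a "$SPLUNK_HOME/etc/system/local/serverclass.conf" \
        ds-dr.example.com:"$SPLUNK_HOME/etc/system/local/"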
