Splunk Enterprise

Designing an Air-Gapped DR Architecture

khkl11
New Member

Hello everyone,

I currently have a Splunk Production environment ingesting 2 TB/day. I am planning to build a Disaster Recovery (DRC) site with an ingestion capacity of approximately 100 GB/day, but we have several strict constraints that we need to work around:

Network Isolation: There is no direct network communication between the Production and DRC environments, and opening network ports is not an option.

Searchability: We want to be able to search Production logs from the DRC site. If manual log transfer is required, we can automate the file movement through an intermediate staging process.

Availability: Even if the Production environment is completely down, we need the ability to perform searches on the DRC side. We are okay with "slow" search performance in this scenario.

Resource Constraints: It is not possible to scale the DRC hardware/infrastructure to match the size of the Production environment.

I am aware that this setup might not align with standard "best practices," but I am wondering if such a configuration is feasible. I would love to hear your ideas or alternative suggestions.

Please let me know if you need any further information. Thank you in advance for your help!


PickleRick
SplunkTrust
SplunkTrust

Your requirements are mutually exclusive.

"We want to be able to search Production logs from the DRC site" and "There is no direct network communication between the Production and DRC environments".

Yes, I know that "we can automate the file movement through an intermediate staging process" but this will not be searching production. You'd be searching a copy.

The way to go about it would probably be to set up a periodic process on the production site which exports closed (not hot) buckets to some "staging" area, from which another process picks them up and pulls them into the DRC site.

It would be messy and would need quite a lot of bubble gum and prayer to work correctly (you'd need to keep track of which buckets have already been copied and which have not; if you have clusters, it gets even more complicated on both sides), but it could work.

Splunk on its own doesn't have built-in functionality doing this.
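Since there's no built-in feature, any solution is custom scripting. As a rough illustration of the "export closed buckets + manifest" idea, here is a minimal sketch. The directory layout (`db_*` for warm/cold buckets, `hot_*` for hot), the staging directory, and the manifest file are assumptions for illustration, not Splunk tooling; a real setup would also have to handle bucket ID collisions between cluster peers and the thawed/rebuild step on the DRC side.

```python
# Illustrative sketch only -- not Splunk-provided tooling.
# Copies closed (warm/cold) buckets to a staging area exactly once,
# tracking what has already been exported in a plain-text manifest.
import os
import shutil

def export_closed_buckets(index_db_dir, staging_dir, manifest_path):
    """Copy db_* bucket directories not yet listed in the manifest
    into staging_dir, then append them to the manifest."""
    exported = set()
    if os.path.exists(manifest_path):
        with open(manifest_path) as f:
            exported = {line.strip() for line in f if line.strip()}

    newly_exported = []
    for name in sorted(os.listdir(index_db_dir)):
        src = os.path.join(index_db_dir, name)
        # Skip hot buckets (hot_*): they are still being written to
        # and must never be copied.
        if not os.path.isdir(src) or not name.startswith("db_"):
            continue
        if name in exported:
            continue
        shutil.copytree(src, os.path.join(staging_dir, name))
        newly_exported.append(name)

    # Record what we exported so the next run skips these buckets.
    with open(manifest_path, "a") as f:
        for name in newly_exported:
            f.write(name + "\n")
    return newly_exported
```

On the DRC side, a matching process would move the staged buckets into the corresponding index's thawed path and make them searchable there; running the export twice copies nothing new, since the manifest remembers what already went out.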

isoutamo
SplunkTrust
SplunkTrust

Hi

your current requirements contain some elements which are not directly supported by a Splunk multisite cluster. They also involve some other considerations which mean this is not really a community support case. You should contact either Splunk Professional Services or a local Splunk Partner who can help you.

r. Ismo
