Deployment Architecture

How to ingest data from a different organizational network?

adnankhan5133
Communicator

We are currently working on an engagement for a Fortune 500 company that has multiple projects being run by different consulting firms and vendors. Each project has its own network, accessible over VPN. We have Splunk ES running for Project 1 in the Project 1 network, ingesting various security log sources (e.g., IAM, network traffic, endpoint detection). Now there is an ask to ingest the security log sources from Project 2 as well. The Project 1 and Project 2 environments are separated by network firewalls and currently have no connectivity with each other.

If I wanted to pull Project 2 logs over to Splunk (Project 1) through various data ingestion mediums, such as syslog, UFs, DBConnect, and an HF (for HEC and REST API inputs), would I need to ensure that a network tunnel is set up between Project 1 and Project 2 to allow the flow of data?

At a high level, the network architecture between P1 and P2 looks like this:

P2 ---> P2 External Firewall <no connectivity> P1 External Firewall <---- P1
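To make the HF/HEC option concrete, I'm picturing something like the following inputs.conf on a heavy forwarder (the token, port, and index name here are made up for illustration):

    # inputs.conf on a heavy forwarder - hypothetical token and index
    [http]
    disabled = 0
    port = 8088

    [http://project2_hec]
    token = 11111111-2222-3333-4444-555555555555
    index = project2_security
    disabled = 0

Even with this in place, nothing in P2 can reach that HEC endpoint (or vice versa) without some network path between the environments, which is the crux of my question.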


adnankhan5133
Communicator

Thanks @Richfez - We have a distributed clustered Splunk Enterprise and ES deployment residing within P1, so our SHs, MC, IDX cluster, CM, and DS are all located there. Over on P2, there are no Splunk instances or forwarders set up yet. Since we're actually going to be using Equinix Cloud Exchange (a cloud interconnection solution) to enable connectivity between P1 and P2, I'm wondering if any VMs and devices with UFs installed on P2 would simply need to be pointed to the P1 indexer cluster, while any HFs installed on P2 would forward to the P1 search heads. With that said, would the P2 logs need to route through Equinix and over to the P1 indexers and search heads, assuming that FW rules are in place to enable the logs to reach their intended destination?
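If it helps, here's roughly what I'd expect each P2 forwarder's outputs.conf to look like (hostnames are invented; and as I understand standard practice, both UFs and HFs would forward to the indexer tier rather than to the search heads):

    # outputs.conf on a P2 UF or HF - hypothetical P1 indexer hostnames
    [tcpout]
    defaultGroup = p1_indexers

    [tcpout:p1_indexers]
    server = p1-idx01.example.com:9997, p1-idx02.example.com:9997
    useACK = true

Listing every P1 indexer on the server line should give us Splunk's built-in auto load balancing across the cluster.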

 

We have considered Splunk Cloud, but Management is looking into that as a possible option once our 3-year license expires at the end of next year.


Richfez
SplunkTrust

Frankly, this sounds like a great place to use Splunk Cloud. Fling all your data at Splunk Cloud from wherever you collect it. Throw a UF on each log collection node and point its output to your Cloud instance. Problem solved.
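In practice you'd grab the Universal Forwarder credentials app from your Cloud instance, which lays down an outputs.conf roughly like this (stack name invented here; the real package also handles the TLS certs for you):

    # outputs.conf as the Splunk Cloud credentials app might generate it
    # hypothetical stack name - yours will differ
    [tcpout]
    defaultGroup = splunkcloud

    [tcpout:splunkcloud]
    server = inputs1.mystack.splunkcloud.com:9997
    useACK = true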

But the same can be done without cloud too.  

Mostly the same as above: put a UF on each node that accumulates logs you want to read, only this time point its output to your indexing tier.

Your indexers must exist somewhere, so you'll need to make exceptions in the firewall at that location for incoming port 9997 traffic from the IPs the UFs will be sending from, and set up certificates and all that fancy stuff. And please don't use an intermediate forwarder to aggregate everything into a single connection - just open the "holes" (I hate calling them that) to all indexers and let Splunk auto-load-balance across them. It will work better and faster, and be no less secure.

Then your one central location (which can be inside P1 if that's where it lives now) gets all the logs.
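On the receiving side it's just the standard splunktcp input on each indexer - ideally the SSL flavor, since this traffic crosses organizational boundaries. A minimal sketch, with placeholder cert path and password:

    # inputs.conf on each P1 indexer - placeholder cert path and password
    [splunktcp-ssl:9997]
    disabled = 0

    [SSL]
    serverCert = $SPLUNK_HOME/etc/auth/server.pem
    sslPassword = <your_cert_password>
    requireClientCert = false

Pair that with forwarders listing every indexer in their server = setting, and the auto load balancing I mentioned comes for free.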
