Getting Data In

Heavy Forwarder in DMZ

ianyoung1987
New Member

I have a segmented area of my network from which I want to pull logs from a couple of systems. Rather than configure firewall rules for each system's Universal Forwarder to be able to hit my Indexers in the internal network, I have opted to implement a Heavy Forwarder for all systems to talk through. This way, I only have to punch one hole through the firewall, and I'm not directly exposing my Indexers to multiple systems within the DMZ, which is publicly accessible.

Within my Heavy Forwarder, I have configured the inputs.conf to accept splunktcp from 9997 and syslog on UDP 514 (For my network devices in the DMZ). outputs.conf is configured to send everything to my Indexers. web.conf is set to turn the web interface off. From my Search Head, I am able to see the _internal logs from my Heavy Forwarder. So I know it's at least talking to the Indexers.
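
For reference, the heavy forwarder configuration described above might look roughly like the following. This is an illustrative sketch, not the actual config: the indexer hostnames, output group name, and the syslog sourcetype are placeholders.

```ini
############################################
$SPLUNK_HOME\etc\system\local\inputs.conf
############################################

# Accept forwarded data from the UFs in the DMZ
[splunktcp://9997]

# Accept syslog from DMZ network devices
[udp://514]
sourcetype = syslog

############################################
$SPLUNK_HOME\etc\system\local\outputs.conf
############################################

[tcpout]
defaultGroup = internal_indexers

[tcpout:internal_indexers]
server = <IndexerFQDN1>:9997, <IndexerFQDN2>:9997

############################################
$SPLUNK_HOME\etc\system\local\web.conf
############################################

[settings]
startwebserver = 0
```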

Now, for my Universal Forwarders, I have set the following files, with the hope that deploymentclient traffic would get routed through to the internal deployment server, and that all log data would also get passed off. So far, I cannot find anything from these hosts in any index.

############################################
$SPLUNK_HOME\etc\system\local\deploymentclient.conf
############################################

[deployment-client]
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = <ForwarderFQDN>:8089

############################################
$SPLUNK_HOME\etc\system\local\outputs.conf
############################################

[tcpout]
server = <ForwarderFQDN>:9997

############################################

I would assume that these two files would at least allow data to be sent to the Indexers. However, nothing is showing up.

As for my deployment client traffic, would I need to open 8089 on my inputs.conf? How would I route the traffic from there?


MuS
SplunkTrust

Hi ianyoung1987,

By doing this, your universal forwarder will simply connect to port 8089 on the heavy forwarder and check whether it has any deployment apps for it. The heavy forwarder will not route this traffic onward to your internal deployment server.

If you want to do such a thing, try this approach:

Give your original deployment server a second directory for the apps the UFs will get, for example deployment-apps-hwf, and put all apps that should be deployed to the UFs in there. Your serverclass.conf needs something like this:

[serverClass:<serverClassName>]
stateOnClient = noop
repositoryLocation = etc/deployment-apps-hwf
targetRepositoryLocation = etc/deployment-apps

Next, your heavy forwarder needs to become a deployment client of the deployment server; once that is done, add the heavy forwarder to the server class as a client.
This should deploy all apps from deployment-apps-hwf to $SPLUNK_HOME/etc/deployment-apps on the heavy forwarder. Once that is done, configure the server classes on the heavy forwarder to deploy the apps to the UFs.
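
To illustrate that second tier, the serverclass.conf on the heavy forwarder (which is now itself acting as a deployment server) might look roughly like this. The server class name, whitelist pattern, and app name are made up for the example:

```ini
############################################
$SPLUNK_HOME\etc\system\local\serverclass.conf  (on the heavy forwarder)
############################################

[serverClass:dmz_ufs]
# Match the DMZ universal forwarders by hostname
whitelist.0 = dmz-uf-*

[serverClass:dmz_ufs:app:my_uf_outputs]
stateOnClient = enabled
restartSplunkd = true
```

The DMZ UFs would then point their deploymentclient.conf at the heavy forwarder on port 8089, exactly as in the original post.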

If everything is done correctly, you will have the wonderfully messy setup of a layered deployment server 😉

This is an example of "just because you can doesn't mean you should" - try deploying the configs to the UFs using a different tool like Ansible or Puppet - you will have less trouble 😉

Hope this helps anyway ...

cheers, MuS

wrangler2x
Motivator

There is also a section titled, "Example: How to propagate apps from Primary to Secondary Deployment Server" in this Splunk wiki page: Deploy:DeploymentServer


marrette
Path Finder

Did you ever work out a solution for this? I am also trying to deploy a Splunk app to a universal forwarder behind a firewall via a heavy forwarder (and would prefer not to have to deploy the app manually to the universal forwarder).
