Deployment Architecture

Is there a way for an HF to double-forward to Splunk On-Prem and Splunk Cloud?

miwasef
Explorer

Hi all,

In our infrastructure we are integrating a heavy forwarder belonging to another company.
We need this HF to send logs to both SIEMs; below is a diagram:

In our company (APP1):
Universal Forwarder -> Heavy Forwarder -> Splunk Cloud

Company to integrate (APP2):
Universal Forwarder -> Heavy Forwarder -> Splunk On-Prem

Here are the outputs.conf files:

---APP1---

[tcpout]
defaultGroup = splunkcloud_APP1
useAck=true

[tcpout:splunkcloud_splunkcloud_APP1]
server = inputs1.APP1-splunkcloud.splunkcloud.com:9997, inputs2.APP1-splunkcloud.splunkcloud.com:9997, inputs3.APP1-splunkcloud.splunkcloud.com:9997, inputs4.APP1-splunkcloud.splunkcloud.com:9997, inputs5.APP1-splunkcloud.splunkcloud.com:9997, inputs6.APP1-splunkcloud.splunkcloud.com:9997, inputs7.APP1-splunkcloud.splunkcloud.com:9997, inputs8.APP1-splunkcloud.splunkcloud.com:9997, inputs9.APP1-splunkcloud.splunkcloud.com:9997, inputs10.APP1-splunkcloud.splunkcloud.com:9997, inputs11.APP1-splunkcloud.splunkcloud.com:9997, inputs12.APP1-splunkcloud.splunkcloud.com:9997, inputs13.APP1-splunkcloud.splunkcloud.com:9997, inputs14.APP1-splunkcloud.splunkcloud.com:9997, inputs15.APP1-splunkcloud.splunkcloud.com:9997
compressed = false

clientCert = /opt/splunk/etc/apps/APP1/default/APP1-splunkcloud_server.pem

sslCommonNameToCheck = *.APP1-splunkcloud.splunkcloud.com
sslVerifyServerCert = true
useClientSSLCompression = true
autoLBFrequency = 120

 

---APP2---

[tcpout:APP2]
server = 172.28.xxx.xxx:9997
autoLBFrequency = 180
compressed = true
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = []
sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
sslVerifyServerCert = false

So we have two apps, and we tried to merge them into a single app with a single outputs file and the certificates in the same folder. We also implemented the necessary CMs for communications and created the same indexes on Splunk Cloud.
We applied these configurations to the HF of the company to be integrated. The problem is that it only communicates with its on-prem Splunk.
Thanks in advance.

 


gcusello
SplunkTrust

Hi @miwasef ,

you have to merge the two outputs.conf files into one, as described in https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Route_inputs_to_s... 

Did you configure the _TCP_ROUTING parameter in your inputs.conf?

Sometimes this is the issue.
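As a sketch, assuming the group names match the [tcpout:...] stanzas in the merged outputs.conf (the monitored path and index here are only illustrative), a UF inputs.conf entry routing to both groups could look like:

[monitor:///var/log/app]
index = main
_TCP_ROUTING = splunkcloud_APP1, APP2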

I suppose you have already checked the connection between the APP2 UF and the APP1 HF.

Ciao.

Giuseppe

miwasef
Explorer

Hi Giuseppe,

 

Thanks for the reply.

It still doesn't work; I found this error:

06-28-2023 09:55:09.093 +0000 WARN  TcpOutputProc [32647 indexerPipe] - The TCP output processor has paused the data flow. Forwarding to host_dest=inputs1.APP1-splunkcloud.splunkcloud.com inside output group splunkcloud_20223906_9aaa4b04213d9a0a44dc1eb274307fd1 from host_src=APP2 has been blocked for blocked_seconds=120. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.


gcusello
SplunkTrust

Hi @miwasef,

this means there's an error in the connection to Splunk Cloud.

So, did you use the app downloaded from Splunk Cloud for the connection?

First debug the connection without the second forwarding, then modify outputs.conf as I described, keeping the configurations you got from your Splunk Cloud.

Ciao.

Giuseppe


miwasef
Explorer

No, I didn't; I just extracted my app folder from my HF, copied it to the other one, and then merged them.

Do I have to install the app?

The reason I didn't install the app (and proceeded manually instead) is that we have to do a double forwarding, and I don't want the existing forwarding destinations to be changed afterwards.


gcusello
SplunkTrust

Hi @miwasef,

it depends on where you want to locate the fork:

If on the UF, you don't need the app from Splunk Cloud, only an outputs.conf pointing to the HFs.

If on the HF, you need to start from the Splunk Cloud app.
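A minimal sketch of the UF-side fork (the group names and hostnames are placeholders, not your real HFs): with both groups listed in defaultGroup, the UF clones every event to both destinations.

[tcpout]
defaultGroup = hf_app1, hf_app2

[tcpout:hf_app1]
server = hf1.example.local:9997

[tcpout:hf_app2]
server = hf2.example.local:9997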

Ciao.

Giuseppe

miwasef
Explorer

Hi @gcusello ,

 

Now it works on the new HF, but not on the old one that had always worked.

I get this error:

06-28-2023 13:23:40.046 +0000 WARN  TcpOutputProc [13655 indexerPipe] - The TCP output processor has paused the data flow. Forwarding to host_dest=172.xx.xx.xx inside output group group1 from host_src=APP2 has been blocked for blocked_seconds=30. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.


gcusello
SplunkTrust

Hi @miwasef,

let me summarize:

  • you want to send the logs of UF2 to both HF1 and HF2,
  • HF1 sends to Splunk Cloud and HF2 sends to Splunk On-Premise,
  • is that correct?

If this is your requirement, you need to modify the outputs.conf on UF2 (only on UF2), merging the outputs.conf of UF1 and UF2 and not using the default group.

Did you enable ack on both HFs? It seems that you enabled it only on HF1 and not on HF2; is that correct?

Acking must be enabled on sender and receiver.
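As a sketch, acknowledgment is enabled per output group with useACK in the sender's outputs.conf, so in a merged file it would need to appear in both stanzas (stanza names taken from the configs above; the other settings are omitted here):

[tcpout:splunkcloud_splunkcloud_APP1]
useACK = true

[tcpout:APP2]
useACK = true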

I tried to merge the two outputs.conf files; please check whether I reported all the items correctly:

[tcpout:splunkcloud_splunkcloud_APP1]
server = inputs1.APP1-splunkcloud.splunkcloud.com:9997, inputs2.APP1-splunkcloud.splunkcloud.com:9997, inputs3.APP1-splunkcloud.splunkcloud.com:9997, inputs4.APP1-splunkcloud.splunkcloud.com:9997, inputs5.APP1-splunkcloud.splunkcloud.com:9997, inputs6.APP1-splunkcloud.splunkcloud.com:9997, inputs7.APP1-splunkcloud.splunkcloud.com:9997, inputs8.APP1-splunkcloud.splunkcloud.com:9997, inputs9.APP1-splunkcloud.splunkcloud.com:9997, inputs10.APP1-splunkcloud.splunkcloud.com:9997, inputs11.APP1-splunkcloud.splunkcloud.com:9997, inputs12.APP1-splunkcloud.splunkcloud.com:9997, inputs13.APP1-splunkcloud.splunkcloud.com:9997, inputs14.APP1-splunkcloud.splunkcloud.com:9997, inputs15.APP1-splunkcloud.splunkcloud.com:9997
compressed = false
useAck=true
clientCert = /opt/splunk/etc/apps/APP1/default/APP1-splunkcloud_server.pem
sslCommonNameToCheck = *.APP1-splunkcloud.splunkcloud.com
sslVerifyServerCert = true
useClientSSLCompression = true
autoLBFrequency = 120

[tcpout:APP2]
server = 172.28.xxx.xxx:9997
autoLBFrequency = 180
compressed = true
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = []
sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
sslVerifyServerCert = false

Ciao.

Giuseppe

miwasef
Explorer

Hi @gcusello ,

 

We finally got double forwarding working; the only problem is that the Windows logs stopped reaching both SIEMs. In this case, where Windows sends logs via an agent, is there something different we need to do?


miwasef
Explorer

@gcusello do we have to apply the same procedure we did on the HF to the UF as well?


gcusello
SplunkTrust

Hi @miwasef,

you could do the same thing on the HF as well, but remember that on the HF you have to merge the two existing outputs.conf files with the one for Splunk Cloud.

But in this case, all the logs passing through the second HF will be sent to both environments.

This is the approach I usually prefer; I don't like to put the fork on the UF!

There could be a problem if you don't want to send all the data passing through HF2 to Splunk Cloud as well, because in that case you have to apply a filter on it.
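As a sketch of such a filter on the HF (the source path is illustrative; the group name APP2 comes from the outputs.conf above), props.conf and transforms.conf can override _TCP_ROUTING so that the matching events go only to the on-prem group:

props.conf:

[source::/var/log/onprem_only]
TRANSFORMS-route_onprem = route_to_onprem

transforms.conf:

[route_to_onprem]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = APP2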

Ciao.

Giuseppe
