Splunk Search

Use an intermediate universal forwarder

gcusello
SplunkTrust

Hi all,
I need to send logs from many Universal Forwarders to an Indexer Cluster through an Intermediate Forwarder.
If I use a Heavy Forwarder, the data arrives cooked, so I cannot transform it on the Indexers; in addition, the Heavy Forwarder adds metadata, so transmission is slower.

The solution is to send uncooked data to the Indexers, so:
can I use a Heavy Forwarder as the intermediate layer, concentrating the logs and sending them uncooked to the Indexers?
or can I use a Universal Forwarder as the intermediate layer, concentrating the logs and sending them uncooked to the Indexers?

Thank you in advance.

Bye.
Giuseppe

1 Solution

xpac
SplunkTrust

Hey,

Yes, you can easily use a UF as an intermediate; it won't alter your data at all, and we do this in multiple projects. Remember to raise a few limits (maxKbps, anyone?) on such an intermediate forwarder so it doesn't become a bottleneck.
Yes, you can also do this with an HF, but you already listed the disadvantages of that, so I'd avoid it if possible. If that's not possible, you can either install all your indexer add-ons on the HFs as well, or send the data uncooked; uncooked output, however, also strips identifying metadata from the UFs (host, sourcetype, etc.), so I would definitely not advise that.

Hope that helps. If it does, I'd be happy if you would upvote/accept this answer so others can benefit from it. 🙂
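The throughput limit hinted at above lives in limits.conf on the intermediate forwarder; a minimal sketch (the value shown is an example, and 0 means unlimited):

```
# limits.conf on the intermediate Universal Forwarder
[thruput]
# The default UF limit (256 KBps) is sized for a single endpoint;
# raise or disable it so the intermediate tier doesn't throttle
# the aggregated stream from many downstream UFs.
maxKBps = 0
```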



hectorvp
Communicator

Thanks for this answer. I have a few more doubts:

I'm using a UF as the intermediate forwarder.

Now I have to send data to different Splunk Enterprise instances based on the host IP address; how can I achieve this?

I'm listening on single ports that carry mixed data from two different host IP ranges, and I need something like:

Host IP range 1 -> Splunk Enterprise 1

Host IP range 2 -> Splunk Enterprise 2

[splunktcp://9991]

_TCP_ROUTING = <target_group>

???

How will I do routing based on host ranges?

Or do I need a Heavy Forwarder?

Also, if I have 200 servers where I'm collecting only OS logs and not application logs, will it cause a bottleneck?
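For reference, selective routing by host needs parsing, so it is done on a Heavy Forwarder (or on the indexers), not on a UF. A sketch of host-based routing via props/transforms; the group names, IP ranges, and server addresses below are illustrative, not from this thread:

```
# props.conf (on the Heavy Forwarder)
[default]
TRANSFORMS-routing = route_range1, route_range2

# transforms.conf
[route_range1]
# Match on the event's host metadata and set the tcpout group
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.1\.
DEST_KEY = _TCP_ROUTING
FORMAT = enterprise1

[route_range2]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.2\.
DEST_KEY = _TCP_ROUTING
FORMAT = enterprise2

# outputs.conf
[tcpout:enterprise1]
server = splunk1.example.com:9997

[tcpout:enterprise2]
server = splunk2.example.com:9997
```

With _TCP_ROUTING set at parse time, each event is sent only to its matching tcpout group.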


gcusello
SplunkTrust

Are you saying that by using a Universal Forwarder instead of a Heavy Forwarder, the Indexers receive uncooked data?
My problem is that I'd like to keep the possibility of transforming the data on the Indexers, because the intermediate forwarders are out of my control, and they would send more data over the network.
Thank you for the "maxKbps" suggestion.

Thank you.
Giuseppe


xpac
SplunkTrust

Yes, UFs don't parse data, so you can still do that on the indexers. HFs can also output uncooked data; there is an option for that in outputs.conf, but I have no experience with it and have only heard(!) that it might still cause trouble.
If you have the choice and don't need pre-processing, using the intermediate tier only as a concentrator (perhaps plus a syslog collector), I'd always choose a UF and do the parsing on the indexers. It also saves network bandwidth, because cooked data is 2-3 times "heavier". 😉
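The outputs.conf option referred to above is sendCookedData; a minimal sketch (the group name and server address are examples):

```
# outputs.conf on a Heavy Forwarder
[tcpout:indexers]
server = idx1.example.com:9997
# Send raw (uncooked) data instead of the parsed/cooked stream.
# Defaults to true; reportedly fragile, so test before relying on it.
sendCookedData = false
```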
