Getting Data In

Sending data from one UF to another UF

ankithreddy777
Contributor

Can we send cooked data from one Universal Forwarder to another Universal Forwarder by enabling [splunktcp] on the receiving UF so it can read cooked data from the first UF?

Can splunktcp be enabled on a UF, making it a receiver similar to an HF?

If splunktcp is enabled, do we need to set queue=&lt;value&gt;, or does Splunk automatically put the received data into the output queue when forwarding is enabled on the receiving UF?

0 Karma
1 Solution

esix_splunk
Splunk Employee

Cutting down the proverbial noise here, and adding some facts to the ever-present debate about using a UF vs. an HF, and about using a UF as an intermediate forwarder (IF tier).

First, yes, UFs can pass data to other UFs (and those to other UFs, and so on) daisy-chained all the way to the indexing tier. However, in most cases this isn't the ideal architecture, because it funnels many streams into one and can hurt event spread / balance across indexers.

With that said, many customers use intermediate tiers because of security or network limitations / restrictions imposed by their organizations. Focusing on the question here: yes, you can forward from a UF -> UF -> indexer. The setup is, as mentioned, a Splunk TCP input on the "intermediate" UF (this is what the outputs on the far-left sending UF point to), and the middle (intermediate) UF also needs its own outputs. Is this data Splunk-cooked? Technically, it's half-baked: metadata is added to the stream, but the forwarder is not fully aware of the events.
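For illustration, here is a minimal sketch of that layout in .conf terms. The port number, hostnames, and group names are assumptions for the example, not from this thread:

# Far-left sending UF - outputs.conf (points at the intermediate UF instead of the indexers)
[tcpout:intermediate]
server = intermediate-uf.example.com:9997

# Intermediate UF - inputs.conf (Splunk-to-Splunk receiver for the sending UF)
[splunktcp://9997]
disabled = 0

# Intermediate UF - outputs.conf (relays everything on to the indexers)
[tcpout:indexers]
server = idx1.example.com:9997, idx2.example.com:9997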

Now I'm not going to dig into the differences between a UF and an HF here, beyond saying the UF only has the input / output queues, whereas the HF has the full set of processing queues { Parsing -> Merging -> Typing -> Index }. This means that when you don't need to parse / filter, or don't need the GUI, the UF will stream much faster.

More reading : https://www.splunk.com/blog/2016/12/12/universal-or-heavy-that-is-the-question.html

ankithreddy777
Contributor

Hi esix,
Thank you for the response. We need to send uncooked data from the far-left UF to the intermediate UF, right? That way the intermediate UF collects the data via a TCP input and sends it on to the indexers, where it will now be half-baked.
So the intermediate forwarder does not know which source file the first UF picked the data up from, right?

0 Karma

esix_splunk
Splunk Employee

First, UFs aren't capable of sending data unbaked (remember, it's not fully cooked either); you have to use an HF if you want that capability.

The UF will add metadata { host, source, etc. } to the event chunks and pass them to the next host in line. It expects to send to a splunktcp receiver.

The key point here is that the intermediate is not a plain tcp receiver, it's a splunktcp receiver (I believe the GUI has it under Forwarding and Receiving: http://docs.splunk.com/Documentation/Splunk/7.2.1/Forwarding/Enableareceiver ).

If you don't configure it as a Splunk receiver, it won't work.
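To make that concrete, a hedged sketch of the two kinds of inputs (the port numbers are assumptions):

# inputs.conf on the intermediate forwarder

# Correct for UF-to-UF traffic: a Splunk-to-Splunk (splunktcp) receiver
[splunktcp://9997]
disabled = 0

# Not what you want here: a plain TCP input is for raw network senders (e.g. syslog), not another forwarder
# [tcp://9514]
# sourcetype = syslog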

0 Karma

tiagofbmm
Influencer

You can still send raw data to 3rd-party systems from the UF with a tcpout configuration.
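For example, a minimal outputs.conf sketch on the UF (the group name and destination are assumptions):

# Send raw (uncooked) data to a third-party TCP listener
[tcpout:thirdparty]
server = syslog-collector.example.com:514
sendCookedData = false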

0 Karma

kmorris_splunk
Splunk Employee

The data coming from a Universal Forwarder would not be considered cooked. Cooked data would come out of a Heavy Forwarder and would be much larger than the original data. Heavy Forwarders used to be more commonly used as intermediate forwarders, but it is better to use a Universal Forwarder as you have described.

This is the commonly recommended way of limiting the egress points to Splunk Cloud. I believe the rule of thumb is 2 pipelines per indexer (see the docs on multiple processing pipelines: http://docs.splunk.com/Documentation/Forwarder/7.2.1/Forwarder/Configureaforwardertohandlemultiplepi...). This could be 2 forwarders per indexer, or one forwarder with multiple pipelines.
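If you go the multiple-pipelines route, the relevant setting lives in server.conf on the forwarder. A sketch, with the value 2 taken from the rule of thumb above (validate it against the linked docs):

# server.conf on the intermediate forwarder - run two independent ingestion pipelines
[general]
parallelIngestionPipelines = 2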

Here are the docs for setting up an intermediate forwarder:

http://docs.splunk.com/Documentation/Forwarder/7.2.1/Forwarder/Configureanintermediateforwarder

0 Karma

Richfez
SplunkTrust

If you haven't, please see the docs @kmorris has supplied - this is documented and it works. Don't worry about 'cooked' vs. 'not cooked'; it's a technical detail that isn't important here. Just know that UFs can be IFs. The only difference is the outputs configuration.

0 Karma

ankithreddy777
Contributor

We have a requirement to use a UF on one server to send data to the cloud. But for security reasons we need to forward to another UF/HF which in turn sends the data to the cloud. We are opting out of the HF because it parses the data, resulting in higher traffic to the cloud.

0 Karma

richgalloway
SplunkTrust

Why do you need intermediate UFs?

---
If this reply helps you, Karma would be appreciated.
0 Karma