Getting Data In

Why is our Heavy forwarder not receiving (or forwarding)?

fatsug
Contributor

Hello community

In our distributed environment we have a few heavy forwarders set up to deal with zone boundaries and the like. Silly me, I assumed these were all configured and humming along, but it turned out that not a single one of them was actually being used.

I have looked through the manual, as well as the forum here, but I am still somewhat confused about the setup and configuration needed. So I'll take this step by step.

We have a universal forwarder set up on a Linux machine to collect some sys/OS logs, plus a file monitor for an application log. Now, the UF connects to the deployment server and fetches its configuration, so far so good. But nothing shows up on the indexers and/or search heads.

First of all, I noticed that "Receive data" on the HF was empty. I assumed there should be a port listed here, so I added the standard port. After this, the server could connect to the HF with curl, so this seemed like a fantastic start. However, still no logs.
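For reference, enabling receiving on a heavy forwarder usually comes down to a splunktcp input stanza like the following (a minimal sketch; 9997 is the conventional receiving port, adjust to your environment):

```
# inputs.conf on the heavy forwarder
# (e.g. $SPLUNK_HOME/etc/system/local/inputs.conf)
# Listen for Splunk-to-Splunk traffic from universal forwarders
[splunktcp://9997]
disabled = 0
```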

The local splunkd log in the UF shows:

07-08-2022 13:36:07.718 +0200 ERROR TcpOutputFd [4105329 TcpOutEloop] - Connection to host=<ip>:<port> failed

07-08-2022 13:36:07.719 +0200 ERROR TcpOutputFd [4105329 TcpOutEloop] - Connection to host=<ip>:<port> failed

So traffic is allowed, yet the UF still cannot connect to the HF.

From what I can tell from other threads, I also need to have the same apps that are deployed on the UF installed on the HF? Or am I misinterpreting this? Could this explain the failed connections? I have the inputs correct on the UF, and I have outputs.conf pointing at the HF. The HF sends _internal to the indexers, so that seems OK. It is just not accepting connections from the UF.
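For context, an outputs.conf on a UF pointing at a heavy forwarder typically looks roughly like this (a sketch; the group name and host are placeholders, not the actual values from this environment):

```
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
# placeholder; the real file uses the HF's address and receiving port
server = <hf-hostname>:9997
```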

What exactly do I need to have on the HF so that logs can be "redirected" from the UF to the indexers?

1 Solution

PickleRick
SplunkTrust

You are mixing different things. If I remember correctly, "Forwarded inputs" is the option responsible for deploying inputs to forwarders by the deployment server and has nothing to do with connectivity between the UF and HF.

In your case for the UF-HF connection to work you must have:

1) properly defined input on HF

2) properly defined output on UF

3) connectivity between UF and HF

Deployment server is a completely different cup of tea.


gcusello
SplunkTrust

Hi @fatsug,

you have one main issue: you have to enable receiving.

If you have an HF that works as a concentrator of logs from other forwarders (HFs or UFs), you have to enable it to receive data on a port (by default 9997).

If you also use it as a syslog receiver, you also have to enable network inputs on it to receive those logs.
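A syslog-style network input on the HF would look something like this (a sketch; port 514 and the sourcetype are assumptions, match them to your actual syslog sources):

```
# inputs.conf on the heavy forwarder -- plain network input for syslog
[udp://514]
sourcetype = syslog
connection_host = ip
```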

In addition, you have to configure forwarding on it, to send logs to the indexers.

About apps, it depends on what you do with your logs: if you need to elaborate them (parsing), you need the same add-ons that you have on the indexers, because parsing is done on HFs.

About management, you can manage them with the deployment server like any other forwarder.

The message from the UFs you shared says that the UFs don't have any destination to send their logs to, because you didn't enable the receiving port on the HF.

Ciao.

Giuseppe

fatsug
Contributor

Hello @gcusello 

Thank you for the feedback!

Yup, the receiving option was not present. I enabled port 9997, and then a curl check showed that I was able to connect from the server to the heavy forwarder.

Forwarding was already set up; I see indexers configured on the heavy forwarder, and the _internal log from the heavy forwarder is being indexed and is searchable.

This is where my mind breaks a little. I can connect with curl/telnet to the HF from the server with the UF. The HF can send logs to the indexers, as I can see the HF's _internal log in search. Why can't the UF connect to the HF and send logs?

The UF connects to the DS and pulls down its config. Forwarding is enabled on the HF, receiving is enabled on the HF. The HF sends logs to the indexers, yet the UF is not allowed to connect to the HF.
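For reference, the UF's deployment-server connection is typically defined like this (a sketch with a placeholder host; 8089 is the default management port):

```
# deploymentclient.conf on the universal forwarder
[target-broker:deploymentServer]
targetUri = <deployment-server>:8089
```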

One thing that stumps me is that "Forwarded Inputs" shows nothing present. Is this something I need to set up? Because in my mind, once the UF sends logs to the HF, it would just relay the information to the indexers.

// Gustaf


gcusello
SplunkTrust

Hi @fatsug,

there's something that I don't understand:

when you enabled receiving on the HF, did you start to receive logs?

when you connect by telnet to the HF from the UF, you are using port 9997, I suppose?

then, are you sure that you see the _internal logs of the HF on the HF itself, or can you see them on the indexers/search head?

If you forward all logs from the HF and you don't have local indexing, you cannot see anything on the HF itself.
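This behavior is governed by the indexAndForward setting in outputs.conf on the HF (a sketch; the group name and indexer hosts are placeholders):

```
# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers
# When false (the default), events are only forwarded, not indexed locally,
# so searches run on the HF itself return nothing.
indexAndForward = false

[tcpout:primary_indexers]
server = <indexer1>:9997, <indexer2>:9997
```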

About forwarded inputs, forget it.

The check to do is: can you see the UFs' logs on the IDX/SH?

Can you share the outputs.conf of a UF?

Ciao.

Giuseppe

fatsug
Contributor

Hello again @gcusello 

No, not really.

So, I was receiving _internal from the HF. Nothing gets indexed/stored on the HF (only forwarded). I was unable to connect to the HF from the server with the UF, though our network admins assured me everything was open.

I noticed that there were no open ports on the HF. Opening a port allowed me to connect with nc/telnet/curl, but the UF still bounced. Only "Splunk logs" from the HF reached the indexers; nothing else went through.

So, the only way forward was a painful file-by-file diff. While doing this I noticed that a particular input file was missing; this file had the port info as well as some SSL configuration, likely what made the UF bounce.
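An SSL-enabled receiving input of the kind described would look roughly like this (a sketch; the certificate path and password are placeholders, not the actual values from the missing file):

```
# inputs.conf on the heavy forwarder -- SSL-enabled receiving port
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem   # placeholder path
sslPassword = <certificate password>            # placeholder
requireClientCert = false
```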

After adding the HF to the appropriate server class on the DS, the HF accepted the connection from the UF and happily forwarded whatever logs the UF was sending.
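On the deployment server, adding the HF to a server class is done in serverclass.conf, roughly like this (a sketch; the class name, hostname, and app name are hypothetical):

```
# serverclass.conf on the deployment server
[serverClass:heavy_forwarders]
whitelist.0 = <hf-hostname>    # placeholder

# hypothetical app carrying the receiving-port/SSL inputs.conf
[serverClass:heavy_forwarders:app:hf_receiving_inputs]
restartSplunkd = true
```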

Thank you again for your help and input, much appreciated.


fatsug
Contributor

Or maybe "Forwarded inputs" is not what I think it is.



fatsug
Contributor

Hello @PickleRick 

It was the "inputs" on the HF that caused the problem.

At some point, the HF was set up but was missing a server class with an essential input. Once I found the "diff", added the server class to the HF, and grabbed a cup of coffee, logs started flowing!

Thank you very much for your help and input.
