Getting Data In

Is it possible to use a heavy forwarder as an intermediate forwarder?

splunkcol
Contributor

I need to implement Splunk, but the client does not want the Windows and Linux sources to send logs directly to the indexer; they want an intermediate server to collect the logs from all sources: syslog, Windows, Linux, and databases.

I once saw a video where they mention an intermediate forwarder that runs as a heavy forwarder.

Is this possible?

For the Windows agent, do I just put the IP of the HF in the installation wizard?

For the Linux agent, do I configure outputs.conf with the IP of the HF?

What other considerations should I keep in mind?

1 Solution

gcusello
Esteemed Legend

Hi @splunkcol,

Yes, you can configure an intermediate forwarder ("heavy") to work as a concentrator that sends logs to the indexers; this is the usual architecture with Splunk Cloud, but it can also be used with Splunk on-premises.

First, you need at least two heavy forwarders (not one) to avoid a single point of failure in your architecture.

Then, you have to divide your inputs in three classes:

  • logs from servers (Windows or Linux, it's the same),
  • syslogs,
  • Databases.

For the first class, you should first determine whether you can install an agent (a Universal Forwarder) on the target server.

If you can, configure your Universal Forwarders to send logs to the heavy forwarders; Splunk itself provides load balancing and failover.
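As a minimal sketch of that UF-to-HF setup (hostnames and the port are placeholders, not from the original post), the UF side needs an outputs.conf listing both HFs, and each HF needs a receiving port opened in inputs.conf:

```ini
# outputs.conf on each Universal Forwarder
# (hf1/hf2 hostnames and port 9997 are assumptions for illustration)
[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997
# request acknowledgement so events are re-sent if an HF dies mid-stream
useACK = true

# inputs.conf on each heavy forwarder: open the receiving port
[splunktcp://9997]
```

With two entries in `server`, the UF automatically rotates between the HFs and skips an unreachable one, which is the built-in load balancing and failover mentioned above.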

If you cannot use the Universal Forwarder, you have to use:

  • WMI to collect logs from Windows servers,
  • syslog to collect logs from Linux servers.
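For the WMI option, a hedged sketch of what the collection stanza might look like: remote Windows event logs are pulled via wmi.conf on a domain-joined Windows heavy forwarder (server names below are hypothetical, and the account running Splunk needs WMI permissions on the targets):

```ini
# wmi.conf on a domain-joined Windows heavy forwarder
[settings]
initial_backoff = 5

# pull event logs from remote servers (names are placeholders)
[WMI:Remote_Event_Logs]
server = winsrv01, winsrv02
event_log_file = Application, System, Security
interval = 10
```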

My hint: always try to use the Universal Forwarder, for several reasons:

  • easy to manage,
  • more secure (WMI requires domain credentials),
  • more efficient (the UF compresses, and can encrypt, packets before sending them to the HF),
  • no lost data (there's a local cache for when the connection to the indexers is blocked).

For syslog, you have to use (in addition to the two heavy forwarders) a load balancer to distribute load between the HFs and manage failover; if you don't have one, you can use a DNS policy instead.
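A sketch of the receiving side: both heavy forwarders would listen for syslog behind the load balancer, for example with network inputs like these (the port is the conventional syslog choice, adjust as needed):

```ini
# inputs.conf on both heavy forwarders
[udp://514]
sourcetype = syslog
connection_host = ip

[tcp://514]
sourcetype = syslog
connection_host = ip
```

Note that binding port 514 requires root on Linux, so many sites listen on a high port such as 1514 and point the load balancer at that instead.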

For databases: if you have to take logs from files, you can use the same UF approach as for servers; if you have to take logs from tables, you have to install the DB Connect app on both HFs and configure it.

Ciao.

Giuseppe


ojay
Path Finder

@gcusello wrote: [the accepted answer, quoted above]


How do I set up and configure load balancing of the Universal Forwarders between the two heavy forwarders?

- Is a third-party load balancer advised, or can the Universal Forwarders manage it themselves?

- Is it advised that the heavy forwarders run in active/passive mode? What are the possibilities?


gcusello
Esteemed Legend

Hi @ojay,

you don't need an external load balancer; you can use the same auto load balancing mechanism that you use to send data directly to the indexers, even when sending to two heavy forwarders (for more info, see https://docs.splunk.com/Documentation/Splunk/8.2.6/Forwarding/Setuploadbalancingd).
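That built-in load balancing is time-based: the forwarder switches between the listed targets on a fixed interval and automatically skips a target that is unreachable. A sketch, with hypothetical hostnames:

```ini
# outputs.conf on the Universal Forwarder
[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997
# switch between the listed servers roughly every 30 seconds (the default)
autoLBFrequency = 30
```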

It's always good practice to have at least two HFs to avoid a single point of failure.

If you want, you could also use two Universal Forwarders as intermediates; this choice depends on whether you want to process data before the indexers (UFs cannot parse or transform data, HFs can).

About active/passive mode: the best approach is two HFs that are always active, distributing load in normal operation and handling failover during maintenance or failure conditions.

Ciao.

Giuseppe
