Getting Data In

Intermediate forwarder

splunkcol
Contributor

I need to implement Splunk, but the client does not want the Windows and Linux sources to send their logs directly to the indexer; they want an intermediate server to collect the logs from all sources: syslog, Windows, Linux, and databases.

I once saw a video that mentioned an intermediate forwarder built on a Heavy Forwarder.

Is this possible?

For the Windows agent, do I just enter the IP of the HF in the installation wizard?

For the Linux agent, do I configure outputs.conf with the IP of the HF?

What other considerations should I keep in mind?

1 Solution

gcusello
Legend

Hi @splunkcol,

Yes, you can configure an intermediate Heavy Forwarder to work as a concentrator that sends logs on to the indexers; this is the usual architecture with Splunk Cloud, but it can also be used with Splunk on-premises.

First, you need at least two Heavy Forwarders (not one), to avoid a single point of failure in your architecture.
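As a rough sketch, each HF receives forwarded data on port 9997 and relays it to the indexers via its own outputs.conf (all hostnames below are placeholders for your environment):

```
# inputs.conf on each Heavy Forwarder – listen for forwarded data
[splunktcp://9997]

# outputs.conf on each Heavy Forwarder – relay to the indexer tier
[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

Listing several indexers in one `server` line lets the HF auto-load-balance across them.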

Then, you have to divide your inputs into three classes:

  • logs from servers (Windows or Linux, it's the same),
  • syslog,
  • databases.

For the first class, start by determining whether or not you can install an agent (Universal Forwarder) on the target server.

If you can, configure the Universal Forwarders to send logs to the Heavy Forwarders; Splunk then provides load balancing and failover between them.
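On each Universal Forwarder this only takes a minimal outputs.conf pointing at both HFs (hostnames are placeholders); the UF alternates between the listed servers and fails over automatically:

```
# outputs.conf on each Universal Forwarder
[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997
```

This answers your Linux question directly, and the Windows installer wizard writes the equivalent configuration when you enter the HF addresses as the receiving indexer.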

If you cannot use a Universal Forwarder, you have to use:

  • WMI to collect logs from Windows servers,
  • syslog to collect logs from Linux servers.
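For the WMI path, a Windows-based HF polls the remote machines via a wmi.conf stanza along these lines (server names, interval, and log list are illustrative):

```
# wmi.conf on a Windows Heavy Forwarder
[WMI:AppAndSys]
server = winsrv01, winsrv02
interval = 10
event_log_file = Application, System
disabled = 0
```

Remember this requires the HF to run as a domain account with rights on the target servers, which is one of the drawbacks mentioned below.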

My hint is: always try to use the Universal Forwarder, for several reasons:

  • it is easy to manage,
  • it is more secure (WMI requires domain credentials),
  • it is more efficient (the UF encrypts and compresses packets before sending them to the HF),
  • no data is lost (there is a local cache for when the connection to the indexers is blocked).

For syslog, you also need (in addition to the two Heavy Forwarders) a load balancer to distribute the load between the HFs and handle failover; if you don't have one, you can use a DNS policy.
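On each HF, the syslog listener itself is just a network input in inputs.conf; a minimal sketch (port and sourcetype as commonly used, adjust to your devices):

```
# inputs.conf on each Heavy Forwarder – receive syslog
[udp://514]
sourcetype = syslog
connection_host = ip
```

The load balancer (or DNS policy) sits in front of this port on both HFs so that sources keep a working destination if one HF goes down.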

For databases, if you have to take logs from files, you can use the same UF-based approach as for servers; if you have to take logs from tables, you have to install the DB Connect app on both HFs and configure it.
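For the file-based case, the UF just monitors the database's log files with an ordinary monitor stanza (path, sourcetype, and index below are examples for a MySQL host):

```
# inputs.conf on a UF installed on the database server
[monitor:///var/log/mysql/error.log]
sourcetype = mysqld_error
index = db_logs
```

For the table-based case there is no equivalent one-liner: DB Connect inputs are defined through the app's own UI on the HFs.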

Ciao.

Giuseppe
