Getting Data In

Send logs from UF to HF

Nawab
Path Finder

Hi,

 

I am trying to configure the UF installed on Windows machines to send logs to a HF, and then have the HF forward these logs to the indexer.

 

I found some existing questions, but they were mostly very high level.

 

If someone can explain how it will work, that would be great.

1 Solution

gcusello
SplunkTrust

Hi @Nawab,

the architecture is very simple: at least two HFs work as concentrators that receive logs from the UFs and forward them to the Indexers.

This is a best practice if you have to send logs to Splunk Cloud from an on-premise network or if you have a segregated network and you don't want to open many connections between UFs and IDXs.

Otherwise, I always prefer to directly send logs from UFs to IDXs.

The approach of passing through HFs can serve another purpose: delegating the parsing work to machines other than the IDXs to reduce their load. This makes sense only if the IDXs are overloaded, and in that case you have to give the HFs more resources (CPUs).

About configuration: in the UFs' outputs.conf, set the HFs as the destination instead of the IDXs; the HFs must be configured as receivers on port 9997 for the UFs and as forwarders (again on port 9997) to the IDXs.
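For example, a minimal sketch of the three files involved (hostnames and group names are placeholders, not from the original post):

    # On each UF: $SPLUNK_HOME/etc/system/local/outputs.conf
    [tcpout]
    defaultGroup = hf_group

    [tcpout:hf_group]
    # Listing both HFs enables automatic load balancing between them
    server = hf1.example.com:9997, hf2.example.com:9997

    # On each HF: inputs.conf - listen for traffic from the UFs
    [splunktcp://9997]
    disabled = 0

    # On each HF: outputs.conf - forward the parsed data on to the indexers
    [tcpout]
    defaultGroup = idx_group

    [tcpout:idx_group]
    server = idx1.example.com:9997, idx2.example.com:9997

Restart Splunk on each machine after the change; on the HFs you can alternatively enable receiving from Settings > Forwarding and receiving in Splunk Web.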

On the HFs you can apply a Forwarder license so they don't consume your paid license.

Only one attention point: don't use only one HF to concentrate logs, because that way you have a Single Point of Failure.

Ciao.

Giuseppe


Nawab
Path Finder

Thanks for your response. It solves the issue.



datadevops
Path Finder

Hi there,

Understanding the Workflow:

  • Universal Forwarder (UF):
    • Installed on Windows machines.
    • Collects logs from various sources on the machine (e.g., Windows event logs, applications, files).
    • Forwards the collected logs to the Heavy Forwarder.
  • Heavy Forwarder (HF):
    • Acts as a central collection point for logs from multiple UFs.
    • Can perform filtering, transformation, and load balancing before forwarding logs to indexers (see the filtering sketch after this list).
    • Often used for:
      • Reducing network traffic to indexers by filtering low-priority logs.
      • Offloading log processing from resource-constrained UFs.
      • Providing redundancy and failover for log forwarding.
  • Indexer:
    • Stores and indexes the forwarded logs, making them searchable and analyzable in Splunk.
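As an illustration of the filtering point above, here is a sketch of how an HF can drop a noisy event before it reaches the indexers; the sourcetype and EventCode are hypothetical examples, not from this thread:

    # On the HF: props.conf - attach a filtering transform to a sourcetype
    [WinEventLog:Security]
    TRANSFORMS-drop_noise = drop_low_priority

    # On the HF: transforms.conf
    [drop_low_priority]
    # Route matching events to the nullQueue so they are discarded before indexing
    REGEX = EventCode=4662
    DEST_KEY = queue
    FORMAT = nullQueue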

Tips:

  • Consider using deployment servers to automate Splunk UF configuration on Windows machines (a minimal sketch follows after these tips).
  • Leverage distributed search and indexes for efficient searching across geographically dispersed data.
  • Regularly update Splunk software and configurations to maintain security and performance.
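For the deployment server tip, a minimal sketch, assuming a dedicated deployment server and a placeholder app name (server names and class names are illustrative):

    # On each UF: deploymentclient.conf - point the UF at the deployment server
    [deployment-client]

    [target-broker:deploymentServer]
    targetUri = ds.example.com:8089

    # On the deployment server: serverclass.conf - push an app containing
    # the outputs.conf shown earlier to every Windows UF
    [serverClass:windows_ufs]
    whitelist.0 = *
    machineTypesFilter = windows-x64

    [serverClass:windows_ufs:app:outputs_to_hf]
    restartSplunkd = true
    stateOnClient = enabled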

~ If the reply helps, a Karma upvote would be appreciated 🙂
