How can I set up failover for a Splunk Heavy Forwarder monitoring a folder that contains critical logs?

vasanthmss
Motivator

Hi All,

A Splunk Heavy Forwarder is monitoring a folder that contains critical logs. How can I set up forwarder failover?

  1. Is it possible to monitor a shared folder with two or more Splunk forwarders?
  2. The secondary should not send duplicate records while the primary is active.
  3. The secondary should become active when the primary goes down.

Thanks!!!!!!!

V
Solution

Drainy
Champion

The other answer misses the potential issue of a forwarder re-forwarding data that has already been seen.

In these sorts of setups you need to take a step back and consider the following:

1) If these logs are critical, consider how they are distributed between systems. A common setup is to use a load balancer to split syslog between two servers; load balancers can generally probe a port to check that the service is available, so if the forwarder goes offline all data is sent to the secondary server.
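
As an illustration, here is a minimal HAProxy sketch of that active/backup pattern. The IP addresses are hypothetical, and it assumes syslog is sent over TCP port 514 (the health check is a plain TCP connect; UDP syslog cannot be checked this way):

    # haproxy.cfg (sketch): all syslog traffic goes to the primary
    # collector until its health check fails, then to the backup.
    frontend syslog_in
        bind *:514
        mode tcp
        default_backend syslog_collectors

    backend syslog_collectors
        mode tcp
        server primary 10.0.0.11:514 check
        server backup  10.0.0.12:514 check backup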

2) Again, if they really are critical, you could always set up two forwarders sending to two different indexes on the same indexer. This way you can maintain a primary and a secondary copy.
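
A rough sketch of what that could look like in inputs.conf on each heavy forwarder (the monitored path and index names here are made up, and both indexes must already exist on the indexer):

    # Forwarder A ($SPLUNK_HOME/etc/system/local/inputs.conf)
    [monitor:///mnt/critical_logs]
    index = critical_primary
    sourcetype = critical_logs

    # Forwarder B uses the same monitor stanza, but with:
    # index = critical_secondary

Searches would normally run against the primary index, keeping the secondary as the fallback copy.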

3) No service I have ever seen runs at 100%, or even has 100% as a reasonable SLA. Downtime should be expected at some point, and normal procedures for process monitoring and for handling a forwarder going offline should already be in place. Some monitoring tools can also be configured to automatically restart a failed process. Beyond that, I would simply accept it as a best effort, short of scripting some hideous-to-maintain piece of custom sticky-plaster work 🙂
Bear in mind that the forwarder will just pick up where it left off once it's restarted.
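
As a sketch of that auto-restart idea, a systemd drop-in can restart splunkd on failure. This assumes Splunk was enabled for systemd boot-start; the unit name (Splunkd.service here) can differ between installs:

    # /etc/systemd/system/Splunkd.service.d/restart.conf
    [Service]
    Restart=on-failure
    RestartSec=30

    # Apply with: systemctl daemon-reload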


kml_uvce
Builder

You can send the file data via syslog, a universal forwarder, or any other method to both heavy forwarders, but keep one heavy forwarder down and the other up; when the active forwarder goes down, bring the standby forwarder up.
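
As a rough sketch, that manual swap amounts to the following, run on the standby host (assuming a default /opt/splunk install path):

    # when the primary heavy forwarder fails, start the standby:
    /opt/splunk/bin/splunk start

    # once the primary is healthy again, stop the standby:
    /opt/splunk/bin/splunk stop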


vasanthmss
Motivator

Bringing forwarders up and down would be manual, though? I was expecting some automatic configuration.

V