Deployment Architecture

Horizontal scaling using load balancing with the universal forwarder

yuwtennis
Communicator

Hi!

I would like to ask a question about the search head and load balancing.

In the following environment:
1 search head
2 peer nodes (indexers)
1 forwarder

                    search head
      peer node1                     peer node2
                    forwarder

We are considering setting up forwarder load balancing across peer nodes 1 and 2.
If peer node 1 goes down, I believe the forwarder will detect the outage
and start sending to peer node 2.

Given this setup, I have three questions:

  1. Can the forwarder automatically resume the connection to peer node 1 once it comes back up?
  2. Does the search head detect the outage and search only peer node 2?
  3. Considering data reliability, would it be recommended to set up replication between the peer nodes, or to set up shared storage between them?
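For context, load balancing on the forwarder side is configured in outputs.conf. A minimal sketch, assuming the peers listen on the default receiving port 9997 (the hostnames here are placeholders):

```ini
# outputs.conf on the forwarder -- hostnames are placeholders
[tcpout]
defaultGroup = indexer_lb

[tcpout:indexer_lb]
# Listing multiple receivers in one target group enables
# automatic load balancing across them.
server = peer1.example.com:9997, peer2.example.com:9997
```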

Thank you for reading.

Yu


kristian_kolb
Ultra Champion

1.) No, well, the load balancing works so that the forwarder will change node every now and then (30 second intervals I believe). Of course, if a node is down, it will keep sending to the working one, but it will try to change nodes. This means that your data will be spread out over your existing indexers.
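The switching interval described above is tunable in outputs.conf; a sketch, with the same placeholder hostnames as before:

```ini
# outputs.conf on the forwarder -- hostnames are placeholders
[tcpout:indexer_lb]
server = peer1.example.com:9997, peer2.example.com:9997
# How often (in seconds) the forwarder picks a new receiver
# from the group; 30 is the default.
autoLBFrequency = 30
```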

2.) When you search, the search head will send the query to all peer nodes, and each peer node will return 'its' results. These are then processed further on the search head, e.g. to draw a pie chart, send an alert or whatever. So this also means that if an indexer is down, you will have incomplete search results until all indexers are back up again.
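On the search-head side, the set of peers to query is configured in distsearch.conf (peers can also be added through the UI). A sketch, assuming the peers' management port is the default 8089:

```ini
# distsearch.conf on the search head -- hostnames are placeholders
[distributedSearch]
# All search peers the search head fans queries out to
servers = https://peer1.example.com:8089, https://peer2.example.com:8089
```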

3.) Index replication will basically eliminate the problem outlined in 2.) above. When the forwarder is sending events to node 1, node 1 will send them on to node 2, and vice versa. However, this requires you to set up a so-called cluster, which requires another machine to act as the cluster master.
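The cluster master and peers are configured in server.conf. A minimal sketch for a two-peer cluster with one spare copy of each bucket; hostnames and the shared key are placeholders, and note that newer Splunk releases use "manager"/"peer" terminology instead of "master"/"slave":

```ini
# server.conf on the machine acting as cluster master
[clustering]
mode = master
replication_factor = 2
search_factor = 2

# server.conf on each peer node (indexer)
[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = changeme

[replication_port://9887]
```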

Hope this helps,

Kristian



yuwtennis
Communicator

Hello Kristian.

Thank you for the reply.
Your answer is very useful.

Thanks,
Yu
