Deployment Architecture

Horizontal scaling using load balancing with the universal forwarder

yuwtennis
Communicator

Hi!

I would like to ask a question about the search head and load balancing.

In the following environment:
1 search head
2 peer nodes (indexers)
1 forwarder

                    search head
      peer node1                     peer node2
                    forwarder

We are considering setting up forwarder load balancing across peer nodes 1 and 2.
Whenever peer node 1 goes down, I believe the forwarder will detect the outage
and start sending to peer node 2.

Under these conditions, I have three questions.

  1. Can the forwarder automatically resume the connection to peer node 1?
  2. Does the search head detect the outage and start searching only peer node 2?
  3. Considering data reliability, would it be recommended to set up replication between the peer nodes, or to set up shared storage between them?

Thank you for reading.

Yu

1 Solution

kristian_kolb
Ultra Champion

1.) No, not exactly: load balancing works so that the forwarder switches nodes every now and then (at 30-second intervals, I believe). Of course, if a node is down, it will keep sending to the working one, but it will keep trying to switch, so peer node 1 will be picked up again once it is back. This means that your data will be spread out over your available indexers.
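For reference, automatic load balancing is configured in outputs.conf on the forwarder. A minimal sketch (host names and receiving port are placeholders for your environment):

```ini
# outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Events are load-balanced across every server in this list
server = peer1.example.com:9997, peer2.example.com:9997
# How often (seconds) the forwarder switches to another indexer; 30 is the default
autoLBFrequency = 30
```

If a listed indexer is unreachable, the forwarder skips it and retries it on later rotations, which gives the resume behavior described above.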

2.) When you search, the search head sends the query to all peer nodes, and each peer node returns 'its' results. These are then processed further on the search head, e.g. to draw a pie chart, send an alert or whatever. This also means that if an indexer is down, you will have incomplete search results until all indexers are back up again.
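The search head learns about its peers from distsearch.conf. A minimal sketch, assuming the default management port 8089 and placeholder host names:

```ini
# distsearch.conf on the search head
[distributedSearch]
# Comma-separated list of search peers (indexers) to fan the query out to
servers = peer1.example.com:8089, peer2.example.com:8089
```

An unreachable peer is simply skipped for that search, which is why results are incomplete rather than the search failing outright.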

3.) Index replication will basically eliminate the problem outlined in 2.) above. When the forwarder is sending events to node 1, node 1 will forward them on to node 2, and vice versa. However, this requires you to set up a so-called cluster, which requires another machine to act as the cluster master.
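Index replication is enabled through the [clustering] stanza in server.conf. A sketch under the topology above (host names and the shared secret are placeholders; with two peers, a replication factor of 2 keeps a copy of every bucket on each node):

```ini
# server.conf on the cluster master (the separate machine)
[clustering]
mode = master
replication_factor = 2
search_factor = 2
pass4SymmKey = <your_cluster_secret>
```

```ini
# server.conf on each peer node (indexer)
[clustering]
mode = slave
master_uri = https://master.example.com:8089
pass4SymmKey = <your_cluster_secret>
```

The search head also joins the cluster (mode = searchhead) so it automatically discovers the peers instead of needing a static distsearch.conf list.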

Hope this helps,

Kristian


yuwtennis
Communicator

Hello Kristian.

Thank you for the reply.
Your answer is very useful.

Thanks,
Yu
