Deployment Architecture

How do I configure replication on a multisite indexer cluster on Splunk 6.5.2?

RobinHearch
New Member

Good evening,

At my company, I am trying to set up Splunk Enterprise in multisite mode.
I have gone through all the documentation on this subject, but I still do not understand how it works.

My architecture:
Site 1:
1 master (+ license)
4 indexers (index: sandbox)
1 search head
1 universal forwarder
Site 2:
4 indexers (index: sandbox)

My site_replication_factor is: origin:1, site1:1, site2:1, total:2 (I have tried several configurations, all of which failed).
My objective is to be tolerant to failures (of 1, 2, 3, or 4 nodes on the same site).
If I lose a node, replication should let me keep indexing and searching. It does not work for me.
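
For reference, here is roughly how I read my cluster master's server.conf (this is my own reconstruction from the docs, so the surrounding lines are assumptions; host names and the key are placeholders, and the last line is the factor I described above):

[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
pass4SymmKey = <cluster key>
site_replication_factor = origin:1,site1:1,site2:1,total:2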

My process:
T = 0: The forwarder on site1 sends the data (index: sandbox, on site1).
T = 1: The data arrives on node 1 of site1.
T = 3: Site1 replicates the data to node 2 of site2.
I watch the volume in /var/lib/splunk/sandbox (watch -n 1 du --max-depth=1).
The data is present on both node 1 (site1) and node 2 (site2).
T = 4: On the master interface ("index deployment" view), I see that node 1 (site1) has 1000 events.
But node 2 (site2) shows 0 events. Why, when /var/lib/splunk/sandbox on node 2 (site2) does contain data?
T = 5: I take node 1 (site1) offline. The volume in /var/lib/splunk/sandbox on node 2 (site2) shrinks and empties within seconds.
Then, in the master interface, the sandbox index shows 0 events. No replication.
T = 6: Node 1 is brought back up. It does not recover the 1000 events it contained before going offline.
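
And this is how I understood the peer (indexer) configuration from the docs; I may well have gotten something wrong, and the master host, port and key below are just placeholders:

[general]
# site2 on the indexers of site 2
site = site1

[replication_port://9887]

[clustering]
mode = slave
master_uri = https://<master-host>:8089
pass4SymmKey = <same key as on the master>
multisite = true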

I do not understand how it works.

Is it possible to replicate data and keep it?
Can you help me set up clustering mode, starting with the site_replication_factor?

Tkx
RH


adonio
Ultra Champion

Hello Robin Hearch,
please read this doc carefully and apply the process described there: http://docs.splunk.com/Documentation/Splunk/6.5.2/Indexer/Multisitearchitecture
your configuration in server.conf on the Cluster Master is as follows:

[clustering]
mode = master
# pass4SymmKey = password
multisite = true
available_sites = site1, site2
site_replication_factor = origin:2, total:4
site_search_factor = origin:1,  total:2

This configuration will keep 2 copies on your origin site (where the data first lands), meaning the original plus one copy on the same site.
It will also send 2 copies to the second site: with two sites, origin:2 and total:4 works out to 2 copies per site.
It will keep 1 searchable copy on each site, for search affinity.
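
If you want to state explicitly that each site must always hold two full copies (and one searchable copy), regardless of where the data originated, you can also spell the sites out. This is just a variant sketch, not something the configuration above requires:

site_replication_factor = origin:2, site1:2, site2:2, total:4
site_search_factor = origin:1, site1:1, site2:1, total:2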

One last thing: make sure that your indexers have the right configuration in server.conf:

[general]
site = site<n>

[clustering]
multisite = true
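
Also, do not forget the search head: in multisite mode it needs a site assignment and searchhead clustering mode pointing at the master. A minimal sketch (master host and key are placeholders):

[general]
site = site1

[clustering]
mode = searchhead
master_uri = https://<master-host>:8089
pass4SymmKey = <same key as on the master>
multisite = true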

Hope it helps
