Splunk Enterprise

indexer cluster topology across two datacenters

noybin
Communicator

Hello,

I am implementing Splunk with:

1 Search Head
An indexer cluster with 2 peers
1 Master Node
X Heavy Forwarders

I have to deploy them across 2 datacenters.

What is the best way to distribute these components across the two datacenters?

Thank you very much.

jkat54
SplunkTrust

I think you need 2 indexers in each data center for an RF=2/SF=2 cluster to be truly HA in the case of a data center going down, and also 2 search heads in each. So here's how I would do it:

Data Center A (Site1):
2 search heads
1 cluster master / search head deployer / deployment server
2 peers (indexers)

Data Center B (Site2):
2 search heads
1 standby (always offline unless needed) cluster master / search head deployer / deployment server
2 peers (indexers)

Multi-site Cluster Enabled
site_replication_factor = origin:2, site1:2, site2:2, total:4
site_search_factor = origin:1, site1:1, site2:1, total:2

The site_replication_factor guarantees there are at least 2 copies of each bucket (some non-searchable, but recoverable) in each data center. Note that the search factor can never exceed the replication factor.

The site_search_factor guarantees there is at least 1 searchable copy of each bucket in each data center.

Your storage will be calculated as follows:
(Daily Ingestion Volume x 2 (SF) x 0.35 (Searchable Storage Ratio) + Daily Ingestion Volume x 4 (RF) x 0.15 (Raw Storage Ratio)) x Retention in Days = Total Storage Needed

Divide that by 4 to get the Total Storage Needed per Indexer.
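To make this concrete, here is a minimal sketch of the master node's server.conf for such a two-site layout (the site names, hostname, and pass4SymmKey value are placeholders, not a requirement):

```ini
# server.conf on the master node (values are illustrative placeholders)
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,site1:2,site2:2,total:4
site_search_factor = origin:1,site1:1,site2:1,total:2
pass4SymmKey = changeme
```

Each peer would declare its own site under [general] and point at this master with mode = slave and master_uri. With these factors, a daily volume D and a retention of R days, total storage comes out to roughly D x (2 x 0.35 + 4 x 0.15) x R = 1.3 x D x R, split across the 4 indexers.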

You will never achieve HA with the SHC and only 2 sites / data centers. This is because the SHC needs a majority vote to elect a new captain if the captain goes down. For example, if you have 4 search heads, 2 in site 1 and 2 in site 2, and site 2 goes down, you now have just 2 of the original 4 search heads in site 1, and they can't get a majority vote for who should be the captain. What you could do is put 3 search heads in site 1 and 2 search heads in site 2. Then, if site 2 goes offline and one of its search heads was the captain, there's still a majority of 3 search heads in site 1. Of course, you may not need this level of HA and may be OK with manually electing a captain via the CLI, in which case you could just put 2 in each data center.

As for the Cluster Master / Deployment Server / SHC Deployer... there's really not a super easy solution to make these HA.

You can put multiple deployment servers behind a load balancer and point all the forwarders to the load balancer VIP, but you can't do the same with the cluster master or the deployer without another layer of cloning/syncing (handled outside of Splunk's toolset). Typically VMware's vMotion is a good option for these devices.

Sorry to add this as an answer but we're running out of space for comments on the original question.

noybin
Communicator

Thanks for your reply.

The client wants HA on the indexing layer, not the search layer.

So we are creating an indexer cluster, and if possible a Heavy Forwarder cluster.

Is it possible to cluster the HFs?

Thank you very much

jkat54
SplunkTrust

No need to. In outputs.conf on each universal forwarder, you can list as many servers as you want and the software will load-balance between them all.

So say you have 50 universal forwarders each with the same outputs.conf that lists two heavy forwarders... then those heavy forwarders have all the indexers in their data center listed in their outputs.conf... this would give you HA on the heavy forwarder layer.

We typically recommend against heavy forwarders, however. Unless you're redacting the data, you can send from the universal forwarders directly to the indexers. If you're dealing with limited firewall openings in a highly secured environment, you can always go from universal forwarder to an intermediate universal forwarder and then to the indexers. And you'd achieve HA the same way... multiple destinations in outputs.conf.
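As a sketch, the outputs.conf on each universal forwarder could look like this (the group name, hostnames, and ports are placeholders):

```ini
# outputs.conf on each universal forwarder (hostnames/ports are placeholders)
[tcpout]
defaultGroup = my_heavy_forwarders

[tcpout:my_heavy_forwarders]
server = hf1.example.com:9997, hf2.example.com:9997
# rotate between destinations every 30 seconds (the default)
autoLBFrequency = 30
# request indexer acknowledgement to guard against in-flight data loss
useACK = true
```

The heavy forwarders would carry an equivalent outputs.conf listing the indexers in their data center.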

jkat54
SplunkTrust

Let me reiterate that the heavy forwarder is only needed in a few situations.

  1. If you're redacting data before sending it across different network segments. I know of several financial institutions that do this.

  2. If you want to index the data on the heavy forwarder and also forward it. I don't know anyone who does this.

  3. Sometimes they are used as data aggregation points. Such as a single device that forwards data from a bunch of universal forwarders to Splunk cloud. However, this can easily be achieved with a universal forwarder instead and shouldn't be done with a heavy forwarder unless number 1 or 2 above is also needed.

noybin
Communicator

Thanks.

The problem is that I don't have UFs, and I probably won't be able to install any.

How can I deal with it?

Thanks again

jkat54
SplunkTrust

How do you plan to bring the data in?

What types of data are you indexing?

jkat54
SplunkTrust

For example, to pull Windows event logs, you need a UF on the server... or you can pull them using a heavy forwarder with Splunk running as a service account that is an admin on all the Windows servers (not best practice).

If you want to ingest Linux logs, you'll need UFs...

The only scenario where you might not need universal forwarders is if you're just ingesting network device data, and even there the best practice is typically a UF on the syslog tier.

noybin
Communicator

I am going to ask if I am allowed to install UF then.

In that case, can I configure it to send events to one indexer and, only if that indexer is down, automatically start sending to the other?

Thank you very much!!

noybin
Communicator

The client told me that some of the devices sending events are F5 BIG-IPs, and it is not possible to install Universal Forwarders on them.

  1. In that case how can I achieve HA?

  2. Is it possible to configure a Heavy Forwarder to send events to one indexer and, ONLY IF that indexer is down, start sending the events to the other indexer?

Thank you very much.

s2_splunk
Splunk Employee

The best practice for ingesting syslog data into Splunk is to configure your network devices (F5s, etc.) to send to a pair of syslog-ng servers behind a load balancer, set up appropriate syslog policies to write the data to local files/directories based on sourcetype, and have universal forwarders process those log files. This gives you HA on the ingestion side.
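On the UFs reading those syslog-ng output files, a minimal monitor stanza in inputs.conf might look like this (the path, index, and sourcetype names are examples, not a fixed convention):

```ini
# inputs.conf on the universal forwarder (path/names are examples)
[monitor:///var/log/remote-syslog/*/f5.log]
index = network
sourcetype = f5:bigip:syslog
# derive the host field from the 4th path segment (the per-device subdirectory)
host_segment = 4
```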

Regarding #2: The only way to achieve this currently is to deploy a multi-site cluster and use indexer discovery and the site failover capability. There is currently no other way to configure a fallback target; the forwarder will loop through all indexers defined in outputs.conf (i.e., it load-balances).
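A sketch of what indexer discovery looks like on the forwarder side (the group name, master_uri, and pass4SymmKey are placeholders):

```ini
# outputs.conf on the forwarder, using indexer discovery (placeholder values)
[indexer_discovery:cluster1]
master_uri = https://master.example.com:8089
pass4SymmKey = changeme

[tcpout:cluster1_group]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_group
```

With this, the forwarder asks the cluster master for the current list of peers instead of relying on a static server list.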

noybin
Communicator

Thank you very much.

Can a multi-site cluster be deployed with only one Search Head that searches through both sites?

Thanks again

s2_splunk
Splunk Employee

A cluster search head is configured with the Cluster Master and will search all indexers known to the cluster, yes.

noybin
Communicator

I was talking about a single Search Head, not a cluster SH.

Thanks

s2_splunk
Splunk Employee

I was as well. I am not talking about a Search Head Cluster; I am referring to a SH that is used to search an indexer cluster (a.k.a. a cluster search head). Sorry for the confusion.

noybin
Communicator

Can you paste me an example of the universal forwarder inputs.conf for monitoring a directory and forwarding syslog on Linux systems, please?

s2_splunk
Splunk Employee

There is way too little information here to give a credible answer.
- What are your requirements that drive "I have to deploy them across 2 data centers"? HA? DR? What?
- What is your daily data ingest volume?
- What is the network latency between your data centers?
- What are the heavy forwarders for?

Note that an indexer cluster with just two peers cannot remain in a healthy state if you lose a single indexer; you need at least three indexers to maintain an RF=2/SF=2 cluster in a healthy state during a peer outage.

But like I said, it would help to understand your design goals and constraints better, before giving you any kind of advice.

noybin
Communicator
  1. HA
  2. 5-7 GB/day
  3. 0.5ms
  4. Event filtering.

I didn't understand what you said about: "Note that an indexer cluster with just two peers cannot remain in healthy state if you lose a single indexer, you need at least three indexers to maintain an RF=2/SF=2 cluster in healthy state during peer outage."

Thank you very much

s2_splunk
Splunk Employee
Splunk Employee

True HA cannot really be achieved with a single search head, so I am going to assume you need to ensure you can search your indexed data in case you have a data center outage.

The simplest option is a two-node cluster with one indexer in each data center, configured with RF=2/SF=2. This ensures that each indexer has a searchable copy of each bucket.
But that really only solves half of your problem, because you still need a failover plan for your search head and your cluster master node. Virtualization is your friend for these.
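A sketch of the peer-side server.conf for that two-node cluster (the replication port, master URI, and key are placeholders):

```ini
# server.conf on each indexer peer (placeholder values)
[replication_port://9887]

[clustering]
mode = slave
master_uri = https://master.example.com:8089
pass4SymmKey = changeme
```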

You don't need HFs to do event filtering; you can do that directly on the indexers just as well.
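Filtering on the indexers is done with props.conf/transforms.conf and the nullQueue. A minimal sketch, assuming a placeholder sourcetype and an example pattern to drop:

```ini
# props.conf on the indexers (sourcetype name is a placeholder)
[f5:bigip:syslog]
TRANSFORMS-dropnoise = drop_healthchecks

# transforms.conf on the indexers
[drop_healthchecks]
# events matching this regex are routed to the nullQueue, i.e. discarded
REGEX = health[-_ ]?check
DEST_KEY = queue
FORMAT = nullQueue
```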

noybin
Communicator

Thank you!

One last question.

Is it or is it not recommended to use Heavy Forwarders in this scenario?

I am thinking about buffering, and maybe filtering.

What is the best practice?

Thank you very much.

noybin
Communicator

Another one:

If I include a HF in each datacenter and an indexer goes down, can the HF automatically start sending the events to the indexer in the other datacenter?

That way I won't lose events when an indexer goes down in a datacenter.

Is that possible?
Is it recommended?

The problem with adding more indexers is that I will need to back up all of them, increasing the cost of the backup.

Thank you very much.
