Hi, I understand that it depends on the ingestion rate and the search patterns, so for the most part I'm happy with "it depends" @isoutamo 😉 The grey area for me is whether or not I should compensate for the increase in the number of buckets that may result from adjusting to 1-day buckets. Is there general guidance on changing the default values for maxHotBuckets and maxWarmDBCount?
We're in the process of setting maxDataSize because some hot buckets are growing too large. We only have hot and warm storage, and are still working our way towards having some sort of cold storage. As of today we control the size of the index with maxTotalDataSizeMB (i.e. max size per indexer, taking into consideration the number of replicas) and frozenTimePeriodInSecs, but the end result is that when buckets are frozen, huge chunks of data go away (like 30 days in some cases).

My doubt here is: should we mess with the maxHotBuckets and maxWarmDBCount settings, since we are going to have a lot of 1-day buckets instead of fat buckets that span multiple days? Or should we follow the mantra DON'T EDIT UNLESS YOU'RE TOLD TO?

Another question: for setting maxDataSize, is the method to take the ingestion-per-day number and divide it by the number of indexers in the cluster? Since the forwarders load-balance between all of the indexers, this seems the most reasonable approach to take?
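To make the division concrete, here is a sketch of how the arithmetic could land in an indexes.conf stanza. The index name and every number below are made-up assumptions for illustration, not a recommendation:

```ini
# indexes.conf (sketch; all figures are illustrative assumptions)
[example_index]
# Assume ~400 GB/day arrives for this index, load-balanced over 8 indexers:
# 400 GB / 8 = ~50 GB/day per indexer, so a ~50 GB hot bucket should roll
# roughly once a day.
maxDataSize = 51200                 # value is in MB; auto_high_volume (10 GB) is the other common choice
maxTotalDataSizeMB = 4608000        # ~4.5 TB cap per indexer for this index
frozenTimePeriodInSecs = 7776000    # 90 days, as an example retention
```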
In our setup we have a search head cluster with no search affinity (site0) and a multisite indexer cluster (site1/site2). Now it's time for some expansion, and although we already expanded the search head cluster, this is a first for the indexer cluster. The Search Tier uses the cluster master (CM) to discover the indexers. The Forwarding Tier uses indexerDiscovery, i.e. it also uses the cluster master (CM) to discover the indexers. The process to spawn a new indexer is pretty much automated by now, and from https://docs.splunk.com/Documentation/Splunk/8.0.4/Indexer/Addclusterpeer it is easy to understand why a rebalance may be required. The only thing that bothers me a bit is that on the forums there is general guidance to put the CM in maintenance mode (https://community.splunk.com/t5/Deployment-Architecture/Adding-a-new-indexer-to-the-indexer-cluster/td-p/298199). Any idea why it is recommended to put the CM in maintenance mode? Afaik maintenance mode only stops the bucket fix-up operations? Is there any other hidden operation that maintenance mode performs? What does maintenance mode do that makes for a better/safer procedure?
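For what it's worth, the sequence I've seen suggested looks roughly like the sketch below. This reflects my understanding rather than an official procedure, and the hostname, site, port, and secret are placeholders:

```shell
# On the cluster master: pause bucket fix-up before the new peer joins
splunk enable maintenance-mode
splunk show maintenance-mode      # confirm it took effect

# On the new indexer: join it to the cluster (values are placeholders)
splunk edit cluster-config -mode slave -site site1 \
    -master_uri https://cm.example.com:8089 \
    -replication_port 9887 -secret yourclusterkey
splunk restart

# Back on the cluster master, once the peer shows as Up:
splunk disable maintenance-mode
```

The rationale, as I understand it, is simply that maintenance mode keeps the CM from reacting to the transient state changes (peer restarts, bucket rolls) during the join, so no fix-up churn starts mid-procedure.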
Suppose we're setting up a multisite indexer cluster with 4 nodes in site1 and 3 nodes in site2:

[clustering]
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1, total:2
site_search_factor = origin:1, total:2

What happens if we lose, for instance, site2, given that all sites are non-explicit sites? According to my understanding of the documentation, the cluster fix-up process will "reserve" bucket copies in site1 in preparation for the return of the site2 peers, given that total minus explicit sites equals 2, i.e. "the search and replication factors are sufficiently large", as the documentation says:

For non-explicit sites, the cluster reserves one searchable copy if the total components of the site's search and replication factors are sufficiently large, after handling any explicit sites, to accommodate the copy. (If the search factor isn't sufficiently large but the replication factor is, the cluster reserves one non-searchable copy.)

Is my understanding of the documentation correct, or am I missing something? Is there any failover timer that could be configured so the cluster fix-up process gives some room for site2 to recover before the "reserve" bucket copies start to be created? Lastly, should we reserve some storage in site1 to accommodate an event where "reserve" bucket copies are created? Is there any golden number we could use for the amount of storage that should be reserved? Thanks in advance
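On the storage question, a back-of-the-envelope way to reason about it (my own assumption, not official Splunk guidance): with origin:1, total:2, each bucket has two copies split across the sites, so losing a site removes roughly one copy of everything, and fix-up in the surviving site recreates about one full copy's worth of data.

```python
# Back-of-the-envelope estimate of the extra storage the surviving site
# must absorb after a full site loss, under site_replication_factor
# origin:1, total:2. All numbers are illustrative assumptions.

def reserve_storage_gb(total_bucket_storage_gb: float,
                       replication_factor: int,
                       copies_lost_per_bucket: int = 1) -> float:
    """Storage needed in the surviving site to restore the lost copies.

    total_bucket_storage_gb: sum of all bucket copies across the cluster.
    replication_factor: total copies per bucket (the 'total' in the policy).
    copies_lost_per_bucket: copies that lived in the failed site (~1 here).
    """
    one_full_copy = total_bucket_storage_gb / replication_factor
    return one_full_copy * copies_lost_per_bucket

# Example: 20 TB of bucket copies cluster-wide, 2 copies per bucket ->
# the surviving site should have roughly 10 TB of fix-up headroom.
print(reserve_storage_gb(20_000, 2))  # 10000.0
```

In other words, a conservative rule of thumb under this policy would be headroom of about one full copy of your retained data in each site, since either site could end up holding the reserve copies.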
We're working on the setup of a new Splunk installation. As an intermediate step during the migration work we would like to point the old Indexer Cluster to the new License Master.
The problem we're facing is that in the old installation we're not using SSL for port 8089 communications, while in the new installation we are. To sum up, SSL is not configured on the client (the old Indexer Cluster) but is enabled on the new License Master.
After setting the master_uri in the [license] stanza to https://newlm.com:8089 (in /opt/splunk/etc/system/local/server.conf), the following messages started to pop up:
Failed to contact license master: reason='Unable to connect to license master=https://newlm.com:8089 Error connecting: SSL not configured on client
As a side note, the openssl output looks clean:

>openssl s_client -connect newlm.com:8089 -CAfile /opt/splunk/etc/auth/cacert.pem
Verify return code: 0 (ok)
Is there any way to set up this mixed environment? Could we possibly use SSL just for the communication with the License Master? Could these calls be "proxied" by a License Slave? What is the minimum setup to support this kind of communication? It would be a bummer if we had to set up the entire old installation for SSL just to contact the License Master!
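For context, my reading of the "SSL not configured on client" error (an assumption worth verifying against the docs for your version) is that it appears when enableSplunkdSSL is off on the calling side while the master_uri is https. If so, the minimal change on each old indexer might look like this:

```ini
# /opt/splunk/etc/system/local/server.conf on the old indexers
# (sketch based on my assumption above; test before rolling out)
[sslConfig]
# Allows splunkd to make HTTPS calls to the License Master. Note this
# also turns on SSL for this indexer's own port 8089, so other components
# talking to it over plain HTTP would be affected.
enableSplunkdSSL = true

[license]
master_uri = https://newlm.com:8089
```

The side effect in the comment is the catch: as far as I can tell there is no per-destination SSL toggle for the management port, which may be exactly the mixed-environment limitation you're running into.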
Thanks in advance.