We currently have 2 independent single-server instances of Splunk that we want to combine.
We plan on having 1 distributed Splunk environment consisting of 1 deployment server (a VM), 2 indexers, and 1 search head.
1) Do I merge the 2 indices from the old servers into one of the new indexers, or do I copy each of the 2 indices onto each of the new indexers?
2) Is an 8 GB quad-CPU VM adequate for a deployment server?
If you are not reusing your current indexers and are building a completely new environment, I would recommend bringing in the new indexers and having your search head reference both the old indexers and the new ones until the old data is no longer needed. If you need to actually move your indexes between servers, that will require a support ticket.
As long as your search head points at both indexers, your data stays available for however long you have retention set in indexes.conf. Just make sure that BOTH indexers have the same indexes.conf settings. Then you're set.
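For example, a custom index stanza would need to appear identically on both indexers. A minimal sketch, where the index name "web_logs" and the retention value are illustrative assumptions, not settings from this thread:

```ini
# indexes.conf -- deploy the identical copy to BOTH indexers.
# "web_logs" is a hypothetical index name.
[web_logs]
homePath   = $SPLUNK_DB/web_logs/db
coldPath   = $SPLUNK_DB/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb
# frozenTimePeriodInSecs controls retention; 15552000 s is roughly 180 days.
frozenTimePeriodInSecs = 15552000
```

If the two copies drift apart, events can age out on one indexer but not the other, and search results will differ depending on which indexer answered.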
I understand that my search head must point to both indexers and that I can control the data retention period via indexes.conf. What I am asking about is how to move the old data to the new environment.
I would not move or copy the indexes to the new indexers; just let each of them index its own data, and start forwarding in to both of them. The old data will age out over time, and the search head will be able to use both indexers for its searches.
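Forwarding to both indexers is typically done with an auto-load-balanced output group on each forwarder. A minimal sketch, where the hostnames and port 9997 are placeholders for your environment:

```ini
# outputs.conf on each forwarder -- auto load-balance across both indexers.
# idx1.example.com / idx2.example.com and port 9997 are assumed values.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

With this in place, new events are spread across the two indexers while the old data remains searchable on whichever server originally indexed it.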
As for the connection limit, I believe it's 1024 by default; I would recommend a much higher number depending on need.
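That 1024 default matches the usual per-process open-file limit on Linux (each TCP connection consumes a file descriptor). Assuming that is the limit in question, you can inspect and raise it like this; the 65536 value is an example, not a recommendation from this thread:

```shell
# Show the current soft limit on open file descriptors for this shell.
ulimit -n

# To raise it persistently for the splunk user, add lines like these to
# /etc/security/limits.conf (takes effect at the user's next login):
#   splunk  soft  nofile  65536
#   splunk  hard  nofile  65536

# The system-wide ceiling is the kernel parameter fs.file-max:
sysctl fs.file-max
```

After changing the limit, restart Splunk under the new session so the splunkd process inherits the higher value.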
We keep data for 6 months in the current Splunk environment; when we move to the consolidated 2-indexer/1-search-head environment, we would like to preserve the current (and old) data. Do we merge the current indexes from the 2 distinct environments?
In answer number 1, do you mean copying the old indexes to both new indexers?
In answer number 2, what is the connection limit called in technical terms, i.e., which Red Hat kernel parameter?