Getting Data In

Moving indexers from CentOS to RHEL 8

sbhatnagar88
Path Finder

Hi Folks,

 

Currently we have 4 physical indexers running on CentOS, but since CentOS is EOL, we plan to migrate the OS from CentOS to Red Hat on the same physical nodes.

The cluster master is a VM and is already running on Red Hat, so we will not be touching the CM.

What should the approach be here, and how should we plan this activity? Any high-level steps would be highly appreciated.

0 Karma

johnhuang
Motivator

You can convert/upgrade in place. Red Hat has a utility (Convert2RHEL) that converts CentOS 7 to RHEL in place, from where you can upgrade to RHEL 8. We've done this across thousands of CentOS servers with various configurations and apps and had no issues.
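For reference, a rough sketch of that in-place path (commands are illustrative, not a runbook: Convert2RHEL lands you on RHEL 7, and a separate Leapp upgrade then takes you to RHEL 8; run Red Hat's pre-checks and read their docs first):

  sudo yum -y install convert2rhel          # after enabling Red Hat's convert2rhel repo
  sudo convert2rhel analyze                 # read-only pre-conversion report
  sudo convert2rhel --org <ORG_ID> --activationkey <KEY>
  # reboot into RHEL 7, then do the in-place 7 -> 8 upgrade with Leapp
  sudo leapp preupgrade
  sudo leapp upgrade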

0 Karma

sbhatnagar88
Path Finder

Thanks @johnhuang ,

Is that utility applicable to physical servers as well?

0 Karma

isoutamo
SplunkTrust
0 Karma

sbhatnagar88
Path Finder

Thanks @isoutamo  

 

So, have I understood correctly that we do not need to restore the backup? As soon as we add the detached node back to the cluster, all configuration and data will be resynced as they were? Correct me if my understanding is wrong.

It's a bit confusing to me how the configuration files would be restored without restoring the backup.

Also, data might start replicating as soon as the sync begins, but it may take ages to complete considering 4 TB of data. What do you think?

 

Thanks

0 Karma

isoutamo
SplunkTrust

If you are doing a clean installation and want to reuse the old node's identity, you must restore at least the splunk/etc directory. But then there could be conflicts with buckets etc. For that reason I prefer to remove the node, clean it up, and create a new node which you then add to the cluster.

If you use that in-place CentOS-to-RHEL migration as @johnhuang described, and you have the possibility to do it offline, that is probably the safest option. If you cannot do it offline, then maybe you could put the CM into maintenance mode, update/migrate one node, let the cluster sync, put it into maintenance mode again, and continue with the next node, and so on. This could work, but it's best if you can test it in a lab/test environment first. I don't take any responsibility for these instructions as I haven't done this myself!
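A minimal sketch of that node-by-node flow, assuming the standard indexer clustering CLI (test it in a lab first):

  # On the cluster master: pause bucket fix-up while a peer is down
  splunk enable maintenance-mode
  # On the indexer being migrated: take it down cleanly
  splunk offline
  # ...migrate/reinstall the OS, bring Splunk back up on that peer...
  # Back on the cluster master: resume normal operation and check health
  splunk disable maintenance-mode
  splunk show cluster-status --verbose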

It's hard to estimate how long it will take, as that depends on your hardware, disk speed, network, etc.

0 Karma

sbhatnagar88
Path Finder

Thanks @gcusello 

 

Here are the high-level steps that come to my mind:

1. Take a backup of the Splunk mount point.

2. Stop the first physical node.

3. Wipe the existing OS and install/configure the new one.

4. Restore the backup.

 

Let me know if I am missing something.
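A rough command-level sketch of those steps (paths, filenames and the splunk user are examples, assuming $SPLUNK_HOME is /splunk; stopping Splunk before the backup keeps the copy consistent):

  # 1-2. Stop the node and back up the Splunk mount point
  /splunk/bin/splunk stop
  tar -czf /backup/idx1_splunk.tar.gz -C / splunk
  # 3. Reinstall the OS, recreating the same mount points and the splunk user
  # 4. Restore the backup and start Splunk again
  tar -xzf /backup/idx1_splunk.tar.gz -C /
  chown -R splunk:splunk /splunk
  /splunk/bin/splunk start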

0 Karma

isoutamo
SplunkTrust
Since this is a cluster, the master keeps track of which buckets are on which node. When you shut down a server, the master updates where the primaries, secondaries, etc. are, and then orders the other nodes to start fix-up so the search factor (SF) and replication factor (RF) are met again. Basically this means your backup is no longer fully consistent by the time you try to restore it, and the same will happen on every node you migrate.

I would probably do this by removing one node from the cluster, cleaning it, installing the OS and then Splunk, and simply adding it back as a new node. Of course this needs some additional space for indexes, but your planned way needs it too.

There are a couple of old posts about how to replace nodes in a cluster, where you can see the actual process.
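A hedged sketch of that replace-the-peer approach (GUIDs, URIs and the secret are placeholders; newer Splunk releases use -mode peer and -manager_uri instead of the older flags shown):

  # On the peer: shut down gracefully so the cluster can fix up buckets
  splunk offline --enforce-counts
  # On the cluster master: remove the peer once it reports as down
  splunk remove cluster-peers -peers <peer_guid>
  # On the rebuilt node: install the same Splunk version, then join it as a new peer
  splunk edit cluster-config -mode slave -master_uri https://<cm_host>:8089 -replication_port 9887 -secret <cluster_secret>
  splunk restart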
r. Ismo
0 Karma

gcusello
SplunkTrust

Hi @sbhatnagar88 ,

No, it's correct: on Linux you can tar and untar the full Splunk home directory.

Just remember to mount all the partitions at the same mount points as the original; if you cannot, you have to modify the $SPLUNK_DB parameter in splunk-launch.conf.
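For example, if the data had to live under a different mount point on the new OS, the change would look roughly like this in $SPLUNK_HOME/etc/splunk-launch.conf (paths are illustrative; any absolute paths or volume definitions in indexes.conf would need the same review):

  # default / original location
  # SPLUNK_DB=$SPLUNK_HOME/var/lib/splunk
  # only if the new mount point cannot match the original
  SPLUNK_DB=/new_data_mount/splunk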

Ciao.

Giuseppe

0 Karma

sbhatnagar88
Path Finder

Hi @gcusello ,

 

Thanks for the feedback. We are planning to keep exactly the same mount points.

1. In that case, if we take a backup of the /splunk directory and restore it after the new OS is built, will that restore all configuration and data exactly as on the original node?

2. Also, we plan to keep the data and OS on separate disks, format only the OS disk, and once the new OS is configured, reattach the data disk. Do you think this approach will work?

 

Thanks

Sushant

0 Karma

gcusello
SplunkTrust

Hi @sbhatnagar88 ,

The mount points matter so that the indexes.conf and splunk-launch.conf files point to the correct mount points (the same as in the old installation).

So, in this case you can restore the old $SPLUNK_HOME folder and your installation will run exactly as before.

It's usual to have separate file systems for the application (Splunk) and the data. What is your situation? What are your $SPLUNK_HOME and $SPLUNK_DB folders?
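One quick way to check, run as the splunk user on an indexer (assuming the environment is set up the usual way):

  echo $SPLUNK_HOME
  grep SPLUNK_DB $SPLUNK_HOME/etc/splunk-launch.conf
  df -h $SPLUNK_HOME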

Ciao.

Giuseppe

0 Karma

sbhatnagar88
Path Finder

Hi @gcusello ,

 

In our case:

$SPLUNK_HOME is /splunk

$SPLUNK_DB is /splunk/var/lib/splunk

Thanks

0 Karma

gcusello
SplunkTrust

Hi @sbhatnagar88 ,

In this case you have Splunk and its data on the same file system.

Usually Splunk data is stored on a different file system with its own mount point.

In your case my hint is to migrate the installation as it is; afterwards you can plan to move the data (indexes) to a different file system. I don't suggest doing both in one step.

In other words, the best practice is to have separate file systems for:

  • / and the operating system,
  • /var,
  • Splunk,
  • Splunk data.
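As a purely hypothetical example of such a layout in /etc/fstab (devices and paths are placeholders, not a recommendation for your hardware):

  /dev/sda2   /              xfs   defaults           0 0
  /dev/sda3   /var           xfs   defaults           0 0
  /dev/sdb1   /opt/splunk    xfs   defaults           0 0
  /dev/sdc1   /data/splunk   xfs   defaults,noatime   0 0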

Ciao.

Giuseppe

 

0 Karma

gcusello
SplunkTrust

Hi @sbhatnagar88 ,

I don't see any issue with this activity; plan it and migrate one server at a time.

Ciao.

Giuseppe

0 Karma

sbhatnagar88
Path Finder

Hi @gcusello ,

 

Thanks for the feedback.

I wanted to understand: when we change the OS on the first physical node and restore the backup, that node will be running on Red Hat while the other 3 are still running on CentOS. Will this node still be part of the cluster?

Can servers with different OSes be part of the same cluster?

Thanks

0 Karma

gcusello
SplunkTrust

Hi @sbhatnagar88 ,

It isn't a best practice to have different OSes, but it can work for a transitional period; Splunk, however, must be the same version on all nodes.
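A quick way to verify that, assuming Splunk is installed under /splunk on each node:

  /splunk/bin/splunk version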

Ciao.

Giuseppe

0 Karma