Migration scenarios
IBM Cloud Hyper Protect Virtual Servers for VPC is based on the IBM Secure Execution for Linux technology and is provided as a compute option in IBM Virtual Private Cloud (VPC). By migrating your existing workloads to IBM Cloud Hyper Protect Virtual Server instances on VPC, you can take full advantage of the scalable, hardware-based workload isolation of Secure Execution technology and the flexibility of the VPC architecture. For more information, see Why migrate?.
Depending on how you provisioned and deployed your workloads in the IBM Cloud Hyper Protect Virtual Server instance, refer to the appropriate migration scenario in the following sections to deploy the same workloads to the Hyper Protect Virtual Server instance in VPC.
In the following context, the classic instance refers to the current IBM Cloud Hyper Protect Virtual Server instance that has your workloads, and the VPC instance refers to the IBM Cloud Hyper Protect Virtual Server instance on VPC that you will migrate to.
- The classic instance provisioned by using an IBM-provided image
- The classic instance provisioned by using your own image that does not need data migration
- The classic instance provisioned by using your own image that has database workloads
- The classic instance provisioned by using your own image that requires data migration and supports the data export or import feature
The classic instance provisioned by using an IBM-provided image
You provisioned a classic instance by using an IBM-provided image, you use the `/data` folder on the instance to store your data, and you have the private and public SSH key pair that is used to log in to the instance.
To migrate all your data and workloads to the VPC instance, complete the following steps:
- Ensure that the migration is done in a maintenance window. For more information, see IBM Cloud® Service Description.
- Prepare an Ubuntu 20.04 container image by using the `docker pull ubuntu:20.04` command, and create the Dockerfile for your workloads based on the Ubuntu 20.04 image.

  ```
  FROM ubuntu:20.04
  ...
  ```
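  As an illustration only, building and publishing the workload image might look like the following sketch; the registry, namespace, and tag are placeholders and not part of the original instructions.

  ```
  # Pull the base image and build the workload image from your Dockerfile.
  docker pull ubuntu:20.04
  docker build -t <registry>/<namespace>/my-workload:v1 .

  # Push the image to the container registry that your contract references.
  docker login <registry>
  docker push <registry>/<namespace>/my-workload:v1
  ```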
- Create the contract for your workloads. The contract is required to deploy the workloads to the VPC instance. For more information, see About the contract.
  - Pass the SSH public key that is used to deploy the workloads on the classic instance either in the `env` section of the contract or in the `Dockerfile` of the container image.
    - In the `env` section of the contract:

      ```
      env: |
        type: env
        ...
        env:
          "public-key": "your public key"
        ...
      ```

    - In the `Dockerfile`:

      ```
      ...
      ENV SSH_PUBLIC_KEY="ssh-rsa AAAA...Yro6PloQ..."
      ...
      ```

  - Create the `workload` section for the workloads accordingly, and note the path of the data volume, for example, `/mnt/data`.
- Create the VPC instance by using the container image and its contract. For more information, see Deploying a sample application on Hyper Protect Virtual Server for VPC. Note that the VPC instance must be accessible by using its floating IP address and the same SSH private key that you used to access the classic instance.
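  Before you copy any data, you can confirm that the VPC instance is reachable with the existing key pair. This is a minimal sketch; `<SSH_ENDPOINT>`, the floating IP address, and the key file name are placeholders.

  ```
  # Verify SSH access to the workloads on the VPC instance by using the floating IP and the existing private key.
  ssh -i ./private_key.pem -p <SSH_ENDPOINT> root@<VPC_INSTANCE_FLOATING_IP> 'echo connected'
  ```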
- Log in to the classic instance, and install `rsync` on the instance.

  ```
  apt-get update && apt-get install -y rsync
  ```
- Run the `rsync` command on the classic instance to copy all the files and folders from the classic instance to the VPC instance. Note that `SSH_ENDPOINT` is the port for the SSH connection to your workloads on the VPC instance.

  ```
  rsync -avz -e 'ssh -p <SSH_ENDPOINT> -i ./private_key.pem' /data root@<VPC_INSTANCE_FLOATING_IP>:/mnt/data
  ```
- Stop the workloads on the classic instance, and restart the VPC instance to check whether the workloads are running properly and all your data exists.
- If everything works on the VPC instance, you can delete the classic instance.
The classic instance provisioned by using your own image that does not need data migration
You provisioned a classic instance by using your own image and registration file, and you store the data on persistent storage, such as a block storage service.
To migrate the workloads to the VPC instance, complete the following steps:
- Create the contract for your workload. For more information, see About the contract.
  - The registration file must be provided as part of the `workload` section in the contract.
  - The other environment variables must be provided as part of the `env` section in the contract.
- Create the VPC instance by using your workload and contract. For more information, see Deploying a sample application on Hyper Protect Virtual Server for VPC.
- Check that your workloads on the VPC instance are running properly and that all your data exists.
- If everything works on the VPC instance, you can delete the classic instance.
The classic instance provisioned by using your own image that has database workloads
You provisioned a classic instance by using your own image and registration file, and your image contains database workloads. Also, you have no SSH access to the classic instance.
If your database workloads support live migration, you can create the VPC instance as described in the previous scenario, and then use live migration to move the data.
If your database workloads do not support live migration, complete the following steps to migrate the workloads to the VPC instance.
- Update the classic instance to support the `rsync` utility and to connect to the VPC environment.
  - Reserve a floating IP address in the VPC environment, and specify a port such as `873` for the `rsync` utility to connect to. For more information, see Creating network interfaces with floating IP addresses and Updating a VPC's default security group rules.
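    If you use the IBM Cloud CLI, the calls are roughly as follows. The resource names are placeholders and the exact subcommands and options can differ by CLI version, so verify them with `ibmcloud is help` before you run them.

    ```
    # Reserve a floating IP address in the target zone (name and zone are placeholders).
    ibmcloud is floating-ip-reserve rsync-migration-ip --zone us-south-1

    # Allow inbound connections to the rsync port (873) in the VPC security group.
    ibmcloud is security-group-rule-add <security_group_id> inbound tcp --port-min 873 --port-max 873
    ```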
  - Use the container image that the classic instance is provisioned with as the parent image, and update its Dockerfile to install the `rsync` utility when the new container starts.

    ```
    FROM ImageName
    ...
    RUN apt-get update && apt-get install -y rsync
    ...
    ```
  - Update the `rsyncd.conf` file with the reserved floating IP address and the password file that is used by the `rsync` utility.

    ```
    ...
    uid = root
    gid = root
    use chroot = yes
    max connections = 1
    timeout = 300
    pid file = /data/rsyncd.pid
    lock file = /data/rsync.lock
    log file = /data/rsyncd.log
    read only = false
    list = false
    hosts allow = <floating_ip>
    hosts deny = 0.0.0.0/32
    auth users = rsync_backup
    secrets file = /data/rsync.password
    ...
    ```
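    The parameters above are global settings only. Because the client in a later step pulls from the `backup` module, the configuration presumably also defines that module; the module definition and the daemon start command below are a hedged sketch, with the module name, path, and config location assumed rather than taken from the original.

    ```
    # Assumed module definition appended to rsyncd.conf: exposes /data as the "backup" module.
    # [backup]
    #     path = /data
    #     read only = false

    # Start the rsync daemon with this configuration inside the updated container.
    rsync --daemon --config=/etc/rsyncd.conf --no-detach
    ```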
  - Create a repo registration file for the new container image by allowlisting the floating IP address and the password file. For more information, see Creating a registration definition file by using the CLI.
  - Update the classic instance with the new container image and its repo registration file. For more information, see Updating a virtual server.
- Create a VPC instance by using the updated container image, and migrate the data to persistent storage such as a block storage service.
  - Use the container image that the classic instance is provisioned with as the parent image, and update its Dockerfile to install and run the `rsync` client when the new container starts. Note that `$PASSWD` is the same password that is used by the `rsync` utility on the classic instance, and `$CLASSIC_INSTANCE_IP` is the IP address of the classic instance.

    ```
    FROM ImageName
    ...
    RUN apt-get update && \
        apt-get install rsync -y && \
        echo "$PASSWD" > /data/rsync_passwd && \
        chmod 600 /data/rsync_passwd && \
        rsync -avz -P rsync_backup@$CLASSIC_INSTANCE_IP::backup /data --password-file=/data/rsync_passwd
    ...
    ```
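    Before the full copy runs, it can help to confirm that the rsync daemon on the classic instance is reachable from the VPC environment. This sketch reuses the credentials and module name from the Dockerfile above.

    ```
    # List the contents of the "backup" module without copying anything.
    rsync --list-only rsync_backup@$CLASSIC_INSTANCE_IP::backup --password-file=/data/rsync_passwd
    ```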
  - Create the contract for the new container image. For more information, see About the contract.
    - Pass the IP address of the classic instance and the password that is used by the `rsync` utility in the `env` section of the contract.

      ```
      env: |
        type: env
        ...
        env:
          "CLASSIC_INSTANCE_IP": "<CLASSIC_INSTANCE_IP>"
          "PASSWD": "<rsync_password>"
        ...
      ```
    - Pass the registration file as part of the `workload` section in the contract.
  - Create the VPC instance by using the new container image and its contract, attach a block storage volume as its data disk, and assign the floating IP address to the VPC instance.
  - Check the logs of the VPC instance to ensure that the data synchronization completes successfully.
  - Delete this VPC instance without deleting the data disk.
- Create another VPC instance with the container image that is used to provision the classic instance, and attach the data disk to the VPC instance.
  - Create the contract for the container image. For more information, see About the contract.
    - The repo registration file of the container image must be passed into the `workload` section of the contract.
    - Other environment variables that are required for the container image must be passed into the `env` section of the contract.
  - Create the VPC instance by using the container image and its contract.
  - Attach the data disk to the VPC instance.
- Check that your workloads on the VPC instance are running properly and that all your data exists.
- If everything works on the VPC instance, you can delete the classic instance.
The classic instance provisioned by using your own image that requires data migration and supports the data export or import feature
You provisioned a classic instance by using your own image and registration file, and you can access the classic instance in a secure manner, such as by using `ssh`. Meanwhile, you can update your workloads to support data export and import features while the instance is running.
The export and import features must be supported through REST APIs on the classic or VPC instance. Therefore, the source code of your container image must be updated and rebuilt to support these features. If a secret is required at build time, the secret must be recognized by these APIs.
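For illustration only, invoking such APIs from an administrator's workstation might look like the following sketch. The endpoint paths (`/export`, `/import`), port, and authorization header are hypothetical placeholders, not part of the product; substitute whatever your workload implements.

```
# Hypothetical export call on the classic instance: download the encrypted archive.
curl -k -X POST "https://<CLASSIC_INSTANCE_IP>:<API_PORT>/export" \
     -H "Authorization: Bearer <workload_secret>" \
     -o export.tar

# Hypothetical import call on the VPC instance: upload the archive for extraction.
curl -k -X POST "https://<VPC_INSTANCE_FLOATING_IP>:<API_PORT>/import" \
     -H "Authorization: Bearer <workload_secret>" \
     -F "archive=@export.tar"
```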
The following diagram shows the workflow and tasks of each role in this scenario.
To migrate all your data and workloads to the VPC instance, complete the following steps:
- Update the workloads on the classic instance to export the data to be migrated.
  - Update the source code of the container image that is used to provision the classic instance, and add the export and import REST APIs with the following requirements. A sketch of the archive encryption and decryption follows this list.
    - Export REST API:
      - The API can recognize the secrets that are used for running the workloads.
      - The API can generate an archive `.tar` file that includes all the data and files with their path information.
      - The API can encrypt the `tar` file by using the secrets from the container image's build time and the workloads' run time.
    - Import REST API:
      - The API can recognize the secrets that are used for running the workloads and building the container image.
      - The API can decrypt an archive `tar` file by using the secrets from the container image's build time and the workloads' run time.
      - The API can extract the decrypted data or files to their respective paths.
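    The following is a minimal sketch of one way such APIs could package and protect the archive, assuming a symmetric workload secret. It is an illustration, not the product's implementation; the secret handling and file locations are placeholders.

    ```
    # Export side: archive the data with its path information, then encrypt it with a workload secret.
    tar -cf export.tar -C / data
    openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:"<workload_secret>" -in export.tar -out export.tar.enc

    # Import side: decrypt with the same secret and extract the files to their original paths.
    openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:"<workload_secret>" -in export.tar.enc -out export.tar
    tar -xf export.tar -C /
    ```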
  - Update the classic instance with the new container image and its repo registration file. For more information, see Updating a virtual server.
  - Invoke the export API or feature to generate the `.tar` file, and save the file to the local file system.
- Create a VPC instance by using the updated container image, attach persistent storage as its data volume, and import the data.
  - Create the contract for the updated container image that supports the import and export features. The `workload` section and the `env` section might come from different roles, depending on how you design the development and deployment process in the VPC environment. For more information, see About the contract.
  - Deploy the updated container image with its contract to the VPC instance. Note that you need to attach a data disk, such as a block storage volume, to the VPC instance, and that you can access the workloads on the VPC instance after it starts.
  - Invoke the import API or feature to upload the `.tar` file, and extract the data and files to the respective paths on the data disk.
  - Delete this VPC instance without deleting the data disk.
- Create another VPC instance with the container image that is used to provision the classic instance, and attach the data disk to the VPC instance.
  - Create the contract for the container image. For more information, see About the contract.
    - The repo registration file of the container image must be passed into the `workload` section of the contract.
    - Other environment variables that are required for the container image must be passed into the `env` section of the contract.
  - Create the VPC instance by using the container image and its contract.
  - Attach the data disk to the VPC instance.
- Check that your workloads on the VPC instance are running properly and that all your data exists.
- If everything works on the VPC instance, you can delete the classic instance.