Understanding data portability for Red Hat OpenShift on IBM Cloud
Data portability involves a set of tools and procedures that enable customers to export the digital artifacts that are needed to implement similar workload and data processing on different service providers or on-premises software. It includes procedures for copying and storing the service customer content, including the related configuration that is used by the service to store and process the data, in the customer's own location.
Responsibilities
IBM Cloud provides interfaces and instructions to guide the customer to copy and store the service customer content, including the related configuration, on their own selected location.
The customer is responsible for the use of the exported data and configuration for data portability to other infrastructures, which includes:
- The planning and execution for setting up alternative infrastructure on different cloud providers or on-premises software that provide similar capabilities to the IBM services.
- The planning and execution for porting the required application code to the alternative infrastructure, including the adaptation of the customer's application code, deployment automation, and so on.
- The conversion of the exported data and configuration to the format that's required by the alternative infrastructure and adapted applications.
For more information, see Your responsibilities with Red Hat OpenShift on IBM Cloud.
Data export procedures
Red Hat OpenShift on IBM Cloud provides mechanisms to export your content that was uploaded, stored, and processed using the service.
Exporting data by using the oc CLI
You can use the oc CLI to export and save the resources from your cluster. For more information, see the Kubernetes documentation.
Example oc get command:
oc get pod pod1 -o yaml
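For example, to capture the definitions of several resource types from one namespace in a single file, you can pass a comma-separated list of resource types to oc get and redirect the output. The namespace and file name in this sketch are placeholders for illustration only.
oc get deployments,services,configmaps,pvc -n my-namespace -o yaml > my-namespace-export.yaml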
Exporting data by using Velero
The following example exports data from Red Hat OpenShift on IBM Cloud to IBM Cloud Object Storage. However, you can adapt these steps to export data to other S3-compatible providers.
1. Create an IBM Cloud Object Storage instance to store Velero resources.
2. Create a COS bucket. Enter a unique name, then select cross-region for resiliency and us-geo for region.
3. Create new HMAC credentials with the Manager role.
4. Create a local credentials file for Velero. Enter the HMAC credentials from the prior step.
   [default]
   aws_access_key_id=<HMAC_access_key_id>
   aws_secret_access_key=<HMAC_secret_access_key>
5. Create an IAM Access Group and assign the Service ID of the COS credentials from Step 3 to Cloud Object Storage. Include Manager and Viewer permissions. This gives Velero access to read and write to the COS bucket that you created.
6. Install Velero on your cluster. If you selected a different region for the COS instance, adjust the command with the appropriate endpoints. By default, this command targets all storage in the cluster for backup.
   velero install --provider aws --bucket <bucket-name> --secret-file <hmac-credentials-file> --use-volume-snapshots=false --default-volumes-to-fs-backup --use-node-agent --plugins velero/velero-plugin-for-aws:v1.9.0 --image velero/velero:v1.13.0 --backup-location-config region=us-geo,s3ForcePathStyle="true",s3Url=https://s3.direct.us.cloud-object-storage.appdomain.cloud
7. Check the Velero pod status.
   kubectl get pods -n velero
8. Create a backup of the cluster. The following command backs up all PVCs, PVs, and pods from the default namespace. You can also apply filters to target specific resources or namespaces.
   velero backup create mybackup --include-resources pvc,pv,pod --default-volumes-to-fs-backup --snapshot-volumes=false --include-namespaces default --exclude-namespaces kube-system,test-namespace
9. Check the backup status.
   velero backup describe mybackup
You can now view or download the cluster resources from your IBM Cloud Object Storage bucket.
You can also migrate the cluster resources that you backed up to IBM Cloud Object Storage to another S3 instance and bucket in a different cloud provider.
For more information about restoring Velero snapshots, see Cluster migration.
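As a minimal sketch of the restore side, assuming a target cluster where Velero is installed with the same backup location and the mybackup backup created in the previous steps, you can restore and then list the restores with the Velero CLI.
velero restore create --from-backup mybackup
velero restore get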
To see an example scenario that uses velero in IBM Cloud for migrating from a Classic cluster to a VPC cluster, see Migrate Block Storage PVCs from an IBM Cloud Kubernetes Classic cluster to VPC cluster.
Other options for exporting data
Title | Description |
---|---|
Rclone | Review the Migrating Cloud Object Storage (COS) apps and data between IBM Cloud accounts tutorial to see how to move data from one COS bucket to another COS bucket in IBM Cloud or in another cloud provider by using rclone. |
OpenShift APIs for Data Protection (OADP) | OADP is an operator that Red Hat created to provide backup and restore APIs for OpenShift clusters. For more information, see Backup and restore Red Hat OpenShift cluster applications with OADP and the OADP documentation. |
Backing up and restoring apps and data with Portworx Backup | This document walks you through setting up PX Backup. You can configure clusters from other providers and restore data from IBM Cloud to the new provider. |
Wanclouds VPC+ DRaaS (VPC+ Disaster Recovery as a Service) | Review the Wanclouds Multi Cloud Backup, Disaster Recovery and Optimization as a Service offering. For more information, see the Wanclouds documentation. |
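As an illustration of the rclone option, the following command assumes that you already configured two remotes by using rclone config; the remote names ibmcos and target and the bucket names are placeholders for this sketch.
rclone copy ibmcos:my-source-bucket target:my-destination-bucket --progress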
Exported data formats
- Cluster resources exported by using the oc CLI can be saved in several file formats. For more information, see Output options.
- Cluster resources exported by using velero are saved in JSON format. For more information, see Output file format.
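For example, the same pod definition can be saved as either YAML or JSON by changing the -o flag on oc get; pod1 is a placeholder name.
oc get pod pod1 -o yaml > pod1.yaml
oc get pod pod1 -o json > pod1.json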
Data ownership
All exported data is classified as customer content, and the customer therefore retains full ownership and licensing rights over it, as stated in the IBM Cloud Service Agreement.