Reassigning primary clusters for VCF for Classic - Automated instances
End of Marketing: As of 31 October 2025, new deployments of VMware Solutions offerings are no longer available for new customers. Existing customers can still use and expand their active VMware® workloads on IBM Cloud®. For more information, see End of Marketing for VMware on IBM Cloud.
You can reassign a primary cluster to another cluster in your VMware Cloud Foundation for Classic - Automated instance according to your business needs.
Before you reassign a primary cluster
Review the following information before you reassign your primary cluster:
- Ensure that VMware NSX® is upgraded to the most recent version (4.1.2 or later).
- You must migrate the management virtual machines (VMs), deploy the NSX edge management VMs on the new cluster, and migrate the Usage Meter VM (if you deployed VMware vCloud Usage Meter).
- Do not migrate all VMs at the same time, as this action might cause failures.
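The NSX version prerequisite can be checked programmatically before you start. This is a minimal sketch that assumes you have already fetched the NSX Manager node details (for example, from the NSX REST API `GET /api/v1/node` endpoint, whose response includes a `node_version` field); the parsing helper itself is self-contained.

```python
# Sketch: confirm that the NSX Manager version is 4.1.2 or later before
# starting a primary-cluster reassignment. The "node_version" field name
# assumes an NSX node-details response; adjust if your tooling reports
# the version differently.

def version_at_least(version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically (e.g. '4.1.2.3' >= '4.1.2')."""
    parse = lambda v: [int(part) for part in v.split(".")]
    return parse(version) >= parse(minimum)

def nsx_version_ok(node_info: dict, minimum: str = "4.1.2") -> bool:
    # node_info stands in for the parsed JSON body of the node-details call.
    return version_at_least(node_info["node_version"], minimum)
```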
The VM migration procedures differ slightly, depending on whether any add-on services are installed on your instance.
| Add-on service | Migration procedures |
|---|---|
| Juniper® vSRX | Complete the following procedures: 1. Migrate the management VMs (vSRX). 2. Deploy the NSX edge management VMs on the new cluster. 3. Reassign the primary cluster. |
| Caveonix RiskForesight™ | Complete the following procedures: 1. Create the corresponding port groups (RiskForesight). 2. Migrate the management VMs. 3. Deploy the NSX edge management VMs on the new cluster. 4. Reassign the primary cluster. |
| Other services or no services | Complete the following procedures: 1. Migrate the management VMs. 2. Deploy the NSX edge management VMs on the new cluster. 3. Reassign the primary cluster. |
Procedure to migrate management VMs (vSRX)
If the Juniper vSRX service is installed on your instance, complete the following procedure.
1. In the VMware vSphere® Web Client, create host groups on the target cluster:
   - Click the target cluster.
   - On the Configure tab, under Configuration, click VM/Host Groups.
   - Under VM/Host Groups, click ADD.
   - Under Add Group Member, select the type as Host Group and click ADD.
   - Select the host that you want to add as a member. Provide the same name as the host's fully qualified domain name (FQDN).
   - Complete the previous 2 steps for the second hostname.
2. Disable affinity rules on the source cluster:
   - Click the source cluster.
   - On the Configure tab, under Configuration, click VM/Host Rules.
   - For each of the following rules: `<vm_nickname>_vSRX_edge_node1`, `<vm_nickname>_vSRX_edge_node2`, and `<vm_nickname>-affinity-rule`, click Edit and clear the Enable Rule checkbox.
3. Export the network configurations of the source cluster:
   - Go to the Networks tab of the source cluster.
   - For each of the following switches: `<instance_name>-<source_cluster_name>-private`, `<vsrx_nickname>-vsrx-fab DVS`, and `<vsrx_nickname>-vsrx-private-transit`, right-click the switch and click Export Configuration.
   - Locate `<instance_name>-<source_cluster_name>-public`, right-click `<vsrx_nickname>-vsrx-public-transit`, and click Export Configuration.
4. Import the port group configurations into the target cluster:
   - Go to the Networks tab of the target cluster.
   - Right-click the `<instance_name>-<target_cluster_name>` switch and click Distributed Port Group > Import Distributed Port Group.
   - Under Import port group configuration, click Browse and upload the corresponding configuration file that you exported in Step 3. Click Next. Under Ready to complete, verify the information and click Finish.
   - Complete the previous 2 steps for `<instance_name>-<source_cluster_name>-public`.
5. Re-create the resource pool:
   - Right-click the resource pool on the source vSRX cluster and click Edit Resource Settings. Write down the reservation values for CPU and memory.
   - Right-click the target vSRX cluster, click New Resource Pool, and create a resource pool with the same name as the source vSRX resource pool. Set the CPU and memory reservation values to the same values as the source.
6. Migrate the vSRX edge node VMs:
   - Right-click the `<vm_nickname>_vSRX_edge_node1` VM and click Actions > Migrate.
   - Under Select a migration type, choose Change both compute resource and storage.
   - Under Select a compute resource, choose the target cluster of the vSRX resource pool and click Next.
   - Under Select storage, choose the vSAN datastore name for the target cluster.
   - Under Select networks, choose the destination network by browsing to the appropriate DVS switch and click Next.
   - Under Select vMotion priority, choose a priority option.
   - Under Ready to complete, verify the information and click Finish to start the migration.
   - Repeat the previous 7 steps for the `<vm_nickname>_vSRX_edge_node2` VM.
7. Create VM groups on the target cluster:
   - Click the target cluster.
   - On the Configure tab, under Configuration, click VM/Host Groups.
   - Under VM/Host Groups, click ADD.
   - Under Add Group Member, select the type as VM Group and click ADD.
   - Select the vSRX node1 VM to add as a member. Provide the same name as the vSRX node1 VM.
   - Complete the previous step for the vSRX node2 VM.
8. Map nodes to hosts on the target cluster:
   - Click the target cluster.
   - On the Configure tab, under Configuration, click VM/Host Rules.
   - Under VM/Host Rules, click ADD.
   - Under Create VM/Host Rule, select the type as Virtual Machines to Hosts and map `<vm_nickname>_vSRX_edge_node1` to the host where it is deployed. Assign the same name `<vm_nickname>_vSRX_edge_node1`.
   - Complete the previous step for the `<vm_nickname>_vSRX_edge_node2` node.
9. Enable the affinity rule on the target cluster:
   - Click the target cluster.
   - On the Configure tab, under Configuration, click VM/Host Rules.
   - Search for and locate `<vm_nickname>-affinity-rule`.
   - Click Edit and select Enable Rule.
If you are migrating from a source vSAN OSA (Original Storage Architecture) cluster to a target vSAN ESA (Express Storage Architecture) cluster, also configure the failover settings:
- Click the target cluster.
- Click the Networks tab, select the `<vsrx_nickname>-vsrx-fab` DVS switch, and click the Configure tab.
- Under Policies, click Edit and go to Teaming and failover.
- Drag `uplink1` and `uplink2` under Unused uplinks and drag `lag1` under Active uplinks.
- Repeat the previous 3 steps for the `<vsrx_nickname>-vsrx-private-transit` and `<vsrx_nickname>-vsrx-public-transit` DVS switches.
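As a checklist, the groups and rules that the vSRX steps above re-create on the target cluster all follow a fixed naming pattern. The helper below is an illustrative sketch, not an API call: the VM nickname and host FQDNs are placeholders, and the names are derived from the patterns quoted in the procedure.

```python
# Sketch: derive the names of the VM/Host groups and rules that must exist
# on the target cluster after the vSRX migration. Purely illustrative; the
# inputs are placeholders, not values read from vCenter.

def vsrx_target_objects(vm_nickname: str, host_fqdns: list) -> dict:
    if len(host_fqdns) != 2:
        raise ValueError("the vSRX edge nodes are pinned to exactly two hosts")
    return {
        # Host groups are named after the hosts' FQDNs (step 1).
        "host_groups": list(host_fqdns),
        # VM groups are named after the edge-node VMs (step 7).
        "vm_groups": [f"{vm_nickname}_vSRX_edge_node1",
                      f"{vm_nickname}_vSRX_edge_node2"],
        # VM/Host rules map each node to its host (step 8); the affinity
        # rule is the one re-enabled in step 9.
        "rules": [f"{vm_nickname}_vSRX_edge_node1",
                  f"{vm_nickname}_vSRX_edge_node2",
                  f"{vm_nickname}-affinity-rule"],
    }
```

For example, with the hypothetical nickname `edge01`, the last entry of `rules` is `edge01-affinity-rule`, the rule that step 9 re-enables.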
Procedure to create corresponding port groups (RiskForesight)
If the Caveonix RiskForesight service is installed on your instance, complete the following procedure.
RiskForesight is installed with its own port group named SDDC-DPortGroup-Caveonix. Before you migrate the management VMs for instances with this service deployed, you must create a corresponding port group in the new (target) cluster.
1. In the vSphere Web Client, click the Networks tab, right-click SDDC-DPortGroup-Caveonix, and click Export Configuration.
2. Under Export Configuration, click OK.
3. Right-click the private network subnet of the target cluster and click Distributed Port Group > Import Distributed Port Group.
4. Under Import port group configuration, click Browse and upload the exported configuration file that you created in Step 2. Click Next.
5. Under Ready to complete, verify the information and click Finish. Confirm that the subnet is renamed as expected.
6. Complete the procedure to migrate management VMs.
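A scripted pre-flight check for this step only needs to confirm that the imported port group is visible on the target cluster. The sketch below uses a plain list of network names in place of a real vSphere inventory query; only the port-group name comes from the text, and the sample network names are invented.

```python
# Sketch: verify that the SDDC-DPortGroup-Caveonix port group exists on the
# target cluster before migrating the RiskForesight management VMs. The
# network list is a stand-in for the target cluster's Networks inventory.

REQUIRED_PORT_GROUP = "SDDC-DPortGroup-Caveonix"

def ready_for_riskforesight_migration(target_networks: list) -> bool:
    return REQUIRED_PORT_GROUP in target_networks
```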
Procedure to migrate management VMs
1. In the vSphere Web Client, go to the Virtual Machines tab, select the VM that you want to migrate, and click Actions > Migrate.
2. Under Select a migration type, choose Change both compute resource and storage.
3. Under Select a compute resource, choose a target cluster or host in the target cluster.
   If VMware HCX is installed on your instance, you might get a compatibility error, which you can ignore. For more information, see vCenter vMotion error "Virtual Ethernet Card 'Network Adapter 1' is not supported".
4. Under Select storage, choose the vSAN datastore name for the target cluster.
   If the target cluster has a higher virtual switch version than the current cluster, you receive an error. For more information, see Migrating a virtual machine between two different vDS versions.
5. Under Select networks, ensure that the `dpg` source networks are mapped to their equivalents on the target cluster. Do not change the edge TEPs, because added clusters cannot create their own networks. A destination network that is marked as none is a known issue and can be ignored.
6. Under Select vMotion priority, choose a priority option.
7. Under Ready to complete, verify the information and click Finish to start the migration.
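If you script these migrations (for example, with pyVmomi's relocate task), remember the earlier warning not to migrate all VMs at the same time. The sketch below shows only the serialized pattern; `migrate_vm` is a placeholder callback, not a real API binding, and the VM names in the usage note are hypothetical.

```python
# Sketch: migrate the management VMs one at a time, waiting for each
# relocation to finish before starting the next. migrate_vm is a
# placeholder for the real vMotion call (and its wait-for-completion).

def migrate_serially(vm_names, migrate_vm):
    completed = []
    for name in vm_names:
        migrate_vm(name)        # change both compute resource and storage
        completed.append(name)  # record the order; next VM starts only now
    return completed
```

For example, `migrate_serially(["vcenter", "usage-meter"], migrate_vm)` relocates the second VM only after the first relocation returns.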
Procedure to deploy the NSX edge management VMs on the new cluster
After you migrate the management VMs, you must deploy the NSX edge management VMs on the new (target) cluster.
Review the following information before you redeploy:
- Ensure that a new subnet is available on the same VLAN as the IP addresses assigned to the original (source) NSX edge.
- Obtain the DNS and NTP server details from IBM Support.
Complete the following steps in NSX Manager:
1. Click the System tab, and from the left navigation menu, click Fabric > Nodes > Edge Transport Nodes. Then, click Add Edge Node.
2. Under Name and Description, enter a name for the new (target) edge and the hostname in FQDN format, and select the Medium option for Form Factor. Click NEXT.
3. Under Credentials, enter the passwords for both the `admin` and `root` users, and toggle the Allow SSH Login and Allow Root SSH Login switches for debugging purposes. You can skip the Audit Credentials section. Click NEXT.
4. Under Configure Deployment, select the Compute Manager, Cluster, and Datastore values from the list. Cluster and Datastore refer to the new (target) cluster. Click NEXT.
5. Under Configure Node Settings, select IPv4 Only for Management IP Assignment and Static for Type. Enter the Management IP and Default Gateway values and click Select Interface.
6. Under Select Interface, choose the port group with the name that ends with `dpg-mgmt` and click SAVE.
7. Under Configure Node Settings, enter the DNS and NTP server details that you previously obtained from IBM Support. Click NEXT.
8. Under Configure NSX, add the following NVDS (NSX Virtual Distributed Switch) switches with the indicated settings:
   - For the `edge-teps` switch:
     - Transport Zone: `tz-vm-overlay`
     - Uplink Profile: `edge-tep-profile`
     - IP Address Type (TEP): `IPv4`
     - IPv4 Assignment (TEP): `Use IP Pool`
     - IPv4 Pool: choose the pool with the name that matches the cluster to which you are deploying the new edge nodes.
     - Click Select Interface, choose VLAN Segment for Type, then select `edge-teps` and click SAVE.
   - For the `edge-private` switch:
     - Transport Zone: `edge-private`
     - Uplink Profile: `edge-private-profile`
     - Click Select Interface for `uplink2`, choose Virtual Switch/Distributed Virtual Portgroup for Type, then select `dpg-edge-uplink` and click SAVE.
   - For the `edge-public` switch:
     - Transport Zone: `edge-public`
     - Uplink Profile: `edge-public-profile`
     - Click Select Interface for `uplink2`, choose Virtual Switch/Distributed Virtual Portgroup for Type, then select `dpg-external` and click SAVE.
9. To complete the configuration and start the deployment of the edge node, click FINISH.
10. Repeat the previous steps to create an additional NSX edge node to replace the original (source) node. By default, your configuration has a services edge and a customer edge, but it might include other edges that you deployed.
11. After all new edge nodes are deployed, replace the old nodes with the new nodes. For each node pair:
    - Click the System tab, and from the left navigation menu, click Fabric > Nodes > Edge Clusters.
    - Under Actions, click Replace Edge Cluster Member.
    - Under Replace, select the old edge node and under With, select the newly deployed edge node.
    - Click Save.
12. Verify that the newly added nodes are correctly configured:
    - Go to Networking > Tier-0 Gateways or Networking > Tier-1 Gateways and locate the cluster name where the new nodes were added.
    - Click the active/standby configuration to confirm that the newly added nodes are listed in both the Active and Standby states.
13. After you confirm that all new nodes are working correctly, delete the original (source) nodes:
    - Click the System tab, and from the left navigation menu, click Fabric > Nodes > Edge Transport Nodes.
    - Select each of the original nodes and delete them one by one. This step also deletes the old nodes from the vCenter Server cluster.
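The wizard fields in the steps above can be summarized as a single edge-node definition. The dict below is a reference sketch, not an NSX API payload: the node name is a placeholder, while the switch names, transport zones, uplink profiles, and interface selections are the values quoted in the procedure.

```python
# Sketch: the per-switch settings from the Configure NSX step, collected in
# one place. This mirrors the wizard values from the text; it is not the
# wire format of any NSX deployment API.

edge_node = {
    "name": "edge-target-01",                       # placeholder node name
    "form_factor": "Medium",
    "management_ip_assignment": "IPv4 Only, Static",
    "switches": [
        {"name": "edge-teps",
         "transport_zone": "tz-vm-overlay",
         "uplink_profile": "edge-tep-profile",
         "tep_ipv4_assignment": "Use IP Pool",
         "interface": ("VLAN Segment", "edge-teps")},
        {"name": "edge-private",
         "transport_zone": "edge-private",
         "uplink_profile": "edge-private-profile",
         "interface": ("Distributed Virtual Portgroup", "dpg-edge-uplink")},
        {"name": "edge-public",
         "transport_zone": "edge-public",
         "uplink_profile": "edge-public-profile",
         "interface": ("Distributed Virtual Portgroup", "dpg-external")},
    ],
}
```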
Procedure to reassign primary clusters for Automated instances
1. In the VMware Solutions console, click Resources > VCF for Classic from the left navigation panel.
2. In the VMware Cloud Foundation for Classic table, click the instance that contains the primary cluster that you want to reassign.
3. Click the Infrastructure tab and click Reassign primary cluster on the upper right of the Clusters table.
4. On the Reassign primary cluster pane, the original primary cluster is preselected. Verify that both the new primary cluster and the original primary cluster show a status of Available.
5. Select the new cluster that you want to assign as the primary cluster. The list shows only the new clusters that meet the following conditions, which are required for a successful reassignment:
   - Must have vSAN storage.
   - Must be on the same VLANs as the original primary cluster.
   - Must have the same version of vSphere and the same networking type as the original primary cluster.
6. Click Reassign.
   The reassignment of the new primary cluster can take up to one hour.
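The three eligibility conditions above can be expressed as a simple predicate. The sketch below uses illustrative dict records; the field names and sample values are invented for the example and do not correspond to a VMware Solutions API schema.

```python
# Sketch: mirror the console's filter for clusters that can become the new
# primary cluster. The record shape is illustrative only.

def eligible_as_primary(candidate: dict, current_primary: dict) -> bool:
    return (candidate["storage"] == "vSAN"            # must have vSAN storage
            and candidate["vlans"] == current_primary["vlans"]
            and candidate["vsphere_version"] == current_primary["vsphere_version"]
            and candidate["networking_type"] == current_primary["networking_type"])
```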
Results after you reassign the primary cluster
The status of the original and new primary clusters changes from Available to Modifying. After the reassignment is complete, the primary tag is shown on the new primary cluster and the status of both clusters changes to Available.