Understanding Virtual Network Interfaces in VPC clusters

Applies to: Virtual Private Cloud clusters, version 4.20 and later, with bare metal worker nodes only

Virtual Network Interfaces (VNIs) provide advanced network connectivity options for workloads running on Red Hat OpenShift on IBM Cloud VPC clusters with bare metal worker nodes.

What are Virtual Network Interfaces?

A Virtual Network Interface (VNI) is an IBM Cloud VPC abstraction that represents a single network connection. A VNI encapsulates the properties of that connection (see the inspection sketch after this list), including:

  • IP addresses (primary and secondary)
  • MAC addresses
  • VPC subnet association
  • Security group membership
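
If you want to view these properties for a VNI in your account, the VPC CLI can show them. The following is a minimal sketch, assuming the vpc-infrastructure CLI plugin is installed; the subcommand names follow the VPC CLI's usual resource naming and should be verified with ibmcloud is help:

# List the virtual network interfaces in your VPC account
ibmcloud is virtual-network-interfaces

# Show one VNI's details, including its IP addresses, MAC address,
# subnet, and security groups (VNI_ID is a placeholder)
ibmcloud is virtual-network-interface VNI_ID --output JSON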

VNIs are available only on clusters with bare metal worker nodes running Red Hat CoreOS (RHCOS).

VNI architecture in OpenShift clusters

Static VNIs

When you create a Red Hat OpenShift on IBM Cloud cluster with bare metal worker nodes in IBM Cloud VPC, or when you add a new bare metal worker pool, two VNIs are automatically created and attached to every bare metal worker node:

Primary VNI
Handles regular worker traffic, including the pod network, overlay User Defined Networks (UDNs), and communication with the cluster master.
Secondary VNI (carrier)
Acts as a carrier for dynamic VNI attachments that you manage. This VNI enables you to attach additional network interfaces to workloads running on the worker node.

Dynamic VNIs

Dynamic VNIs are created and managed on demand after cluster creation (a command preview follows this list). These VNIs can be:

  • Attached to specific worker nodes
  • Configured to float between workers in the same zone
  • Used for direct VPC network connectivity
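
These two attachment modes correspond to the two forms of the attach command that are covered in the CLI section later in this topic. A quick preview, using placeholder IDs:

# Node-scoped: pin the VNI to one specific worker
ibmcloud ks vni attach baremetal --worker WORKER_ID --vni VNI_ID --vlan VLAN_ID

# Cluster-scoped (floating): the VNI can follow workloads between workers in the zone
ibmcloud ks vni attach baremetal --cluster-id CLUSTER_ID --vni VNI_ID --vlan VLAN_ID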

Key capabilities

Direct VPC connectivity
VNIs enable workloads to connect directly to VPC networks, bypassing the pod network overlay. This gives workloads access to native VPC networking features such as security groups, network ACLs, and routing.
Security group integration
VNIs can be associated with VPC security groups, providing fine-grained network access control at the interface level.
Zone-specific networking
VNIs are attached to a specific VPC subnet and zone. They can handle traffic only for workloads running on cluster workers in the same zone where the VNI is provisioned.
Network preservation during migration
For workloads that support live migration (such as OpenShift Virtualization VMs), VNIs can implicitly float and follow the workload between bare metal worker instances within the same zone, preserving network connections.

Use cases

OpenShift Virtualization
Enable direct VPC network connectivity for virtual machines, allowing VMs to have their own VPC IP addresses. See Managing virtual network interfaces for OpenShift Virtualization.
Multi-network workloads
Connect workloads to multiple VPC subnets simultaneously, enabling complex network topologies and traffic segregation.
Legacy application migration
Provide VM-like networking for containerized applications that require specific IP addresses or direct network access.
Network function virtualization
Deploy network functions that require direct access to VPC networking features.

Limitations and considerations

Static VNI modifications
Do not modify the static VNIs that are automatically created for each worker node. While these VNIs are visible in your VPC account, any changes to their settings are not supported. This includes attaching floating IPs, changing security groups, or modifying any other VNI properties. Modifying static VNIs can cause cluster connectivity issues.
VNI modification restrictions for floating attachments
You cannot modify VNI properties for floating (cluster-scoped) dynamic attachments. This includes changes to the VNI name, floating IP addresses, infrastructure NAT settings, and security group assignments. To update these settings, first detach the VNI, make your changes, and then reattach it to the cluster. This limitation is temporary.
Zone constraints
VNIs cannot float between zones. In multi-zone clusters, VNIs can only handle traffic for workloads in the same zone where the VNI is provisioned.
Bare metal requirement
VNIs are only supported on bare metal worker nodes. Virtual server instance (VSI) worker nodes do not support VNIs.
RHCOS requirement
Worker nodes must run the Red Hat CoreOS (RHCOS) operating system.
OVN-Kubernetes CNI
Clusters must use the OVN-Kubernetes Container Network Interface (CNI) plugin.
Cross-account management
In Red Hat OpenShift on IBM Cloud, worker nodes are provisioned in IBM's account, not your account. As a result, VNI lifecycle management differs from standalone VPC bare metal instances, and your visibility into VNI attachments differs as well.
Version requirement
VNI support requires OpenShift 4.20 or later.

Prerequisites

Before using VNIs in your cluster, ensure that you have the following (a verification sketch follows the list):

  • A Red Hat OpenShift on IBM Cloud cluster at version 4.20 or later
  • Bare metal worker nodes in your cluster
  • Red Hat CoreOS (RHCOS) operating system on worker nodes
  • OVN-Kubernetes CNI configured
  • VPC infrastructure with appropriate subnets
  • The Operator platform access role for Kubernetes Service in IBM Cloud IAM
  • The Editor or Administrator platform access role for VPC Infrastructure Services in IBM Cloud IAM
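
As a quick verification sketch (CLUSTER_NAME is a placeholder; the same commands also accept a cluster ID):

# Check the cluster's OpenShift version, which must be 4.20 or later
ibmcloud ks cluster get --cluster CLUSTER_NAME

# Confirm that the worker pool uses a bare metal flavor
ibmcloud ks workers --cluster CLUSTER_NAME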

Managing VNIs with the CLI

You can use the Red Hat OpenShift on IBM Cloud CLI to attach, list, and detach VNIs from your cluster worker nodes.

Attaching a VNI to a worker node

To attach a VNI to a specific bare metal worker node, use the ibmcloud ks vni attach baremetal command. You must specify a VLAN ID in the range 1-500 that matches your network configuration.

ibmcloud ks vni attach baremetal --worker WORKER_ID --vni VNI_ID --vlan VLAN_ID [--auto-delete]
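
For example, with illustrative placeholder values (a sketch; the worker and VNI IDs below are not real):

# Look up the ID of the target bare metal worker
ibmcloud ks workers --cluster mycluster

# Attach the VNI to that worker on VLAN 100
ibmcloud ks vni attach baremetal --worker kube-mycluster-default-00000011 --vni 0717-2a3b4c5d-example --vlan 100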

To attach a floating VNI that can follow workloads between workers in the same zone, specify the cluster ID instead of a worker ID:

ibmcloud ks vni attach baremetal --cluster-id CLUSTER_ID --vni VNI_ID --vlan VLAN_ID [--auto-delete]

The --auto-delete flag automatically deletes the VNI when it is removed from the cluster.
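
For example, to attach a floating VNI that is deleted automatically when it is later removed from the cluster (the IDs are illustrative placeholders):

ibmcloud ks vni attach baremetal --cluster-id c1a2b3c4d5example --vni 0717-2a3b4c5d-example --vlan 200 --auto-delete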

Listing VNI attachments

To list all VNIs attached to a cluster:

ibmcloud ks vni ls --cluster-id CLUSTER_ID

To list VNIs attached to a specific worker node:

ibmcloud ks vni ls --worker WORKER_ID

Detaching a VNI

To detach a VNI from a worker node, specify both the VNI ID and the worker ID:

ibmcloud ks vni detach --worker WORKER_ID --vni VNI_ID

For floating VNIs, first list the VNIs to find the current worker ID, then detach using that worker ID.
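
A sketch of that sequence:

# Find the worker that the floating VNI is currently attached to
ibmcloud ks vni ls --cluster-id CLUSTER_ID

# Detach by using the worker ID shown in the listing
ibmcloud ks vni detach --worker WORKER_ID --vni VNI_ID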

For more detailed examples and use cases, see Managing virtual network interfaces for OpenShift Virtualization.

Next steps