VCF for Classic - Automated overview

VMware Cloud Foundation for Classic - Automated is a hosted private cloud that delivers the VMware Cloud Foundation for Classic - Flexible stack as a service. The VMware® environment is built on a minimum of three IBM Cloud® bare metal servers and offers shared network-attached storage and dedicated software-defined storage options. It also includes the automatic deployment and configuration of an easy-to-manage logical edge firewall, powered by VMware NSX®.

In many cases, the entire environment can be provisioned in less than a day, and the bare metal infrastructure can rapidly and elastically scale compute capacity up and down as needed.

After initial instance deployment, you can increase shared storage by ordering more Network File System (NFS) file shares from the IBM Cloud infrastructure customer portal. You can attach them manually to all VMware ESXi™ servers in a cluster. You can also take advantage of VMware vSAN™ as a storage option. To increase the vSAN-based storage capacity of a vSAN cluster, you can add more ESXi servers post-deployment.

VCF for Classic - Automated architecture

The following graphic depicts the high-level architecture and components of a three-node VCF for Classic - Automated deployment.

VCF for Classic - Automated architecture
Figure 1. VCF for Classic - Automated high-level architecture for a three-node cluster

For VCF for Classic - Automated with NSX-V instances, if you purchased IBM-provided VMware licensing, you can upgrade the VMware NSX Base edition to Advanced or to Enterprise edition. Also, you can request more VMware components, such as VMware Aria® Operations™. You can also add IBM-Managed Services if you want to offload the day-to-day operations and maintenance of the virtualization, guest OS, or application layers. The IBM Cloud Professional Services team is available to help you accelerate your journey to the cloud with migration, implementation, planning, and onboarding services.

Physical infrastructure

This layer provides the physical infrastructure (compute, storage, and network resources) to be used by the virtual infrastructure.

Virtualization infrastructure (Compute and Network)

This layer virtualizes the physical infrastructure through different VMware products:

  • VMware vSphere virtualizes the physical compute resources.
  • VMware NSX is the network virtualization platform that provides logical networking components and virtual networks.

Virtualization management

This layer consists of the following components:

  • vCenter Server Appliance with embedded Platform Services Controller (PSC).
  • For NSX-T - three converged NSX Manager and Controller nodes (total of three nodes).
  • For NSX-V - one NSX Manager and three VMware NSX Controller™ nodes (total of four nodes).
  • VMware NSX Edge™ clusters - two.
  • IBM CloudDriver virtual server instance (VSI). The CloudDriver VSI is deployed on demand for certain operations, such as adding hosts to the environment.

The base offering is deployed with a vCenter Server appliance that is sized to support an environment with up to 400 hosts and up to 4,000 VMs. The same vSphere API-compatible tools and scripts can be used to manage the IBM-hosted VMware environment.

In total, the base offering reserves the following resources for the virtualization management layer.

  • For NSX-T, 42 vCPU and 128 GB vRAM
  • For NSX-V, 38 vCPU and 67 GB vRAM

The remaining host capacity for your virtual machines (VMs) depends on several factors, such as oversubscription rate, VM sizing, and workload performance requirements.
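As a rough sizing sketch, the remaining VM capacity can be estimated from the management reservation and an assumed oversubscription ratio. The host specifications and the 4:1 CPU oversubscription below are illustrative assumptions, not values from this offering; only the 42 vCPU / 128 GB NSX-T reservation comes from this page.

```python
# Rough sizing sketch: estimate capacity left for workload VMs after the
# management reservation. Host specs and the oversubscription ratio are
# illustrative assumptions, not values from the offering.

def remaining_capacity(hosts, vcpu_per_host, ram_gb_per_host,
                       reserved_vcpu, reserved_ram_gb,
                       cpu_oversub=4.0):
    """Return (effective vCPUs, GB vRAM) available for workload VMs."""
    total_vcpu = hosts * vcpu_per_host * cpu_oversub  # CPU is commonly oversubscribed
    total_ram = hosts * ram_gb_per_host               # RAM typically is not
    return total_vcpu - reserved_vcpu, total_ram - reserved_ram_gb

# Example: three hypothetical dual-socket 24-core hosts with 384 GB RAM each,
# minus the NSX-T management reservation from this page (42 vCPU, 128 GB).
vcpus, ram = remaining_capacity(hosts=3, vcpu_per_host=48, ram_gb_per_host=384,
                                reserved_vcpu=42, reserved_ram_gb=128)
print(vcpus, ram)  # 534.0 1024
```

In practice, also budget for N+1 host failure tolerance and the workload performance requirements mentioned above before committing to an oversubscription ratio.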

For more information about the architecture, see Overview of VMware Solutions.

Technical specifications for VCF for Classic - Automated instances

The availability and pricing of standardized hardware configurations might vary based on the IBM Cloud data center that is selected for deployment.

The following components are included in your VCF for Classic - Automated instance.

Bare metal server

You can order three or more bare metal servers on the consolidated or management cluster, and optionally two or more bare metal servers on the workload cluster.

If you plan to use vSAN storage, the configuration requires a minimum of four bare metal servers.
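The minimum server counts above can be expressed as a small validation helper. This is a sketch that encodes only the thresholds stated on this page (three servers for an NFS-backed cluster, four for vSAN); the function names are illustrative.

```python
# Sketch: validate an ordered cluster size against the minimums stated on
# this page. The helper and its names are illustrative, not part of any API.
MIN_NODES = {"nfs": 3, "vsan": 4}  # NFS clusters start at 3 hosts, vSAN at 4

def validate_cluster(storage: str, nodes: int) -> None:
    minimum = MIN_NODES[storage]
    if nodes < minimum:
        raise ValueError(
            f"{storage} clusters require at least {minimum} bare metal servers, got {nodes}"
        )

validate_cluster("nfs", 3)   # OK
validate_cluster("vsan", 4)  # OK
```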

The following configurations are available:

  • Cascade Lake - 4-CPU Intel® Cascade Lake generation servers (Quad Intel Xeon® 6200/8200 series) or 2-CPU Intel Cascade Lake generation servers (Dual Intel Xeon 4200/5200/6200/8200 series) with your selected RAM size.
  • SAP-certified Cascade Lake - 2-CPU Intel Cascade Lake generation servers (Dual Intel Xeon 5200/6200/8200 series).


The following networking components are ordered:

  • 10 Gbps dual public and private network uplinks.

  • Three VLANs (Virtual LANs) - one public and two private.

  • (NSX-T only) One overlay network with a T1 and T0 router for potential east-west communication between local workloads that are connected to layer 2 (L2) networks. This network is deployed as a sample routing topology, which you can modify, build on, or remove.

  • (NSX-V only) One VXLAN (Virtual eXtensible LAN) with a DLR (Distributed Logical Router) for potential east-west communication between local workloads that are connected to layer 2 (L2) networks. The VXLAN is deployed as a sample routing topology, which you can modify, build on, or remove. You can also add security zones by attaching extra VXLANs to new logical interfaces on the DLR.

  • VMware NSX Edge clusters (two):

    • One secure management service VMware NSX Edge cluster for outbound traffic for add-on services, which is deployed by IBM as part of the management networking topology. This edge cluster is used by add-on services such as Zerto, FortiGate® Virtual Appliance, and F5 BIG-IP® to communicate with external licensing and billing components.

      These edge nodes are named service-edgeNN. Do not modify or customize them. Otherwise, some of your add-on services might stop working.

    • Secure customer-managed edge cluster for your application traffic. The edge cluster is deployed by IBM as a template that you can modify to provide VPN access or public access. For more information, see Configuring your network to use the customer-managed NSX edge cluster with your VMs.

    For more information, see Does the customer-managed NSX Edge pose a security risk?

Virtual Server Instances

The following virtual server instances (VSIs) are ordered:

  • A VSI for IBM CloudDriver, which is deployed as needed for initial deployment and for Day 2 operations.
  • You can choose to deploy either a single Microsoft® Windows® Server VSI for Microsoft Active Directory™ (AD) or two high-availability Microsoft Windows VSIs on the management cluster to help enhance security and robustness.


During initial deployment, you can choose between NFS and vSAN storage options.

After deployment, you can add NFS storage shares to an existing NFS or vSAN cluster. For more information, see Adding NFS storage to Automated instances.

NFS storage

The NFS option offers customized shared file-level storage for workloads with various options for size and performance:

  • Size - 20 GB to 24 TB

  • Performance - 0.25, 2, 4, or 10 IOPS/GB. The 10 IOPS/GB performance level is limited to a maximum capacity of 4 TB per file share.

  • Individual configuration of file shares

    (NSX-V only) If you choose the NFS option, one 2 TB file share at 4 IOPS/GB is ordered for the management components.
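The total IOPS of a file share is its size multiplied by the performance tier, subject to the 4 TB cap at 10 IOPS/GB. A minimal sketch of that arithmetic, using only the tiers and limits listed above (the helper itself is illustrative):

```python
# Sketch: maximum IOPS for an NFS file share at a given performance tier.
# The tiers and the 4 TB cap at 10 IOPS/GB come from this page; the helper
# and its names are illustrative.
TIERS = (0.25, 2, 4, 10)  # IOPS per GB

def share_iops(size_gb: int, iops_per_gb: float) -> float:
    if iops_per_gb not in TIERS:
        raise ValueError(f"unsupported tier: {iops_per_gb}")
    if iops_per_gb == 10 and size_gb > 4 * 1024:
        raise ValueError("10 IOPS/GB shares are limited to 4 TB")
    return size_gb * iops_per_gb

# The 2 TB management share at 4 IOPS/GB mentioned above:
print(share_iops(2 * 1024, 4))  # prints 8192
```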

vSAN storage

The vSAN option offers customized configurations, with various options for disk type, size, and quantity:

  • Disk quantity - 2, 4, 6, 8, or 10

  • Storage disk - 960 GB SSD, 1.9 TB SSD, 3.8 TB SSD, or 7.68 TB SSD. In addition, two cache disks of 960 GB are also ordered per host.

    3.8 TB SSD (solid-state drive) disks are supported when they are made available in a data center.
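The raw capacity tier of a vSAN cluster is simply hosts × capacity disks × disk size; the two 960 GB cache disks per host do not add usable capacity. A sketch of that calculation, using the disk options listed above (usable capacity is lower once vSAN storage policies such as FTT and RAID level are applied):

```python
# Sketch: raw vSAN capacity-tier size for a cluster. Disk sizes and counts
# are the options listed on this page; the helper itself is illustrative.
# Usable capacity is lower once vSAN storage policies (FTT, RAID) apply.
DISK_SIZES_TB = (0.96, 1.9, 3.8, 7.68)
DISK_COUNTS = (2, 4, 6, 8, 10)

def raw_capacity_tb(hosts: int, disks_per_host: int, disk_tb: float) -> float:
    if disks_per_host not in DISK_COUNTS or disk_tb not in DISK_SIZES_TB:
        raise ValueError("unsupported vSAN disk configuration")
    # The two 960 GB cache disks per host do not contribute usable capacity.
    return hosts * disks_per_host * disk_tb

# Minimum vSAN cluster (4 hosts) with six 1.9 TB capacity disks each:
print(round(raw_capacity_tb(4, 6, 1.9), 2))  # prints 45.6
```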

Technical specifications expansion nodes for VCF for Classic - Automated instances

Each expansion node deploys and incurs charges for the following components in your IBM Cloud account.

Hardware for expansion nodes

One bare metal server with the configuration presented in Technical specifications for Automated instances.

Technical specifications for multizone instances

This information is provided as reference for existing multizone instances. New deployments of multizone instances are not supported.

The VCF for Classic - Automated multizone architecture is an end-to-end reference architecture that provides automated failover for customer workloads. It uses an IBM Cloud multizone region (a region that is spread across data centers in multiple zones to increase fault tolerance) with an IBM-managed service that covers the following components:

  • Compute architecture (VMware vSphere®)
  • Network architecture (NSX-T™)
  • Storage architecture (VMware vSAN or NFS)
  • Integration with IBM Services Platform with Watson to enable the consumption of services
  • Tools for monitoring, troubleshooting, performance, and capacity management:
    • VMware Aria Suite pattern (VMware Aria Operations, VMware Aria Operations™ for Logs, and VMware Aria Operations™ for Networks)
    • Active Directory pattern
    • Integration with IBM Netcool and IBM Bluecare for auto-ticketing, alerting, and event enrichment
    • Resiliency patterns (backup and recovery)

Multizone instances are available in the following regions:

  • America - Washington DC, Dallas, Sao Paulo, and Toronto
  • Europe - London and Frankfurt
  • Asia-Pacific - Sydney, Tokyo, and Osaka

Base infrastructure architecture specifications

The base infrastructure has the following specifications:

  • Each site has its own dedicated gateway and management cluster.
  • The resource cluster is a vSphere + vSAN stretched cluster.
  • The witness site contains two VMware ESXi™ hosts that provide quorum for both vSAN and vCenter.
  • Single vCenter Server and NSX Manager architecture.
  • vCenter Server Appliance with embedded Platform Services Controller (PSC) that uses vCenter Server High Availability (HA) over an L3 network architecture.
  • NSX Manager recovery uses a hot-standby method that syncs backup files.

Tools and technology architecture specifications

The tools and technology architecture has the following specifications:

  • VMware Aria Operations, VMware Aria Operations for Logs, and VMware Aria Operations for Networks to provide operations and management capabilities specific to the VMware products that are used, for example NSX, vSAN, and vSphere.
  • IBM Software Defined Environment (SDE) automation tool health check for validating deployments against best practices and security policies.
  • Optional disaster recovery (DR) to an out-of-region IBM Cloud site.
  • FortiGate Security Appliance or similar to secure any internet access and to facilitate active-active network integration with the on-premises network.

vSphere + vSAN stretched cluster architecture specifications

The vSphere + vSAN stretched cluster architecture has the following specifications:

  • Provides storage and compute capabilities, which span two sites for enhanced availability.
  • Write requests from VMs are synchronously written to both sites, which incurs site-to-site network latency.
  • Read requests from VMs are served from the site where the VM is physically located, which avoids extra latency.
  • The witness site and witness host provide the quorum that prevents split-brain scenarios.
  • vSAN native encryption (for at rest encryption) can be used in combination with this architecture.

Network architecture specifications

The network architecture has the following specifications:

  • Edge/DLR/VXLANs in combination with BGP metric-based routing to facilitate an active-active site design with automated failover.
  • Each site has its own set of Edges, DLRs, and VXLANs.
  • Under normal circumstances, any VMs connected to DLR-A, for example VM-A, are in IBM Cloud availability zone #1 and traffic is both ingress and egress locally.
  • During a vMotion activity for VM-A, traffic still ingresses and egresses through the IBM Cloud availability zone #1.
  • During a site or edge failure, traffic routes out of the remaining available site.
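The failover behavior above follows from metric-based route selection: both sites advertise the same prefixes, the local site with a better (lower) metric, so traffic prefers it until that site's routes disappear. A conceptual sketch of that selection logic (the data structure and names are illustrative, not an NSX or BGP API):

```python
# Conceptual sketch of metric-based active-active routing: each site
# advertises the same prefix, the local site with a lower metric. When a
# site's advertisement withdraws (failure), traffic shifts automatically.
# The route records and function names are illustrative.
def best_route(routes):
    """Pick the advertisement with the lowest metric among sites still up."""
    live = [r for r in routes if r["up"]]
    return min(live, key=lambda r: r["metric"])["site"] if live else None

routes = [
    {"site": "zone-1", "metric": 100, "up": True},   # local, preferred path
    {"site": "zone-2", "metric": 200, "up": True},   # remote, standby path
]
print(best_route(routes))  # prints zone-1
routes[0]["up"] = False    # site or edge failure withdraws zone-1's routes
print(best_route(routes))  # prints zone-2
```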