Known issues with landing zone deployable architectures
For a list of common known issues, see:
- Known issues with Red Hat OpenShift on IBM Cloud / IBM Cloud Kubernetes Service when using Terraform
OpenShift VPC cluster deployed by landing zone in warning state
If you deployed an OpenShift VPC cluster by using a landing zone deployable architecture version earlier than 7.2.2 and the cluster goes into a Warning state, it might be due to a known issue: the virtual private endpoint (VPE) for Cloud Object Storage that is created by the deployable architecture conflicts with the VPEs that are automatically created by VPC clusters. As a result, worker nodes cannot reach the Cloud Object Storage direct endpoint.
To confirm this:
- Run the command `ibmcloud ks cluster get --cluster <cluster_name_or_id>`.
- Confirm that the `Status` is: `Some Cluster Operators are down-level and need to be updated, see 'https://ibm.biz/rhos_clusterversion_ts'`.
- Follow the steps in the link, which ask you to run the command `oc get clusterversion` on your cluster.
- If you hit the issue because of the conflicting virtual private endpoints for Cloud Object Storage, you see a status similar to the following:

```
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          14h     Unable to apply 4.16.28: the cluster operator image-registry is not available
```
Workaround
Upgrade to version 7.2.2 or later. During the upgrade, you see the expected destroy of the virtual private endpoint for Cloud Object Storage that was created by the landing zone, along with its associated reserved IP.
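To inspect the conflict manually before or after upgrading, you can list the VPE gateways in the affected VPC with the IBM Cloud CLI. This is a sketch, not part of the official workaround: the VPC name `management-vpc` is an example, and the exact JSON field names can vary by CLI version.

```shell
# List all virtual private endpoint gateways in the account, then filter for
# the affected VPC ("management-vpc" is a placeholder; substitute your own).
ibmcloud is endpoint-gateways --output JSON | \
  jq -r '.[] | select(.vpc.name == "management-vpc")
             | [.id, .name, (.target.crn // "")] | @tsv'

# If two gateways target the Cloud Object Storage service, the one created
# by the landing zone (not the cluster-managed one) is the duplicate.
```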
Unsupported attribute error after interrupted apply or destroy
If a Terraform apply or destroy operation is interrupted, you might see an "Unsupported attribute" error on the next Terraform operation. Typically, this error occurs when a destroy operation is canceled or fails unexpectedly.
```
Error: Unsupported attribute

  on .terraform/modules/landing_zone/dynamic_values/config_modules/vsi/vsi.tf line 20, in module "vsi_subnets":
  20: subnet_zone_list = var.vpc_modules[each.value.vpc_name].subnet_zone_list
    ├────────────────
    │ each.value.vpc_name is "management"
    │ var.vpc_modules is object with 3 attributes

This object does not have an attribute named "subnet_zone_list".
```
The error occurs because the interrupted operation leaves key resources in a partially deployed state while other resources are still configured to reference these missing components.
Workaround
To complete the destroy operation that is stuck in this state, try to decouple the resources that depend on the missing components. To decouple the resources, use the override feature, for example by setting the override_json_string input variable.
- For VSI on VPC landing zone deployable architectures that are missing their VPC components: `override_json_string='{"vsi": []}'`
- For Red Hat OpenShift Container Platform on VPC landing zone deployable architectures: `override_json_string='{"clusters": []}'`
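For example, in a CLI-based workflow you can pass the override on the command line when you retry the destroy. This is a sketch of the VSI on VPC case; the variable name comes from the workaround above, and the command is destructive to the remaining resources.

```shell
# Retry the destroy with the override in place so that the VSI resources no
# longer reference the missing VPC components. For the Red Hat OpenShift
# variant, pass '{"clusters": []}' instead.
terraform destroy -var 'override_json_string={"vsi": []}'
```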
Changing prefix value recreates subnets and address prefixes
If you deployed Landing Zone VPC infrastructure and later update the prefix input value, Terraform might plan to destroy and recreate subnet and VPC address prefix resources.
This behavior occurs because the Landing Zone VPC module uses the prefix value as part of its internal Terraform resource keys. When the prefix changes, Terraform treats the existing resources as different objects and plans a replacement instead of renaming them in place.
You might notice output similar to:
```
resource "ibm_is_vpc_address_prefix" will be destroyed
(because key is not in for_each map)
```
Impact
- Subnets and address prefixes can be recreated.
- Dependent resources might also be replaced, depending on your configuration.
- All solutions that consume the Landing Zone VPC module are affected.
Workaround
Avoid changing the prefix value after the infrastructure is deployed. Treat the prefix as an immutable identifier for an existing environment when possible.
If you change the prefix after deployment, Terraform can destroy and recreate address prefixes, subnets, and dependent resources. Proceed only if you are prepared for this disruptive update.
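If you must change the prefix while keeping the existing infrastructure, one possible approach (not an official workaround) is to move the state entries to the new for_each keys before applying. The module path and keys below are hypothetical; read the real resource addresses from your own plan output.

```shell
# Inspect which keyed resources Terraform plans to destroy and recreate.
terraform plan | grep -E 'must be replaced|will be destroyed'

# Move each affected resource from its old key (old prefix) to the new key
# (new prefix). These addresses are placeholders; copy the real ones from
# the plan output above.
terraform state mv \
  'module.vpc.ibm_is_vpc_address_prefix.address_prefixes["old-prefix-zone-1"]' \
  'module.vpc.ibm_is_vpc_address_prefix.address_prefixes["new-prefix-zone-1"]'
```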
Limitation: cluster-autoscaler add-on not supported in Landing Zone module
If you deploy an OpenShift cluster by using the Landing Zone module, configuring the cluster-autoscaler add-on is not supported.
To set the cluster-autoscaler add-on configuration, the cluster context must be set by initializing the Kubernetes provider. Because the number of clusters is dynamic, and provider initialization cannot be dynamic, this setup isn't supported.
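The limitation can be seen in how a Kubernetes provider is wired up: a provider block must be static and cannot be created per cluster inside a for_each. A minimal sketch, assuming the `ibm_container_cluster_config` data source and a single, statically known cluster ID (the cluster ID here is a placeholder):

```hcl
# A kubernetes provider must be configured statically, once, at plan time.
# Because the Landing Zone module can create a dynamic number of clusters,
# no static provider block can exist for each of them, so the module cannot
# apply cluster-autoscaler add-on configuration on your behalf.
data "ibm_container_cluster_config" "cluster" {
  cluster_name_id = "my-cluster-id" # hypothetical; a single known cluster
}

provider "kubernetes" {
  host  = data.ibm_container_cluster_config.cluster.host
  token = data.ibm_container_cluster_config.cluster.token
}
```

With a provider configured this way outside the module, you can manage the add-on configuration for that one cluster yourself.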