Customizing ALB routing
Modify the default settings for ALBs that run the Kubernetes Ingress image.
You can customize routing for Ingress by adding Kubernetes NGINX annotations (`nginx.ingress.kubernetes.io/<annotation>`). Kubernetes NGINX annotations are always applied to all service paths in the resource, and you can't specify service names within the annotations. Custom IBM Cloud Kubernetes Service annotations (`ingress.bluemix.net/<annotation>`) are not supported.
Kubernetes Ingress controllers (ALBs) on clusters created on or after 31 January 2022 do not process Ingress resources that have snippet annotations (for example, `nginx.ingress.kubernetes.io/configuration-snippet`) by default, because all new clusters are deployed with the `allow-snippet-annotations: "false"` configuration in the ALB's ConfigMap. If you add any of the configuration snippets that are recommended here, you must edit the ALB's ConfigMap (`kube-system/ibm-k8s-controller-config`) and change `allow-snippet-annotations: "false"` to `allow-snippet-annotations: "true"`.
Adding a server port to a host header
To add a server port to the client request before the request is forwarded to your back-end app, configure a proxy to external services in a server snippet annotation or as an `ibm-k8s-controller-config` ConfigMap field.
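As an illustration only, a snippet along these lines could set the Host header to include the server port. The annotation is the community ingress-nginx `configuration-snippet`, and it assumes that snippet annotations are enabled in the ALB's ConfigMap; treat this as a sketch, not a definitive configuration.

```yaml
metadata:
  annotations:
    # Hypothetical sketch: forward the Host header with the server port appended
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Host $host:$server_port;
```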
Routing incoming requests with a private ALB
To route incoming requests to your apps with a private ALB, specify the `private-iks-k8s-nginx` class annotation in the Ingress resource. Private ALBs are configured to use resources with this class.
kubernetes.io/ingress.class: "private-iks-k8s-nginx"
Authenticating apps with App ID
Configure Ingress with IBM Cloud App ID to enforce authentication for your apps by changing specific Kubernetes Ingress fields. See Adding App ID authentication to apps for more information.
Setting the maximum client request body size
To set the maximum size of the body that the client can send as part of a request, use the following Kubernetes Ingress resource annotation.
nginx.ingress.kubernetes.io/proxy-body-size: 8m
Enabling and disabling client response data buffering
You can disable or enable the storage of response data on the ALB while the data is sent to the client. This setting is disabled by default. To enable, set the following Ingress resource annotation.
nginx.ingress.kubernetes.io/proxy-buffering: "on"
Customizing connect and read timeouts
To set the amount of time that the ALB waits to connect to and read from the back-end app before the back-end app is considered unavailable, use the following annotations.
nginx.ingress.kubernetes.io/proxy-connect-timeout: 62
nginx.ingress.kubernetes.io/proxy-read-timeout: 62
Customizing error actions
To indicate custom actions that the ALB can take for specific HTTP errors, set the `custom-http-errors` field.
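A minimal sketch of the `custom-http-errors` field in the ALB's ConfigMap, assuming the error codes 404 and 503 are the ones you want to intercept (they are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ibm-k8s-controller-config
  namespace: kube-system
data:
  # Route responses with these status codes to the default backend,
  # which can serve custom error pages
  custom-http-errors: "404,503"
```

This setting is typically paired with a default backend service that renders the custom error pages.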
Changing the default HTTP and HTTPS ports
To change the default ports for HTTP (port 80) and HTTPS (port 443) network traffic, modify each ALB service with the following Kubernetes Ingress `ibm-ingress-deploy-config` ConfigMap fields. Example field settings:
httpPort=8080
httpsPort=8443
Customizing the request header
To add header information to a client request before forwarding the request to your back-end app, use the following Kubernetes `ibm-k8s-controller-config` ConfigMap field.
proxy-set-headers: "ingress-nginx/custom-headers"
For the `custom-headers` ConfigMap requirements, see this example.
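A minimal sketch of such a `custom-headers` ConfigMap, following the community ingress-nginx pattern where each data key is a header name (the header names and values here are illustrative, and the namespace matches the `ingress-nginx/custom-headers` reference above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  # Each key becomes a header added to the proxied request
  X-Custom-Header: "my-value"
  X-Request-Start: "t=${msec}"
```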
Customizing the response header
To add header information to a client response before sending it to the client, use the following annotation.
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "Request-Id: $req_id";
Adding path definitions to external services
To add path definitions to external services, such as services hosted in IBM Cloud, configure a proxy to external services in a location snippet. Or, replace the proxy with a permanent redirect to external services.
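As an illustrative sketch (the path and external hostname are hypothetical, and snippet annotations must be enabled in the ALB's ConfigMap), a server snippet can either proxy a path to an external service or replace the proxy with a permanent redirect:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      # Proxy requests for /myapi/ to an external service
      location /myapi/ {
        proxy_pass https://api.example.com/;
      }
      # Or, instead of proxying, redirect permanently:
      # location /docs/ {
      #   return 301 https://docs.example.com$request_uri;
      # }
```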
Redirecting insecure requests
By default, insecure HTTP client requests redirect to HTTPS. To disable this setting, use the following field and annotation.
- `ibm-k8s-controller-config` ConfigMap field:
  ssl-redirect: "false"
- Ingress resource annotation:
  nginx.ingress.kubernetes.io/ssl-redirect: "false"
Enabling and disabling HTTP Strict Transport Security
Set the browser to access the domain only by using HTTPS. This option is enabled by default.
- To add max age and subdomain granularity, see this NGINX blog.
- To disable, set the `ibm-k8s-controller-config` ConfigMap field:
  hsts: "false"
Setting a maximum number of keepalive requests
To set the maximum number of requests that can be served through one keepalive connection, use the following Kubernetes `ibm-k8s-controller-config` ConfigMap field.
keep-alive-requests: 100
The default value for `keep-alive-requests` in Kubernetes Ingress is 100, which is much less than the default value of 4096 in IBM Cloud Kubernetes Service Ingress. If you migrated your Ingress setup from IBM Cloud Kubernetes Service Ingress to Kubernetes Ingress, you might need to change `keep-alive-requests` to pass existing performance tests.
Setting a maximum keepalive request timeout
To set the maximum time that a keepalive connection stays open between the client and the ALB proxy server, use the following Kubernetes `ibm-k8s-controller-config` ConfigMap field.
keep-alive: 60
Setting a maximum number of large client header buffers
To set the maximum number and size of buffers that read large client request headers, use the following Kubernetes `ibm-k8s-controller-config` ConfigMap field.
large-client-header-buffers: 4 8k
Modifying how the ALB matches the request URI
To modify the way the ALB matches the request URI against the app path, use the following Kubernetes Ingress resource annotation.
nginx.ingress.kubernetes.io/use-regex: "true"
For more info, see this blog.
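For instance, a hedged sketch of an Ingress resource that relies on regex path matching (all names and the regex are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-regex
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: example.com
    http:
      paths:
      # Matches /api/v1/..., /api/v2/..., and so on
      - path: /api/v[0-9]+/
        pathType: ImplementationSpecific
        backend:
          service:
            name: myapp
            port:
              number: 80
```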
Adding custom location block configurations
To add a custom location block configuration for a service, use the following Kubernetes Ingress resource annotation.
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "Request-Id: $req_id";
Configuring mutual authentication
To configure mutual authentication for the ALB, use the following Kubernetes Ingress resource annotations. Note that mutual authentication can't be applied to custom ports and must be applied to the HTTPS port.
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.mysite.com/error-cert.html"
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
Configuring proxy buffer size
To configure the size of the proxy buffer that reads the first part of the response, use the following Kubernetes Ingress resource annotation.
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
Configuring proxy buffer numbers
To configure the number of proxy buffers for the ALB, use the following Kubernetes Ingress resource annotation.
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
Configuring busy proxy buffer size
To configure the size of proxy buffers that can be busy, use a location snippet. For more info, see the NGINX docs.
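One possible shape for such a location snippet, using the `configuration-snippet` annotation (the 16k value is illustrative, and snippet annotations must be enabled in the ALB's ConfigMap):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # Limit the total size of buffers that can be busy sending a response
      proxy_busy_buffers_size 16k;
```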
Configuring when an ALB can pass a request
To set when the ALB can pass a request to the next upstream server, use the following Kubernetes Ingress fields.
- Global setting: `ibm-k8s-controller-config` ConfigMap fields:
  retry-non-idempotent: true
  proxy-next-upstream: error timeout http_500
- Per-resource setting: Ingress resource annotations:
  nginx.ingress.kubernetes.io/proxy-next-upstream: http_500
  nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: 50
  nginx.ingress.kubernetes.io/proxy-next-upstream-tries: 3
Rate limiting
To limit the request processing rate and the number of connections per defined key for services, use the Ingress resource annotations for rate limiting.
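For example, using the community ingress-nginx rate-limiting annotations, where the key is the client IP address (the limit values are illustrative):

```yaml
metadata:
  annotations:
    # At most 10 concurrent connections per client IP
    nginx.ingress.kubernetes.io/limit-connections: "10"
    # At most 5 requests per second per client IP
    nginx.ingress.kubernetes.io/limit-rps: "5"
```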
Removing the response header
You can remove header information that is included in the client response from the back-end app before the response is sent to the client. Configure the response header removal in a location snippet, or use the `proxy_hide_header` field as a configuration snippet in the `ibm-k8s-controller-config` ConfigMap.
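One possible sketch, using the `location-snippet` field of the ALB's ConfigMap (the header name is illustrative; `location-snippet` is the community ingress-nginx ConfigMap key for snippets injected into location blocks):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ibm-k8s-controller-config
  namespace: kube-system
data:
  location-snippet: |
    # Strip the X-Powered-By header from back-end responses
    proxy_hide_header X-Powered-By;
```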
Rewriting paths
To route incoming network traffic on an ALB domain path to a different path that your back-end app listens on, use the following Kubernetes Ingress resource annotation.
nginx.ingress.kubernetes.io/rewrite-target: /newpath
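For a path rewrite that preserves the remainder of the request path, a hedged sketch combining `rewrite-target` with a regex capture group (all names are placeholders) might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $1 is the part of the path captured by (.*) below
    nginx.ingress.kubernetes.io/rewrite-target: /newpath/$1
spec:
  rules:
  - http:
      paths:
      - path: /oldpath/(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: myapp
            port:
              number: 80
```

With this resource, a request to `/oldpath/foo` is forwarded to the back-end app as `/newpath/foo`.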
Customizing server block configurations
To add a custom server block configuration, use the following Kubernetes Ingress resource annotation.
nginx.ingress.kubernetes.io/server-snippet: |
location = /health {
return 200 'Healthy';
add_header Content-Type text/plain;
}
Allowing SSL services support to encrypt traffic
To allow SSL services support to encrypt traffic to your upstream apps that require HTTPS, use the Kubernetes Ingress resource backend protocol annotation and the backend certificate authentication annotations.
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/proxy-ssl-secret: app1-ssl-secret
nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: 5
nginx.ingress.kubernetes.io/proxy-ssl-name: mydomain.com
nginx.ingress.kubernetes.io/proxy-ssl-verify: true
Accessing apps with non-standard TCP ports
To access an app via a non-standard TCP port, follow these steps.
- Create a `tcp-services` ConfigMap to specify your TCP port, such as the following example ports. For the requirements of the `tcp-services` ConfigMap, see this blog.
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: tcp-services
    namespace: kube-system
  data:
    "9000": "<namespace>/<service>:8080"
- Create the ConfigMap in the `kube-system` namespace.
  kubectl apply -f tcp-services.yaml -n kube-system
- Specify the `tcp-services` ConfigMap as a field in the `ibm-ingress-deploy-config` ConfigMap.
  "tcpServicesConfig":"kube-system/tcp-services"
- Modify each ALB service to add the ports.
Setting a maximum number of upstream keepalive requests
To set the maximum number of requests that can be served through one keepalive connection between the ALB and the upstream server, use the following Kubernetes `ibm-k8s-controller-config` ConfigMap field.
upstream-keepalive-requests: 32
Setting the maximum upstream keepalive timeout
To set the maximum time that a keepalive connection stays open between the ALB proxy server and your app's upstream server, use the following Kubernetes `ibm-k8s-controller-config` ConfigMap field.
upstream-keepalive-timeout: 32
Customizing the ALB deployment
Customize the deployment for ALBs that run the Kubernetes Ingress image by creating an `ibm-ingress-deploy-config` ConfigMap.
- Get the names of the services that expose each ALB.
  - Classic clusters:
    kubectl get svc -n kube-system | grep alb
  - VPC clusters: In the output, look for a service name that is formatted such as `public-crc204dl7w0qf6n6sp7tug`.
    kubectl get svc -n kube-system | grep LoadBalancer
Creating a ConfigMap to customize the Ingress deployment
- Create a YAML file for an `ibm-ingress-deploy-config` ConfigMap. For each ALB ID, you can specify one or more of the following optional settings. You need to specify only the settings that you want to configure.
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ibm-ingress-deploy-config
    namespace: kube-system
  data:
    <alb1-id>: '{"deepInspect":"<true|false>", "defaultBackendService":"<service_name>", "defaultCertificate":"<namespace>/<secret_name>", "defaultConfig":"<namespace>/<configmap-name>", "enableSslPassthrough":"<true|false>", "httpPort":"<port>", "httpsPort":"<port>", "ingressClass":"<class>", "logLevel":<log_level>, "replicas":<number_of_replicas>, "tcpServicesConfig":"<kube-system/tcp-services>", "enableIngressValidation":"<true|false>"}'
    <alb2-id>: '{"deepInspect":"<true|false>", "defaultBackendService":"<service_name>", "defaultCertificate":"<namespace>/<secret_name>", "enableSslPassthrough":"<true|false>", "httpPort":"<port>", "httpsPort":"<port>", "ingressClass":"<class>", "logLevel":<log_level>, "replicas":<number_of_replicas>, "tcpServicesConfig":"<kube-system/tcp-services>", "enableIngressValidation":"<true|false>"}'
deepInspect
- Enable or disable the Ingress object security deep inspector. When enabled, ALBs inspect configuration values in Ingress resources before processing. For more information, see the ingress-nginx source code.
- This feature is available for ALB versions 1.2.0 and later and is enabled by default.
defaultBackendService
- Specify the name of an optional default service to receive requests when no host is configured or no matching host is found. This service replaces the IBM-provided default service that generates a `404` message. You might use this service to configure custom error pages or for testing connections.
defaultCertificate
- A secret for a default TLS certificate to apply to any subdomain that is configured with Ingress ALBs, in the format `secret_namespace/secret_name`. To create a secret, you can run the ibmcloud ks ingress secret create command. If a secret for a different TLS certificate is specified in the `spec.tls` section of an Ingress resource, and that secret exists in the same namespace as the Ingress resource, then that secret is applied instead of this default secret.
defaultConfig
- Specify a default ConfigMap for your ALBs. Enter the location of the ConfigMap that you want to use in the format `namespace/configmap-name`. For example, `kube-system/ibm-k8s-controller-config`.
enableAnnotationValidation
- Enable or disable Ingress object annotation validation. When enabled, ALBs validate annotation values in Ingress resources before processing. For more information, see the ingress-nginx source code.
- This feature is available for ALB versions 1.9.0 and later and is enabled by default.
enableSslPassthrough
- Enable SSL passthrough for the ALB. The TLS connection is not terminated and passes through untouched.
httpPort, httpsPort
- Expose non-default ports for the Ingress ALB by adding the HTTP or HTTPS ports that you want to open.
ingressClass
- If you specified a class other than `public-iks-k8s-nginx` or `private-iks-k8s-nginx` in your Ingress resource, specify the class.
logLevel
- Specify the log level that you want to use. Choose from the following values.
  - `2`: Shows the details by using the `diff` command to show changes in the configuration in NGINX.
  - `3`: Shows the details about the service, Ingress rule, and endpoint changes in JSON format.
  - `5`: Configures NGINX in debug mode.
- For more information about logging, see Debug Logging.
replicas
- By default, each ALB has 2 replicas. Scale up your ALB processing capabilities by increasing the number of ALB pods. For more information, see Increasing the number of ALB pod replicas.
tcpServicesConfig
- Specify a ConfigMap and the namespace that the ConfigMap is in, such as `kube-system/tcp-services`, that contains information about accessing your app service through a non-standard TCP port.
enableIngressValidation
- Enable the deployment of the Ingress validating webhook for this ALB. The webhook validates Ingress resources before they are applied to the cluster to prevent invalid configurations. (The ALB processes only Ingress resources that belong to the Ingress class it exposes.) Default: `"false"`.
- Create the `ibm-ingress-deploy-config` ConfigMap in your cluster.
  kubectl create -f ibm-ingress-deploy-config.yaml
- To pick up the changes, update your ALBs. It might take up to 5 minutes for the changes to be applied to your ALBs.
  ibmcloud ks ingress alb update -c <cluster_name_or_ID>
- If you specified non-standard HTTP, HTTPS, or TCP ports, you must open the ports on each ALB service.
  - For each ALB service that you found in step 1, edit the YAML file.
    kubectl edit svc -n kube-system <alb_svc_name>
  - In the `spec.ports` section, add the ports that you want to open. By default, ports 80 and 443 are open. If you want to keep 80 and 443 open, don't remove them from this file. Any port that is not specified is closed. Do not specify a `nodePort`. After you add the port and apply the changes, a `nodePort` is automatically assigned.
    ...
    ports:
    - name: port-80
      nodePort: 32632
      port: 80
      protocol: TCP
      targetPort: 80
    - name: port-443
      nodePort: 32293
      port: 443
      protocol: TCP
      targetPort: 443
    - name: <new_port>
      port: <port>
      protocol: TCP
      targetPort: <port>
    ...
  - Save and close the file. Your changes are applied automatically.
Customizing the Ingress class
An Ingress class associates a class name with an Ingress controller type. Use the `IngressClass` resource to customize Ingress classes.
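A minimal sketch of an `IngressClass` resource (the class name is hypothetical; the controller value follows the community ingress-nginx convention):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: custom-ingress-class
spec:
  # Identifies which Ingress controller implements this class
  controller: k8s.io/ingress-nginx
```

Ingress resources then reference the class through `spec.ingressClassName` or the `kubernetes.io/ingress.class` annotation.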
Adding App ID authentication to apps
Enforce authentication for your apps by configuring Ingress with IBM Cloud App ID.
- Choose an existing App ID instance or create a new one.
  An App ID instance can be used in only one namespace in your cluster. If you want to configure App ID for Ingress resources in multiple namespaces, repeat the steps in this section to specify a unique App ID instance for the Ingress resources in each namespace.
  - To use an existing instance, ensure that the service instance name contains only lowercase alphanumeric characters and that its length does not exceed 25 characters. To change the name, select Rename service from the more options menu on your service instance details page.
  - To provision a new App ID instance:
    - Replace the Service name with your own unique name for the service instance. The service instance name must contain only lowercase alphanumeric characters and cannot be longer than 25 characters.
    - Choose the same region that your cluster is deployed in.
    - Click Create.
-
Add redirect URLs for your app. A redirect URL is the callback endpoint of your app. To prevent phishing attacks, IBM Cloud App ID validates the request URL against the allowlist of redirect URLs.
- In the App ID management console, navigate to Manage Authentication.
- In the Identity providers tab, make sure that an identity provider is selected. If no identity provider is selected, the user is not authenticated but is issued an access token for anonymous access to the app.
- In the Authentication settings tab, add redirect URLs for your app in the format `https://<hostname>/oauth2-<App_ID_service_instance_name>/callback`. Note that all letters in the service instance name must be specified as lowercase.
If you use the IBM Cloud App ID logout function, you must append `/sign_out` to your domain in the format `https://<hostname>/oauth2-<App_ID_service_instance_name>/sign_out` and include this URL in the redirect URLs list. If you want to use a custom logout page, you must set `whitelist_domains` in the ConfigMap for OAuth2-Proxy. Call the `https://<hostname>/oauth2-<App_ID_service_instance_name>/sign_out` endpoint with the `rd` query parameter, or set the `X-Auth-Request-Redirect` header with your custom logout page. For more details, see Sign out.
- Bind the App ID service instance to your cluster. The command creates a service key for the service instance, or you can include the `--key` option to use existing service key credentials. Be sure to bind the service instance to the same namespace that your Ingress resources exist in. Note that all letters in the service instance name must be specified as lowercase.
  ibmcloud ks cluster service bind --cluster <cluster_name_or_ID> --namespace <namespace> --service <App_ID_service_instance_name> [--key <service_instance_key>]
When the service is successfully bound to your cluster, a cluster secret is created that holds the credentials of your service instance. Example CLI output:
  ibmcloud ks cluster service bind --cluster mycluster --namespace mynamespace --service appid1
  Binding service instance to namespace...
  OK
  Namespace:    mynamespace
  Secret name:  binding-<service_instance_name>
- Enable the ALB OAuth Proxy add-on in your cluster. This add-on creates and manages the following Kubernetes resources: an OAuth2-Proxy deployment for your App ID service instance, a secret that contains the configuration of the OAuth2-Proxy deployment, and an Ingress resource that configures ALBs to route incoming requests to the OAuth2-Proxy deployment for your App ID instance. The name of each of these resources begins with `oauth2-`.
  - Enable the `alb-oauth-proxy` add-on.
    ibmcloud ks cluster addon enable alb-oauth-proxy --cluster <cluster_name_or_ID>
  - Verify that the ALB OAuth Proxy add-on has a status of `Addon Ready`.
    ibmcloud ks cluster addon ls --cluster <cluster_name_or_ID>
- In the Ingress resources for apps where you want to add App ID authentication, make sure that the resource name does not exceed 25 characters in length. Then, add the following annotations to the `metadata.annotations` section.
section.-
Add the following
auth-url
annotation. This annotation specifies the URL of the OAuth2-Proxy for your App ID instance, which acts as the OIDC Relying Party (RP) for App ID. Note that all letters in the service instance name must be specified as lowercase.... annotations: nginx.ingress.kubernetes.io/auth-url: https://oauth2-<App_ID_service_instance_name>.<namespace_of_Ingress_resource>.svc.cluster.local/oauth2-<App_ID_service_instance_name>/auth ...
  - Sometimes the authentication cookie that OAuth2-Proxy uses exceeds 4 KB, so it is split into two parts. Add the following snippet to ensure that both parts can be properly updated by OAuth2-Proxy.
    ...
    annotations:
      nginx.ingress.kubernetes.io/configuration-snippet: |
        auth_request_set $_oauth2_<App_ID_service_instance_name>_upstream_1 $upstream_cookie__oauth2_<App_ID_service_instance_name>_1;
        access_by_lua_block {
          if ngx.var._oauth2_<App_ID_service_instance_name>_upstream_1 ~= "" then
            ngx.header["Set-Cookie"] = "_oauth2_<App_ID_service_instance_name>_1=" .. ngx.var._oauth2_<App_ID_service_instance_name>_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
          end
        }
    ...
- Choose which tokens to send in the `Authorization` header to your app. For more information about ID and access tokens, see the App ID documentation.
  - To send only the `ID Token`, add the following annotation:
    ...
    annotations:
      nginx.ingress.kubernetes.io/auth-response-headers: Authorization
    ...
  - To send only the `Access Token`, add the following information to the `configuration-snippet` annotation. (This extends the snippet from Step 5.2.)
    ...
    annotations:
      nginx.ingress.kubernetes.io/configuration-snippet: |
        auth_request_set $_oauth2_<App_ID_service_instance_name>_upstream_1 $upstream_cookie__oauth2_<App_ID_service_instance_name>_1;
        auth_request_set $access_token $upstream_http_x_auth_request_access_token;
        access_by_lua_block {
          if ngx.var._oauth2_<App_ID_service_instance_name>_upstream_1 ~= "" then
            ngx.header["Set-Cookie"] = "_oauth2_<App_ID_service_instance_name>_1=" .. ngx.var._oauth2_<App_ID_service_instance_name>_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
          end
          if ngx.var.access_token ~= "" then
            ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token)
          end
        }
    ...
  - To send the `Access Token` and the `ID Token`, add the following information to the `configuration-snippet` annotation. (This extends the snippet from Step 5.2.)
    ...
    annotations:
      nginx.ingress.kubernetes.io/configuration-snippet: |
        auth_request_set $_oauth2_<App_ID_service_instance_name>_upstream_1 $upstream_cookie__oauth2_<App_ID_service_instance_name>_1;
        auth_request_set $access_token $upstream_http_x_auth_request_access_token;
        auth_request_set $id_token $upstream_http_authorization;
        access_by_lua_block {
          if ngx.var._oauth2_<App_ID_service_instance_name>_upstream_1 ~= "" then
            ngx.header["Set-Cookie"] = "_oauth2_<App_ID_service_instance_name>_1=" .. ngx.var._oauth2_<App_ID_service_instance_name>_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
          end
          if ngx.var.id_token ~= "" and ngx.var.access_token ~= "" then
            ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token .. " " .. ngx.var.id_token:match("%s*Bearer%s*(.*)"))
          end
        }
    ...
- Optional: If your app supports the web app strategy in addition to or instead of the API strategy, add the `nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2-<App_ID_service_instance_name>/start?rd=$escaped_request_uri` annotation. Note that all letters in the service instance name must be in lowercase.
  - If you specify this annotation and the authentication for a client fails, the client is redirected to the URL of the OAuth2-Proxy for your App ID instance. This OAuth2-Proxy, which acts as the OIDC Relying Party (RP) for App ID, redirects the client to your App ID login page for authentication.
  - If you don't specify this annotation, a client must authenticate with a valid bearer token. If the authentication for a client fails, the client's request is rejected with a `401 Unauthorized` error message.
- Re-apply your Ingress resources to enforce App ID authentication. After an Ingress resource with the appropriate annotations is re-applied, the ALB OAuth Proxy add-on deploys an OAuth2-Proxy deployment, creates a service for the deployment, and creates a separate Ingress resource to configure routing for the OAuth2-Proxy deployment. Do not delete these add-on resources.
  kubectl apply -f <app_ingress_resource>.yaml -n <namespace>
- Verify that App ID authentication is enforced for your apps.
  - If your app supports the web app strategy: Access your app's URL in a web browser. If App ID is correctly applied, you are redirected to an App ID authentication login page.
  - If your app supports the API strategy: Specify your `Bearer` access token in the Authorization header of requests to the apps. To get your access token, see the App ID documentation. If App ID is correctly applied, the request is successfully authenticated and routed to your app. If you send requests to your apps without an access token in the Authorization header, or if the access token is not accepted by App ID, the request is rejected.
- Optional: You can customize the default behavior of the OAuth2-Proxy by creating a Kubernetes ConfigMap.
  - Create a ConfigMap YAML file that specifies values for the OAuth2-Proxy settings that you want to change.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: oauth2-<App_ID_service_instance_name>
      namespace: <ingress_resource_namespace>
    data:
      auth_logging: <true|false> # Log all authentication attempts.
      auth_logging_format: # Format for authentication logs. For more info, see https://oauth2-proxy.github.io/oauth2-proxy/configuration/overview#logging-configuration
      cookie_csrf_expire: "15m" # Expiration timeframe for the CSRF cookie. Default: "15m".
      cookie_csrf_per_request: <true|false> # Enable multiple CSRF cookies per request, making it possible to have parallel requests. Default: "false".
      cookie_domains: # A list of optional domains to force cookies to. The longest domain that matches the request's host is used. If there is no match for the request's host, the shortest domain is used. Example: sub.domain.com,example.com
      cookie_expire: "168h0m0s" # Expiration timeframe for cookies. Default: "168h0m0s".
      cookie_samesite: "" # SameSite attribute for cookies. Supported values: "lax", "strict", "none", or "".
      email_domains: "" # Authenticate IDs that use the specified email domain. To authenticate IDs that use any email domain, use "*". Default: "". Example: example.com,example2.com
      pass_access_token: <true|false> # Pass the OAuth access token to the back-end app via the X-Forwarded-Access-Token header.
      request_logging: <true|false> # Log all requests to the back-end app.
      request_logging_format: # Format for request logs. For more info, see https://oauth2-proxy.github.io/oauth2-proxy/configuration/overview#request-log-format
      scope: # Scope of the OAuth authentication. For more info, see https://oauth.net/2/scope/
      set_authorization_header: <true|false> # Set the Authorization Bearer response header when the app responds to the Ingress ALB, such as when using the NGINX auth_request mode.
      set_xauthrequest: <true|false> # Set X-Auth-Request-User, X-Auth-Request-Email, and X-Auth-Request-Preferred-Username response headers when the app responds to the Ingress ALB, such as when using the NGINX auth_request mode.
      standard_logging: <true|false> # Log standard runtime information.
      standard_logging_format: # Format for standard logs. For more info, see https://oauth2-proxy.github.io/oauth2-proxy/configuration/overview#standard-log-format
      tls_secret_name: # The name of a secret that contains the server-side TLS certificate and key to enable TLS between the OAuth2-Proxy and the Ingress ALB. By default, the TLS secret defined in your Ingress resources is used.
      whitelist_domains: # Allowed domains for redirection after authentication. Default: "". Example: example.com,*.example2.com. For more info, see https://oauth2-proxy.github.io/oauth2-proxy/configuration/overview#command-line-options
      oidc_extra_audiences: # Additional audiences that are allowed to pass verification.
      cookie_refresh: # Refresh the cookie after this duration. Example: "15m". To use this feature, you must enable "Refresh token" for the App ID instance. For more info, see /docs/appid?topic=appid-managing-idp&interface=ui#idp-token-lifetime
  - Apply the ConfigMap resource to your add-on. Your changes are applied automatically.
    kubectl apply -f oauth2-<App_ID_service_instance_name>.yaml
For the list of changes for each ALB OAuth Proxy add-on version, see the IBM Cloud ALB OAuth Proxy add-on change log.
Upgrading ALB OAuth Proxy add-on
To upgrade the ALB OAuth Proxy add-on, you must first disable the add-on, and then re-enable the add-on and specify the version that you want. The upgrade does not interrupt traffic because the supervised OAuth2-Proxy instances remain on the cluster even while the add-on is disabled.
- Disable the add-on.
  ibmcloud ks cluster addon disable alb-oauth-proxy --cluster <cluster_name_or_ID>
- List the available add-on versions and decide which version you want to use.
  ibmcloud ks cluster addon versions --addon alb-oauth-proxy
- Enable the add-on and specify the `--version` option. If you don't specify a version, the default version is enabled.
  ibmcloud ks cluster addon enable alb-oauth-proxy --cluster <cluster_name_or_ID> [--version <version>]
Preserving the source IP address
By default, the source IP addresses of client requests are not preserved by the Ingress ALB. To preserve source IP addresses, you can enable the PROXY protocol in VPC clusters or change the `externalTrafficPolicy` in classic clusters.
Enabling the PROXY protocol in VPC clusters
To preserve the source IP address of the client request in a VPC cluster, you can enable the NGINX PROXY protocol for all load balancers that expose Ingress ALBs in your cluster.
The PROXY protocol enables load balancers to pass client connection information that is contained in headers on the client request, including the client IP address, the proxy server IP address, and both port numbers, to ALBs.
-
Enable the PROXY protocol. For more information about this command's parameters, see the CLI reference. After you run this command, new load balancers are created with the updated PROXY protocol configuration. Two unused IP addresses for each load balancer must be available in each subnet during the load balancer recreation. After these load balancers are created, the existing ALB load balancers are deleted. This load balancer recreation process might cause service disruptions.
ibmcloud ks ingress lb proxy-protocol enable --cluster <cluster_name_or_ID> --cidr <subnet_CIDR> --header-timeout <timeout>
-
Confirm that the PROXY protocol is enabled for the load balancers that expose ALBs in your cluster.
ibmcloud ks ingress lb get --cluster <cluster_name_or_ID>
-
To later disable the PROXY protocol, you can run the following command:
ibmcloud ks ingress lb proxy-protocol disable --cluster <cluster_name_or_ID>
Changing the externalTrafficPolicy in classic clusters
Preserve the source IP address for client requests in a classic cluster.
In classic clusters, increasing the ALB replica count to more than 2 increases the number of replicas, but when the `externalTrafficPolicy` is configured as `Local`, any replicas beyond 2 are not used. Only 2 load balancer pods are present on the cluster (in an active-passive setup), and because of this traffic policy, they forward incoming traffic only to the ALB pod on the same node.
By default, the source IP address of the client request is not preserved. When a client request to your app is sent to your cluster, the request is routed to a pod for the load balancer service that exposes the ALB. If no app pod exists on the same worker node as the load balancer service pod, the load balancer forwards the request to an app pod on a different worker node. The source IP address of the packet is changed to the public IP address of the worker node where the app pod runs.
To preserve the original source IP address of the client request, you can enable source IP preservation. Preserving the client’s IP is useful, for example, when app servers have to apply security and access-control policies.
When source IP preservation is enabled, load balancers shift from forwarding traffic to an ALB pod on a different worker node to an ALB pod on the same worker node. Your apps might experience downtime during this shift. If you disable an ALB, any source IP changes you make to the load balancer service that exposes the ALB are lost. When you re-enable the ALB, you must enable source IP again.
To enable source IP preservation, edit the load balancer service that exposes an Ingress ALB:
-
Enable source IP preservation for a single ALB or for all the ALBs in your cluster.
-
To set up source IP preservation for a single ALB:
-
Get the ID of the ALB for which you want to enable source IP. The ALB services have a format similar to public-cr18e61e63c6e94b658596ca93d087eed9-alb1 for a public ALB or private-cr18e61e63c6e94b658596ca93d087eed9-alb1 for a private ALB.
kubectl get svc -n kube-system | grep alb
-
Open the YAML for the load balancer service that exposes the ALB.
kubectl edit svc <ALB_ID> -n kube-system
-
Under spec, change the value of externalTrafficPolicy from Cluster to Local.
-
Save and close the configuration file. The output is similar to the following:
service "public-cr18e61e63c6e94b658596ca93d087eed9-alb1" edited
-
-
To set up source IP preservation for all public ALBs in your cluster, run the following command:
kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^public" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'; done
Example output
"public-cr18e61e63c6e94b658596ca93d087eed9-alb1", "public-cr17e61e63c6e94b658596ca92d087eed9-alb2" patched
-
To set up source IP preservation for all private ALBs in your cluster, run the following command:
kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^private" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'; done
Example output
"private-cr18e61e63c6e94b658596ca93d087eed9-alb1", "private-cr17e61e63c6e94b658596ca92d087eed9-alb2" patched
-
-
Verify that the source IP is being preserved in your ALB pods logs.
- Get the ID of a pod for the ALB that you modified.
kubectl get pods -n kube-system | grep alb
- Open the logs for that ALB pod. Verify that the IP address for the client field is the client request IP address instead of the load balancer service IP address.
kubectl logs <ALB_pod_ID> nginx-ingress -n kube-system
-
Now, when you look up the headers for the requests that are sent to your back-end app, you can see the client IP address in the x-forwarded-for header.
-
If you no longer want to preserve the source IP, you can revert the changes that you made to the service.
- To revert source IP preservation for your public ALBs:
kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^public" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'; done
- To revert source IP preservation for your private ALBs:
kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^private" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'; done
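When your app inspects the x-forwarded-for header mentioned in the steps above, note that the header can carry a comma-separated chain of addresses when the request passed through multiple proxies; the left-most entry is the original client. A minimal parsing sketch with a made-up header value:

```shell
# Hypothetical X-Forwarded-For value: client first, then an intermediate proxy
xff='203.0.113.7, 10.23.4.1'
# Strip everything from the first comma onward to keep the client address
client_ip=${xff%%,*}
echo "$client_ip"
# prints: 203.0.113.7
```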
Configuring SSL protocols and SSL ciphers at the HTTP level
Enable SSL protocols and ciphers at the global HTTP level by editing the ibm-k8s-controller-config
ConfigMap.
For example, if you still have legacy clients that require TLS 1.0 or 1.1 support, you must manually enable these TLS versions to override the default setting of TLS 1.2 and TLS 1.3 only.
When you specify the enabled protocols for all hosts, the TLSv1.1 and TLSv1.2 parameters (1.1.13, 1.0.12) work only when OpenSSL 1.0.1 or higher is used. The TLSv1.3 parameter (1.13.0) works only when OpenSSL 1.1.1 built with TLSv1.3 support is used.
To edit the ConfigMap to enable SSL protocols and ciphers:
-
Edit the configuration file for the ibm-k8s-controller-config ConfigMap resource.
kubectl edit cm ibm-k8s-controller-config -n kube-system
-
Add the SSL protocols and ciphers. Format ciphers according to the OpenSSL library cipher list format.
apiVersion: v1
data:
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
  ssl-ciphers: "HIGH:!aNULL:!MD5:!CAMELLIA:!AESCCM:!ECDH+CHACHA20"
kind: ConfigMap
metadata:
  name: ibm-k8s-controller-config
  namespace: kube-system
-
Save the configuration file.
-
Verify that the ConfigMap changes were applied. The changes are applied to your ALBs automatically.
kubectl get cm ibm-k8s-controller-config -n kube-system -o yaml
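Before applying a cipher string in the ConfigMap, you can expand it locally with the openssl CLI to confirm that it parses and to preview which cipher suites it selects. This sketch runs on your workstation, not in the cluster, and the exact suite list depends on your local OpenSSL build:

```shell
# Expand the cipher string from the example ConfigMap; openssl exits
# nonzero if the string has a syntax error
openssl ciphers 'HIGH:!aNULL:!MD5:!CAMELLIA:!AESCCM:!ECDH+CHACHA20' | tr ':' '\n' | head -5
```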
Sending your custom certificate to legacy clients
If you have legacy devices that don't support Server Name Indication (SNI) and you use a custom TLS certificate in your Ingress resources, you must edit the ALB's server settings to use your custom TLS certificate and custom TLS secret.
When you create a classic cluster, a Let's Encrypt certificate is generated for the default Ingress secret that IBM provides. If you create a custom secret in your cluster and specify this custom secret for TLS termination in your Ingress resources, the Ingress ALB sends the certificate for your custom secret to the client instead of the default Let's Encrypt certificate. However, if a client does not support SNI, the Ingress ALB defaults to the Let's Encrypt certificate because the default secret is listed in the ALB's default server settings. To send your custom certificate to devices that don't support SNI, complete the following steps to change the ALB's default server settings to your custom secret.
The Let's Encrypt certificates that are generated by default aren't intended for production usage. For production workloads, bring your own custom certificate.
-
Edit the alb-default-server Ingress resource.
kubectl edit ingress alb-default-server -n kube-system
-
In the spec.tls section, change the value of the hosts.secretName setting to the name of your custom secret that contains your custom certificate. Example:
spec:
  rules:
  ...
  tls:
  - hosts:
    - invalid.mycluster-<hash>-0000.us-south.containers.appdomain.cloud
    secretName: <custom_secret_name>
-
Save the resource file.
-
Verify that the resource now points to your custom secret name. The changes are applied to your ALBs automatically.
kubectl get ingress alb-default-server -n kube-system -o yaml
Tuning ALB performance
To optimize performance of your Ingress ALBs, you can change the default settings according to your needs.
Enabling log buffering and flush timeout
By default, the Ingress ALB logs each request as it arrives. If you have an environment that is heavily used, logging each request as it arrives can greatly increase disk I/O utilization. To avoid continuous disk I/O, you can enable log buffering
and flush timeout for the ALB by editing the ibm-k8s-controller-config
Ingress ConfigMap. When buffering is enabled, instead of performing a separate write operation for each log entry, the ALB buffers a series of entries and
writes them to the file together in a single operation.
-
Edit the ibm-k8s-controller-config ConfigMap.
kubectl edit cm ibm-k8s-controller-config -n kube-system
-
Set the threshold for when the ALB should write buffered contents to the log.
- Buffer size: Add the buffer field and set it to how much log memory can be held in the buffer before the ALB writes the buffered contents to the log file. For example, if the default value of 100KB is used, the ALB writes buffer contents to the log file every time the buffer reaches 100KB of log content.
- Time interval: Add the flush field and set it to how often the ALB should write to the log file. For example, if the default value of 5m is used, the ALB writes buffer contents to the log file once every 5 minutes.
- Time interval or buffer size: When both flush and buffer are set, the ALB writes buffer content to the log file based on whichever threshold parameter is met first.
apiVersion: v1
kind: ConfigMap
data:
  access-log-params: "buffer=100KB, flush=5m"
metadata:
  name: ibm-k8s-controller-config
...
-
Save and close the configuration file. The changes are applied to your ALBs automatically.
-
Verify that the logs for an ALB now contain buffered content that is written according to the memory size or time interval you set.
kubectl logs -n kube-system <ALB_ID> -c nginx-ingress
Changing the number or duration of keepalive connections
Keepalive connections can have a major impact on performance by reducing the CPU and network usage that is needed to open and close connections. To optimize the performance of your ALBs, you can change the maximum number of keepalive connections between the ALB and the client and how long the keepalive connections can last.
-
Edit the ibm-k8s-controller-config ConfigMap.
kubectl edit cm ibm-k8s-controller-config -n kube-system
-
Change the values of keep-alive-requests and keep-alive.
- keep-alive-requests: The number of keepalive client connections that can stay open to the Ingress ALB. The default is 100.
- keep-alive: The timeout, in seconds, during which the keepalive client connection stays open to the Ingress ALB. The default is 75.
apiVersion: v1
data:
  keep-alive-requests: "100"
  keep-alive: "75"
kind: ConfigMap
metadata:
  name: ibm-k8s-controller-config
...
-
Save and close the configuration file. The changes are applied to your ALBs automatically.
-
Verify that the ConfigMap changes were applied.
kubectl get cm ibm-k8s-controller-config -n kube-system -o yaml
Changing the number of simultaneous connections or worker processes
Change the default setting for how many simultaneous connections the NGINX worker processes for one ALB can handle or how many worker processes can occur for one ALB.
Each ALB has NGINX worker processes that process the client connections and communicate with the upstream servers for the apps that the ALB exposes. By changing the number of worker processes per ALB or how many connections the worker processes
can handle, you can manage the maximum number of clients that an ALB can handle. Calculate your maximum client connections with the following formula: maximum clients = worker_processes * worker_connections
.
- The max-worker-connections field sets the maximum number of simultaneous connections that can be handled by the NGINX worker processes for one ALB. The default value is 16384. Note that the max-worker-connections parameter includes all connections that the ALB proxies, not just connections with clients. Additionally, the actual number of simultaneous connections can't exceed the limit on the maximum number of open files, which is set by the max-worker-open-files parameter. If you set the value of max-worker-connections to 0, the value for max-worker-open-files is used instead.
- The worker-processes field sets the maximum number of NGINX worker processes for one ALB. The default value is "auto", which indicates that the number of worker processes matches the number of cores on the worker node where the ALB is deployed. You can change this value to a number if your worker processes must perform high levels of I/O operations.
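The maximum-clients formula can be checked with simple shell arithmetic. The values here are assumptions for illustration: a 4-core worker node (so "auto" resolves to 4 worker processes) and the default 16384 connections per worker:

```shell
# maximum clients = worker_processes * worker_connections
worker_processes=4        # assumed: "auto" resolved to 4 cores
worker_connections=16384  # default max-worker-connections
echo "maximum clients: $(( worker_processes * worker_connections ))"
# prints: maximum clients: 65536
```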
-
Edit the ibm-k8s-controller-config ConfigMap.
kubectl edit cm ibm-k8s-controller-config -n kube-system
-
Change the value of max-worker-connections or worker-processes.
apiVersion: v1
data:
  max-worker-connections: "16384"
  worker-processes: "auto"
kind: ConfigMap
metadata:
  name: ibm-k8s-controller-config
...
-
Save the configuration file. The changes are applied to your ALBs automatically.
-
Verify that the ConfigMap changes were applied.
kubectl get cm ibm-k8s-controller-config -n kube-system -o yaml
Changing the number of open files for worker processes
Change the default maximum number of files that can be opened by each NGINX worker process for an ALB.
Each ALB has NGINX worker processes that process the client connections and communicate with the upstream servers for the apps that the ALB exposes. If your worker processes are hitting the maximum number of files that can be opened, you might
see a Too many open files
error in your NGINX logs. By default, the max-worker-open-files
parameter is set to 0
, which indicates that the value from the following formula is used: system limit of maximum open files / worker-processes - 1024
.
If you change the value to another integer, the formula no longer applies.
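As a sketch of the default calculation, using assumed values of a 1048576 system open-file limit and 4 worker processes:

```shell
# default max-worker-open-files = system limit of maximum open files / worker-processes - 1024
system_limit=1048576   # assumed; check the actual limit on your worker node
worker_processes=4     # assumed
echo $(( system_limit / worker_processes - 1024 ))
# prints: 261120
```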
-
Edit the ibm-k8s-controller-config ConfigMap.
kubectl edit cm ibm-k8s-controller-config -n kube-system
-
Change the value of max-worker-open-files.
apiVersion: v1
data:
  max-worker-open-files: "0"
kind: ConfigMap
metadata:
  name: ibm-k8s-controller-config
...
-
Save the configuration file. The changes are applied to your ALBs automatically.
-
Verify that the ConfigMap changes were applied.
kubectl get cm ibm-k8s-controller-config -n kube-system -o yaml
Tuning kernel performance
To optimize performance of your Ingress ALBs, you can also change the Linux kernel sysctl
parameters on worker nodes. Worker nodes are automatically provisioned with optimized
kernel tuning, so change these settings only if you have specific performance optimization requirements.
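Before deciding whether a sysctl parameter needs tuning, you can read its current value from /proc/sys on the node in question (for example, from a shell on the worker node or from a pod scheduled there). The parameter shown, net.core.somaxconn, caps the listen backlog that NGINX can use; it is only an example of the lookup pattern:

```shell
# Read the current listen-backlog ceiling; the /proc/sys path mirrors
# the sysctl name net.core.somaxconn with dots replaced by slashes
cat /proc/sys/net/core/somaxconn
```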