Troubleshooting for DevSecOps

Use these tips to help troubleshoot problems that you might run into when you use DevSecOps.

General troubleshooting methods

  • Reload the page in case the UI is slow or the logs fail to load.

  • Check for outages on the status page

  • Run the pipeline again.


IBM environment issues

Pipeline runs are slow due to Git rate limiting

Pipeline runs appear slow and take longer than usual to complete.

Also, the following entry can be found in various places of the logs:

Unable to use this tool because the git API rate limit is exceeded. Please try again in <n> minutes.

Pipelines internally make Git API requests (setting Git statuses, creating or updating issues, and so on). Git enforces a rate limit on API requests, per Git token, per hour. When this limit is about to be reached, an internal pipeline mechanism pauses further requests - pausing the pipeline run as well - to prevent the run from aborting early. This can result in long-running pipeline runs.

To overcome this Git rate limiting issue:

  1. Set the batched-evidence-collection environment property to 1. See the corresponding section in the IBM Cloud documentation.
  2. Migrate from the Git evidence locker to the COS evidence locker (also known as COS only). See the corresponding section in the IBM Cloud documentation.
  3. Use different Git tokens for pipelines, triggers, or both.
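Independently of these mitigations, you can check how close a token is to its limit by inspecting the rate-limit headers that most Git servers return. The helper below is a hypothetical sketch that assumes GitLab-style `RateLimit-Remaining` headers; the endpoint URL and header names depend on your Git provider.

```shell
# Hypothetical helper: extract the remaining request budget from the
# headers of a `curl -sI` response. In a live check you would pipe in:
#   curl -sI --header "PRIVATE-TOKEN: $TOKEN" "https://<git-host>/api/v4/projects"
remaining_from_headers() {
  # Case-insensitive header match; strip the name, spaces, and any CR.
  grep -i '^ratelimit-remaining:' | cut -d':' -f2 | tr -d ' \r'
}

# Demonstration with captured sample headers:
printf 'HTTP/2 200\nratelimit-limit: 600\nratelimit-remaining: 598\n' | remaining_from_headers
```

A low remaining value shortly after pipeline runs start is a strong sign that the pauses you see are rate-limit throttling.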

The check-registry step of the containerized task fails with error

Storage quota error

IBM Cloud Container Registry enforces a limited storage quota, which can be exceeded when too many images are pushed.

  1. Go to images and delete the images that are not required.
  2. Re-run the pipeline.

You can check your quota limits and usage by using the following command:

ibmcloud cr quota
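As an alternative to deleting images in the console, the container-registry CLI plug-in can list and delete images. The commands below are a sketch; the image name is a placeholder that you replace with an image from your own namespace.

```shell
# List the images in your namespaces to find candidates for deletion.
ibmcloud cr images

# Delete an image you no longer need (placeholder name shown).
ibmcloud cr image-rm <region>.icr.io/<namespace>/<repository>:<tag>

# Confirm that usage dropped below the quota.
ibmcloud cr quota
```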

Logs do not show for step

Logs do not show

This is an issue with the Tekton environment.

Try reloading the page. If the logs still do not appear, download them by using the download button.

Download logs

Template and pipeline issues

Task is cancelled because the base image cannot be accessed

Base image is not accessed

Check whether your artifactory credentials are correct. A new artifactory token can be created here. You can create a secret manually by running:

kubectl create secret docker-registry mysecret \
--dry-run=client \
--docker-server=wcp-compliance-automation-team-docker-local.artifactory.swg-devops.com  \
--docker-username=<username> \
--docker-password=<artifactory token> \
--docker-email=<email> \
-o yaml

It outputs something similar to the following:

apiVersion: v1
data:
  .dockerconfigjson: <your secret>
kind: Secret
metadata:
  creationTimestamp: null
  name: mysecret
type: kubernetes.io/dockerconfigjson

In the pipeline properties, update the artifactory-dockerconfigjson parameter with the .dockerconfigjson value.

Update artifactory-dockerconfigjson
Update artifactory-dockerconfigjson

For more information, check out the kubectl documentation on creating a secret.

Pipeline fails early

A pipeline fails early with the following message:

Pipeline could not run, resource failed to apply - Kind: "Secret", Name: "pipeline-pull-secret" ResourceError

In this case, the failure occurred because the pipeline did not boot. Therefore, no logs are available.

The dockerconfig.json secret that this pipeline uses to pull Docker images from IBM Container Registry is not correct.

This secret might be malformed, or the API key that is associated with it might have been rotated or revoked.

Generate a new dockerconfig.json then use this new secret value in your pipeline (either as a pipeline parameter, or stored in Secrets Manager).

To generate a new dockerconfig.json, run the following command:

kubectl create secret docker-registry my-registry-secret \
 -o json \
 --dry-run=client \
 --docker-server=icr.io \
 --docker-username=iamapikey \
 --docker-email=john-doe@ibm.com \
 --docker-password=<apikey> \
  | jq -r '.data[".dockerconfigjson"]'

Where <apikey> is your IBM Cloud API key or a Service ID API key.
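Before adding the generated value to the pipeline, you can sanity-check it by decoding it and confirming that it contains an auths entry for icr.io. This assumes base64 and jq are installed; the sample below builds a dummy value purely for illustration.

```shell
# Dummy base64-encoded dockerconfig for illustration only; in practice,
# use the value produced by the kubectl command shown earlier.
sample=$(printf '{"auths":{"icr.io":{"username":"iamapikey"}}}' | base64 -w0)

# Decode and list the registries the secret can authenticate against.
echo "$sample" | base64 -d | jq -r '.auths | keys[]'
```

If the registry host is missing or the value fails to decode as JSON, regenerate the secret rather than debugging the pipeline run.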

Pipeline cannot pull images from multiple artifactory repositories

The pipeline was successful in pulling images from one repository but not from another repository.

The pipeline fails because its secret is configured to authenticate against only a single repository.

Manually create a new artifactory dockerconfigjson secret to support authentication against multiple repositories.

To support authentication for pulling images from multiple repositories in Artifactory, generate a new dockerconfigjson and add it as a secret-type artifactory-dockerconfigjson environment property to one or more pipelines.

The following sample script generates an artifactory dockerconfigjson that provides authentication details for two different Artifactory repositories. You can customize this script.

Prerequisites

The kubectl and jq commands must be installed.

Steps

  1. Open a text editor that saves files with LF (Line Feed) line endings.

  2. Create a file and copy the contents of the following script:

    dockerconfig_1=$(kubectl create secret docker-registry my-registry-secret \
    --output json \
    --dry-run=client \
    --docker-server="<artifactory_repo_host>" \
    --docker-username="<email>" \
    --docker-email="<email>" \
    --docker-password="<artifactory_token>" \
    | jq -r '.data[".dockerconfigjson"]')
    
    dockerconfig_2=$(kubectl create secret docker-registry my-registry-secret \
    --output json \
    --dry-run=client \
    --docker-server="<second_repo_host>" \
    --docker-username="<email>" \
    --docker-email="<email>" \
    --docker-password="<second_artifactory_token>" \
    | jq -r '.data[".dockerconfigjson"]')
    
    echo "$dockerconfig_1" | base64 -d > first_secret.json
    echo "$dockerconfig_2" | base64 -d > second_secret.json
    new_dockerconfig=$(jq -s '.[0] * .[1]' first_secret.json second_secret.json | base64 -w0)
    echo "${new_dockerconfig}" > final_dockerconfig.txt
    
  3. Replace the placeholder values with the actual authentication details:

    • Replace <artifactory_repo_host> with the link to the first repository.
    • Replace <artifactory_token> with the authentication token for the first repository.
    • Replace <email> with the email associated with the authentication.
    • Replace <second_repo_host> with the link to the second repository.
    • Replace <second_artifactory_token> with the authentication token for the second repository.
  4. Save the file.

  5. Ensure that the file is saved in a directory with write permissions.

  6. Run the script.

  7. Add the contents of final_dockerconfig.txt as a secret in the pipeline environment properties for artifactory-dockerconfigjson. If you are using Secrets Manager or Key Protect, save the contents of this file by using appropriate techniques.
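The heart of the script above is jq's `*` operator, which deep-merges two objects so that the auths maps from both repositories end up in a single document. A minimal, self-contained illustration with hypothetical hosts and dummy auth values:

```shell
# Two decoded dockerconfig documents with hypothetical registry hosts.
first='{"auths":{"repo-one.example.com":{"auth":"QUFB"}}}'
second='{"auths":{"repo-two.example.com":{"auth":"QkJC"}}}'

# jq -s (slurp) reads both documents into an array; `*` deep-merges them,
# producing one config with an auths entry for each registry.
printf '%s %s' "$first" "$second" | jq -s '.[0] * .[1]'
```

Because `*` merges recursively, entries with different registry hosts are both preserved; if both documents contained the same host, the second document's entry would win.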

CRA or Docker build fails due to missing submodule files

When a pipeline stage such as CRA or Docker build fails, you might see an error message similar to:

failed to calculate checksum of ref moby::...: failed to walk /var/lib/docker/tmp/buildkit-mount.../common-dev-assets/module-assets/ci: lstat ... no such file or directory

This error occurs because your repository contains Git submodules, but pipelines do not clone submodules by default. Each pipeline stage runs in its own container and performs a fresh checkout of the repository, so submodule contents are missing unless explicitly initialized.

To resolve this, you must ensure the Git submodule is initialized in every stage that requires it. For CRA specifically, you can add the submodule initialization to your custom CRA script.

For example, update your script to include:

git submodule update --init --recursive

This guarantees the submodule is available before the CRA build process runs.
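For example, a build stage in .pipeline-config.yaml could run the initialization before its build commands. The stage name and build command below are illustrative; adapt them to your own configuration.

```yaml
containerize:
  script: |
    #!/usr/bin/env bash
    # Initialize submodules first, because each stage starts from a
    # fresh checkout that does not include submodule contents.
    git submodule update --init --recursive
    # ... your existing build commands, for example:
    # docker build -t "$IMAGE" .
```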

Image signing issues

If your image signing task fails, see the image signing documentation to verify that the signing key was correctly generated and stored.

Issues related to Dynamic Scan Stage not defined in pipeline configuration

The CI Pipeline run fails with an error.

CI Pipeline run failing for Dynamic-Scan stage

The error occurs when the CI pipeline configuration does not contain a task definition to run the dynamic scan. Add the following snippet to .pipeline-config.yaml and customize the step to suit your application.

dynamic-scan:
  dind: true
  abort_on_failure: false
  image: icr.io/continuous-delivery/pipeline/pipeline-base-image:2.12@sha256:ff4053b0bca784d6d105fee1d008cfb20db206011453071e86b69ca3fde706a4
  script: |
    #!/usr/bin/env bash
    echo "Please insert script to invoke/execute dynamic scan tool like OWASP ZAP on the built and deployed application."

For more information about the stages, see Custom scripts.

Getting support

  • You can review Stack Overflow to see whether other users ran into the same problem. When using the forum to ask a question, tag your question with "ibm-cloud" and "DevSecOps" so that it is seen by the IBM Cloud development teams.
  • IBM Cloud's AI assistant, which is powered by IBM's watsonx, is designed to help you learn about working in IBM Cloud and building solutions with the available catalog of offerings. See Getting help from the AI assistant.
  • If you still can't resolve the problem, you can open a support case. For information about opening a support case, or about case severities and response times, see Working with support cases or Escalating support cases.