Working with Tekton pipelines
Tekton Pipelines is an open source project that you can use to configure and run Continuous Integration and Continuous Delivery pipelines within a Kubernetes cluster. Tekton pipelines are defined in yaml files, which are typically stored in a Git repository (repo).
Tekton provides a set of Kubernetes Custom Resource extensions for defining pipelines. The following basic Tekton Pipeline resources are included in these extensions:
Resource | Description |
---|---|
Task | Defines a set of build steps such as compiling code, running tests, and building and deploying images. |
TaskRun | Instantiates a Task for execution with specific inputs, outputs, and execution parameters. You can start the task on its own or as part of a pipeline. |
Pipeline | Defines the set of tasks that compose a pipeline. |
PipelineRun | Instantiates a Pipeline for execution with specific inputs, outputs, and execution parameters. |
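For illustration, the following is a minimal sketch of a Task and a Pipeline that references it. The resource names (`echo-task`, `sample-pipeline`) and the step details are hypothetical, not part of any shipped sample:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-task              # hypothetical name
spec:
  steps:
    - name: echo
      image: ubuntu            # any container image can serve as a step
      script: |
        echo "Hello from Tekton"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: sample-pipeline        # hypothetical name
spec:
  tasks:
    - name: run-echo
      taskRef:
        name: echo-task        # references the Task defined above
```

A TaskRun or PipelineRun resource then instantiates these definitions with concrete inputs when a run is requested.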
You can take advantage of the following features when you use Tekton Pipelines:
- Cloud Native: Tekton Pipelines run on Kubernetes, use Kubernetes clusters as a first class type, and use containers as their building blocks.
- Decoupled: You can use one pipeline to deploy to any Kubernetes cluster. You can run the tasks that comprise a pipeline in isolation. And you can switch resources (such as Git repos) between pipeline runs.
- Typed: You can switch implementations for specific types of resources, such as images.
The Tekton Pipelines project is a beta release. You must update your pipeline with each new version of Tekton. For more information about the latest version of Tekton, see https://github.com/tektoncd/pipeline/releases.
IBM Cloud® Continuous Delivery provides two types of delivery pipelines that you can use to build, test, and deploy your applications.
- Classic: Classic delivery pipelines are created graphically, with the status embedded in the pipeline diagram. These pipelines can run on shared workers in the cloud or on private workers that run on your own Kubernetes cluster.
- Tekton: Tekton delivery pipelines are created within yaml files that define pipelines as a set of Kubernetes resources. You can edit those yaml files to change the behavior of a pipeline. Tekton pipelines can run on private workers that run on your own cluster. They can also run on IBM-managed workers on the public cloud. The Tekton integration provides a dashboard that you can use to view the status of Tekton pipeline runs and trigger new runs. It also provides mechanisms for specifying the pipeline triggers, the pipeline definitions, the worker on which the pipeline is run, and the pipeline properties.
Both types of pipelines isolate jobs or steps from one another by running in separate containers, and by using an image that you choose. Classic and Tekton pipelines both exist in a toolchain and depend on that toolchain to add more tool integrations that are used in the build, test, and deployment process.
On 20 November 2020, Docker Hub introduced rate limiting on anonymous image pulls. This change might impact users who run tasks that reference Docker Hub-hosted images. Use an alternative registry, such as IBM Cloud Container Registry.
Prerequisites
Before you add and run a Tekton pipeline, make sure that you have the following resources in place:

- A toolchain that contains the following tool integrations:
  - A repo tool integration (such as the GitHub tool integration) that contains your Tekton pipeline code, including a Tekton yaml file. Find sample pipeline and task definitions on GitHub. For more information about getting started with Tekton pipelines, see Tekton Pipelines.
  - Optional. If you are not using the default shared pipeline worker, a Delivery Pipeline Private Worker tool integration that references your Kubernetes cluster. For more information about private workers, see Installing Delivery Pipeline Private Workers.
- IBM Cloud CLI installed locally.
- kubectl installed locally.
- A Kubernetes cluster (version 1.22 or higher), such as an IBM Cloud® Kubernetes Service cluster.

The toolchain and the Delivery Pipeline Private Worker tool integration must be in the same region.
Creating a Delivery Pipeline for Tekton by using the console
When you configure a Delivery Pipeline tool integration, you can select the type of pipeline that you want to create.
1. If you don't have a toolchain, select a template to create one. Depending on the template that you use, different fields might be available. Review the default field values and, if needed, change those settings.
2. If you have a toolchain and are adding this tool integration to it, from the IBM Cloud console, click the Menu icon > Platform Automation > Toolchains. On the Toolchains page, click the toolchain to open its Overview page. Alternatively, on your app's Overview page, on the Continuous delivery card, click View toolchain. Then, click Overview.
3. Add the Delivery Pipeline integration to your toolchain:
   a. Click Add tool.
   b. In the Tool Integrations section, click Delivery Pipeline.
4. Specify a name for your new pipeline.
5. Select Tekton to create a Tekton Delivery Pipeline. You can view the output of Tekton pipeline runs on a defined Kubernetes cluster, with support for configuring the pipeline definitions repos, the pipeline triggers, where the pipeline runs, and simple secrets.
6. If you plan to use your pipeline to deploy a user interface, select the Show apps in the View app menu checkbox. All of the apps that your pipeline creates are shown in the View App list on the toolchain's Overview page.
7. Click Create Integration to add the Delivery Pipeline to your toolchain.
Configuring a Delivery Pipeline for Tekton by using the console
1. From your toolchain's Overview page, on the Delivery pipelines card, click the Delivery Pipeline to open the Tekton Delivery Pipeline Overview page.
2. Click Settings. In the Definitions section, complete the following tasks:
   a. Specify the Git repo and URL that contains the Tekton pipeline definition and related artifacts. If your repo is not available, return to the toolchain Overview page and add your repo.
   b. Select the branch within your Git repo that you want to use, or type a tag.
   c. Specify the path to your pipeline definition within the Git repo. You can reference a specific definition within the same repo. You can also add multiple definition repos, if they are integrated with the toolchain.
   d. Save your changes.
   The pipeline definition is updated automatically.
The computed pipeline definition size limit is 1 MB. If you encounter errors when you save or run your pipeline, you might need to reduce the size of your pipeline definition, or split it into multiple pipelines.
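As a rough local heuristic, you can total the size of the YAML files in a clone of your definitions repo before you push. This is a sketch under assumptions: the function names are our own, and the service computes the size from the rendered pipeline resources, so file sizes are only an approximation of the 1 MB limit.

```python
from pathlib import Path

def definition_size_bytes(repo_dir: str, pattern: str = "**/*.yaml") -> int:
    """Sum the sizes of all YAML files under a local clone of the
    definitions repo; a rough proxy for the computed definition size."""
    return sum(p.stat().st_size for p in Path(repo_dir).glob(pattern))

def within_limit(repo_dir: str, limit: int = 1_000_000) -> bool:
    """Warn before pushing when the total approaches the 1 MB limit."""
    return definition_size_bytes(repo_dir) <= limit
```

If the check fails, consider splitting the definitions into multiple pipelines, as described above.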
3. In the Worker section, select the IBM Managed shared worker or the private worker that you want to use to run your Tekton pipeline. For more information about private workers, see Working with Delivery Pipeline Private Workers.
   The private worker must be defined in the same toolchain as your Tekton pipeline.
4. In the Environment properties section, click Add and select a property type to define your own environment property. For example, you can define an `API_KEY` property that passes an API key that is used by all of the scripts in the pipeline to access IBM Cloud resources. You can add the following types of properties:
   - Enumeration: A property key with a value that can be selected from a user-defined list of options.
   - Secure value: A property key with a single-line value that is secured with AES-128 encryption. This value is displayed by using the asterisk character. Alternatively, you can click the key icon to select a secret from a vault integration (such as IBM Key Protect), if such a tool is available in your toolchain.
   - Text value: A property key with a text value that can be either single-line or multi-line. Previously, multi-line values were supported by a separate Text area property type.
   - Tool integration: A property key with a value that is resolved at run time from a toolchain tool integration. By default, the value is a JSON string representation of the tool integration. A specific field or subset of the object can be retrieved by providing a value for the optional JSON filter. For example, if a GitHub integration is selected and the JSON filter `parameters.repo_url` is specified, the value reflects the URL of the Git repo that is configured in the tool integration when the `PipelineRun` resource runs.

   You can access these properties in your Tekton pipeline resources. For more information about these properties, see Tekton Pipelines environment and resources.
Properties can be locked to prevent them from being overridden. If you attempt to override a locked property at run time, the run request is rejected. Locked properties are not displayed by default in the run side panel, but you can display them as read-only by enabling the Show all properties option.
5. Click Save.
6. On the Pipeline Overview page, click Add to create a trigger, select the type of trigger to add, and associate the trigger with an event listener. The list of available event listeners contains the listeners that are defined in the pipeline code repo.
   Triggers are based on Tekton trigger definitions. Git repo triggers use the event listener that they are mapped to to extract information from the incoming event payload and create Kubernetes resources. These resources are applied to a Tekton `PipelineRun` resource.
   Triggered pipeline runs run concurrently unless you configure the trigger to serialize runs by using the Limit concurrent runs option. When this option is enabled, you can limit the number of simultaneous runs that can be started by this trigger. For example, if the maximum limit is set to 1, only one pipeline run for this trigger runs at a time; any others are queued in a waiting state. A maximum of 10 runs (4 if you are using IBM Managed Workers) are queued in a waiting state before subsequent requests are automatically cancelled.

   Manual triggers run when you click the Run pipeline button and select the trigger.

   Git repository triggers run when the specified Git event type occurs for the specified Git repo and branch. You can access the webhook payload that is delivered to a Git trigger from your Tekton pipeline resources. Although the exact fields are repo-specific, the general syntax for the webhook payload is `$(event.payloadFieldName)`. Before you can create a webhook, you must authorize Git Admin access for the corresponding Git integration. To authorize Git Admin access, configure and save the Git integration again.

   Timed triggers run at a scheduled time that is defined by the CRON value. The CRON expression for timed triggers is based on the UNIX crontab syntax and is a sequence of five time and date fields:
   `minute`, `hour`, `day of the month`, `month`, and `day of the week`. These fields are separated by spaces in the format `X X X X X`. The maximum frequency for a timed trigger is once every five minutes. The following examples show strings that use various timed frequencies:
   - `*/5 * * * *` - The trigger runs every 5 minutes.
   - `0 * * * *` - The trigger runs at the start of every hour.
   - `0 9 * 1 MON-FRI` - The trigger runs at 9:00 AM every weekday in January.
   - `0 * * NOV,DEC 1` - The trigger runs every hour on Mondays during November and December.
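To make the field semantics concrete, the following sketch evaluates a single cron field. It covers only a simplified subset (`*`, step values such as `*/5`, and comma-separated numbers); ranges and month or day names are omitted, and the function name is our own:

```python
def field_matches(field: str, value: int) -> bool:
    """Check whether a numeric value satisfies one cron field.

    Supports a simplified subset: '*', step values such as '*/5',
    and comma-separated numbers such as '0,30'.
    """
    if field == "*":
        return True
    if field.startswith("*/"):              # step value, e.g. */5
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

# The minute field of '*/5 * * * *' matches minutes 0, 5, 10, ...
minute_field = "*/5 * * * *".split()[0]     # -> '*/5'
```

A full crontab evaluator applies the same idea to all five fields and fires when every field matches the current time.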
Generic webhook triggers run when a POST request that is configured with the secret setting goes to the generic webhook URL. Generic webhook triggers provide a unique webhook URL for POST requests.
Because the PipelineRun UI does not hide the generic webhook payload values in the event payload section, do not include sensitive data in the payload. Instead, secure any data that is required by a generic webhook by using trigger properties, such as passwords or API key secrets.
You can secure generic webhook triggers to work with Git, a Slack outgoing webhook, an Artifactory webhook, and more by using any of the following methods:
- Token matches to compare the saved token and the token that is passed within the POST request. Supported token sources include a header, query, or payload. Token matches are used by GitLab webhooks and Slack outgoing webhooks.
- Payload digest matches to compare the signature and the hash that are generated from the digested payload by using HMAC hex digest with a saved token. Supported signature sources might include a header, query, or payload. Users must specify a digest algorithm. Payload digest matches are used by GitHub webhooks.
- Tekton task validation requires users to validate the webhook request within their Tekton tasks.
Specify the following values to use generic webhook triggers with GitHub webhooks:

- Securing: `Payload Digest Matches`
- Signature Source: `Header`
- Header Key Name: `X-Hub-Signature`
- Digest Algorithm: `sha1`
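To illustrate the payload digest check, the following sketch recomputes the HMAC-SHA1 hex digest of a payload with a saved token and compares it with a GitHub-style `X-Hub-Signature` header value. The `sha1=` prefix follows GitHub's header convention; the helper name is our own:

```python
import hashlib
import hmac

def signature_matches(token: bytes, payload: bytes, header_value: str) -> bool:
    """Recompute the HMAC-SHA1 hex digest of the payload with the saved
    token and compare it, in constant time, with the header value."""
    expected = "sha1=" + hmac.new(token, payload, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, header_value)
```

If the recomputed digest matches the signature that was sent, the request came from a sender that knows the token and the payload was not altered in transit.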
Specify the following values to use generic webhook triggers with GitLab webhooks:

- Securing: `Token Matches`
- Token Source: `Header`
- Header Key Name: `X-Gitlab-Token`
Specify the following values to use generic webhook triggers with Slack outgoing webhooks:

- Securing: `Token Matches`
- Token Source: `Payload`
- JSON Property Name / Form Key: `token`
The following example shows how to use the curl command with a generic webhook that is secured with a `Token Matches` rule:

```sh
curl -X POST \
  https://devops-api.us-south.devops.cloud.ibm.com/v1/tekton-webhook/588236be-749b-4c67-ae57-a561abbbc9a8/run/7e82880e-4223-4c98-8ca9-ef6df36bb6dc \
  -H 'Content-Type: application/json' \
  -H 'token: 48a0f92c0932890048596906a22ae189c48c5619fbcf9600' \
  -d '{
    "somekey": "somevalue"
  }'
```
To obtain payload values in the pipeline definition, specify a TriggerBinding parameter with a value that is derived from the event:

```yaml
apiVersion: tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: binding
spec:
  params:
    - name: somekey
      value: $(event.somekey)
```
7. Save your changes.
Configuring Delivery Pipeline triggers for Tekton pipelines
You can configure triggers for Tekton pipelines based on various events in your Git repo. Filter Git triggers by using the following options:
- Branch: Trigger the pipeline for a specific branch of the selected repo when the specified event occurs.
- Pattern: Trigger the pipeline based on a glob match against tags and branch names in the selected repo when the specified event occurs.
- CEL filter: Trigger the pipeline when the event matches the provided Common Expression Language (CEL) filter.
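As an illustration of the Pattern option's glob matching, Python's `fnmatch` implements comparable glob semantics (the exact rules that the trigger applies may differ slightly):

```python
from fnmatch import fnmatch

# Branch and tag names checked against a glob pattern:
pattern = "release-*"
assert fnmatch("release-1.2.0", pattern)      # matches
assert not fnmatch("hotfix-1.2.0", pattern)   # does not match
```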
Use the Branch and Pattern options to specify events such as `commit push`, `pull request opened`, `updated`, or `closed`. You can also use the Include draft pull request events option to allow or skip pipeline triggers for draft pull requests, and the Include pull request events from forks toggle to allow or skip pipeline triggers for pull requests from forked repositories. Additionally, you can select the Label filters option to enable filtering based on pull request labels according to user-defined criteria in the filters table.
The CEL filter option supports more advanced use cases, such as matching against other fields in the event payload. This option supports push events, all pull request events, issues events, issue comments events, and release events. This option is also available as an optional feature on the Generic Webhook trigger to provide event filtering based on the webhook payload.
CEL overview
CEL is a powerful and flexible expression language designed to evaluate conditions and perform validations in a concise and readable manner. CEL is ideally suited for use cases that require complex conditional logic, such as filtering events.
In Tekton pipelines, the CEL option provides more powerful and flexible event filtering. The webhook payload is evaluated against the CEL expression that you provide. If the CEL expression evaluates to `true`, the pipeline run is triggered.
The following features are supported in CEL:

- Arithmetic operators (`+`, `-`, `*`, `/`, `%`)
- Comparison operators (`==`, `!=`, `<`, `>`, `<=`, `>=`)
- Logical operators (`&&`, `||`)
- String operators (`contains`, `matches`, `startsWith`, `endsWith`)
- Collection operators (`in`, `!in`)
- Variables (refer to variables directly by their names)
- Literals (strings, numbers, booleans, and null)
CEL includes the following extensions to provide more functionality than the base CEL language:

- Sets extension to support advanced set operations and provide more flexibility in event filtering. For more information about this extension, see Sets.
- `matchesGlob` to provide compatibility when converting the existing Pattern field to the new CEL filter option. The native CEL `matches` operator is recommended for more advanced regular expression matching.
For more information about CEL, see the CEL documentation.
Converting to CEL
Complete the following steps to convert your existing event filtering selection to a CEL expression:
1. Edit the Git trigger that you want to convert.
2. In the Trigger on section, select the CEL filter option.
   The following elements are automatically converted into an equivalent CEL expression:
   - Branch or Pattern
   - Events, such as `commit push`, `pull request opened`, `updated`, and `closed`
   - Include draft pull request events
   - Include pull request events from forks
   - Label filters
The generated CEL expression is written into a text area field, which you can edit as needed.
Because no filters exist on Generic Webhook triggers for conversion, the conversion to a CEL filter applies only to Git triggers.
If you save the trigger with the CEL option selected, it replaces the previously selected events with the CEL expression. If you switch to the Branch or Pattern option after you save the CEL filter option, your previous event selections are not saved. Conversion from the CEL option to the Branch or Pattern option is not supported.
CEL expression examples
The following examples are common CEL expressions for each of the supported Git types: GitHub, GitLab, and Bitbucket. You can copy and modify these examples to meet your requirements.
GitHub examples:

Run when a pull request is opened or updated against the specified branch:

```
header['x-github-event'] == 'pull_request' &&
(body.action == 'opened' || body.action == 'synchronize') &&
body.pull_request.base.ref == 'main'
```

Run when a commit is pushed to the specified branch:

```
header['x-github-event'] == 'push' && body.ref == 'refs/heads/main'
```

Run when a commit is pushed to the specified branch but skip it when the commit message contains a specific string:

```
header['x-github-event'] == 'push' &&
body.ref == 'refs/heads/main' &&
!body.head_commit.message.contains("skip run")
```

Run when a comment containing the specified string is added to a pull request:

```
header['x-github-event'] == 'issue_comment' &&
body.action == 'created' && has(body.issue.pull_request) &&
body.comment.body.contains('/lgtm')
```

Run when an issue is created with the specified label:

```
header['x-github-event'] == 'issues' &&
body.action == 'opened' &&
body.issue.labels.exists(label, label.name == 'urgent')
```
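To make the evaluation model concrete, here is a plain-Python equivalent of the push filter above, treating `header` and `body` as dictionaries parsed from the webhook request. This is a sketch of the filter's semantics, not how the service evaluates CEL, and the function name is our own:

```python
def push_to_main_without_skip(header: dict, body: dict) -> bool:
    """Equivalent of the CEL filter:
    header['x-github-event'] == 'push' &&
    body.ref == 'refs/heads/main' &&
    !body.head_commit.message.contains("skip run")
    """
    return (
        header.get("x-github-event") == "push"
        and body.get("ref") == "refs/heads/main"
        and "skip run" not in body.get("head_commit", {}).get("message", "")
    )
```

The CEL expression short-circuits exactly like the chained `and` here: if the event type or branch does not match, the commit message is never inspected.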
GitLab examples:

Run when a merge request is opened or updated against the specified branch:

```
header['x-gitlab-event'] == 'Merge Request Hook' &&
(body.object_attributes.action == 'open' || body.object_attributes.action == 'update') &&
body.object_attributes.target_branch == 'main'
```

Run when a commit is pushed to the specified branch:

```
header['x-gitlab-event'] == 'Push Hook' && body.ref == 'refs/heads/main'
```

Run when a commit is pushed to the specified branch but skip it when the commit message contains a specific string:

```
header['x-gitlab-event'] == 'Push Hook' &&
body.ref == 'refs/heads/main' &&
!body.object_attributes.last_commit.message.contains("skip run")
```

Run when a comment containing the specified string is added to a merge request:

```
header['x-gitlab-event'] == 'Note Hook' &&
body.object_attributes.noteable_type == 'MergeRequest' &&
body.object_attributes.action == 'create' &&
body.object_attributes.note.contains('/lgtm')
```

Run when an issue is created with the specified label:

```
header['x-gitlab-event'] == 'Issue Hook' &&
(body.object_attributes.action == 'open') &&
body.object_attributes.labels.exists(label, label.name == 'urgent')
```
Bitbucket examples:

Run when a pull request is opened or updated against the specified branch:

```
(header['x-event-key'] == 'pullrequest:created' || header['x-event-key'] == 'pullrequest:updated') &&
body.pullrequest.destination.branch.name == 'main'
```

Run when a commit is pushed to the specified branch:

```
header['x-event-key'] == 'repo:push' && body.push.changes[0].new.name == 'main'
```

Run when a commit is pushed to the specified branch but skip it when the commit message contains a specific string:

```
header['x-event-key'] == 'repo:push' &&
body.push.changes[0].new.name == 'main' &&
!body.push.changes[0].commits[0].message.contains("skip run")
```

Run when a comment containing the specified string is added to a pull request:

```
header['x-event-key'] == 'pullrequest:comment_created' &&
body.comment.content.raw.contains('/lgtm')
```

Run when an issue is created with the specified kind:

```
header['x-event-key'] == 'issue:created' &&
body.issue.kind == 'bug'
```
Checking the event payload
When you write CEL expressions for event filtering, you must understand the structure and content of the webhook payload against which the expression will be evaluated. You can inspect the payload for an existing run from the Pipeline Run details page.
To view the event payload, go to the Pipeline Run details page and click Show context. You can view the raw webhook payload that triggered the pipeline run and confirm the relevant fields for your CEL expressions to match the conditions that you want.
Creating a Delivery Pipeline for Tekton with the API
1. Obtain an IAM bearer token. Alternatively, if you are using an SDK, obtain an IAM API key and set the client options by using environment variables:

   ```sh
   export CD_TEKTON_PIPELINE_APIKEY={api_key}
   ```

2. Determine the region and ID of the toolchain to which you want to add the Delivery Pipeline tool integration.

3. Add the Delivery Pipeline tool integration to the toolchain.
```sh
curl -X POST \
  https://api.{region}.devops.cloud.ibm.com/toolchain/v2/toolchains/{toolchain_id}/tools \
  -H 'Authorization: Bearer {iam_token}' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "tool_type_id": "pipeline",
    "parameters": {
      "name": "{tool_integration_name}",
      "type": "tekton"
    }
  }'
```
```javascript
const CdToolchainV2 = require('@ibm-cloud/continuous-delivery/cd-toolchain/v2');
...
(async () => {
  const toolchainService = CdToolchainV2.newInstance();
  const pipelinePrototypeModel = {
    toolchainId: {toolchain_id},
    toolTypeId: 'pipeline',
    name: {tool_integration_name},
    type: "tekton"
  };
  const pipelineTool = await toolchainService.createTool(pipelinePrototypeModel);
})();
```
```go
import (
  "github.com/IBM/continuous-delivery-go-sdk/cdtoolchainv2"
)
...
toolchainClientOptions := &cdtoolchainv2.CdToolchainV2Options{}
toolchainClient, err := cdtoolchainv2.NewCdToolchainV2UsingExternalConfig(toolchainClientOptions)
createPipelineToolOptions := toolchainClient.NewCreateToolOptions({toolchain_id}, "pipeline")
createPipelineToolOptions.SetName({tool_integration_name})
createPipelineToolOptions.SetType("tekton")
pipelineTool, response, err := toolchainClient.CreateTool(createPipelineToolOptions)
```
```python
from ibm_continuous_delivery.cd_toolchain_v2 import CdToolchainV2
...
toolchain_service = CdToolchainV2.new_instance()
pipeline_tool = toolchain_service.create_tool(
    name = {tool_integration_name},
    toolchain_id = {toolchain_id},
    tool_type_id = "pipeline",
    type = "tekton"
)
```
```java
import com.ibm.cloud.continuous_delivery.cd_toolchain.v2.CdToolchain;
import com.ibm.cloud.continuous_delivery.cd_toolchain.v2.model.*;
...
CdToolchain toolchainService = CdToolchain.newInstance();
CreateToolOptions createPipelineToolOptions = new CreateToolOptions.Builder()
    .name({tool_integration_name})
    .toolchainId({toolchain_id})
    .toolTypeId("pipeline")
    .type("tekton")
    .build();
Response<ToolchainToolPost> response = toolchainService.createTool(createPipelineToolOptions).execute();
ToolchainToolPost pipelineTool = response.getResult();
```
The following table lists and describes each of the variables that are used in the previous step.

Variable | Description |
---|---|
`{region}` | The region in which the toolchain resides, for example, `us-south`. |
`{tool_integration_name}` | A name for your tool integration, for example, `ci-pipeline`. |
`{toolchain_id}` | The ID of the toolchain to which to add the tool integration. |
`{iam_token}` | A valid IAM bearer token. |

4. Configure the Delivery Pipeline to use public managed workers within the specified regions.
```sh
curl -X POST \
  https://api.{region}.devops.cloud.ibm.com/pipeline/v2/tekton_pipelines \
  -H 'Authorization: Bearer {iam_token}' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "{pipeline_id}",
    "worker": { "id": "public" }
  }'
```
```javascript
const CdTektonPipelineV2 = require('@ibm-cloud/continuous-delivery/cd-tekton-pipeline/v2');
...
(async () => {
  const tektonService = CdTektonPipelineV2.newInstance();
  const workerIdentityModel = {
    id: 'public',
  };
  const params = {
    id: {pipeline_id},
    worker: workerIdentityModel,
  };
  const res = await tektonService.createTektonPipeline(params);
})();
```
```go
import (
  "github.com/IBM/continuous-delivery-go-sdk/cdtektonpipelinev2"
)
...
cdTektonPipelineOptions := &cdtektonpipelinev2.CdTektonPipelineV2Options{}
pipelineSvc, err = cdtektonpipelinev2.NewCdTektonPipelineV2UsingExternalConfig(cdTektonPipelineOptions)
createTektonPipelineOptions := pipelineSvc.NewCreateTektonPipelineOptions(
  {pipeline_id},
)
workerIdentityModel := &cdtektonpipelinev2.WorkerIdentity{
  ID: core.StringPtr("public"),
}
createTektonPipelineOptions.SetWorker(workerIdentityModel)
tektonPipeline, response, err := pipelineSvc.CreateTektonPipeline(createTektonPipelineOptions)
```
```python
from ibm_continuous_delivery.cd_tekton_pipeline_v2 import CdTektonPipelineV2
...
pipeline_service = CdTektonPipelineV2.new_instance()
worker_identity_model = {
    'id': 'public',
}
response = pipeline_service.create_tekton_pipeline(
    id = {pipeline_id},
    worker = worker_identity_model
)
tekton_pipeline = response.get_result()
```
```java
import com.ibm.cloud.continuous_delivery.cd_tekton_pipeline.v2.CdTektonPipeline;
import com.ibm.cloud.continuous_delivery.cd_tekton_pipeline.v2.model.*;
...
CdTektonPipeline pipelineSvc = CdTektonPipeline.newInstance();
WorkerIdentity workerIdentityModel = new WorkerIdentity.Builder()
    .id("public")
    .build();
CreateTektonPipelineOptions createTektonPipelineOptions = new CreateTektonPipelineOptions.Builder()
    .id({pipeline_id})
    .worker(workerIdentityModel)
    .build();
Response<TektonPipeline> response = pipelineSvc.createTektonPipeline(createTektonPipelineOptions).execute();
TektonPipeline tektonPipeline = response.getResult();
```
The following table lists and describes each of the variables that are used in the previous step.

Variable | Description |
---|---|
`{region}` | The region in which the toolchain resides, for example, `us-south`. |
`{pipeline_id}` | The ID of the pipeline that is returned from the previous step, where the pipeline tool integration was created. |
`{iam_token}` | A valid IAM bearer token. |
For more information about the Delivery Pipeline API, see the API Docs.
Creating a Delivery Pipeline for Tekton with Terraform
1. To install the Terraform CLI and configure the IBM Cloud provider plug-in for Terraform, follow the tutorial for Getting started with Terraform on IBM Cloud®.

2. Create a Terraform configuration file that is named `main.tf`. In this file, add the configuration to create a pipeline by using the HashiCorp Configuration Language. For more information about using this configuration language, see the Terraform documentation.
   A pipeline must belong to a toolchain. You can also create toolchains by using Terraform.
The following example creates a toolchain and a pipeline by using the specified Terraform resources.
```hcl
data "ibm_resource_group" "group" {
  name = "default"
}

resource "ibm_cd_toolchain" "my_toolchain" {
  name              = "terraform_toolchain"
  resource_group_id = data.ibm_resource_group.group.id
}

resource "ibm_cd_toolchain_tool_pipeline" "my_pipeline_tool" {
  parameters {
    name = "terraform-pipeline-integration"
  }
  toolchain_id = ibm_cd_toolchain.my_toolchain.id
}

resource "ibm_cd_tekton_pipeline" "my_tekton_pipeline" {
  worker {
    id = "public"
  }
  pipeline_id = ibm_cd_toolchain_tool_pipeline.my_pipeline_tool.tool_id
}
```
For more information about the `ibm_cd_toolchain_tool_pipeline` and `ibm_cd_tekton_pipeline` resources, see the argument reference details in the Terraform Registry Documentation.

3. Initialize the Terraform CLI, if required.

   ```sh
   terraform init
   ```

4. Create a Terraform execution plan. This plan summarizes all of the actions that must run to create a toolchain.

   ```sh
   terraform plan
   ```

5. Apply the Terraform execution plan. Terraform takes all of the required actions to create the toolchain.

   ```sh
   terraform apply
   ```
Viewing a Delivery Pipeline for Tekton
You can view a pipeline by using the console UI, with the API, or with Terraform.
Viewing a Delivery Pipeline by using the console
The Tekton Delivery Pipeline Overview page displays an empty table until at least one trigger is added. After Tekton pipeline runs occur (either manually or as the result of external events), the table displays data about the recent runs that are associated with each trigger in the pipeline. Each row shows information about a single trigger and displays a graph of the recent runs that are associated with that trigger, including the success or failure of those runs and the time when the most recent run occurred. You can also perform actions for each trigger: run the trigger manually, mark it as a favorite, edit the trigger, enable or disable it, or delete it. You can click an item in the graph to inspect the details of that individual `PipelineRun`, or click a trigger name to open the PipelineRuns page, which lists every `PipelineRun` that is associated with that trigger along with related information such as the status, trigger, and duration of each run.
Pipeline runs can be in any of the following states:

- Pending: `PipelineRun` is requested.
- Running: `PipelineRun` is running on the cluster.
- Succeeded: `PipelineRun` successfully completed on the cluster.
- Failed: `PipelineRun` failed. Review the log file for the run to determine the cause.
- Queued: `PipelineRun` is accepted for processing and runs when worker capacity is available.
- Waiting: `PipelineRun` is waiting to be queued.
- Cancelled: `PipelineRun` was cancelled by the system or by the user. The system cancels a `PipelineRun` when the number of waiting runs exceeds the allowed limit.
- Error: `PipelineRun` contains errors that prevented it from being applied on the cluster. For more information about the cause of the error, see the run details.
For detailed information about a selected run, click any row in the table to view the `Task` definition and the steps in each `PipelineRun` definition. You can also view the status, logs, and details of each `Task` definition and step, and the overall status of the `PipelineRun` definition.
The retention period for PipelineRuns and their logs depends on the plan that is selected for the Continuous Delivery service instance. PipelineRuns under the Professional plan are retained for one year. PipelineRuns under the Lite plan are retained for 30 days. To retain any PipelineRuns beyond the retention period, in the PipelineRuns section, select **Actions > Download** to download a .zip file.
Viewing a Delivery Pipeline with the API
1. Obtain an IAM bearer token. Alternatively, if you are using an SDK, obtain an IAM API key and set the client options by using environment variables:

   ```sh
   export CD_TEKTON_PIPELINE_APIKEY={api_key}
   ```

2. Get the pipeline data.

   ```sh
   curl -X GET \
     https://api.{region}.devops.cloud.ibm.com/pipeline/v2/tekton_pipelines/{pipeline_id} \
     -H 'Authorization: Bearer {iam_token}' \
     -H 'Accept: application/json'
   ```
const CdTektonPipelineV2 = require('@ibm-cloud/continuous-delivery/cd-tekton-pipeline/v2'); ... (async () => { const pipelineSvc = CdTektonPipelineV2.newInstance(); const params = { id: {pipeline_id}, }; const res = await pipelineSvc.getTektonPipeline(params); })();
import { "github.com/IBM/continuous-delivery-go-sdk/cdtektonpipelinev2" } ... cdTektonPipelineOptions := &cdtektonpipelinev2.CdTektonPipelineV2Options{} pipelineSvc, err = cdtektonpipelinev2.NewCdTektonPipelineV2UsingExternalConfig(cdTektonPipelineOptions) getTektonPipelineOptions := pipelineSvc.NewGetTektonPipelineOptions( {pipeline_id} ) tektonPipeline, response, err := pipelineSvc.GetTektonPipeline(getTektonPipelineOptions)
from ibm_continuous_delivery.cd_tekton_pipeline_v2 import CdTektonPipelineV2 ... pipeline_service = CdTektonPipelineV2.new_instance() response = pipeline_service.get_tekton_pipeline( id = {pipeline_id} ) tekton_pipeline = response.get_result()
import com.ibm.cloud.continuous_delivery.cd_tekton_pipeline.v2.CdTektonPipeline; import com.ibm.cloud.continuous_delivery.cd_tekton_pipeline.v2.model.*; ... CdTektonPipeline pipelineSvc = CdTektonPipeline.newInstance(); GetTektonPipelineOptions getTektonPipelineOptions = new GetTektonPipelineOptions.Builder() .id({pipeline_id}) .build(); Response<TektonPipeline> response = pipelineSvc.getTektonPipeline(getTektonPipelineOptions).execute(); TektonPipeline tektonPipeline = response.getResult();
The following table lists and describes each of the variables that are used in the previous step.
Variable | Description |
---|---|
`{region}` | The region in which the pipeline resides, for example, `us-south`. |
`{pipeline_id}` | The ID of the pipeline that you want to view. |
`{iam_token}` | A valid IAM bearer token. |
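The way these variables compose into the request URL can be sketched with a small helper. This is a hypothetical convenience function, not part of any SDK; the endpoint shape is taken from the curl example above.

```python
# Hypothetical helper that fills in the {region} and {pipeline_id}
# variables from the table above to build the GET request URL.
def pipeline_url(region: str, pipeline_id: str) -> str:
    return (
        f"https://api.{region}.devops.cloud.ibm.com"
        f"/pipeline/v2/tekton_pipelines/{pipeline_id}"
    )
```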
Viewing a Delivery Pipeline with Terraform
- Locate the Terraform file (for example, `main.tf`) that contains the `resource` block for the existing pipeline.

- Add an `output` block to the Terraform file, if it does not already contain one. The `resource` in the following example describes an existing pipeline. The `output` block instructs Terraform to output the attributes of the specified resource.

  ```hcl
  data "ibm_resource_group" "group" {
    name = "default"
  }

  resource "ibm_cd_toolchain" "my_toolchain" {
    name              = "terraform_toolchain"
    resource_group_id = data.ibm_resource_group.group.id
  }

  resource "ibm_cd_toolchain_tool_pipeline" "my_pipeline_tool" {
    parameters {
      name = "terraform-pipeline-integration"
    }
    toolchain_id = ibm_cd_toolchain.my_toolchain.id
  }

  resource "ibm_cd_tekton_pipeline" "my_tekton_pipeline" {
    worker {
      id = "public"
    }
    pipeline_id = ibm_cd_toolchain_tool_pipeline.my_pipeline_tool.tool_id
  }

  output "my_tekton_pipeline_attributes" {
    value = ibm_cd_tekton_pipeline.my_tekton_pipeline
  }
  ```

  For more information about the `ibm_cd_toolchain_tool_pipeline` and `ibm_cd_tekton_pipeline` resources, see the argument reference details in the Terraform Registry Documentation.

- Initialize the Terraform CLI, if required.

  ```sh
  terraform init
  ```

- Apply the Terraform execution plan with the `refresh-only` option. Terraform refreshes its state and displays the attributes of the pipeline resource.

  ```sh
  terraform apply -refresh-only -auto-approve
  ```
Deleting a Delivery Pipeline with the API
- Obtain an IAM bearer token. Alternatively, if you are using an SDK, obtain an IAM API key and set the client options by using environment variables.

  ```sh
  export CD_TEKTON_PIPELINE_APIKEY={api_key}
  ```

- Determine the region and ID of the toolchain that contains the pipeline that you want to delete.

- Delete the pipeline.

  ```sh
  curl -X DELETE \
    https://api.{region}.devops.cloud.ibm.com/toolchain/v2/toolchains/{toolchain_id}/tools/{pipeline_id} \
    -H 'Authorization: Bearer {iam_token}'
  ```

  ```javascript
  const CdTektonPipelineV2 = require('@ibm-cloud/continuous-delivery/cd-tekton-pipeline/v2');
  ...
  (async () => {
    const pipelineSvc = CdTektonPipelineV2.newInstance();
    const params = {
      id: {pipeline_id},
    };
    const res = await pipelineSvc.deleteTektonPipeline(params);
  })();
  ```

  ```go
  import (
    "github.com/IBM/continuous-delivery-go-sdk/cdtektonpipelinev2"
  )
  ...
  cdTektonPipelineOptions := &cdtektonpipelinev2.CdTektonPipelineV2Options{}
  pipelineSvc, err = cdtektonpipelinev2.NewCdTektonPipelineV2UsingExternalConfig(cdTektonPipelineOptions)
  deleteTektonPipelineOptions := pipelineSvc.NewDeleteTektonPipelineOptions(
    {pipeline_id},
  )
  response, err := pipelineSvc.DeleteTektonPipeline(deleteTektonPipelineOptions)
  ```

  ```python
  from ibm_continuous_delivery.cd_tekton_pipeline_v2 import CdTektonPipelineV2
  ...
  pipeline_service = CdTektonPipelineV2.new_instance()
  response = pipeline_service.delete_tekton_pipeline(
    id={pipeline_id}
  )
  ```

  ```java
  import com.ibm.cloud.continuous_delivery.cd_tekton_pipeline.v2.CdTektonPipeline;
  import com.ibm.cloud.continuous_delivery.cd_tekton_pipeline.v2.model.*;
  ...
  CdTektonPipeline pipelineSvc = CdTektonPipeline.newInstance();
  DeleteTektonPipelineOptions deleteTektonPipelineOptions = new DeleteTektonPipelineOptions.Builder()
    .id({pipeline_id})
    .build();
  Response<Void> response = pipelineSvc.deleteTektonPipeline(deleteTektonPipelineOptions).execute();
  ```
The following table lists and describes each of the variables that are used in the previous step.
Variable | Description |
---|---|
`{region}` | The region in which the toolchain resides, for example, `us-south`. |
`{toolchain_id}` | The ID of the toolchain that contains the pipeline to delete. |
`{pipeline_id}` | The ID of the pipeline that you want to delete. |
`{iam_token}` | A valid IAM bearer token. |
Deleting a Delivery Pipeline with Terraform
- Locate the Terraform file (for example, `main.tf`) that contains the `resource` block for the existing pipeline. The `resource` in the following example describes an existing pipeline.

  ```hcl
  data "ibm_resource_group" "group" {
    name = "default"
  }

  resource "ibm_cd_toolchain" "my_toolchain" {
    name              = "terraform_toolchain"
    resource_group_id = data.ibm_resource_group.group.id
  }

  resource "ibm_cd_toolchain_tool_pipeline" "my_pipeline_tool" {
    parameters {
      name = "terraform-pipeline-integration"
    }
    toolchain_id = ibm_cd_toolchain.my_toolchain.id
  }

  resource "ibm_cd_tekton_pipeline" "my_tekton_pipeline" {
    worker {
      id = "public"
    }
    pipeline_id = ibm_cd_toolchain_tool_pipeline.my_pipeline_tool.tool_id
  }
  ```

- Remove the `ibm_cd_toolchain_tool_pipeline` and `ibm_cd_tekton_pipeline` `resource` blocks from your Terraform file.

- Initialize the Terraform CLI, if required.

  ```sh
  terraform init
  ```

- Create a Terraform execution plan. This plan summarizes all of the actions that must run to delete the pipeline.

  ```sh
  terraform plan
  ```

- Apply the Terraform execution plan. Terraform takes all of the required actions to delete the pipeline.

  ```sh
  terraform apply
  ```
Viewing details for a TaskRun pod
To view information about the underlying Kubernetes pod for a specific `TaskRun`, click the `Task` name and then click **Pod**.
You can view the details for the pod and any related events that are reported by the worker. This information can help you to debug specific failures or to determine where time is spent during a run.
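If your pipeline runs on a private worker in your own cluster, you can also inspect the same pod directly with `kubectl`. This is a sketch under assumptions: `{namespace}` and `{taskrun_pod_name}` are placeholders that depend on your worker setup, not values defined by this documentation.

```shell
# Placeholders: {namespace} is the namespace your private worker uses;
# {taskrun_pod_name} is the pod name shown on the Pod tab for the TaskRun.
kubectl get pods -n {namespace}

# Pod details and recent events, similar to the information on the Pod tab.
kubectl describe pod {taskrun_pod_name} -n {namespace}

# Logs for the run; each Tekton step runs as a container in the pod.
kubectl logs {taskrun_pod_name} -n {namespace} --all-containers
```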
Learn more about Tekton pipelines and resources
To learn more about Tekton pipelines, see the Tekton: A Modern Approach to Continuous Delivery and IBM Cloud Continuous Delivery Tekton Pipelines Tools and Resources articles.
To learn more about the Tekton tasks that you can reference within your pipelines, see the Open Toolchain Tekton Catalog. This GitHub repo contains a set of tasks that you can reuse in your Tekton pipelines.