IBM Cloud Docs
Writing and viewing logs for apps, jobs, and functions

Logging can help you troubleshoot issues in IBM Cloud® Code Engine. You can view logs by using the console or by using the CLI.

Interested in logging for fleets? See Setting up observability for fleets and Viewing logs and monitoring data for fleets.

Writing logs

Learn how to write logs effectively in IBM Cloud® Code Engine, including best practices for log formats, severity levels, timestamps, and handling multi-line entries for both unstructured and structured logging.

Considerations for writing logs

Writing logs to standard output and error

In Code Engine, log records emitted by your workload are collected only when they are written to stdout or stderr, following the Twelve‑Factor App guidelines, which recommend treating logs as event streams rather than managing log files. See Creating cloud-native applications: 12-factor applications - Factor 11 - Logs.

The platform's logging pipeline automatically captures and processes this output, making it available for analysis and troubleshooting. Log lines written to files in the container's ephemeral file system aren't ingested, persisted, or exposed through the logging interface. As a result, any logs stored on the ephemeral file system are lost when the instance is restarted or terminated and aren't available for operational debugging or root‑cause analysis.
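As an illustration, the following minimal Python sketch (the logger name is arbitrary) sends log records to stdout, where the platform's pipeline captures them. A handler that writes to a file on the container's file system would not be ingested.

```python
import logging
import sys

# Attach a StreamHandler bound to stdout so the platform's logging
# pipeline captures every record. A logging.FileHandler writing to the
# container's ephemeral file system would NOT be ingested.
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

logger.info("User signup succeeded for account abc123")
```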

Should I add timestamps to my log lines?

Log records generated by user workloads should avoid embedding their own timestamp information because the Code Engine infrastructure automatically captures and standardizes timestamps. Including application-level timestamps can create inconsistencies across services, especially when workloads run in distributed or containerized environments where system clocks might drift or differ. Relying on the platform’s timestamps ensures uniform time formats, accurate sequencing, and reliable correlation with other system-generated logs, which simplifies troubleshooting, auditing, and observability across the entire deployment.
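For instance, a minimal Python sketch of a log format that omits the timestamp and emits only the level and message, leaving timestamping to the platform (the logger name is arbitrary):

```python
import logging
import sys

# The format string contains only %(levelname)s and %(message)s; there is
# no %(asctime)s, so no application-level timestamp competes with the
# platform's ingestion timestamp.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

logger = logging.getLogger("no-timestamp")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.warning("Cache miss for key user:abc123")
```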

How do my log levels map to the IBM Cloud Logs severity?

The log level provided in each log record is mapped to IBM Cloud Logs severities as described in Mapping log severities to IBM Cloud Logs severities. In the following sections, you learn more about how the log levels are parsed for unstructured and structured logs and which log level values are supported.

What if my log data is multi-line?

To take advantage of IBM Cloud Logs search and formatting features, change your log formatting as follows.

  • If your log lines span multiple lines, change how you format and output your logs so that they're in a single line. Use the JSONL format (see Log formats) for your logs with IBM Cloud Logs.
  • Your logs must conform to limits for IBM Cloud Logs.
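As an illustration, the following Python sketch collapses a multi-line message into one JSONL record; json.dumps escapes the embedded newline characters, so the record arrives at the logging pipeline as a single line:

```python
import json

multi_line = ("Starting billing workflow...\n"
              "Step 1: Validating input...\n"
              "Step 2: Processing payment...")

# json.dumps escapes each newline as the two-character sequence \n,
# so the whole message is emitted as one JSONL record.
record = json.dumps({"level": "info", "message": multi_line})
print(record)
```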

Log formats

Log data can be emitted in two common formats: unstructured and structured.

  • Unstructured logs are free‑form text that is simple to produce and human‑readable, but difficult to parse consistently for backend systems. This limits reliable filtering and correlation.
  • Structured logs encode fields in a predictable schema (for example, JSON), enabling log pipelines to index and query attributes such as request IDs, user IDs, or domain‑specific metadata. Code Engine supports JSON for structured logs, which helps ensure your custom fields remain machine‑readable and easily filterable across observability tools.

If you plan to enrich log lines with custom, filterable information, use structured logging.

Unstructured logs

Examples

The following are simple examples that write an unstructured log line (free‑form text) to standard output.

The examples are published in the Code Engine public sample repository at https://github.com/IBM/CodeEngine/blob/main/logging/README.md.

Node.js (JavaScript)

console.log('User signup succeeded for account abc123');

Python

print("User signup succeeded for account abc123")

Golang

package main

import (
    "fmt"
)

func main() {
    fmt.Println("User signup succeeded for account abc123")
}

Java

package com.ibm.cloud.codeengine.sample;

public class App {
    public static void main(String[] args) {
        System.out.println("User signup succeeded for account abc123");
    }
}

Log level detection

Each log record is scanned for keywords to determine the severity. The severity values, which are evaluated case‑insensitively, are fatal, error, warn, info, debug, and trace.

If a log line begins with a severity keyword in the format LEVEL MESSAGE, Code Engine removes the detected level from the displayed log message so users can focus on the core content while still filtering precisely by severity in the IBM Cloud Logs view. In this case, the extracted severity value is stored in the log record field level. The following case-insensitive severity levels are supported: fatal, error, warn, info, debug, and trace. For instance, in Node.js you might write:

// Unstructured log with level prefix
console.log('ERROR Payment service timeout while creating invoice');

In IBM Cloud Logs, this entry appears with severity Error and the message text "Payment service timeout while creating invoice"; you can then filter by Severity = Error to narrow results. You can also use parsing and severity rules to tailor how levels are extracted and matched for your environment.

Log level detection also works when log lines follow slightly different formats—such as LEVEL: MESSAGE or [LEVEL] MESSAGE. However, in these cases the level keyword is not removed from the log message, even though the severity is still properly classified.

Additionally, the detection logic can infer a severity level when any supported keyword appears anywhere within the message, not just at the beginning. For example, the following log line is classified as Error because the word error appears in the message text:

The payment workflow encountered an unexpected error during validation

Log level detection does not consider the input stream (stdout or stderr). Therefore, log messages written to standard error (stderr) are evaluated purely on the message text. For example, console.error("Some message") is classified as Info, even though it is written to stderr.

For function workloads, the log level is not removed from the displayed log message, even when it is detected at the beginning of the log line in the format LEVEL MESSAGE.

Timestamp parsing and evaluation

Adding timestamps to application log lines is not recommended because Code Engine automatically assigns a normalized timestamp when logs are ingested. If a timestamp is included at the beginning of a log line, the system attempts to parse it. If it matches one of the supported formats, the timestamp is removed from the displayed log message, similar to how log levels are handled. Supported timestamp formats include:

  • 2026-02-08T20:30:45.123
  • 2026-02-08T20:30:45.123Z
  • 2026-02-08T21:03:45.123456Z
  • 2026-02-08T21:03:45.123456789Z
  • 2026-02-08 21:03:45.123Z
  • 2026-02-08 20:30:45.123

When a log line contains both a timestamp and a log level at the beginning (for example, TIMESTAMP LEVEL MESSAGE), the pipeline evaluates both fields. If both match supported patterns, they are each classified appropriately and removed from the rendered log message, leaving only the message body for easier reading and filtering. For example, the following formats are successfully parsed:

  • 2026-02-08T21:03:45.123456789Z ERROR Payment service timeout
  • 2026-02-08 20:30:45.123 INFO Starting billing workflow

In cases where a timestamp appears but does not match the supported formats, it remains part of the log line and is treated as regular text, but the rest of the message is still processed normally.

For function workloads, timestamps are not parsed, evaluated, or removed from log messages. Any timestamp included in function logs remains part of the displayed message.

Multi-line support

Code Engine supports multi‑line log entries. However, when you emit logs, you must ensure that newline characters (\n) are properly encoded (\\n) so the logging pipeline can correctly process and render multi‑line messages. For example, in Node.js, you can produce a multi‑line log entry like this:

console.log("Starting billing workflow...\\nStep 1: Validating input...\\nStep 2: Processing payment...");
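An equivalent approach in Python (a sketch) is to escape the raw newline characters before writing the unstructured line:

```python
message = ("Starting billing workflow...\n"
           "Step 1: Validating input...\n"
           "Step 2: Processing payment...")

# Replace each raw newline with the two-character sequence \n so the
# pipeline receives one log line that it can render as a multi-line entry.
escaped = message.replace("\n", "\\n")
print(escaped)
```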

Logging errors

If your workload emits multi‑line error stack traces, use the structured JSON log format instead of unstructured console output. Structured logs preserve multi‑line fields reliably and ensure that stack traces are grouped into a single log record, which is covered in the section Structured logs.

For example, the following log message is classified as Error because the keyword "error" appears in the log message. However, the stack trace that is provided by the err object is rendered in multiple log lines.

try {
  throw new Error("boom!");
} catch (err) {
  console.error("An error occurred", err);
}

Structured logs

Examples

The following are minimal structured‑logging examples for each language and runtime that emit a single JSON line with the log level in the level field and the log message in the message field.

The examples are published in the Code Engine public sample repository at https://github.com/IBM/CodeEngine/blob/main/logging/README.md.

Node.js (winston)

import winston from "winston";
const { combine, json } = winston.format;

// Create a custom logger
const logger = winston.createLogger({
  level: 'info',
  transports: [new winston.transports.Console()],
  format: combine(json())
});

// Usage
logger.info("User signup succeeded")
logger.error("Payment service timeout")

Python (Loguru)

from loguru import logger
import sys
import json
import traceback

# Define a custom JSON sink
def json_sink(message):
    record = message.record

    # Base fields: level + message, no timestamp
    payload = {
        "level": record["level"].name,   # e.g., "INFO"
        "message": record["message"],    # rendered message
    }

    # Merge in any bound extra fields as top-level keys
    # (skip reserved keys to avoid accidental overwrite)
    for k, v in record["extra"].items():
        if k not in ("level", "message", "stack"):
            payload[k] = v

    # If an exception is attached, render full stack trace into "stack"
    exc = record["exception"]
    if exc:
        # exc.type, exc.value, exc.traceback are available from Loguru
        stack_text = "".join(traceback.format_exception(exc.type, exc.value, exc.traceback))
        payload["stack"] = stack_text

    # Emit a single JSON line
    sys.stdout.write(json.dumps(payload, ensure_ascii=False) + "\n")
    sys.stdout.flush()


# Remove default handler (which includes timestamp, etc.) and add our custom sink
logger.remove()
logger.add(json_sink, level="DEBUG")  # lowest level you want to capture

# Usage
logger.info("User signup succeeded")
logger.error("Payment service timeout")

Golang (slog)

package main

import (
    "log/slog"
    "os"
)

func main() {
    handler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        // Remove time and rename msg->message
        ReplaceAttr: func(groups []string, attr slog.Attr) slog.Attr {
            // Drop the time attribute
            if attr.Key == slog.TimeKey {
                return slog.Attr{} // empty => removed
            }
            // Rename msg to message
            if attr.Key == slog.MessageKey {
                return slog.String("message", attr.Value.String())
            }
            return attr
        },
    })
    logger := slog.New(handler)

    // Usage
    logger.Info("User signup succeeded")
    logger.Error("Payment service timeout")
}

Java (SLF4J and Logback with logstash-logback-encoder)

src/main/resources/logback.xml:

<configuration>
    <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <timeZone>UTC</timeZone>
            <fieldNames>
                <timestamp>[ignore]</timestamp>
                <logger>[ignore]</logger>
                <version>[ignore]</version>
                <levelValue>[ignore]</levelValue>
                <threadName>[ignore]</threadName>
            </fieldNames>
        </encoder>
    </appender>

    <root level="DEBUG">
        <appender-ref ref="jsonConsoleAppender" />
    </root>
</configuration>

src/main/java/com/ibm/cloud/codeengine/sample/App.java:

package com.ibm.cloud.codeengine.sample;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
  private static final Logger logger = LoggerFactory.getLogger(App.class);

  public static void main(String[] args) {
    logger.info("User signup succeeded");
    logger.error("Payment service timeout");
  }
}

Log level detection

Each log record is scanned for keywords to determine the severity. The severity values, which are evaluated case‑insensitively, are critical, error, warn, info, debug, and verbose.

When you use structured logs, Code Engine automatically detects the log level when it is provided in one of the following fields: level, severity, or logLevel. The value in these fields is case‑insensitive, meaning entries such as error, ERROR, or Error all map to the same severity. The following case-insensitive severity levels are supported: critical, error, warn, info, debug, and verbose. When a supported field is present and contains one of these values, the log level is extracted, normalized, and used for filtering and categorization within the Logs UI. For example, the following structured log lines are all properly interpreted with level Error:

{ "level": "error", "message": "Payment service timeout" }
{ "severity": "ERROR", "message": "Failed to connect to database" }
{ "logLevel": "eRrOr", "message": "Workflow aborted" }

Regardless of the capitalization or the specific field value used, the platform correctly identifies the log level and applies it for filtering, grouping, and analytics across your structured log data.
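If you prefer not to use a logging framework, records in this shape can also be emitted directly. The following is a minimal Python sketch; the helper names are illustrative, not part of any Code Engine API:

```python
import json
import sys

def format_record(level: str, message: str, **fields) -> str:
    # Build one JSON log line with the level and message fields that
    # Code Engine detects, plus any extra top-level fields.
    return json.dumps({"level": level, "message": message, **fields})

def log(level: str, message: str, **fields) -> None:
    # Write the record to stdout as a single line.
    sys.stdout.write(format_record(level, message, **fields) + "\n")

log("error", "Payment service timeout")
log("info", "Workflow finished", requestId="r-123")
```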

Timestamp evaluation

For structured logs, custom timestamps are not parsed or evaluated. Any timestamp field you provide is treated purely as payload data, while the platform always applies its own ingestion timestamp for ordering and filtering.

In the following example, the timestamp value is preserved as payload data but ignored for log timing.

{ "level": "INFO", "message": "Processing started", "timestamp": "2026-02-08T20:30:45.123Z" }

Adding additional context information

You can enrich structured logs with custom fields (for example, requestId, userId, or domain‑specific metadata). The following are minimal examples for each stack used earlier.

Keep custom fields concise and stable (for example, IDs, codes, or small enums) to maximize filterability and minimize cardinality in your logging instance.

Node.js (winston)

logger.debug("A structured log entry that contains an extra key", {
  extra_key: "extra_value",
});

Python (Loguru)

logger.bind(extra_key="extra_value").debug("A structured log entry that contains an extra key")

Golang (slog)

logger.Debug("A structured log entry that contains an extra key",
  slog.String("extra_key", "extra_value"),
)

Java (SLF4J and Logback with logstash-logback-encoder)

logger.atDebug().addKeyValue("extra_key", "extra_value")
                .log("A structured log entry that contains an extra key");

Logging errors

To capture both the error message and its stack trace in structured logs, emit a JSON record that includes your standard fields (level, message) plus a stack (or similar) field.

Node.js (winston)

// Error logging
try {
  throw new Error("boom!");
} catch (err) {
  // The error stack trace is rendered in a single log message (see field stack)
  logger.error("An error occurred", err);
}

Python (Loguru)

try:
  raise RuntimeError("boom!")
except Exception:
  # logger.exception() automatically attaches the current exception info
  logger.exception("An error occurred")

Golang (slog)

err := errors.New("boom!")
logger.Error("An error occurred",
    slog.Any("err", err),
    // The error stack trace is rendered in a single log message (see field stack)
    slog.String("stack", string(debug.Stack())),
)

Java (SLF4J and Logback with logstash-logback-encoder)

try {
    throw new RuntimeException("boom!");
} catch (Exception e) {
    logger.atError()
            .setCause(e) // The error stack trace is rendered in a single log message (see field stack_trace)
            .log("An error occurred");
}

Function workloads handle multi-line logs, but each newline character results in a separate log entry. When logging stack traces with structured logging frameworks, you might need to manually escape newline characters to ensure they are rendered properly as a single log entry.
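For example, a minimal Python sketch that escapes the newlines of a stack trace before emitting it, so that the trace is kept in a single log entry:

```python
import traceback

try:
    raise RuntimeError("boom!")
except RuntimeError:
    # Replace each raw newline with the two-character sequence \n so the
    # multi-line stack trace is emitted as one log entry.
    stack = traceback.format_exc().replace("\n", "\\n")
    print(f"ERROR An error occurred: {stack}")
```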

Logging fields

Code Engine logging fields.

| Field name | Description | Example value |
|------------|-------------|---------------|
| app | The IBM Cloud service that emitted the platform log line. For Code Engine, this value is always codeengine. | codeengine |
| tag | Field set by Fluent Bit and derived from the input ID set in the Fluent Bit config. | platform.<id>.codeengine |
| stream | The output stream that received the log record. | stdout or stderr |
| label.Namespace | The subdomain name of the Code Engine project. | edf5a781 |
| label.Project | The name of the Code Engine project. | User-defined string |
| label.Stream | The output stream that received the log record. | stdout or stderr |
| level | Optional. The severity of the log message. This value is set only if the log level could be extracted from an unstructured log message. Values are case-insensitive. | fatal, error, warn, info, debug, trace |
| logtag | Optional. Indicates whether the received log line is a partial or full log line. This field is not set for function workloads. | F or P |
| message.message | The human-readable log message. | String defined by system component or user |
| message.logSourceCRN | The CRN of the Code Engine project. | <Code Engine project CRN> |
| message.saveServiceCopy | Defines whether the platform log line is also copied into IBM Cloud Code Engine's system logs. | false |
| message.serviceName | The name of the IBM Cloud service that emitted the log line. | codeengine |
| message._app | The instance name (for apps, jobs, and builds) or component name (for functions). | my-app-0001-pod-abcde |
| message.* | Optional. Meta information useful for creating dashboards or alerts. | durationSeconds |
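To illustrate how these fields fit together, the following is a hypothetical sketch of a platform log record that uses the field names above. It is illustrative only; values such as my-project are placeholders, and the actual record structure in IBM Cloud Logs can differ.

```json
{
  "app": "codeengine",
  "stream": "stdout",
  "label.Project": "my-project",
  "label.Stream": "stdout",
  "level": "error",
  "message.message": "Payment service timeout",
  "message.serviceName": "codeengine",
  "message._app": "my-app-0001-pod-abcde"
}
```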

Viewing logs from the console

When you work with Code Engine apps, jobs, functions, or builds in the console with logging enabled, logs are forwarded to an IBM Cloud Logs service where they are indexed, enabling full-text search through all generated messages and convenient querying based on specific fields.

The IBM Cloud Logs instance that receives platform logs does not have to be in the same region as your Code Engine project, and you are not required to create this instance before you work with your Code Engine component. You can add logging capabilities at any time from your Code Engine app, job, function, or build page in the console.

To generate logs for any platform service, you only need to enable logging one time per region, per account.

Considerations for viewing logs from the console

When you want to use logging from the console, you must first configure IBM Cloud Logs platform logs to receive Code Engine logging data with IBM Cloud Logs Routing. To check for active IBM Cloud Logs instances, see the Observability dashboard.

Review the IBM Cloud Logs service plan information as you consider retention, search, and log analysis needs.

When you view log data for Code Engine applications, runs of your job, or runs of your build, delays can occur before the data is available in IBM Cloud Logs. For example, it might take around 5 to 10 minutes for your log data to show in IBM Cloud Logs, especially if you are using the Store and search data pipeline.

Review the documentation on Data Pipelines to learn about options to balance log latency and cost for your IBM Cloud Logs instances.

When you use logging with the CLI, you do not need to configure IBM Cloud Logs platform logs, as the Code Engine CLI logging fetches its data differently.

The logging capabilities offered through the CLI are limited and should be considered for development purposes only. When you run production workloads, always use an IBM Cloud Logs instance, which offers log retention, filter, and search capabilities.

Can I apply filters on IBM Cloud Logs data?

From the IBM Cloud Logs page, you can scope the filter to display log data at the component level or, more granularly, at the level of a specific application revision, job run, or build run, based on your needs.

  • If message.serviceName:"codeengine" is set, then only Code Engine logs are displayed.

  • If label.Project:'<project_name>' is set, then only logs from a specific project are displayed.

  • If message._app:'<your_component_name>' is set, then only logs from the specified component (application, job, or build) are displayed. If your Code Engine components share the same name, the filter includes logs from these components. For example,

    • The filter message.serviceName:"codeengine" AND message._app:"myapp" scopes the logs to the myapp application level.
    • The filter message.serviceName:"codeengine" AND message._app:"myapp\-00002" scopes the logs to the myapp-00002 application revision level.
    • The filter message.serviceName:"codeengine" AND message._app:"myjob" scopes the logs to the specific myjob job level.
    • The filter message.serviceName:"codeengine" AND message._app:"myjob\-jobrun\-t6m7l" scopes the logs to the specific myjob-jobrun-t6m7l job run level.
    • The filter message.serviceName:"codeengine" AND message._app:"mybuild" scopes the logs to the specific mybuild build level.
    • The filter message.serviceName:"codeengine" AND message._app:"mybuild\-run\-121212" scopes the logs to the specific mybuild-run-121212 build run level.

For more information about configuring and starting logging in the console, see viewing app, job, or function logs from the console.

Viewing app, job, or function logs from the console

You can view logs for apps, jobs, or functions. The steps to view logs for any of these components from the console are similar.

After you select the project that you want to work with, you can add logging capabilities from the Code Engine Overview page or one of its child pages such as the Applications, Jobs, or Functions page; or from the page that is specific to your application, job, or function. The following steps assume that you are working from a specific Code Engine page.

  1. Go to an app, job, or function that you created and deployed. From the Projects page on the Code Engine console, select your project and then select Applications, Jobs, or Functions as appropriate. Select the app, job, or function with which you want to work.
  2. If you previously created an IBM Cloud Logs instance, click Logging to open the IBM Cloud Logs service.
  3. To add and configure logging capabilities, complete the following steps:
    1. From the Test application, Submit job, or Test function options menu, click Add logging to create the IBM Cloud Logs instance. This action opens the IBM Cloud Logs service.
    2. From the IBM Cloud Logs service, create your logging instance. To confirm that your logging instance is created, check the Observability dashboard.
    3. From your Code Engine app, job, or function page, click Add logging from the Test application, Submit job, or Test function options menu. This time, select an IBM Cloud Logs instance to receive platform logs. Choose the logging instance that you created in the prior step. Click Select. Code Engine requires enabled platform logs to receive Code Engine logging data. When you complete this action, Code Engine enables platform logging for you.
  4. Now that platform logs are configured, from your Code Engine app, job, or function page, click Logging from the Test application, Submit job, or Test function options menu to open your platform logs window. To confirm that platform logs are set for your region, check the Observability dashboard.
  5. Optional: Refine the filter for your search.
  6. Verify your configuration by completing one of the following steps:
    • For an application or a function, test it: click Test application or Test function as appropriate, and then click Send request. To open the application or function in a web page, click Application URL or Function URL. You can view platform logs from the test in the platform logs window.
    • For a job, run it: from the Job runs area, click Submit job to run your job. Provide the job run configuration values or you can take the default values. Click Submit job to run your job. You can view platform logs from the job run in the platform logs window.

Your IBM Cloud Logs instance is now configured such that it can receive platform logging for your Code Engine app, job, or function.

Alternatively, you can configure an IBM Cloud Logs instance by using the Observability dashboard to create the instance, and then by configuring platform logs routing.

Viewing build logs from the console

You can display logs for specific build run instances from the console.

  1. Go to the Code Engine dashboard.
  2. Select a project or create one.
  3. From the project page, click Image builds.
  4. From the Image build tab, click the name of your image build to open the build page for a defined build, or create a build.
  5. From the build page for your defined build, click the name of your build run instance in the Build runs section. You might need to click Submit build to create a build run. You can view platform logs from the build run in the platform logs window. Alternatively, you can view build log information for the build step details from the build run instance page; expand the build steps for specific build step log data. Optionally, refine the filter for your search.

Viewing logs with the CLI

To view logging output with the CLI, you must have a running instance of your app or job. If an app is scaled to zero or a job run instance is completed, the output for the ibmcloud ce app logs and ibmcloud ce jobrun logs commands does not have log data. Alternatively, you can use the IBM Cloud Logs service to view log data.

Viewing application logs with the CLI

To view app logs for a specific app with the CLI, use the application logs command. You can display logs of all the instances of an app or display logs of a specific instance of an app. The app get command displays details about your app, including the running instances of the app.

  • To view the logs for all instances of the myapp app, specify the name of the app with the --app option. For example:

    ibmcloud ce app logs --app myapp
    

    Example output

    Getting logs for all instances of application 'myapp'...
    OK
    
    myapp-ii18y-2-deployment-7657c5f4f9-dgk5f:
    Server running at http://0.0.0.0:8080/
    
  • To view the logs for a specific instance of the app, specify the name of the specific instance of the app with the --instance option. For example:

    ibmcloud ce app logs --instance myapp-ii18y-2-deployment-7657c5f4f9-dgk5f
    

    Example output

    Getting logs for application instance 'myapp-a5yp2-2-deployment-65766594d4-hj6c5'...
    OK
    
    myapp-a5yp2-2-deployment-65766594d4-hj6c5:
    Server running at http://0.0.0.0:8080/
    

Viewing job logs with the CLI

To view logs for a specific job run with the CLI, use the jobrun logs command. You can display logs of all the instances of a job run or display logs of a specific instance of a job run. The jobrun get command displays details about your job run, including the instances of the job run.

  • To view the logs for all instances of the testjobrun job run, specify the name of the job run with the --jobrun option. For example:

    ibmcloud ce jobrun logs --jobrun testjobrun
    

    Example output

    Getting jobrun 'testjobrun'...
    Getting instances of jobrun 'testjobrun'...
    Getting logs for all instances of job run 'testjobrun'...
    OK
    
    testjobrun-1-0:
    Hello World!
    
    testjobrun-2-0:
    Hello World!
    
    testjobrun-3-0:
    Hello World!
    
    testjobrun-4-0:
    Hello World!
    
    testjobrun-5-0:
    Hello World!
    
  • To view the logs for the testjobrun-1-0 job run instance, specify the name of a specific instance of the job run with the --instance option. For example:

    ibmcloud ce jobrun logs --instance testjobrun-1-0
    

    Example output

    Getting logs for job run instance 'testjobrun-1-0'...
    OK
    
    testjobrun-1-0:
    Hello World!
    

Viewing build logs with the CLI

To view build logs for a specific build run with the CLI, use the buildrun logs command. You can display logs of all the instances of a build run based on the name of the build run.

To view the logs for all instances of the mybuildrun build run, specify the name of the build run with the --name option. For example:

ibmcloud ce buildrun logs --name mybuildrun

Example output

Getting build run 'mybuildrun'...
Getting instances of build run 'mybuildrun'...
Getting logs for build run 'mybuildrun'...
OK

mybuildrun-zg5rj-pod-z5gzb/step-git-source-source-r9fcf:
{"level":"info","ts":1614363665.8331757,"caller":"git/git.go:169","msg":"Successfully cloned https://github.com/IBM/CodeEngine @ 8b514ce871e50d67cfea3e344b90cade4bd26e90 (grafted, HEAD, origin/main) in path /workspace/source"}
{"level":"info","ts":1614363666.82988,"caller":"git/git.go:207","msg":"Successfully initialized and updated submodules in path /workspace/source"}

mybuildrun-zg5rj-pod-z5gzb/step-build-and-push:
INFO[0002] Retrieving image manifest node:12-alpine
INFO[0002] Retrieving image node:12-alpine
INFO[0003] Retrieving image manifest node:12-alpine
INFO[0003] Retrieving image node:12-alpine
INFO[0003] Built cross stage deps: map[]
INFO[0003] Retrieving image manifest node:12-alpine
INFO[0003] Retrieving image node:12-alpine
INFO[0004] Retrieving image manifest node:12-alpine
INFO[0004] Retrieving image node:12-alpine
INFO[0004] Executing 0 build triggers
INFO[0004] Unpacking rootfs as cmd RUN npm install requires it.
INFO[0008] RUN npm install
INFO[0008] Taking snapshot of full filesystem...
INFO[0010] cmd: /bin/sh
INFO[0010] args: [-c npm install]
INFO[0010] Running: [/bin/sh -c npm install]
npm WARN saveError ENOENT: no such file or directory, open '/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/package.json'
npm WARN !invalid#2 No description
npm WARN !invalid#2 No repository field.
npm WARN !invalid#2 No README data
npm WARN !invalid#2 No license field.

up to date in 0.267s
found 0 vulnerabilities

INFO[0011] Taking snapshot of full filesystem...
INFO[0011] COPY server.js .
INFO[0011] Taking snapshot of files...
INFO[0011] EXPOSE 8080
INFO[0011] cmd: EXPOSE
INFO[0011] Adding exposed port: 8080/tcp
INFO[0011] CMD [ "node", "server.js" ]

mybuildrun-zg5rj-pod-z5gzb/step-image-digest-exporter-ngl6j:
2021/02/26 18:21:02 warning: unsuccessful cred copy: ".docker" from "/tekton/creds" to "/tekton/home": unable to open destination: open /tekton/home/.docker/config.json: permission denied
{"severity":"INFO","timestamp":"2021-02-26T18:21:26.372494581Z","caller":"logging/config.go:116","message":"Successfully created the logger."}
{"severity":"INFO","timestamp":"2021-02-26T18:21:26.372621756Z","caller":"logging/config.go:117","message":"Logging level set to: info"}