
Using Go

The IBM Cloud® Object Storage SDK for Go provides features to make the most of IBM Cloud Object Storage.

The IBM Cloud Object Storage SDK for Go is comprehensive, with many features and capabilities that exceed the scope and space of this guide. For detailed class and method documentation see the Go API documentation. Source code can be found in the GitHub repository.

Getting the SDK

Use go get to retrieve the SDK and add it to your GOPATH workspace or your project's Go module dependencies. The SDK requires a minimum version of Go 1.10 and a maximum version of Go 1.12. Future versions of Go will be supported once our quality control process is complete.

go get github.com/IBM/ibm-cos-sdk-go

To update the SDK, use go get -u to retrieve its latest version.

go get -u github.com/IBM/ibm-cos-sdk-go

Import packages

After you install the SDK, import the packages that you require into your Go application, as shown in the following example:

import (
    "github.com/IBM/ibm-cos-sdk-go/aws"
    "github.com/IBM/ibm-cos-sdk-go/aws/credentials/ibmiam"
    "github.com/IBM/ibm-cos-sdk-go/aws/session"
    "github.com/IBM/ibm-cos-sdk-go/service/s3"
)

Creating a client and sourcing Service credentials

To connect to IBM Cloud Object Storage, a client is created and configured by providing credential information (API key and service instance ID). These values can also be automatically sourced from a credentials file or from environment variables.

The credentials can be found by creating a Service Credential, or through the CLI.

Figure 1 shows an example of how to define environment variables for an application runtime in the IBM Cloud Object Storage portal. The required variables are IBM_API_KEY_ID, containing the apikey from your Service Credential; IBM_SERVICE_INSTANCE_ID, holding the resource_instance_id, also from your Service Credential; and IBM_AUTH_ENDPOINT, with a value appropriate to your account, such as https://iam.cloud.ibm.com/identity/token. If you use environment variables to define your application credentials, use WithCredentials(ibmiam.NewEnvCredentials(aws.NewConfig())), replacing the similar method used in the configuration example.

Figure 1. Environment variables defined for an application runtime
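With those variables set, a minimal sketch of an environment-sourced configuration might look like the following (the endpoint placeholder is yours to supply):

// Sketch: configuration that sources credentials from environment
// variables instead of hard-coded constants.
conf := aws.NewConfig().
    WithRegion("us-standard").
    WithEndpoint("<SERVICE_ENDPOINT>").
    WithCredentials(ibmiam.NewEnvCredentials(aws.NewConfig())).
    WithS3ForcePathStyle(true)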

If migrating from AWS S3, you can also source credentials data from ~/.aws/credentials in the format:

[default]
aws_access_key_id = {ACCESS_KEY}
aws_secret_access_key = {SECRET_ACCESS_KEY}

If both ~/.bluemix/cos_credentials and ~/.aws/credentials exist, cos_credentials takes precedence.

Initializing configuration


// Constants for IBM COS values
const (
    apiKey            = "<API_KEY>"  // eg "0viPHOY7LbLNa9eLftrtHPpTjoGv6hbLD1QalRXikliJ"
    serviceInstanceID = "<RESOURCE_INSTANCE_ID>" // eg "crn:v1:bluemix:public:cloud-object-storage:global:a/3bf0d9003xxxxxxxxxx1c3e97696b71c:d6f04d83-6c4f-4a62-a165-696756d63903::"
    authEndpoint      = "https://iam.cloud.ibm.com/identity/token"
    serviceEndpoint   = "<SERVICE_ENDPOINT>" // eg "https://s3.us.cloud-object-storage.appdomain.cloud"
    bucketLocation    = "<LOCATION>" // eg "us-standard"
)

// Create config

conf := aws.NewConfig().
    WithRegion(bucketLocation).
    WithEndpoint(serviceEndpoint).
    WithCredentials(ibmiam.NewStaticCredentials(aws.NewConfig(), authEndpoint, apiKey, serviceInstanceID)).
    WithS3ForcePathStyle(true)

Creating a client and sourcing Trusted Profile credentials

A client can be created by providing either service credentials or trusted profile credentials. This section describes how to create a client by using trusted profile credentials.

To connect to IBM Cloud Object Storage, a client is created and configured by providing trusted profile credential information (a trusted profile ID and a compute resource (CR) token file path). These values can also be automatically sourced from environment variables.

To create a trusted profile that establishes trust with compute resources based on specific attributes, and to define a policy that assigns access to resources, see Managing access for apps in compute resources.

To learn more about establishing trust with a Kubernetes cluster, see Using Trusted Profiles in your Kubernetes and OpenShift Clusters.

The Go SDK supports trusted profile authentication only in Kubernetes and OpenShift clusters.

Trusted profile credentials can be set as environment variables in the application runtime. The required variables are TRUSTED_PROFILE_ID, containing your trusted profile ID; CR_TOKEN_FILE_PATH, holding the path to the service account token file; IBM_SERVICE_INSTANCE_ID, holding the resource_instance_id from your Service Credential; and IBM_AUTH_ENDPOINT, with a value appropriate to your account, such as https://iam.cloud.ibm.com/identity/token. If you use environment variables to define your application credentials, use WithCredentials(ibmiam.NewEnvCredentials(aws.NewConfig())), replacing the similar method used in the configuration example.

Initializing configuration


// Constants for IBM COS values
const (
    trustedProfileID  = "<TRUSTED_PROFILE_ID>"  // eg "Profile-5790481a-8fc5-46a4-bae3-d0e64ff6e0ad"
    crTokenFilePath   = "<SERVICE_ACCOUNT_TOKEN_FILE_PATH>" // "/var/run/secrets/tokens/service-account-token"
    serviceInstanceID = "<RESOURCE_INSTANCE_ID>" // "crn:v1:bluemix:public:cloud-object-storage:global:a/<CREDENTIAL_ID_AS_GENERATED>:<SERVICE_ID_AS_GENERATED>::"
    authEndpoint      = "https://iam.cloud.ibm.com/identity/token"
    serviceEndpoint   = "<SERVICE_ENDPOINT>" // eg "https://s3.us.cloud-object-storage.appdomain.cloud"
    bucketLocation    = "<LOCATION>" // eg "us-standard"
)

// Create config
conf := aws.NewConfig().
    WithRegion(bucketLocation).
    WithEndpoint(serviceEndpoint).
    WithCredentials(ibmiam.NewTrustedProfileCredentialsCR(aws.NewConfig(), authEndpoint, trustedProfileID, crTokenFilePath, serviceInstanceID)).
    WithS3ForcePathStyle(true)

The API key and the trusted profile ID can't both be set as environment variables. Set only one of them; otherwise, the Go SDK returns an error.

For more information about endpoints, see Endpoints and storage locations.

Code Examples

Creating a new bucket

A list of valid provisioning codes for LocationConstraint can be referenced in the Storage Classes guide. Note that this sample uses the location constraint that matches the Cold Vault storage class in the sample configuration; your locations and configuration may vary.

func main() {

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Names
    newBucket := "<NEW_BUCKET_NAME>"
    newColdBucket := "<NEW_COLD_BUCKET_NAME>"
    input := &s3.CreateBucketInput{
        Bucket: aws.String(newBucket),
    }
    client.CreateBucket(input)

    input2 := &s3.CreateBucketInput{
        Bucket: aws.String(newColdBucket),
        CreateBucketConfiguration: &s3.CreateBucketConfiguration{
            LocationConstraint: aws.String("us-cold"),
        },
    }
    client.CreateBucket(input2)

    d, _ := client.ListBuckets(&s3.ListBucketsInput{})
    fmt.Println(d)
}
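The examples in this topic discard returned errors for brevity. In production code, check the error; a minimal sketch using the SDK's awserr package (imported from github.com/IBM/ibm-cos-sdk-go/aws/awserr) might look like the following:

// Sketch: inspect the error returned by CreateBucket.
result, err := client.CreateBucket(input)
if err != nil {
    if aerr, ok := err.(awserr.Error); ok {
        fmt.Println(aerr.Code(), aerr.Message()) // service error code and message
    } else {
        fmt.Println(err)
    }
    return
}
fmt.Println(result)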

List available buckets

func main() {

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Call Function
    d, _ := client.ListBuckets(&s3.ListBucketsInput{})
    fmt.Println(d)
}

Upload an object to a bucket

func main() {

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Variables and random content to sample, replace when appropriate
    bucketName := "<BUCKET_NAME>"
    key := "<OBJECT_KEY>"
    content := bytes.NewReader([]byte("<CONTENT>"))

    input := s3.PutObjectInput{
        Bucket:        aws.String(bucketName),
        Key:           aws.String(key),
        Body:          content,
    }

    // Call Function to upload (Put) an object
    result, _ := client.PutObject(&input)
    fmt.Println(result)
}

List items in a bucket (List Objects V2)

func main() {

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Name
    Bucket := "<BUCKET_NAME>"

    // Call Function
    Input := &s3.ListObjectsV2Input{
            Bucket: aws.String(Bucket),
        }

    l, e := client.ListObjectsV2(Input)
    fmt.Println(l)
    fmt.Println(e) // prints "<nil>"
}

// The response should be formatted like the following example:
//{
// 	Contents: [{
// 		ETag: "\"dbxxxxx53xxx7d06378204e3xxxxxx9f\"",
// 		Key: "file1.json",
// 		LastModified: 2019-10-15 22:22:52.62 +0000 UTC,
// 		Size: 1045,
// 		StorageClass: "STANDARD"
// 	  },{
// 		ETag: "\"6e1xxxxx63xxxdefb440f72axxxxxxc2\"",
// 		Key: "file2.json",
// 		LastModified: 2019-10-15 23:08:10.074 +0000 UTC,
// 		Size: 1045,
// 		StorageClass: "STANDARD"
// 	  }],
// 	Delimiter: "",
// 	IsTruncated: false,
// 	KeyCount: 2,
// 	MaxKeys: 1000,
// 	Name: "<BUCKET_NAME>",
// 	Prefix: ""
//}
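A single ListObjectsV2 call returns at most MaxKeys (1000 by default) results. To walk a larger bucket, the SDK's generated pager can help; a short sketch, assuming the standard AWS-style ListObjectsV2Pages helper:

// Sketch: iterate over every page of results.
client.ListObjectsV2Pages(Input, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
    for _, obj := range page.Contents {
        fmt.Println(*obj.Key, *obj.Size)
    }
    return true // keep paging until the listing is exhausted
})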

Get an object's contents

func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Variables
    bucketName := "<NEW_BUCKET_NAME>"
    key := "<OBJECT_KEY>"

    // Provide the bucket name and an object key (a flat string name)
    Input := s3.GetObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(key),
    }

    // Call Function
    res, _ := client.GetObject(&Input)

    body, _ := ioutil.ReadAll(res.Body)
    fmt.Println(string(body))
}
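For large objects, you would typically stream the body to a file rather than reading it fully into memory; a minimal sketch with the standard library (the local path is illustrative):

// Sketch: stream the object's body to a local file instead of buffering it.
f, _ := os.Create("/tmp/downloaded-object")
defer f.Close()
io.Copy(f, res.Body)
res.Body.Close()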

Delete an object from a bucket

func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)
    // Bucket Name
    bucket := "<BUCKET_NAME>"
    input := &s3.DeleteObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String("<OBJECT_KEY>"),
    }
    d, _ := client.DeleteObject(input)
    fmt.Println(d)
}

Delete multiple objects from a bucket

func main() {

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Name
    bucket := "<BUCKET_NAME>"

    input := &s3.DeleteObjectsInput{
        Bucket: aws.String(bucket),
        Delete: &s3.Delete{
            Objects: []*s3.ObjectIdentifier{
                {
                    Key: aws.String("<OBJECT_KEY1>"),
                },
                {
                    Key: aws.String("<OBJECT_KEY2>"),
                },
                {
                    Key: aws.String("<OBJECT_KEY3>"),
                },
            },
            Quiet: aws.Bool(false),
        },
    }

    d, _ := client.DeleteObjects(input)
    fmt.Println(d)
}

Delete a bucket

func main() {

    // Bucket Name
    bucket := "<BUCKET_NAME>"

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    input := &s3.DeleteBucketInput{
        Bucket: aws.String(bucket),
    }
    d, _ := client.DeleteBucket(input)
    fmt.Println(d)
}
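If later operations depend on the bucket being fully removed, the SDK's AWS-style waiters can block until the deletion is visible; a sketch, assuming the generated WaitUntilBucketNotExists waiter is available in this fork:

// Sketch: block until the deleted bucket is no longer visible.
err := client.WaitUntilBucketNotExists(&s3.HeadBucketInput{
    Bucket: aws.String(bucket),
})
fmt.Println(err)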

Run a manual multi-part upload

func main() {

    // Variables
    bucket := "<BUCKET_NAME>"
    key := "<OBJECT_KEY>"
    content := bytes.NewReader([]byte("<CONTENT>"))

    input := s3.CreateMultipartUploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    }

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)
    upload, _ := client.CreateMultipartUpload(&input)

    uploadPartInput := s3.UploadPartInput{
        Bucket:     aws.String(bucket),
        Key:        aws.String(key),
        PartNumber: aws.Int64(1),
        UploadId:   upload.UploadId,
        Body:       content,
    }

    var completedParts []*s3.CompletedPart
    completedPart, _ := client.UploadPart(&uploadPartInput)

    completedParts = append(completedParts, &s3.CompletedPart{
        ETag:       completedPart.ETag,
        PartNumber: aws.Int64(1),
    })

    completeMPUInput := s3.CompleteMultipartUploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        MultipartUpload: &s3.CompletedMultipartUpload{
            Parts: completedParts,
        },
        UploadId: upload.UploadId,
    }

    d, _ := client.CompleteMultipartUpload(&completeMPUInput)
    fmt.Println(d)
}
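If a part fails to upload, you would normally abort the multipart upload so that stored parts do not accumulate; a minimal sketch using AbortMultipartUpload:

// Sketch: abort an in-progress multipart upload, discarding uploaded parts.
abortInput := s3.AbortMultipartUploadInput{
    Bucket:   aws.String(bucket),
    Key:      aws.String(key),
    UploadId: upload.UploadId,
}
client.AbortMultipartUpload(&abortInput)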

Using Key Protect

Key Protect can be added to a storage bucket to manage encryption keys. All data is encrypted in IBM COS, but Key Protect provides a service for generating, rotating, and controlling access to encryption keys by using a centralized service.

Before You Begin

To create a bucket with Key Protect enabled, you need the CRN of the root key from your Key Protect service instance.

Retrieving the Root Key CRN

  1. Retrieve the instance ID for your Key Protect service.
  2. Use the Key Protect API to retrieve all your available keys (a sketch follows the example CRN below).
  3. Retrieve the CRN of the root key that you will use to enable Key Protect on your bucket. The CRN looks similar to the following example:

crn:v1:bluemix:public:kms:us-south:a/3d624cd74a0dea86ed8efe3101341742:90b6a1db-0fe1-4fe9-b91e-962c327df531:key:0bg3e33e-a866-50f2-b715-5cba2bc93234
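As a hypothetical illustration of step 2, the keys can be listed over the Key Protect REST API; the region, IAM token, and instance ID below are placeholders that you supply:

// Hypothetical sketch: list Key Protect keys to find the root key CRN.
req, _ := http.NewRequest("GET", "https://us-south.kms.cloud.ibm.com/api/v2/keys", nil)
req.Header.Set("Authorization", "Bearer <IAM_TOKEN>")
req.Header.Set("Bluemix-Instance", "<KEY_PROTECT_INSTANCE_ID>")
resp, _ := http.DefaultClient.Do(req)
defer resp.Body.Close()
body, _ := ioutil.ReadAll(resp.Body)
fmt.Println(string(body)) // JSON listing of keys, including each key's CRN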

Creating a bucket with Key Protect enabled

func main() {

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Names
    newBucket := "<NEW_BUCKET_NAME>"
    fmt.Println("Creating new encrypted bucket:", newBucket)

    input := &s3.CreateBucketInput{
        Bucket: aws.String(newBucket),
        IBMSSEKPCustomerRootKeyCrn: aws.String("<ROOT-KEY-CRN>"),
        IBMSSEKPEncryptionAlgorithm:aws.String("<ALGORITHM>"),
    }
    client.CreateBucket(input)

    // List Buckets
    d, _ := client.ListBuckets(&s3.ListBucketsInput{})
    fmt.Println(d)
}

Key Values

  • <NEW_BUCKET_NAME> - The name of the new bucket.
  • <ROOT-KEY-CRN> - CRN of the Root Key that is obtained from the Key Protect service.
  • <ALGORITHM> - The encryption algorithm that is used for new objects added to the bucket (Default is AES256).

Use the transfer manager

func main() {

    // Variables
    bucket := "<BUCKET_NAME>"
    key := "<OBJECT_KEY>"

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Create an uploader with S3 client and custom options
    uploader := s3manager.NewUploaderWithClient(client, func(u *s3manager.Uploader) {
        u.PartSize = 5 * 1024 * 1024 // 5MB per part
    })

    // Make a buffer of 15MB of random content
    buffer := make([]byte, 15*1024*1024)
    random := rand.New(rand.NewSource(time.Now().Unix()))
    random.Read(buffer)

    input := &s3manager.UploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        Body:   io.ReadSeeker(bytes.NewReader(buffer)),
    }

    // Perform an upload.
    d, _ := uploader.Upload(input)
    fmt.Println(d)
    // Perform an upload with options different from those in the Uploader.
    f, _ := uploader.Upload(input, func(u *s3manager.Uploader) {
        u.PartSize = 10 * 1024 * 1024 // 10MB part size
        u.LeavePartsOnError = true    // Don't delete the parts if the upload fails.
    })
    fmt.Println(f)
}
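The transfer manager also supports parallel downloads; a short sketch using s3manager.NewDownloaderWithClient and an in-memory aws.WriteAtBuffer:

// Sketch: download an object in parallel parts into an in-memory buffer.
downloader := s3manager.NewDownloaderWithClient(client)
buf := aws.NewWriteAtBuffer([]byte{})
n, err := downloader.Download(buf, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
})
fmt.Println(n, err) // n is the number of bytes written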

Getting an extended listing

func main() {

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    input := new(s3.ListBucketsExtendedInput).SetMaxKeys(<MAX_KEYS>).SetMarker("<MARKER>").SetPrefix("<PREFIX>")
    output, _ := client.ListBucketsExtended(input)

    jsonBytes, _ := json.MarshalIndent(output, " ", " ")
    fmt.Println(string(jsonBytes))
}

Key Values

  • <MAX_KEYS> - Maximum number of buckets to retrieve in the request.
  • <MARKER> - The bucket name to start the listing (Skip until this bucket).
  • <PREFIX> - Only include buckets whose names start with this prefix.

Getting an extended listing with pagination

func main() {

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    input := new(s3.ListBucketsExtendedInput).SetMaxKeys(<MAX_KEYS>).SetMarker("<MARKER>").SetPrefix("<PREFIX>")

    // Page through the listing, advancing the marker to the last bucket
    // name seen, until the results are no longer truncated
    i := 0
    for {
        output, err := client.ListBucketsExtended(input)
        if err != nil {
            fmt.Println(err)
            break
        }
        for _, bucket := range output.Buckets {
            fmt.Println(i, "\t\t", *bucket.Name, "\t\t", *bucket.LocationConstraint, "\t\t", *bucket.CreationDate)
            i++
        }
        if output.IsTruncated == nil || !*output.IsTruncated {
            break
        }
        input.SetMarker(*output.Buckets[len(output.Buckets)-1].Name)
    }
}

Key Values

  • <MAX_KEYS> - Maximum number of buckets to retrieve in the request.
  • <MARKER> - The bucket name to start the listing (Skip until this bucket).
  • <PREFIX> - Only include buckets whose names start with this prefix.

Archive Tier Support

You can automatically archive objects after a specified length of time or on a specified date. Once archived, a temporary copy of an object can be restored for access as needed. Note that restoring the temporary copy of an object can take up to 12 hours.

To use the example provided, supply your own configuration, including replacing <apikey> and the other bracketed <...> values. Keep in mind that using environment variables is more secure and that you should not put credentials in code that will be versioned.

An archive policy is set at the bucket level by calling the PutBucketLifecycleConfiguration method on a client instance. A newly added or modified archive policy applies to new objects uploaded and does not affect existing objects.


func main() {

	// Create Client
	sess := session.Must(session.NewSession())
	client := s3.New(sess, conf)

	// PUT BUCKET LIFECYCLE CONFIGURATION
	// Replace <BUCKET_NAME> with the name of the bucket
	lInput := &s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String("<BUCKET_NAME>"),
		LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
			Rules: []*s3.LifecycleRule{
				{
					Status: aws.String("Enabled"),
					Filter: &s3.LifecycleRuleFilter{},
					ID:     aws.String("id3"),
					Transitions: []*s3.Transition{
						{
							Days:         aws.Int64(5),
							StorageClass: aws.String("Glacier"),
						},
					},
				},
			},
		},
	}
	l, e := client.PutBucketLifecycleConfiguration(lInput)
	fmt.Println(l) // should print an empty bracket
	fmt.Println(e) // should print <nil>

	// GET BUCKET LIFECYCLE CONFIGURATION
	gInput := &s3.GetBucketLifecycleConfigurationInput{
		Bucket: aws.String("<bucketname>"),
	}
	g, e := client.GetBucketLifecycleConfiguration(gInput)
	fmt.Println(g)
	fmt.Println(e) // see response for results

	// RESTORE OBJECT
	// Replace <OBJECT_KEY> with the appropriate key
	rInput := &s3.RestoreObjectInput{
		Bucket: aws.String("<BUCKET_NAME>"),
		Key:    aws.String("<OBJECT_KEY>"),
		RestoreRequest: &s3.RestoreRequest{
			Days: aws.Int64(100),
			GlacierJobParameters: &s3.GlacierJobParameters{
				Tier: aws.String("Bulk"),
			},
		},
	}
	r, e := client.RestoreObject(rInput)
	fmt.Println(r)
	fmt.Println(e)
}

A typical response looks like the following example:

 {
   Rules: [{
       Filter: {

       },
       ID: "id3",
       Status: "Enabled",
       Transitions: [{
           Days: 5,
           StorageClass: "GLACIER"
         }]
     }]
 }

Immutable Object Storage

Users can configure buckets with an Immutable Object Storage policy to prevent objects from being modified or deleted for a defined period of time. The retention period can be specified on a per-object basis, or objects can inherit a default retention period set on the bucket. It is also possible to set open-ended and permanent retention periods. Immutable Object Storage meets the rules set forth by the SEC governing record retention, and IBM Cloud administrators are unable to bypass these restrictions.

Immutable Object Storage does not support Aspera transfers via the SDK to upload objects or directories at this stage.

func main() {

	// Create Client
	sess := session.Must(session.NewSession())
	client := s3.New(sess, conf)

	// Create a bucket
	input := &s3.CreateBucketInput{
		Bucket: aws.String("<BUCKET_NAME>"),
	}
	d, e := client.CreateBucket(input)
	fmt.Println(d) // should print an empty bracket
	fmt.Println(e) // should print <nil>

	// PUT BUCKET PROTECTION CONFIGURATION
	pInput := &s3.PutBucketProtectionConfigurationInput{
		Bucket: aws.String("<BUCKET_NAME>"),
		ProtectionConfiguration: &s3.ProtectionConfiguration{
			DefaultRetention: &s3.BucketProtectionDefaultRetention{
				Days: aws.Int64(100),
			},
			MaximumRetention: &s3.BucketProtectionMaximumRetention{
				Days: aws.Int64(1000),
			},
			MinimumRetention: &s3.BucketProtectionMinimumRetention{
				Days: aws.Int64(10),
			},
			Status: aws.String("Retention"),
		},
	}
	p, e := client.PutBucketProtectionConfiguration(pInput)
	fmt.Println(p)
	fmt.Println(e) // see response for results

	// GET BUCKET PROTECTION CONFIGURATION
	gInput := &s3.GetBucketProtectionConfigurationInput{
		Bucket: aws.String("<BUCKET_NAME>"),
	}
	g, e := client.GetBucketProtectionConfiguration(gInput)
	fmt.Println(g)
	fmt.Println(e)
}

A typical response looks like the following example:

 {
   ProtectionConfiguration: {
     DefaultRetention: {
       Days: 100
     },
     MaximumRetention: {
       Days: 1000
     },
     MinimumRetention: {
       Days: 10
     },
     Status: "COMPLIANCE"
   }
 }
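The retention period can also be set on a single object at upload time; a sketch assuming the IBM retention extension field RetentionPeriod (expressed in seconds) on PutObjectInput:

// Sketch: upload an object with its own retention period.
input := s3.PutObjectInput{
    Bucket:          aws.String("<BUCKET_NAME>"),
    Key:             aws.String("<OBJECT_KEY>"),
    Body:            bytes.NewReader([]byte("<CONTENT>")),
    RetentionPeriod: aws.Int64(3600), // assumption: one hour, in seconds
}
client.PutObject(&input)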

Create a hosted static website

This operation requires permissions; typically, only the bucket owner is permitted to configure a bucket to host a static website. The parameters determine the default page served to visitors (the index document suffix) and an optional error document, included here to complete the example.

func main() {

	// Create Client
	sess := session.Must(session.NewSession())
	client := s3.New(sess, conf)

	// Create a bucket
	input := &s3.CreateBucketInput{
		Bucket: aws.String("<BUCKET_NAME>"),
	}
	d, e := client.CreateBucket(input)
	fmt.Println(d) // should print an empty bracket
	fmt.Println(e) // should print <nil>

	// PUT BUCKET WEBSITE
	pInput := s3.PutBucketWebsiteInput{
		Bucket: aws.String("<BUCKET_NAME>"),
		WebsiteConfiguration: &s3.WebsiteConfiguration{
			IndexDocument: &s3.IndexDocument{
				Suffix: aws.String("index.html"),
			},
		},
	}

	pInput.WebsiteConfiguration.ErrorDocument = &s3.ErrorDocument{
		Key: aws.String("error.html"),
	}

	p, e := client.PutBucketWebsite(&pInput)
	fmt.Println(p)
	fmt.Println(e) // see response for results

}

Next Steps

If you haven't already, see the detailed class and method documentation in the Go API documentation.