Introduction

IBM Watson™ Visual Recognition is discontinued. Existing instances are supported until 1 December 2021, but as of 7 January 2021, you can't create new instances. Any instance that still exists on 1 December 2021 will be deleted.

Provide images to the IBM Watson Visual Recognition service for analysis. The service detects objects in your images based on a set of training images that you provide.

This documentation describes Java SDK major version 9. For more information about how to update your code from the previous version, see the migration guide.

This documentation describes Node SDK major version 6. For more information about how to update your code from the previous version, see the migration guide.

This documentation describes Python SDK major version 5. For more information about how to update your code from the previous version, see the migration guide.

This documentation describes Ruby SDK major version 2. For more information about how to update your code from the previous version, see the migration guide.

This documentation describes .NET Standard SDK major version 5. For more information about how to update your code from the previous version, see the migration guide.

This documentation describes Go SDK major version 2. For more information about how to update your code from the previous version, see the migration guide.

This documentation describes Swift SDK major version 4. For more information about how to update your code from the previous version, see the migration guide.

This documentation describes Unity SDK major version 5. For more information about how to update your code from the previous version, see the migration guide.

The IBM Watson Unity SDK has the following requirements.

  • The SDK requires Unity version 2018.2 or later to support Transport Layer Security (TLS) 1.2.
    • Set the project settings for both the Scripting Runtime Version and the Api Compatibility Level to .NET 4.x Equivalent.
    • For more information, see TLS 1.0 support.
  • The SDK doesn't support WebGL projects. Change your build settings to any platform except WebGL.

For more information about how to install and configure the SDK and SDK Core, see https://github.com/watson-developer-cloud/unity-sdk.

The code examples on this tab use the client library that is provided for Java.

Maven

<dependency>
  <groupId>com.ibm.watson</groupId>
  <artifactId>ibm-watson</artifactId>
  <version>9.3.0</version>
</dependency>

Gradle

compile 'com.ibm.watson:ibm-watson:9.3.0'

GitHub

The code examples on this tab use the client library that is provided for Node.js.

Installation

npm install ibm-watson@^6.2.0

GitHub

The code examples on this tab use the client library that is provided for Python.

Installation

pip install --upgrade "ibm-watson>=5.3.0"

GitHub

The code examples on this tab use the client library that is provided for Ruby.

Installation

gem install ibm_watson

GitHub

The code examples on this tab use the client library that is provided for Go.

go get -u github.com/watson-developer-cloud/go-sdk/v2@v2.2.0

GitHub

The code examples on this tab use the client library that is provided for Swift.

Cocoapods

pod 'IBMWatsonVisualRecognitionV4', '~> 4.3.0'

Carthage

github "watson-developer-cloud/swift-sdk" ~> 4.3.0

Swift Package Manager

.package(url: "https://github.com/watson-developer-cloud/swift-sdk", from: "4.3.0")

GitHub

The code examples on this tab use the client library that is provided for .NET Standard.

Package Manager

Install-Package IBM.Watson.VisualRecognition.v4 -Version 5.3.0

.NET CLI

dotnet add package IBM.Watson.VisualRecognition.v4 --version 5.3.0

PackageReference

<PackageReference Include="IBM.Watson.VisualRecognition.v4" Version="5.3.0" />

GitHub

The code examples on this tab use the client library that is provided for Unity.

GitHub

Endpoint URLs

Identify the base URL for your service instance.

IBM Cloud URLs

The base URLs come from the service instance. To find the URL, view the service credentials by clicking the name of the service in the Resource list, and use the value of the URL. Append the method path to this base URL to form the complete API endpoint for your request.

The following example URL represents a Visual Recognition instance that is hosted in Frankfurt:

https://api.eu-de.visual-recognition.watson.cloud.ibm.com/instances/6bbda3b3-d572-45e1-8c54-22d6ed9e52c2
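Joining the base URL and a method path can be sketched in a few lines of Python; the `endpoint` helper here is ours for illustration, not part of the SDK:

```python
def endpoint(base_url: str, method_path: str) -> str:
    """Join a service base URL and a method path into a full request URL."""
    return base_url.rstrip("/") + "/" + method_path.lstrip("/")

# For example, the v4 analyze method on the Frankfurt instance above:
full_url = endpoint(
    "https://api.eu-de.visual-recognition.watson.cloud.ibm.com"
    "/instances/6bbda3b3-d572-45e1-8c54-22d6ed9e52c2",
    "/v4/analyze",
)
```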

The following URLs represent the base URLs for Visual Recognition. When you call the API, use the URL that corresponds to the location of your service instance.

  • Dallas: https://api.us-south.visual-recognition.watson.cloud.ibm.com
  • Frankfurt: https://api.eu-de.visual-recognition.watson.cloud.ibm.com
  • Seoul: https://api.kr-seo.visual-recognition.watson.cloud.ibm.com

Set the correct service URL by calling the setServiceUrl() method of the service instance.

Set the correct service URL by specifying the serviceUrl parameter when you create the service instance.

Set the correct service URL by calling the set_service_url() method of the service instance.

Set the correct service URL by specifying the service_url property of the service instance.

Set the correct service URL by calling the SetServiceURL() method of the service instance.

Set the correct service URL by setting the serviceURL property of the service instance.

Set the correct service URL by calling the SetServiceUrl() method of the service instance.

Set the correct service URL by calling the SetServiceUrl() method of the service instance.

Dallas API endpoint example for services managed on IBM Cloud

curl -X {request_method} -u "apikey:{apikey}" "https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/{instance_id}"

Your service instance might not use this URL

Default URL

https://api.us-south.visual-recognition.watson.cloud.ibm.com

Example for the Frankfurt location

IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("{version}", authenticator);
visualRecognition.setServiceUrl("https://api.eu-de.visual-recognition.watson.cloud.ibm.com");

Default URL

https://api.us-south.visual-recognition.watson.cloud.ibm.com

Example for the Frankfurt location

const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
const { IamAuthenticator } = require('ibm-watson/auth');

const visualRecognition = new VisualRecognitionV4({
  version: '{version}',
  authenticator: new IamAuthenticator({
    apikey: '{apikey}',
  }),
  serviceUrl: 'https://api.eu-de.visual-recognition.watson.cloud.ibm.com',
});

Default URL

https://api.us-south.visual-recognition.watson.cloud.ibm.com

Example for the Frankfurt location

from ibm_watson import VisualRecognitionV4
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV4(
    version='{version}',
    authenticator=authenticator
)

visual_recognition.set_service_url('https://api.eu-de.visual-recognition.watson.cloud.ibm.com')

Default URL

https://api.us-south.visual-recognition.watson.cloud.ibm.com

Example for the Frankfurt location

require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v4"
include IBMWatson

authenticator = Authenticators::IamAuthenticator.new(
  apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV4.new(
  version: "{version}",
  authenticator: authenticator
)
visual_recognition.service_url = "https://api.eu-de.visual-recognition.watson.cloud.ibm.com"

Default URL

https://api.us-south.visual-recognition.watson.cloud.ibm.com

Example for the Frankfurt location

visualRecognition, visualRecognitionErr := visualrecognitionv4.NewVisualRecognitionV4(options)

if visualRecognitionErr != nil {
  panic(visualRecognitionErr)
}

visualRecognition.SetServiceURL("https://api.eu-de.visual-recognition.watson.cloud.ibm.com")

Default URL

https://api.us-south.visual-recognition.watson.cloud.ibm.com

Example for the Frankfurt location

let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "{version}", authenticator: authenticator)
visualRecognition.serviceURL = "https://api.eu-de.visual-recognition.watson.cloud.ibm.com"

Default URL

https://api.us-south.visual-recognition.watson.cloud.ibm.com

Example for the Frankfurt location

IamAuthenticator authenticator = new IamAuthenticator(
    apikey: "{apikey}"
    );

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("https://api.eu-de.visual-recognition.watson.cloud.ibm.com");

Default URL

https://api.us-south.visual-recognition.watson.cloud.ibm.com

Example for the Frankfurt location

var authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

while (!authenticator.CanAuthenticate())
    yield return null;

var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("https://api.eu-de.visual-recognition.watson.cloud.ibm.com");

Disabling SSL verification

All Watson services use Secure Sockets Layer (SSL) (or Transport Layer Security (TLS)) for secure connections between the client and server. The connection is verified against the local certificate store to ensure authentication, integrity, and confidentiality.

If you use a self-signed certificate, you need to disable SSL verification to make a successful connection.

Enabling SSL verification is highly recommended. Disabling SSL jeopardizes the security of the connection and data. Disable SSL only if necessary, and take steps to enable SSL as soon as possible.

To disable SSL verification for a curl request, use the --insecure (-k) option with the request.

To disable SSL verification, create an HttpConfigOptions object and set the disableSslVerification property to true. Then, pass the object to the service instance by using the configureClient method.

To disable SSL verification, set the disableSslVerification parameter to true when you create the service instance.

To disable SSL verification, pass True to the set_disable_ssl_verification method of the service instance.

To disable SSL verification, set the disable_ssl_verification parameter to true in the configure_http_client() method for the service instance.

To disable SSL verification, call the DisableSSLVerification method on the service instance.

To disable SSL verification, call the disableSSLVerification() method on the service instance. You cannot disable SSL verification on Linux.

To disable SSL verification, call the DisableSslVerification method with true on the service instance.

To disable SSL verification, set the DisableSslVerification property to true on the service instance.

Example to disable SSL verification. Replace {apikey} and {url} with your service credentials.

curl -k -X {request_method} -u "apikey:{apikey}" "{url}/{method}"

Example to disable SSL verification

IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("{version}", authenticator);
visualRecognition.setServiceUrl("{url}");

HttpConfigOptions configOptions = new HttpConfigOptions.Builder()
  .disableSslVerification(true)
  .build();
visualRecognition.configureClient(configOptions);

Example to disable SSL verification

const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
const { IamAuthenticator } = require('ibm-watson/auth');

const visualRecognition = new VisualRecognitionV4({
  version: '{version}',
  authenticator: new IamAuthenticator({
    apikey: '{apikey}',
  }),
  serviceUrl: '{url}',
  disableSslVerification: true,
});

Example to disable SSL verification

from ibm_watson import VisualRecognitionV4
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV4(
    version='{version}',
    authenticator=authenticator
)

visual_recognition.set_service_url('{url}')

visual_recognition.set_disable_ssl_verification(True)

Example to disable SSL verification

require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v4"
include IBMWatson

authenticator = Authenticators::IamAuthenticator.new(
  apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV4.new(
  version: "{version}",
  authenticator: authenticator
)
visual_recognition.service_url = "{url}"

visual_recognition.configure_http_client(disable_ssl_verification: true)

Example to disable SSL verification

visualRecognition, visualRecognitionErr := visualrecognitionv4.NewVisualRecognitionV4(options)

if visualRecognitionErr != nil {
  panic(visualRecognitionErr)
}

visualRecognition.SetServiceURL("{url}")

visualRecognition.DisableSSLVerification()

Example to disable SSL verification

let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "{version}", authenticator: authenticator)
visualRecognition.serviceURL = "{url}"

visualRecognition.disableSSLVerification()

Example to disable SSL verification

IamAuthenticator authenticator = new IamAuthenticator(
    apikey: "{apikey}"
    );

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");

visualRecognition.DisableSslVerification(true);

Example to disable SSL verification

var authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

while (!authenticator.CanAuthenticate())
    yield return null;

var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");

visualRecognition.DisableSslVerification = true;

Authentication

You authenticate to the API by using IBM Cloud Identity and Access Management (IAM).

You can pass either a bearer token in an authorization header or an API key. Tokens support authenticated requests without embedding service credentials in every call. API keys use basic authentication. For more information, see Authenticating to Watson services.

  • For testing and development, you can pass an API key directly.
  • For production use, unless you use the Watson SDKs, use an IAM token.

If you pass in an API key, use apikey for the username and the value of the API key as the password. For example, if the API key is f5sAznhrKQyvBFFaZbtF60m5tzLbqWhyALQawBg5TjRI in the service credentials, include the credentials in your call like this:

curl -u "apikey:f5sAznhrKQyvBFFaZbtF60m5tzLbqWhyALQawBg5TjRI"
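The -u option is ordinary HTTP Basic authentication. As an illustration, this Python sketch builds the same Authorization header that curl sends (the helper name is ours):

```python
import base64

def basic_auth_header(api_key: str) -> str:
    """Build the Authorization header that curl -u "apikey:{apikey}" sends."""
    token = base64.b64encode(f"apikey:{api_key}".encode("utf-8")).decode("ascii")
    return "Basic " + token
```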

For IBM Cloud instances, the SDK provides initialization methods for each form of authentication.

  • Use the API key to have the SDK manage the lifecycle of the access token. The SDK requests an access token, ensures that the access token is valid, and refreshes it if necessary.
  • Use the access token to manage the lifecycle yourself. You must periodically refresh the token.

For more information, see IAM authentication with the SDK.

Replace {apikey} and {url} with your service credentials.

curl -X {request_method} -u "apikey:{apikey}" "{url}/v4/{method}"

SDK managing the IAM token. Replace {apikey}, {version}, and {url}.

IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("{version}", authenticator);
visualRecognition.setServiceUrl("{url}");

SDK managing the IAM token. Replace {apikey}, {version}, and {url}.

const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
const { IamAuthenticator } = require('ibm-watson/auth');

const visualRecognition = new VisualRecognitionV4({
  version: '{version}',
  authenticator: new IamAuthenticator({
    apikey: '{apikey}',
  }),
  serviceUrl: '{url}',
});

SDK managing the IAM token. Replace {apikey}, {version}, and {url}.

from ibm_watson import VisualRecognitionV4
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV4(
    version='{version}',
    authenticator=authenticator
)

visual_recognition.set_service_url('{url}')

SDK managing the IAM token. Replace {apikey}, {version}, and {url}.

require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v4"
include IBMWatson

authenticator = Authenticators::IamAuthenticator.new(
  apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV4.new(
  version: "{version}",
  authenticator: authenticator
)
visual_recognition.service_url = "{url}"

SDK managing the IAM token. Replace {apikey}, {version}, and {url}.

import (
  "github.com/IBM/go-sdk-core/core"
  "github.com/watson-developer-cloud/go-sdk/visualrecognitionv4"
)

func main() {
  authenticator := &core.IamAuthenticator{
    ApiKey: "{apikey}",
  }

  options := &visualrecognitionv4.VisualRecognitionV4Options{
    Version: "{version}",
    Authenticator: authenticator,
  }

  visualRecognition, visualRecognitionErr := visualrecognitionv4.NewVisualRecognitionV4(options)

  if visualRecognitionErr != nil {
    panic(visualRecognitionErr)
  }

  visualRecognition.SetServiceURL("{url}")
}

SDK managing the IAM token. Replace {apikey}, {version}, and {url}.

let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "{version}", authenticator: authenticator)
visualRecognition.serviceURL = "{url}"

SDK managing the IAM token. Replace {apikey}, {version}, and {url}.

IamAuthenticator authenticator = new IamAuthenticator(
    apikey: "{apikey}"
    );

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");

SDK managing the IAM token. Replace {apikey}, {version}, and {url}.

var authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

while (!authenticator.CanAuthenticate())
    yield return null;

var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");

Access between services

Your application might use more than one Watson service. You can grant access between services and you can grant access to more than one service for your applications.

For IBM Cloud services, the method to grant access between Watson services varies depending on the type of API key. For more information, see IAM access.

  • To grant access between IBM Cloud services, create an authorization between the services. For more information, see Granting access between services.
  • To grant access to your services by applications without using user credentials, create a service ID, add an API key, and assign access policies. For more information, see Creating and working with service IDs.

When you give a user ID access to multiple services, use an endpoint URL that includes the service instance ID (for example, https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/6bbda3b3-d572-45e1-8c54-22d6ed9e52c2). You can find the instance ID in two places:

  • By clicking the service instance row in the Resource list. The instance ID is the GUID in the details pane.
  • By clicking the name of the service instance in the list and looking at the credentials URL.

    If you don't see the instance ID in the URL, the credentials predate service IDs. Add new credentials from the Service credentials page and use those credentials.

Versioning

API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. When we change the API in a backwards-incompatible way, we release a new version date.

Send the version parameter with every API request. The service uses the API version for the date you specify, or the most recent version before that date. Don't default to the current date. Instead, specify a date that matches a version that is compatible with your app, and don't change it until your app is ready for a later version.

Specify the version to use on API requests with the version parameter when you create the service instance. The service uses the API version for the date you specify, or the most recent version before that date. Don't default to the current date. Instead, specify a date that matches a version that is compatible with your app, and don't change it until your app is ready for a later version.

This documentation describes the current version of Visual Recognition, 2019-02-11. In some cases, differences in earlier versions are noted in the descriptions of parameters and response models.
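The date-based selection can be pictured with a short Python sketch. The release dates below are illustrative only, not an authoritative list; 2019-02-11 is the current version noted above:

```python
from datetime import date

# Illustrative version dates only.
RELEASED = [date(2018, 3, 19), date(2019, 2, 11)]

def effective_version(requested: date) -> date:
    """Return the most recent released version on or before the requested date."""
    candidates = [v for v in RELEASED if v <= requested]
    if not candidates:
        raise ValueError("no API version released on or before the requested date")
    return max(candidates)
```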

Error handling

Visual Recognition uses standard HTTP response codes to indicate whether a method completed successfully. A response code in the 2xx range indicates success. A code in the 4xx range indicates a failure with the request, and a code in the 5xx range usually indicates an internal system error that you cannot resolve. Response codes are listed with each method.

ErrorResponse

Name Description
code (string) An identifier of the problem. Possible values: invalid_field, invalid_header, invalid_method, missing_field, server_error.
message (string) An explanation of the problem with possible solutions.
more_info (string) A URL for more information about the solution.
target (object) Details about the property (type and name) that is the focus of the problem.
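A hedged Python sketch of reading these fields from an error body; the JSON values here are made up for illustration:

```python
import json

# Hypothetical error body shaped like the ErrorResponse model above.
body = """{
  "code": "invalid_field",
  "message": "Invalid field: collection_ids.",
  "more_info": "https://cloud.ibm.com/apidocs/visual-recognition-v4",
  "target": {"type": "field", "name": "collection_ids"}
}"""

error = json.loads(body)
code = error["code"]
detail = error.get("target", {})
```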

The Java SDK generates an exception for any unsuccessful method invocation. All methods that accept an argument can also throw an IllegalArgumentException.

Exception Description
IllegalArgumentException An invalid argument was passed to the method.

When the Java SDK receives an error response from the Visual Recognition service, it generates an exception from the com.ibm.watson.developer_cloud.service.exception package. All service exceptions contain the following fields.

Field Description
statusCode The HTTP response code that is returned.
message A message that describes the error.

When the Node SDK receives an error response from the Visual Recognition service, it creates an Error object with information that describes the error that occurred. This error object is passed as the first parameter to the callback function for the method. The contents of the error object are as shown in the following table.

Error

Field Description
code The HTTP response code that is returned.
message A message that describes the error.

The Python SDK generates an exception for any unsuccessful method invocation. When the Python SDK receives an error response from the Visual Recognition service, it generates an ApiException with the following fields.

Field Description
code The HTTP response code that is returned.
message A message that describes the error.
info A dictionary of additional information about the error.

When the Ruby SDK receives an error response from the Visual Recognition service, it generates an ApiException with the following fields.

Field Description
code The HTTP response code that is returned.
error A message that describes the error.
info A dictionary of additional information about the error.

The Go SDK generates an error for any unsuccessful service instantiation and method invocation. You can check for the error immediately. The contents of the error object are as shown in the following table.

Error

Field Description
code The HTTP response code that is returned.
message A message that describes the error.

The Swift SDK returns a WatsonError in the completionHandler for any unsuccessful method invocation. This error type is an enum that conforms to LocalizedError and contains an errorDescription property that returns an error message. Some of the WatsonError cases contain associated values that reveal more information about the error.

Field Description
errorDescription A message that describes the error.

When the .NET Standard SDK receives an error response from the Visual Recognition service, it generates a ServiceResponseException with the following fields.

Field Description
Message A message that describes the error.
CodeDescription The HTTP response code that is returned.

When the Unity SDK receives an error response from the Visual Recognition service, it generates an IBMError with the following fields.

Field Description
Url The URL that generated the error.
StatusCode The HTTP response code returned.
ErrorMessage A message that describes the error.
Response The contents of the response from the server.
ResponseHeaders A dictionary of headers returned by the request.

Example error handling

try {
  // Invoke a method
} catch (NotFoundException e) {
  // Handle Not Found (404) exception
} catch (RequestTooLargeException e) {
  // Handle Request Too Large (413) exception
} catch (ServiceResponseException e) {
  // Base class for all exceptions caused by error responses from the service
  System.out.println("Service returned status code "
    + e.getStatusCode() + ": " + e.getMessage());
}

Example error handling

visualRecognition.method(params)
  .catch(err => {
    console.log('error:', err);
  });

Example error handling

from ibm_watson import ApiException
try:
    # Invoke a method
except ApiException as ex:
    print("Method failed with status code " + str(ex.code) + ": " + ex.message)

Example error handling

require "ibm_watson"
begin
  # Invoke a method
rescue IBMWatson::ApiException => ex
  print "Method failed with status code #{ex.code}: #{ex.error}"
end

Example error handling

import "github.com/watson-developer-cloud/go-sdk/visualrecognitionv4"

// Instantiate a service
visualRecognition, visualRecognitionErr := visualrecognitionv4.NewVisualRecognitionV4(options)

// Check for errors
if visualRecognitionErr != nil {
  panic(visualRecognitionErr)
}

// Call a method
result, _, responseErr := visualRecognition.MethodName(&methodOptions)

// Check for errors
if responseErr != nil {
  panic(responseErr)
}

Example error handling

visualRecognition.method() {
  response, error in

  if let error = error {
    switch error {
    case let .http(statusCode, message, metadata):
      switch statusCode {
      case .some(404):
        // Handle Not Found (404) exception
        print("Not found")
      case .some(413):
        // Handle Request Too Large (413) exception
        print("Payload too large")
      default:
        if let statusCode = statusCode {
          print("Error - code: \(statusCode), \(message ?? "")")
        }
      }
    default:
      print(error.localizedDescription)
    }
    return
  }

  guard let result = response?.result else {
    print(error?.localizedDescription ?? "unknown error")
    return
  }

  print(result)
}

Example error handling

try
{
    // Invoke a method
}
catch(ServiceResponseException e)
{
    Console.WriteLine("Error: " + e.Message);
}
catch (Exception e)
{
    Console.WriteLine("Error: " + e.Message);
}

Example error handling

// Invoke a method
visualRecognition.MethodName(Callback, Parameters);

// Check for errors
private void Callback(DetailedResponse<ExampleResponse> response, IBMError error)
{
    if (error == null)
    {
        Log.Debug("ExampleCallback", "Response received: {0}", response.Response);
    }
    else
    {
        Log.Debug("ExampleCallback", "Error received: {0}, {1}, {2}", error.StatusCode, error.ErrorMessage, error.Response);
    }
}

Data handling

Additional headers

Some Watson services accept special parameters in headers that are passed with the request.

You can pass request header parameters in all requests or in a single request to the service.

To pass a request header, use the --header (-H) option with a curl request.

To pass header parameters with every request, use the setDefaultHeaders method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, use the addHeader method as a modifier on the request before you execute it.

To pass header parameters with every request, specify the headers parameter when you create the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, use the headers method as a modifier on the request before you execute it.

To pass header parameters with every request, use the set_default_headers method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, include headers as a dict in the request.

To pass header parameters with every request, use the add_default_headers method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, specify the headers method as a chainable method in the request.

To pass header parameters with every request, use the SetDefaultHeaders method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, specify the Headers as a map in the request.

To pass header parameters with every request, add them to the defaultHeaders property of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, pass the headers parameter to the request method.

To pass header parameters in a single request, use the WithHeader() method as a modifier on the request before you execute it. See Data collection for an example use of this method.

To pass header parameters in a single request, use the WithHeader() method as a modifier on the request before you execute it.

Example header parameter in a request

curl -X {request_method} -H "Request-Header: {header_value}" "{url}/v4/{method}"

Example header parameter in a request

ReturnType returnValue = visualRecognition.methodName(parameters)
  .addHeader("Custom-Header", "{header_value}")
  .execute();

Example header parameter in a request

const parameters = {
  {parameters}
};

visualRecognition.methodName(
  parameters,
  headers: {
    'Custom-Header': '{header_value}'
  })
   .then(result => {
    console.log(result);
  })
  .catch(err => {
    console.log('error:', err);
  });

Example header parameter in a request

response = visual_recognition.methodName(
    parameters,
    headers = {
        'Custom-Header': '{header_value}'
    })

Example header parameter in a request

response = visual_recognition.headers(
  "Custom-Header" => "{header_value}"
).methodName(parameters)

Example header parameter in a request

result, _, responseErr := visualRecognition.MethodName(
  &methodOptions{
    Headers: map[string]string{
      "Accept": "application/json",
    },
  },
)

Example header parameter in a request

let customHeader: [String: String] = ["Custom-Header": "{header_value}"]
visualRecognition.methodName(parameters, headers: customHeader) {
  response, error in
}

Example header parameter in a request

IamAuthenticator authenticator = new IamAuthenticator(
    apikey: "{apikey}"
    );

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");

visualRecognition.WithHeader("Custom-Header", "header_value");

Example header parameter in a request

var authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

while (!authenticator.CanAuthenticate())
    yield return null;

var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");

visualRecognition.WithHeader("Custom-Header", "header_value");

Response details

The Visual Recognition service might return information to the application in response headers.

To access all response headers that the service returns, include the --include (-i) option with a curl request. To see detailed response data for the request, including request headers, response headers, and extra debugging information, include the --verbose (-v) option with the request.

Example request to access response headers

curl -X {request_method} {authentication_method} --include "{url}/v4/{method}"

To access information in the response headers, use one of the request methods that returns details with the response: executeWithDetails(), enqueueWithDetails(), or rxWithDetails(). These methods return a Response<T> object, where T is the expected response model. Use the getResult() method to access the response object for the method, and use the getHeaders() method to access information in response headers.

Example request to access response headers

Response<ReturnType> response = visualRecognition.methodName(parameters)
  .executeWithDetails();
// Access response from methodName
ReturnType returnValue = response.getResult();
// Access information in response headers
Headers responseHeaders = response.getHeaders();

All response data is available in the Response<T> object that is returned by each method. To access information in the response object, use the following properties.

Property Description
result Returns the response for the service-specific method.
headers Returns the response header information.
status Returns the HTTP status code.

Example request to access response headers

visualRecognition.methodName(parameters)
  .then(response => {
    console.log(response.headers);
  })
  .catch(err => {
    console.log('error:', err);
  });

The return value from all service methods is a DetailedResponse object. To access information in the result object or response headers, use the following methods.

DetailedResponse

Method Description
get_result() Returns the response for the service-specific method.
get_headers() Returns the response header information.
get_status_code() Returns the HTTP status code.

Example request to access response headers

response = visual_recognition.methodName(parameters)
# Access response from methodName
print(json.dumps(response.get_result(), indent=2))
# Access information in response headers
print(response.get_headers())
# Access HTTP response status
print(response.get_status_code())

The return value from all service methods is a DetailedResponse object. To access information in the response object, use the following properties.

DetailedResponse

Property Description
result Returns the response for the service-specific method.
headers Returns the response header information.
status Returns the HTTP status code.

Example request to access response headers

response = visual_recognition.methodName(parameters)
# Access response from methodName
print response.result
# Access information in response headers
print response.headers
# Access HTTP response status
print response.status

The return value from all service methods is a DetailedResponse object. To access information in the response object or response headers, use the following methods.

DetailedResponse

Method Description
GetResult() Returns the response for the service-specific method.
GetHeaders() Returns the response header information.
GetStatusCode() Returns the HTTP status code.

Example request to access response headers

import (
  "github.com/IBM/go-sdk-core/core"
  "github.com/watson-developer-cloud/go-sdk/visualrecognitionv4"
)
result, response, responseErr := visualRecognition.MethodName(
  &methodOptions{})
// Access result
core.PrettyPrint(response.GetResult(), "Result ")

// Access response headers
core.PrettyPrint(response.GetHeaders(), "Headers ")

// Access status code
core.PrettyPrint(response.GetStatusCode(), "Status Code ")

All response data is available in the WatsonResponse<T> object that is returned in each method's completionHandler.

Example request to access response headers

visualRecognition.methodName(parameters) {
  response, error in

  guard let result = response?.result else {
    print(error?.localizedDescription ?? "unknown error")
    return
  }
  print(result) // The data returned by the service
  print(response?.statusCode)
  print(response?.headers)
}

The response contains fields for response headers, response JSON, and the status code.

DetailedResponse

Property Description
Result Returns the result for the service-specific method.
Response Returns the raw JSON response for the service-specific method.
Headers Returns the response header information.
StatusCode Returns the HTTP status code.

Example request to access response headers

var results = visualRecognition.MethodName(parameters);

var result = results.Result;            //  The result object
var responseHeaders = results.Headers;  //  The response headers
var responseJson = results.Response;    //  The raw response JSON
var statusCode = results.StatusCode;    //  The response status code

The response contains fields for response headers, response JSON, and the status code.

DetailedResponse

Property Description
Result Returns the result for the service-specific method.
Response Returns the raw JSON response for the service-specific method.
Headers Returns the response header information.
StatusCode Returns the HTTP status code.

Example request to access response headers

private void Example()
{
    visualRecognition.MethodName(Callback, Parameters);
}

private void Callback(DetailedResponse<ResponseType> response, IBMError error)
{
    var result = response.Result;                 //  The result object
    var responseHeaders = response.Headers;       //  The response headers
    var responseJson = response.Response;         //  The raw response JSON
    var statusCode = response.StatusCode;         //  The response status code
}

Data labels

You can remove data associated with a specific customer if you label the data with a customer ID when you send a request to the service.

  • Use the X-Watson-Metadata header to associate a customer ID with the data. By adding a customer ID to a request, you indicate that it contains data that belongs to that customer.

    Specify a random or generic string for the customer ID. Do not include personal data, such as an email address. Pass the string customer_id={id} as the argument of the header.

    Labeling data is used only by methods that accept customer data.

  • Use the Delete labeled data method to remove data that is associated with a customer ID.

Use this process of labeling and deleting data only when you want to remove the data that is associated with a single customer, not when you want to remove data for multiple customers. For more information about Visual Recognition and labeling data, see Information security.

For more information about how to pass headers, see Additional headers.
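Example of building a data label (a minimal sketch, not from the official SDK docs; the helper name and the customer ID value are illustrative assumptions)

```python
# Hypothetical helper: build the X-Watson-Metadata header that labels a
# request's data with a customer ID, as described above. Use a random or
# generic string for the ID; never personal data such as an email address.
def customer_data_label(customer_id: str) -> dict:
    """Return a header dict that labels request data with a customer ID."""
    return {"X-Watson-Metadata": f"customer_id={customer_id}"}

headers = customer_data_label("my_customer_ID")
# Pass `headers` with any request that accepts customer data, for example:
# visual_recognition.analyze(..., headers=headers)
```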

Data collection

By default, Visual Recognition service instances that are not part of Premium plans collect data about API requests and their results. This data is collected only to improve the services for future users. The collected data is not shared or made public. Data is not collected for services that are part of Premium plans.

To prevent IBM usage of your data for an API request, set the X-Watson-Learning-Opt-Out header parameter to true.

You must set the header on each request that you do not want IBM to access for general service improvements.

You can set the header by using the setDefaultHeaders method of the service object.

You can set the header by using the headers parameter when you create the service object.

You can set the header by using the set_default_headers method of the service object.

You can set the header by using the add_default_headers method of the service object.

You can set the header by using the SetDefaultHeaders method of the service object.

You can set the header by adding it to the defaultHeaders property of the service object.

You can set the header by using the WithHeader() method of the service object.

Example request

curl -u "apikey:{apikey}" -H "X-Watson-Learning-Opt-Out: true" "{url}/{method}"

Example request

Map<String, String> headers = new HashMap<String, String>();
headers.put("X-Watson-Learning-Opt-Out", "true");

visualRecognition.setDefaultHeaders(headers);

Example request

const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
const { IamAuthenticator } = require('ibm-watson/auth');

const visualRecognition = new VisualRecognitionV4({
  version: '{version}',
  authenticator: new IamAuthenticator({
    apikey: '{apikey}',
  }),
  serviceUrl: '{url}',
  headers: {
    'X-Watson-Learning-Opt-Out': 'true'
  }
});

Example request

visual_recognition.set_default_headers({'x-watson-learning-opt-out': "true"})

Example request

visual_recognition.add_default_headers(headers: {"x-watson-learning-opt-out" => "true"})

Example request

import "net/http"

headers := http.Header{}
headers.Add("x-watson-learning-opt-out", "true")
visualRecognition.SetDefaultHeaders(headers)

Example request

visualRecognition.defaultHeaders["X-Watson-Learning-Opt-Out"] = "true"

Example request

IamAuthenticator authenticator = new IamAuthenticator(
    apikey: "{apikey}"
    );

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");

visualRecognition.WithHeader("X-Watson-Learning-Opt-Out", "true");

Example request

var authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

while (!authenticator.CanAuthenticate())
    yield return null;

var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");

visualRecognition.WithHeader("X-Watson-Learning-Opt-Out", "true");

Synchronous and asynchronous requests

The Java SDK supports both synchronous (blocking) and asynchronous (non-blocking) execution of service methods. All service methods implement the ServiceCall interface.

  • To call a method synchronously, use the execute method of the ServiceCall interface. You can call the execute method directly from an instance of the service.
  • To call a method asynchronously, use the enqueue method of the ServiceCall interface to receive a callback when the response arrives. The ServiceCallback interface of the method's argument provides onResponse and onFailure methods that you override to handle the callback.

The Ruby SDK supports both synchronous (blocking) and asynchronous (non-blocking) execution of service methods. All service methods implement the Concurrent::Async module. When you use the synchronous or asynchronous methods, an IVar object is returned. You access the DetailedResponse object by calling ivar_object.value.

For more information about the IVar object, see the IVar class docs.

  • To call a method synchronously, either call the method directly or use the .await chainable method of the Concurrent::Async module.

    Calling a method directly (without .await) returns a DetailedResponse object.

  • To call a method asynchronously, use the .async chainable method of the Concurrent::Async module.

You can call the .await and .async methods directly from an instance of the service.

Example synchronous request

ReturnType returnValue = visualRecognition.method(parameters).execute();

Example asynchronous request

visualRecognition.method(parameters).enqueue(new ServiceCallback<ReturnType>() {
  @Override public void onResponse(ReturnType response) {
    . . .
  }
  @Override public void onFailure(Exception e) {
    . . .
  }
});

Example synchronous request

response = visual_recognition.method_name(parameters)

or

response = visual_recognition.await.method_name(parameters)

Example asynchronous request

response = visual_recognition.async.method_name(parameters)

Methods

Analyze images

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images.

Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

POST /v4/analyze
(visualRecognition *VisualRecognitionV4) Analyze(analyzeOptions *AnalyzeOptions) (result *AnalyzeResponse, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV4) AnalyzeWithContext(ctx context.Context, analyzeOptions *AnalyzeOptions) (result *AnalyzeResponse, response *core.DetailedResponse, err error)
ServiceCall<AnalyzeResponse> analyze(AnalyzeOptions analyzeOptions)
analyze(params)
analyze(self,
        collection_ids: List[str],
        features: List[str],
        *,
        images_file: List[BinaryIO] = None,
        image_url: List[str] = None,
        threshold: float = None,
        **kwargs
    ) -> DetailedResponse
analyze(collection_ids:, features:, images_file: nil, image_url: nil, threshold: nil)
func analyze(
    collectionIDs: [String],
    features: [String],
    imagesFile: [FileWithMetadata]? = nil,
    imageURL: [String]? = nil,
    threshold: Double? = nil,
    headers: [String: String]? = nil,
    completionHandler: @escaping (WatsonResponse<AnalyzeResponse>?, WatsonError?) -> Void)
Analyze(List<string> collectionIds, List<string> features, List<FileWithMetadata> imagesFile = null, List<string> imageUrl = null, float? threshold = null)
Analyze(Callback<AnalyzeResponse> callback, List<string> collectionIds, List<string> features, List<FileWithMetadata> imagesFile = null, List<string> imageUrl = null, float? threshold = null)

Request

Instantiate the AnalyzeOptions struct and set the fields to provide parameter values for the Analyze method.

Use the AnalyzeOptions.Builder to create an AnalyzeOptions object that contains the parameter values for the analyze method.

Query Parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

Form Parameters

  • The IDs of the collections to analyze.

  • The features to analyze.

    Allowable values: [objects]

  • An array of image files (.jpg or .png) or .zip files with images.

    • Include a maximum of 20 images in a request.
    • Limit the .zip file to 100 MB.
    • Limit each image file to 10 MB.

    You can also include an image with the image_url parameter.

  • An array of URLs of image files (.jpg or .png).

    • Include a maximum of 20 images in a request.
    • Limit each image file to 10 MB.
    • Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width.

    You can also include images with the images_file parameter.

  • The minimum score a feature must have to be returned.

    Possible values: 0.15 ≤ value ≤ 1

    Default: 0.5
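As a client-side illustration of the documented bounds, a small validation sketch (a hypothetical helper, not part of any Watson SDK) that checks a threshold value before adding it to an analyze request

```python
# Hypothetical helper: validate a threshold against the documented range
# 0.15 <= value <= 1 before sending it with an analyze request.
DEFAULT_THRESHOLD = 0.5  # service default when the parameter is omitted

def validate_threshold(value: float = DEFAULT_THRESHOLD) -> float:
    """Raise ValueError if the threshold is outside the documented range."""
    if not 0.15 <= value <= 1:
        raise ValueError("threshold must satisfy 0.15 <= value <= 1")
    return value
```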

WithContext method only

The Analyze options.

The analyze options.

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The IDs of the collections to analyze.

  • The features to analyze.

    Allowable values: [objects]

  • An array of image files (.jpg or .png) or .zip files with images.

    • Include a maximum of 20 images in a request.
    • Limit the .zip file to 100 MB.
    • Limit each image file to 10 MB.

    You can also include an image with the image_url parameter.

  • An array of URLs of image files (.jpg or .png).

    • Include a maximum of 20 images in a request.
    • Limit each image file to 10 MB.
    • Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width.

    You can also include images with the images_file parameter.

  • The minimum score a feature must have to be returned.

    Possible values: 0.15 ≤ value ≤ 1

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The IDs of the collections to analyze.

  • The features to analyze.

    Allowable values: [objects]

  • An array of image files (.jpg or .png) or .zip files with images.

    • Include a maximum of 20 images in a request.
    • Limit the .zip file to 100 MB.
    • Limit each image file to 10 MB.

    You can also include an image with the image_url parameter.

  • An array of URLs of image files (.jpg or .png).

    • Include a maximum of 20 images in a request.
    • Limit each image file to 10 MB.
    • Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width.

    You can also include images with the images_file parameter.

  • The minimum score a feature must have to be returned.

    Possible values: 0.15 ≤ value ≤ 1

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The IDs of the collections to analyze.

  • The features to analyze.

    Allowable values: [objects]

  • An array of image files (.jpg or .png) or .zip files with images.

    • Include a maximum of 20 images in a request.
    • Limit the .zip file to 100 MB.
    • Limit each image file to 10 MB.

    You can also include an image with the image_url parameter.

  • An array of URLs of image files (.jpg or .png).

    • Include a maximum of 20 images in a request.
    • Limit each image file to 10 MB.
    • Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width.

    You can also include images with the images_file parameter.

  • The minimum score a feature must have to be returned.

    Possible values: 0.15 ≤ value ≤ 1

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The IDs of the collections to analyze.

  • The features to analyze.

    Allowable values: [objects]

  • An array of image files (.jpg or .png) or .zip files with images.

    • Include a maximum of 20 images in a request.
    • Limit the .zip file to 100 MB.
    • Limit each image file to 10 MB.

    You can also include an image with the image_url parameter.

  • An array of URLs of image files (.jpg or .png).

    • Include a maximum of 20 images in a request.
    • Limit each image file to 10 MB.
    • Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width.

    You can also include images with the images_file parameter.

  • The minimum score a feature must have to be returned.

    Possible values: 0.15 ≤ value ≤ 1

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The IDs of the collections to analyze.

  • The features to analyze.

    Allowable values: [objects]

  • An array of image files (.jpg or .png) or .zip files with images.

    • Include a maximum of 20 images in a request.
    • Limit the .zip file to 100 MB.
    • Limit each image file to 10 MB.

    You can also include an image with the image_url parameter.

  • An array of URLs of image files (.jpg or .png).

    • Include a maximum of 20 images in a request.
    • Limit each image file to 10 MB.
    • Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width.

    You can also include images with the images_file parameter.

  • The minimum score a feature must have to be returned.

    Possible values: 0.15 ≤ value ≤ 1

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The IDs of the collections to analyze.

  • The features to analyze.

    Allowable values: [objects]

  • An array of image files (.jpg or .png) or .zip files with images.

    • Include a maximum of 20 images in a request.
    • Limit the .zip file to 100 MB.
    • Limit each image file to 10 MB.

    You can also include an image with the image_url parameter.

  • An array of URLs of image files (.jpg or .png).

    • Include a maximum of 20 images in a request.
    • Limit each image file to 10 MB.
    • Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width.

    You can also include images with the images_file parameter.

  • The minimum score a feature must have to be returned.

    Possible values: 0.15 ≤ value ≤ 1

  • curl -X POST -u "apikey:{apikey}" -F "features=objects" -F "collection_ids=5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7" -F "images_file=@honda.jpg" -F "images_file=@dice.png" "{url}/v4/analyze?version=2019-02-11"
  • IamAuthenticator authenticator = new IamAuthenticator(
        apikey: "{apikey}"
        );
    
    VisualRecognitionService visualRecognition = new VisualRecognitionService("2019-02-11", authenticator);
    visualRecognition.SetServiceUrl("{url}");
    
    DetailedResponse<AnalyzeResponse> result = null;
    List<FileWithMetadata> imagesFile = new List<FileWithMetadata>();
    using (FileStream hondaFilestream = File.OpenRead("./honda.jpg"), diceFilestream = File.OpenRead("./dice.png"))
    {
        using (MemoryStream carMemoryStream = new MemoryStream(), diceMemoryStream = new MemoryStream())
        {
            hondaFilestream.CopyTo(carMemoryStream);
            diceFilestream.CopyTo(diceMemoryStream);
            FileWithMetadata hondaFile = new FileWithMetadata()
            {
                Data = carMemoryStream,
                ContentType = "image/jpeg",
                Filename = "honda.jpg"
            };
            FileWithMetadata diceFile = new FileWithMetadata()
            {
                Data = diceMemoryStream,
                ContentType = "image/png",
                Filename = "dice.png"
            };
            imagesFile.Add(hondaFile);
            imagesFile.Add(diceFile);
    
            result = visualRecognition.Analyze(
                collectionIds: new List<string>(){"5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"},
                features: new List<string>() {"objects"},
                imagesFile: imagesFile);
    
            Console.WriteLine(result.Response);
        }
    }
  • package main
    
    import (
      "encoding/json"
      "fmt"
      "github.com/IBM/go-sdk-core/core"
      "github.com/watson-developer-cloud/go-sdk/v2/visualrecognitionv4"
      "os"
    )
    
    func main() {
      authenticator := &core.IamAuthenticator{
        ApiKey: "{apikey}",
      }
    
      options := &visualrecognitionv4.VisualRecognitionV4Options{
        Version: "2019-02-11",
        Authenticator: authenticator,
      }
    
      visualRecognition, visualRecognitionErr := visualrecognitionv4.NewVisualRecognitionV4(options)
    
      if visualRecognitionErr != nil {
        panic(visualRecognitionErr)
      }
    
      visualRecognition.SetServiceURL("{url}")
    
      hondaFile, hondaFileErr := os.Open("./honda.jpg")
      if hondaFileErr != nil {
        panic(hondaFileErr)
      }
      defer hondaFile.Close()
    
      diceFile, diceFileErr := os.Open("./dice.png")
      if diceFileErr != nil {
        panic(diceFileErr)
      }
      defer diceFile.Close()
    
      result, _, responseErr := visualRecognition.Analyze(
        &visualrecognitionv4.AnalyzeOptions{
          CollectionIds: []string{"5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"},
          Features: []string{visualrecognitionv4.AnalyzeOptions_Features_Objects},
          ImagesFile: []visualrecognitionv4.FileWithMetadata{
            visualrecognitionv4.FileWithMetadata{
              Data:     hondaFile,
              ContentType:    core.StringPtr("image/jpeg"),
              Filename: core.StringPtr("honda.jpg"),
          },
            visualrecognitionv4.FileWithMetadata{
              Data:     diceFile,
              ContentType:    core.StringPtr("image/png"),
              Filename: core.StringPtr("dice.png"),
          },
        },
      },
      )
      if responseErr != nil {
        panic(responseErr)
      }
      b, _ := json.MarshalIndent(result, "", "  ")
      fmt.Println(string(b))
    }
  • IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
    VisualRecognition visualRecognition = new VisualRecognition("2019-02-11", authenticator);
    visualRecognition.setServiceUrl("{url}");
    
    FileWithMetadata carImage = new FileWithMetadata.Builder()
      .data(new File("./honda.jpg"))
      .contentType("image/jpeg")
      .build();
    FileWithMetadata diceImage = new FileWithMetadata.Builder()
      .data(new File("./dice.png"))
      .contentType("image/png")
      .build();
    List<FileWithMetadata> filesToAnalyze = Arrays.asList(carImage, diceImage);
    
    AnalyzeOptions options = new AnalyzeOptions.Builder()
      .imagesFile(filesToAnalyze)
      .addCollectionIds("5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7")
      .addFeatures("objects")
      .build();
    
    AnalyzeResponse response = visualRecognition.analyze(options).execute().getResult();
    System.out.println(response);
  • const fs = require('fs');
    const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
    const { IamAuthenticator } = require('ibm-watson/auth');
    
    const visualRecognition = new VisualRecognitionV4({
      version: '2019-02-11',
      authenticator: new IamAuthenticator({
        apikey: '{apikey}'
      }),
      serviceUrl: '{url}',
    });
    
    const params = {
      imagesFile: [
        {
          data: fs.createReadStream('./honda.jpg'),
          contentType: 'image/jpeg',
        },
        {
          data: fs.createReadStream('./dice.png'),
          contentType: 'image/png',
        }
      ],
      collectionIds: ['5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7'],
      features: ['objects'],
    };
    
    visualRecognition.analyze(params)
      .then(response => {
        console.log(JSON.stringify(response.result, null, 2));
      })
      .catch(err => {
        console.log('error: ', err);
      });
  • import json
    from ibm_watson import VisualRecognitionV4
    from ibm_watson.visual_recognition_v4 import AnalyzeEnums, FileWithMetadata
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    
    authenticator = IAMAuthenticator('{apikey}')
    visual_recognition = VisualRecognitionV4(
        version='2019-02-11',
        authenticator=authenticator
    )
    
    visual_recognition.set_service_url('{url}')
    
    with open('honda.jpg', 'rb') as honda_file, open('dice.png', 'rb') as dice_file:
        result = visual_recognition.analyze(
            collection_ids=["5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"],
            features=[AnalyzeEnums.Features.OBJECTS.value],
            images_file=[
                FileWithMetadata(honda_file),
                FileWithMetadata(dice_file)
            ]).get_result()
        print(json.dumps(result, indent=2))
  • require "json"
    require "ibm_watson/authenticators"
    require "ibm_watson/visual_recognition_v4"
    include IBMWatson
    
    authenticator = Authenticators::IamAuthenticator.new(
      apikey: "{apikey}"
    )
    visual_recognition = VisualRecognitionV4.new(
      version: "2019-02-11",
      authenticator: authenticator
    )
    visual_recognition.service_url = "{url}"
    
    honda_file = File.open(Dir.getwd + "/honda.jpg")
    dice_file = File.open(Dir.getwd + "/dice.png")
    result = visual_recognition.analyze(
      images_file: [
        {
          "data": honda_file,
          "filename": "honda.jpg",
          "content_type": "image/jpeg"
        },
        {
          "data": dice_file,
          "filename": "dice.png",
          "content_type": "image/png"
        }
      ],
      collection_ids: ["5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"],
      features: ["objects"]
    ).result
    
    puts JSON.pretty_generate(result)
  • let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
    
    let visualRecognition = VisualRecognition(version: "2019-02-11", authenticator: authenticator)
    visualRecognition.serviceURL = "{url}"
    
    let hondaFileURL = Bundle.main.url(forResource: "honda", withExtension: "jpg")
    let hondaImageData = try? Data(contentsOf: hondaFileURL!)
    let hondaImageFile = FileWithMetadata(data: hondaImageData!, filename: "honda.jpg", contentType: "image/jpeg")

    let diceFileURL = Bundle.main.url(forResource: "dice", withExtension: "png")
    let diceImageData = try? Data(contentsOf: diceFileURL!)
    let diceImageFile = FileWithMetadata(data: diceImageData!, filename: "dice.png", contentType: "image/png")

    visualRecognition.analyze(
      collectionIDs: ["5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"],
      features: ["objects"],
      imagesFile: [hondaImageFile, diceImageFile]
    ) {
      response, error in
      
      guard let result = response?.result else {
        print(error?.localizedDescription ?? "unknown error")
        return
      }
      
      print(result)
    }
  • var authenticator = new IamAuthenticator(
        apikey: "{apikey}"
    );
    
    while (!authenticator.CanAuthenticate())
        yield return null;
    
    var visualRecognition = new VisualRecognitionService("2019-02-11", authenticator);
    visualRecognition.SetServiceUrl("{url}");
    
    AnalyzeResponse analyzeResponse = null;
    List<FileWithMetadata> imagesFile = new List<FileWithMetadata>();
    using (FileStream hondaFileStream = File.OpenRead("./honda.jpg"), diceFileStream = File.OpenRead("./dice.png"))
    {
        using (MemoryStream carMemoryStream = new MemoryStream(), diceMemoryStream = new MemoryStream())
        {
            hondaFileStream.CopyTo(carMemoryStream);
            diceFileStream.CopyTo(diceMemoryStream);
            FileWithMetadata hondaFile = new FileWithMetadata()
            {
                Data = carMemoryStream,
                ContentType = "image/jpeg",
                Filename = "honda.jpg"
            };
            imagesFile.Add(hondaFile);
    
            FileWithMetadata diceFile = new FileWithMetadata()
            {
                Data = diceMemoryStream,
                ContentType = "image/png",
                Filename = "dice.png"
            };
            imagesFile.Add(diceFile);
    
            visualRecognition.Analyze(
                callback: (DetailedResponse<AnalyzeResponse> response, IBMError error) =>
                {
                    Log.Debug("VisualRecognitionServiceV4", "Analyze result: {0}", response.Response);
                    analyzeResponse = response.Result;
                },
                collectionIds: new List<string>(){"5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"},
                features: new List<string>() { "objects" },
                imagesFile: imagesFile
            );
    
            while (analyzeResponse == null)
                yield return null;
        }
    }

Response

Results for all images.

Status Code

  • 200 - Analyze image results

  • 400 - Invalid request from input, such as a bad parameter.

  • 413 - Request too large.

Example responses
  • {
      "images": [
        {
          "source": {
            "type": "file",
            "filename": "honda.jpg"
          },
          "dimensions": {
            "height": 450,
            "width": 800
          },
          "objects": {
            "collections": [
              {
                "collection_id": "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7",
                "objects": [
                  {
                    "object": "automobile",
                    "location": {
                      "left": 33,
                      "top": 8,
                      "width": 760,
                      "height": 419
                    },
                    "score": 0.826962
                  }
                ]
              }
            ]
          }
        },
        {
          "source": {
            "type": "file",
            "filename": "dice.png"
          },
          "dimensions": {
            "height": 233,
            "width": 697
          },
          "objects": {
            "collections": [
              {
                "collection_id": "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7",
                "name": "my-collection",
                "objects": [
                  {
                    "object": "auto_dice",
                    "location": {
                      "left": 114,
                      "top": 22,
                      "width": 323,
                      "height": 105
                    },
                    "score": 0.715854
                  }
                ]
              }
            ]
          }
        }
      ]
    }
  • {
      "errors": [
        {
          "code": "missing_field",
          "message": "The request path is not valid. Make sure that the endpoint is correct.",
          "more_info": "https://cloud.ibm.com/apidocs/visual-recognition-v4",
          "target": {
            "type": "field",
            "name": "URL path"
          }
        }
      ],
      "trace": "4e1b7b85-4dba-4219-b46b-6cdd2e2c06fd"
    }
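The objects in an analyze response are nested per image, per collection. As an illustration (the helper is hypothetical, not part of any SDK), a response like the 200 example above can be flattened into simple tuples:

```python
import json

def detections(response):
    """Flatten an analyze response into
    (filename, object, score, location) tuples."""
    rows = []
    for image in response["images"]:
        filename = image["source"]["filename"]
        for collection in image["objects"]["collections"]:
            for obj in collection["objects"]:
                rows.append((filename, obj["object"],
                             obj["score"], obj["location"]))
    return rows

# A trimmed copy of the example 200 response.
response = json.loads("""
{"images": [{
  "source": {"type": "file", "filename": "honda.jpg"},
  "objects": {"collections": [{
    "collection_id": "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7",
    "objects": [{"object": "automobile",
                 "location": {"left": 33, "top": 8, "width": 760, "height": 419},
                 "score": 0.826962}]}]}}]}
""")

for filename, name, score, loc in detections(response):
    print(f"{filename}: {name} ({score:.2f}) at ({loc['left']},{loc['top']})")
    # honda.jpg: automobile (0.83) at (33,8)
```

The same traversal works unchanged when several images or collections are returned.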

Create a collection

Create a collection that can be used to store images.

To create a collection without specifying a name and description, include an empty JSON object in the request body.

Encode the name and description in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
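To illustrate the empty-body case and the UTF-8 handling, here is a small sketch (the helper name is hypothetical, not part of any SDK) of how the request body serializes:

```python
import json

def collection_body(name=None, description=None):
    """Build the JSON body for POST /v4/collections.
    With no arguments this yields b'{}' (an empty JSON object),
    which creates a collection without a name or description."""
    body = {}
    if name is not None:
        body["name"] = name
    if description is not None:
        body["description"] = description
    # ensure_ascii=False keeps non-ASCII characters, then encode as UTF-8
    return json.dumps(body, ensure_ascii=False).encode("utf-8")

print(collection_body())                         # b'{}'
print(collection_body(name="ma-collection-été"))
```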

POST /v4/collections
(visualRecognition *VisualRecognitionV4) CreateCollection(createCollectionOptions *CreateCollectionOptions) (result *Collection, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV4) CreateCollectionWithContext(ctx context.Context, createCollectionOptions *CreateCollectionOptions) (result *Collection, response *core.DetailedResponse, err error)
ServiceCall<Collection> createCollection(CreateCollectionOptions createCollectionOptions)
createCollection(params)
create_collection(self,
        *,
        name: str = None,
        description: str = None,
        training_status: 'TrainingStatus' = None,
        **kwargs
    ) -> DetailedResponse
create_collection(name: nil, description: nil, training_status: nil)
func createCollection(
    name: String? = nil,
    description: String? = nil,
    trainingStatus: TrainingStatus? = nil,
    headers: [String: String]? = nil,
    completionHandler: @escaping (WatsonResponse<Collection>?, WatsonError?) -> Void)
CreateCollection(string name = null, string description = null, TrainingStatus trainingStatus = null)
CreateCollection(Callback<Collection> callback, string name = null, string description = null, TrainingStatus trainingStatus = null)

Request

Instantiate the CreateCollectionOptions struct and set the fields to provide parameter values for the CreateCollection method.

Use the CreateCollectionOptions.Builder to create a CreateCollectionOptions object that contains the parameter values for the createCollection method.

Query Parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

The new collection.

WithContext method only

The CreateCollection options.

The createCollection options.

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The name of the collection. The name can contain alphanumeric, underscore, hyphen, and dot characters. It cannot begin with the reserved prefix sys-.

    Possible values: length ≤ 64, Value must match regular expression /^(?!sys-)[\pL\pN_\-.]*$/

  • The description of the collection.

  • Training status information for the collection.
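The name constraints can also be checked client-side before the request. A minimal sketch (the function is hypothetical; it narrows the Unicode classes \pL and \pN to ASCII letters and digits, since Python's built-in re module does not support them):

```python
import re

# ASCII approximation of /^(?!sys-)[\pL\pN_\-.]*$/
NAME_RE = re.compile(r"^(?!sys-)[A-Za-z0-9_\-.]*$")

def valid_collection_name(name):
    """Check the documented constraints: length <= 64, allowed
    characters only, and no reserved 'sys-' prefix."""
    return len(name) <= 64 and NAME_RE.fullmatch(name) is not None

print(valid_collection_name("my-collection"))  # True
print(valid_collection_name("sys-reserved"))   # False (reserved prefix)
print(valid_collection_name("bad name"))       # False (space not allowed)
```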

  • curl -X POST -u "apikey:{apikey}" -H "Content-Type: application/json" -d "{\"name\":\"my-collection\", \"description\":\"A description of my collection\"}" "{url}/v4/collections?version=2019-02-11"
  • IamAuthenticator authenticator = new IamAuthenticator(
        apikey: "{apikey}"
        );
    
    VisualRecognitionService visualRecognition = new VisualRecognitionService("2019-02-11", authenticator);
    visualRecognition.SetServiceUrl("{url}");
    
    var result = visualRecognition.CreateCollection(
        name: Utility.ConvertToUtf8("my-collection"),
        description: Utility.ConvertToUtf8("A description of my collection")
        );
    
    Console.WriteLine(result.Response);
  • package main
    
    import (
      "encoding/json"
      "fmt"
      "github.com/IBM/go-sdk-core/core"
      "github.com/watson-developer-cloud/go-sdk/v2/visualrecognitionv4"
    )
    
    func main() {
      authenticator := &core.IamAuthenticator{
        ApiKey: "{apikey}",
      }
    
      options := &visualrecognitionv4.VisualRecognitionV4Options{
        Version: "2019-02-11",
        Authenticator: authenticator,
      }
    
      visualRecognition, visualRecognitionErr := visualrecognitionv4.NewVisualRecognitionV4(options)
    
      if visualRecognitionErr != nil {
        panic(visualRecognitionErr)
      }
    
      visualRecognition.SetServiceURL("{url}")
    
      result,_, responseErr := visualRecognition.CreateCollection(
        &visualrecognitionv4.CreateCollectionOptions{
          Name:        core.StringPtr("my-collection"),
          Description: core.StringPtr("A description of my collection"),
       },
      )
      if responseErr != nil {
        panic(responseErr)
      }
      b, _ := json.MarshalIndent(result, "", "  ")
      fmt.Println(string(b))
    }
  • IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
    VisualRecognition visualRecognition = new VisualRecognition("2019-02-11", authenticator);
    visualRecognition.setServiceUrl("{url}");
    
    String collectionName = "my-collection";
    String collectionDescription = "A description of my collection";
    
    CreateCollectionOptions options = new CreateCollectionOptions.Builder()
      .name(collectionName)
      .description(collectionDescription)
      .build();
    
    Collection response = visualRecognition.createCollection(options).execute().getResult();
    System.out.println(response);
  • const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
    const { IamAuthenticator } = require('ibm-watson/auth');
    
    const visualRecognition = new VisualRecognitionV4({
      version: '2019-02-11',
      authenticator: new IamAuthenticator({
        apikey: '{apikey}'
      }),
      serviceUrl: '{url}',
    });
    
    const params = {
      name: 'my-collection',
      description: 'A description of my collection',
    };
    
    visualRecognition.createCollection(params)
      .then(response => {
        console.log(JSON.stringify(response.result, null, 2));
      })
      .catch(err => {
        console.log('error: ', err);
      });
    
  • import json
    from ibm_watson import VisualRecognitionV4
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    
    authenticator = IAMAuthenticator('{apikey}')
    visual_recognition = VisualRecognitionV4(
        version='2019-02-11',
        authenticator=authenticator
    )
    
    visual_recognition.set_service_url('{url}')
    
    result = visual_recognition.create_collection(
        name='my-collection',
        description='A description of my collection').get_result()
    print(json.dumps(result, indent=2))
  • require "json"
    require "ibm_watson/authenticators"
    require "ibm_watson/visual_recognition_v4"
    include IBMWatson
    
    authenticator = Authenticators::IamAuthenticator.new(
      apikey: "{apikey}"
    )
    visual_recognition = VisualRecognitionV4.new(
      version: "2019-02-11",
      authenticator: authenticator
    )
    visual_recognition.service_url = "{url}"
    
    result = visual_recognition.create_collection(
      name: "my-collection",
      description: "A description of my collection"
    ).result
    puts JSON.pretty_generate(result)
  • let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
    
    let visualRecognition = VisualRecognition(version: "2019-02-11", authenticator: authenticator)
    visualRecognition.serviceURL = "{url}"
    
    visualRecognition.createCollection(
      name: "my-collection",
      description: "A description of my collection"
    ) {
      response, error in
      
      guard let result = response?.result else {
        print(error?.localizedDescription ?? "unknown error")
        return
      }
      
      print(result)
    }
  • var authenticator = new IamAuthenticator(
        apikey: "{apikey}"
    );
    
    while (!authenticator.CanAuthenticate())
        yield return null;
    
    var visualRecognition = new VisualRecognitionService("2019-02-11", authenticator);
    visualRecognition.SetServiceUrl("{url}");
    
    Collection collection = null;
    visualRecognition.CreateCollection(
        callback: (DetailedResponse<Collection> response, IBMError error) =>
        {
            Log.Debug("VisualRecognitionServiceV4", "CreateCollection result: {0}", response.Response);
            collection = response.Result;
        },
        name: "my-collection",
        description: "A description of my collection"
    );
    
    while (collection == null)
        yield return null;

Response

Details about a collection.

Status Code

  • 200 - Collection details

  • 400 - Invalid request from input, such as a bad parameter.

Example responses
  • {
      "collection_id": "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7",
      "name": "my-collection",
      "description": "A description of my collection",
      "created": "2019-01-14T16:22:27.77628825Z",
      "updated": "2019-01-14T16:22:27.77628825Z",
      "image_count": 0,
      "training_status": {
        "objects": {
          "ready": false,
          "in_progress": false,
          "data_changed": false,
          "latest_failed": false,
          "rscnn_ready": false,
          "description": ""
        }
      }
    }
  • {
      "errors": [
        {
          "code": "invalid_field",
          "message": "The request body must be valid JSON.",
          "more_info": "https://cloud.ibm.com/apidocs/visual-recognition-v4#create-a-collection",
          "target": {
            "type": "field",
            "name": "collection_info"
          }
        }
      ],
      "trace": "4e1b7b85-4dba-4219-b46b-6cdd2e2c06fd"
    }

List collections

Retrieves a list of collections for the service instance.

GET /v4/collections
(visualRecognition *VisualRecognitionV4) ListCollections(listCollectionsOptions *ListCollectionsOptions) (result *CollectionsList, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV4) ListCollectionsWithContext(ctx context.Context, listCollectionsOptions *ListCollectionsOptions) (result *CollectionsList, response *core.DetailedResponse, err error)
ServiceCall<CollectionsList> listCollections(ListCollectionsOptions listCollectionsOptions)
listCollections(params)
list_collections(self,
        **kwargs
    ) -> DetailedResponse
list_collections
func listCollections(
    headers: [String: String]? = nil,
    completionHandler: @escaping (WatsonResponse<CollectionsList>?, WatsonError?) -> Void)
ListCollections()
ListCollections(Callback<CollectionsList> callback)

Request

Instantiate the ListCollectionsOptions struct and set the fields to provide parameter values for the ListCollections method.

Use the ListCollectionsOptions.Builder to create a ListCollectionsOptions object that contains the parameter values for the listCollections method.

Query Parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

WithContext method only

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • curl -u "apikey:{apikey}" "{url}/v4/collections?version=2019-02-11"
  • IamAuthenticator authenticator = new IamAuthenticator(
        apikey: "{apikey}"
        );
    
    VisualRecognitionService visualRecognition = new VisualRecognitionService("2019-02-11", authenticator);
    visualRecognition.SetServiceUrl("{url}");
    
    var result = visualRecognition.ListCollections();
    
    Console.WriteLine(result.Response);
  • package main
    
    import (
      "encoding/json"
      "fmt"
      "github.com/IBM/go-sdk-core/core"
      "github.com/watson-developer-cloud/go-sdk/v2/visualrecognitionv4"
    )
    
    func main() {
      authenticator := &core.IamAuthenticator{
        ApiKey: "{apikey}",
      }
    
      options := &visualrecognitionv4.VisualRecognitionV4Options{
        Version: "2019-02-11",
        Authenticator: authenticator,
      }
    
      visualRecognition, visualRecognitionErr := visualrecognitionv4.NewVisualRecognitionV4(options)
    
      if visualRecognitionErr != nil {
        panic(visualRecognitionErr)
      }
    
      visualRecognition.SetServiceURL("{url}")
    
      result,_, responseErr := visualRecognition.ListCollections(
        &visualrecognitionv4.ListCollectionsOptions{},
      )
      if responseErr != nil {
        panic(responseErr)
      }
      b, _ := json.MarshalIndent(result, "", "  ")
      fmt.Println(string(b))
    }
  • IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
    VisualRecognition visualRecognition = new VisualRecognition("2019-02-11", authenticator);
    visualRecognition.setServiceUrl("{url}");
    
    CollectionsList response = visualRecognition.listCollections().execute().getResult();
    System.out.println(response);
  • const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
    const { IamAuthenticator } = require('ibm-watson/auth');
    
    const visualRecognition = new VisualRecognitionV4({
      version: '2019-02-11',
      authenticator: new IamAuthenticator({
        apikey: '{apikey}'
      }),
      serviceUrl: '{url}',
    });
    
    visualRecognition.listCollections()
      .then(response => {
        console.log(JSON.stringify(response.result, null, 2));
      })
      .catch(err => {
        console.log('error: ', err);
      });
    
  • import json
    from ibm_watson import VisualRecognitionV4
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    
    authenticator = IAMAuthenticator('{apikey}')
    visual_recognition = VisualRecognitionV4(
        version='2019-02-11',
        authenticator=authenticator
    )
    
    visual_recognition.set_service_url('{url}')
    
    result = visual_recognition.list_collections().get_result()
    print(json.dumps(result, indent=2))
  • require "json"
    require "ibm_watson/authenticators"
    require "ibm_watson/visual_recognition_v4"
    include IBMWatson
    
    authenticator = Authenticators::IamAuthenticator.new(
      apikey: "{apikey}"
    )
    visual_recognition = VisualRecognitionV4.new(
      version: "2019-02-11",
      authenticator: authenticator
    )
    visual_recognition.service_url = "{url}"
    
    result = visual_recognition.list_collections.result
    puts JSON.pretty_generate(result)
  • let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
    
    let visualRecognition = VisualRecognition(version: "2019-02-11", authenticator: authenticator)
    visualRecognition.serviceURL = "{url}"
    
    visualRecognition.listCollections() { response, error in
      
      guard let result = response?.result else {
        print(error?.localizedDescription ?? "unknown error")
        return
      }
      
      print(result)
    }
  • var authenticator = new IamAuthenticator(
        apikey: "{apikey}"
    );
    
    while (!authenticator.CanAuthenticate())
        yield return null;
    
    var visualRecognition = new VisualRecognitionService("2019-02-11", authenticator);
    visualRecognition.SetServiceUrl("{url}");
    
    CollectionsList collectionsList = null;
    visualRecognition.ListCollections(
        callback: (DetailedResponse<CollectionsList> response, IBMError error) =>
        {
            Log.Debug("VisualRecognitionServiceV4", "List Collections result: {0}", response.Response);
            collectionsList = response.Result;
        }
    );
    
    while (collectionsList == null)
        yield return null;

Response

A container for the list of collections.

Status Code

  • 200 - Collection details

Example responses
  • {
      "collections": [
        {
          "collection_id": "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7",
          "name": "my-collection",
          "description": "A description of my collection",
          "created": "2019-01-14T16:22:27.776288Z",
          "updated": "2019-01-14T16:22:27.776288Z",
          "image_count": 0,
          "training_status": {
            "objects": {
              "ready": true,
              "in_progress": false,
              "data_changed": false,
              "latest_failed": false,
              "rscnn_ready": true,
              "description": ""
            }
          }
        },
        {
          "collection_id": "2dbfe553-7b7d-43f8-95b0-041e46a780a1",
          "name": "my-second-collection",
          "description": "Another description to help identify this collection",
          "created": "2019-01-14T17:02:00.335456Z",
          "updated": "2019-01-14T17:02:00.335456Z",
          "image_count": 0,
          "training_status": {
            "objects": {
              "ready": false,
              "in_progress": false,
              "data_changed": false,
              "latest_failed": false,
              "rscnn_ready": false,
              "description": ""
            }
          }
        }
      ]
    }

Get collection details

Get details of one collection.

GET /v4/collections/{collection_id}
(visualRecognition *VisualRecognitionV4) GetCollection(getCollectionOptions *GetCollectionOptions) (result *Collection, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV4) GetCollectionWithContext(ctx context.Context, getCollectionOptions *GetCollectionOptions) (result *Collection, response *core.DetailedResponse, err error)
ServiceCall<Collection> getCollection(GetCollectionOptions getCollectionOptions)
getCollection(params)
get_collection(self,
        collection_id: str,
        **kwargs
    ) -> DetailedResponse
get_collection(collection_id:)
func getCollection(
    collectionID: String,
    headers: [String: String]? = nil,
    completionHandler: @escaping (WatsonResponse<Collection>?, WatsonError?) -> Void)
GetCollection(string collectionId)
GetCollection(Callback<Collection> callback, string collectionId)

Request

Instantiate the GetCollectionOptions struct and set the fields to provide parameter values for the GetCollection method.

Use the GetCollectionOptions.Builder to create a GetCollectionOptions object that contains the parameter values for the getCollection method.

Path Parameters

  • The identifier of the collection.

Query Parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

WithContext method only

The GetCollection options.

The getCollection options.

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The identifier of the collection.

  • curl -u "apikey:{apikey}" "{url}/v4/collections/5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7?version=2019-02-11"
  • IamAuthenticator authenticator = new IamAuthenticator(
        apikey: "{apikey}"
        );
    
    VisualRecognitionService visualRecognition = new VisualRecognitionService("2019-02-11", authenticator);
    visualRecognition.SetServiceUrl("{url}");
    
    var result = visualRecognition.GetCollection(
        collectionId: "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"
        );
    
    Console.WriteLine(result.Response);
  • package main
    
    import (
      "encoding/json"
      "fmt"
      "github.com/IBM/go-sdk-core/core"
      "github.com/watson-developer-cloud/go-sdk/v2/visualrecognitionv4"
    )
    
    func main() {
      authenticator := &core.IamAuthenticator{
        ApiKey: "{apikey}",
      }
    
      options := &visualrecognitionv4.VisualRecognitionV4Options{
        Version: "2019-02-11",
        Authenticator: authenticator,
      }
    
      visualRecognition, visualRecognitionErr := visualrecognitionv4.NewVisualRecognitionV4(options)
    
      if visualRecognitionErr != nil {
        panic(visualRecognitionErr)
      }
    
      visualRecognition.SetServiceURL("{url}")
    
  result, _, responseErr := visualRecognition.GetCollection(
        &visualrecognitionv4.GetCollectionOptions{
           CollectionID: core.StringPtr("5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"),
        },
      )
      if responseErr != nil {
        panic(responseErr)
      }
      b, _ := json.MarshalIndent(result, "", "  ")
      fmt.Println(string(b))
    }
  • IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
    VisualRecognition visualRecognition = new VisualRecognition("2019-02-11", authenticator);
    visualRecognition.setServiceUrl("{url}");
    
    GetCollectionOptions options = new GetCollectionOptions.Builder()
      .collectionId("5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7")
      .build();
    
    Collection response = visualRecognition.getCollection(options).execute().getResult();
    System.out.println(response);
  • const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
    const { IamAuthenticator } = require('ibm-watson/auth');
    
    const visualRecognition = new VisualRecognitionV4({
      version: '2019-02-11',
      authenticator: new IamAuthenticator({
        apikey: '{apikey}'
      }),
      serviceUrl: '{url}',
    });
    
    const params = {
      collectionId: '5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7',
    };
    
    visualRecognition.getCollection(params)
      .then(response => {
        console.log(JSON.stringify(response.result, null, 2));
      })
      .catch(err => {
        console.log('error: ', err);
      });
    
  • import json
    from ibm_watson import VisualRecognitionV4
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    
    authenticator = IAMAuthenticator('{apikey}')
    visual_recognition = VisualRecognitionV4(
        version='2019-02-11',
        authenticator=authenticator
    )
    
    visual_recognition.set_service_url('{url}')
    
    result = visual_recognition.get_collection(
        collection_id='5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7').get_result()
    print(json.dumps(result, indent=2))
  • require "json"
    require "ibm_watson/authenticators"
    require "ibm_watson/visual_recognition_v4"
    include IBMWatson
    
    authenticator = Authenticators::IamAuthenticator.new(
      apikey: "{apikey}"
    )
    visual_recognition = VisualRecognitionV4.new(
      version: "2019-02-11",
      authenticator: authenticator
    )
    visual_recognition.service_url = "{url}"
    
    result = visual_recognition.get_collection(
      collection_id: "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"
    ).result
    
    puts JSON.pretty_generate(result)
  • let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
    
    let visualRecognition = VisualRecognition(version: "2019-02-11", authenticator: authenticator)
    visualRecognition.serviceURL = "{url}"
    
    visualRecognition.getCollection(collectionID: "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7") { response, error in
      
      guard let result = response?.result else {
        print(error?.localizedDescription ?? "unknown error")
        return
      }
      
      print(result)
    }
  • var authenticator = new IamAuthenticator(
        apikey: "{apikey}"
    );
    
    while (!authenticator.CanAuthenticate())
        yield return null;
    
    var visualRecognition = new VisualRecognitionService("2019-02-11", authenticator);
    visualRecognition.SetServiceUrl("{url}");
    
    Collection collection = null;
    visualRecognition.GetCollection(
        callback: (DetailedResponse<Collection> response, IBMError error) =>
        {
            Log.Debug("VisualRecognitionServiceV4", "GetCollection result: {0}", response.Response);
            collection = response.Result;
        },
        collectionId: "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7"
    );
    
    while (collection == null)
        yield return null;

Response

Details about a collection.

Status Code

  • Collection details

  • The specified resource was not found.

Example responses
  • {
      "collection_id": "5826c5ec-6f86-44b1-ab2b-cca6c75f2fc7",
      "name": "my-collection",
      "description": "A description of my collection",
      "created": "2019-01-14T16:22:27.77628825Z",
      "updated": "2019-01-14T16:22:27.77628825Z",
      "image_count": 0,
      "training_status": {
        "objects": {
          "ready": true,
          "in_progress": false,
          "data_changed": false,
          "latest_failed": false,
          "rscnn_ready": true,
          "description": ""
        }
      }
    }

Update a collection

Update the name or description of a collection.

Encode the name and description in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.


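The UTF-8 requirement above can be checked directly before sending a request. A minimal sketch in plain Python (no SDK assumed); the collection name used here is hypothetical:

```python
# Non-ASCII names and descriptions must reach the service as UTF-8 bytes.
# "café-collection" is a hypothetical name containing a non-ASCII character.
name = "café-collection"
payload = name.encode("utf-8")          # the bytes sent on the wire
assert payload.decode("utf-8") == name  # round-trips cleanly
print(payload)  # b'caf\xc3\xa9-collection'
```

Most HTTP clients and the Watson SDKs encode string parameters as UTF-8 automatically; this check matters mainly when you build the request body by hand.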
POST /v4/collections/{collection_id}
(visualRecognition *VisualRecognitionV4) UpdateCollection(updateCollectionOptions *UpdateCollectionOptions) (result *Collection, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV4) UpdateCollectionWithContext(ctx context.Context, updateCollectionOptions *UpdateCollectionOptions) (result *Collection, response *core.DetailedResponse, err error)
ServiceCall<Collection> updateCollection(UpdateCollectionOptions updateCollectionOptions)
updateCollection(params)
update_collection(self,
        collection_id: str,
        *,
        name: str = None,
        description: str = None,
        training_status: 'TrainingStatus' = None,
        **kwargs
    ) -> DetailedResponse
update_collection(collection_id:, name: nil, description: nil, training_status: nil)
func updateCollection(
    collectionID: String,
    name: String? = nil,
    description: String? = nil,
    trainingStatus: TrainingStatus? = nil,
    headers: [String: String]? = nil,
    completionHandler: @escaping (WatsonResponse<Collection>?, WatsonError?) -> Void)
UpdateCollection(string collectionId, string name = null, string description = null, TrainingStatus trainingStatus = null)
UpdateCollection(Callback<Collection> callback, string collectionId, string name = null, string description = null, TrainingStatus trainingStatus = null)

Request

Instantiate the UpdateCollectionOptions struct and set the fields to provide parameter values for the UpdateCollection method.

Use the UpdateCollectionOptions.Builder to create an UpdateCollectionOptions object that contains the parameter values for the updateCollection method.

Path Parameters

  • The identifier of the collection.

Query Parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

The updated collection.

WithContext method only

The UpdateCollection options.

The updateCollection options.

parameters

  • Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2019-02-11.

  • The identifier of the collection.

  • The name of the collection. The name can contain alphanumeric, underscore, hyphen, and dot characters. It cannot begin with the reserved prefix sys-.

    Possible values: length ≤ 64, Value must match regular expression /^(?!sys-)[\pL\pN_\-.]*$/

  • The description of the collection.

  • Training status information for the collection.
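The name constraint above can be pre-checked client-side before calling the API. A minimal sketch using Python's standard `re` module; because `re` lacks `\pL`/`\pN`, the Unicode-aware `\w` class stands in for them here (an approximation, not the service's exact validator):

```python
import re

# Approximation of /^(?!sys-)[\pL\pN_\-.]*$/ for the stdlib `re` module:
# \w matches Unicode letters, digits, and underscore, covering \pL\pN_.
NAME_RE = re.compile(r"(?!sys-)[\w\-.]*")

def is_valid_collection_name(name: str) -> bool:
    """Client-side pre-check: length <= 64 and allowed characters only."""
    return len(name) <= 64 and NAME_RE.fullmatch(name) is not None

print(is_valid_collection_name("my-collection"))  # True
print(is_valid_collection_name("sys-reserved"))   # False
print(is_valid_collection_name("bad name!"))      # False
```

A name that fails this check would be rejected by the service with a 400-level error, so checking locally saves a round trip.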

parameters