Introduction

The IBM Watson™ Visual Recognition service uses deep learning algorithms to identify scenes, objects, and faces in images you upload to the service. You can create and train a custom classifier to identify subjects that suit your needs.

Beginning with version 4.0.0, the Node SDK returns a Promise for all methods when a callback is not specified.

The package location moved to ibm-watson. It remains available at watson-developer-cloud but is not updated there. Use ibm-watson to stay up to date.

The IBM Watson Unity SDK has the following requirements.

  • The SDK requires Unity version 2018.2 or later to support TLS 1.2.
    • Set the project settings for both the Scripting Runtime Version and the Api Compatibility Level to .NET 4.x Equivalent.
    • For more information, see TLS 1.0 support.
  • The SDK does not support WebGL projects. Change your build settings to any platform except WebGL.

For information about how to install and configure the SDK and SDK Core, see https://github.com/watson-developer-cloud/unity-sdk.

The code examples on this tab use the client library that is provided for Java.

Maven

<dependency>
  <groupId>com.ibm.watson</groupId>
  <artifactId>ibm-watson</artifactId>
  <version>7.3.0</version>
</dependency>

Gradle

compile 'com.ibm.watson:ibm-watson:7.3.0'

GitHub

The code examples on this tab use the client library that is provided for Node.js.

Installation

npm install ibm-watson

GitHub

The code examples on this tab use the client library that is provided for Python.

Installation

pip install --upgrade "ibm-watson>=3.3.0"

GitHub

The code examples on this tab use the client library that is provided for Ruby.

Installation

gem install ibm_watson

GitHub

The code examples on this tab use the client library that is provided for Go.

go get -u github.com/watson-developer-cloud/go-sdk/...

GitHub

The code examples on this tab use the client library that is provided for Swift.

Cocoapods

pod 'IBMWatsonVisualRecognitionV3', '~> 2.2.0'

Carthage

github "watson-developer-cloud/swift-sdk" ~> 2.2.0

Swift Package Manager

.package(url: "https://github.com/watson-developer-cloud/swift-sdk", from: "2.2.0")

GitHub

The code examples on this tab use the client library that is provided for .NET Standard.

Package Manager

Install-Package IBM.Watson.VisualRecognition.v3 -Version 3.4.0

.NET CLI

dotnet add package IBM.Watson.VisualRecognition.v3 -version 3.4.0

PackageReference

<PackageReference Include="IBM.Watson.VisualRecognition.v3" Version="3.4.0" />

GitHub

The code examples on this tab use the client library that is provided for Unity.

GitHub

Authentication

You authenticate to the API by using IAM. You can pass either a bearer token in an Authorization header or an apikey. Tokens support authenticated requests without embedding service credentials in every call. API keys use basic authentication. Learn more about IAM.

If you pass in an API key, use apikey for the username and the value of the API key as the password.
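For example, here is a minimal sketch, assuming the third-party Python requests library, that calls the service with basic authentication. The endpoint and version date come from this reference; the image URL is a placeholder.

import requests

# Basic authentication: the literal string "apikey" is the username and the
# value of the API key is the password. The version parameter is required.
response = requests.get(
    'https://gateway.watsonplatform.net/visual-recognition/api/v3/classify',
    auth=('apikey', '{apikey}'),
    params={
        'url': 'https://example.com/image.jpg',  # placeholder image URL
        'version': '2018-03-19',
    },
)
print(response.status_code)
print(response.json())

# To use a bearer token instead, pass it in an Authorization header and
# manage the token lifecycle yourself:
# headers={'Authorization': 'Bearer {token}'}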

IBM Cloud Dedicated instances authenticate by providing the username and password for the service instance.

If you pass in the value of the API key, the SDK manages the lifecycle of the tokens. If you pass a token, you maintain the token lifecycle. Learn more about IAM authentication with the SDK.

IAM authentication. Replace {apikey} with your service credentials.

curl -u "apikey:{apikey}" -X {request_method} "https://gateway.watsonplatform.net/visual-recognition/api/v3/{method}"

IBM Cloud Dedicated only. Replace {username}, {password}, and {url} with your service credentials.

curl -u "{username}:{password}" -X {request_method} "{url}/{method}"

SDK managing the IAM token. Replace {apikey} and {version}.

IamOptions options = new IamOptions.Builder()
    .apiKey("{apikey}")
    .build();
VisualRecognition visualRecognition = new VisualRecognition("{version}", options);

SDK managing the IAM token. Replace {apikey} and {version}.

const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');

const visualRecognition = new VisualRecognitionV3({
  version: '{version}',
  iam_apikey: '{apikey}'
});

SDK managing the IAM token. Replace {apikey} and {version}.

from ibm_watson import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version='{version}',
    iam_apikey='{apikey}'
)

SDK managing the IAM token. Replace {apikey} and {version}.

require "ibm_watson"

visual_recognition = IBMWatson::VisualRecognitionV3.new(
  version: "{version}",
  iam_apikey: "{apikey}"
)

SDK managing the IAM token. Replace {apikey} and {version}.

import "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"

visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(&visualrecognitionv3.VisualRecognitionV3Options{
  Version: "{version}",
  IAMApiKey: "{apikey}",
})

SDK managing the IAM token. Replace {apikey} and {version}.

let visualRecognition = VisualRecognition(version: "{version}", apiKey: "{apikey}")

SDK managing the IAM token. Replace {apikey} and {version}.

IamConfig config = new IamConfig(
    apikey: "{apikey}"
    );

VisualRecognition visualRecognition = new VisualRecognition("{version}", config);

SDK managing the IAM token. Replace {apikey} and {version}.

IamTokenOptions tokenOptions = new IamTokenOptions()
{
    IamApiKey = "{apikey}"
};
Credentials credentials = new Credentials(tokenOptions);
while (!credentials.HasTokenData())
{
    yield return null;
}

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", credentials);

Service endpoint

The Visual Recognition v3 API is hosted only in the Dallas location and has a single service endpoint.

API endpoint

https://gateway.watsonplatform.net/visual-recognition/api

The URL is different when you use IBM Cloud Dedicated.

Disabling SSL verification

All Watson services use Secure Sockets Layer (SSL) (or Transport Layer Security (TLS)) for secure connections between the client and server. The connection is verified against the local certificate store to ensure authentication, integrity, and confidentiality.

If you use a self-signed certificate, you need to disable SSL verification to make a successful connection.

Enabling SSL verification is highly recommended. Disabling SSL jeopardizes the security of the connection and data. Disable SSL only if absolutely necessary, and take steps to enable SSL as soon as possible.

To disable SSL verification for a curl request, use the --insecure (-k) option with the request.

To disable SSL verification, create an HttpConfigOptions object and set the disableSslVerification property to true. Then pass the object to the service instance by using the configureClient method.

To disable SSL verification, set the disable_ssl_verification parameter to true when you create the service instance.

To disable SSL verification, call the disable_SSL_verification method on the service instance.

To disable SSL verification, call the configure_http_client method on the service instance and set the disable_ssl parameter to true.

To disable SSL verification, call the DisableSSLVerification method on the service instance.

To disable SSL verification, call the disableSSLVerification() method on the service instance. You cannot disable SSL verification on Linux.

To disable SSL verification, call the DisableSslVerification method on the service instance.

To disable SSL verification, set the DisableSslVerification property to true on the service instance.

Example that disables SSL verification

IAM authentication. Replace {apikey} with your service credentials.

curl -k -u "apikey:{apikey}" -X {request_method} "https://gateway.watsonplatform.net/visual-recognition/api/v3/{method}"

Example that disables SSL verification

IamOptions options = new IamOptions.Builder()
    .apiKey("{apikey}")
    .build();
VisualRecognition visualRecognition = new VisualRecognition("{version}", options);

HttpConfigOptions configOptions = new HttpConfigOptions.Builder()
  .disableSslVerification(true)
  .build();
visualRecognition.configureClient(configOptions);

Example that disables SSL verification

const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');

const visualRecognition = new VisualRecognitionV3({
  version: '{version}',
  iam_apikey: '{apikey}',
  disable_ssl_verification: true,
});

Example that disables SSL verification

from ibm_watson import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version='{version}',
    iam_apikey='{apikey}'
)
visual_recognition.disable_SSL_verification()

Example that disables SSL verification

require "ibm_watson"

visual_recognition = IBMWatson::VisualRecognitionV3.new(
  version: "{version}",
  iam_apikey: "{apikey}"
)
visual_recognition.configure_http_client(disable_ssl: true)

Example that disables SSL verification

import "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"

visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(&visualrecognitionv3.VisualRecognitionV3Options{
  Version: "{version}",
  IAMApiKey: "{apikey}",
})
visualRecognition.DisableSSLVerification()

Example that disables SSL verification


let visualRecognition = VisualRecognition(version: "{version}", apiKey: "{apikey}")
visualRecognition.disableSSLVerification()

Example that disables SSL verification

IamConfig config = new IamConfig(
    apikey: "{apikey}"
    );

VisualRecognition visualRecognition = new VisualRecognition("{version}", config);
visualRecognition.DisableSslVerification(true);

Example that disables SSL verification

IamTokenOptions tokenOptions = new IamTokenOptions()
{
    IamApiKey = "{apikey}"
};
Credentials credentials = new Credentials(tokenOptions);
while (!credentials.HasTokenData())
{
    yield return null;
}

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", credentials);

visualRecognition.DisableSslVerification = true;

Versioning

API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. When we change the API in a backwards-incompatible way, we release a new version date.

Send the version parameter with every API request. The service uses the API version for the date you specify, or the most recent version before that date. Don't default to the current date. Instead, specify a date that matches a version that is compatible with your app, and don't change it until your app is ready for a later version.

Specify the version to use on API requests with the version parameter when you create the service instance. The service uses the API version for the date you specify, or the most recent version before that date. Don't default to the current date. Instead, specify a date that matches a version that is compatible with your app, and don't change it until your app is ready for a later version.

This documentation describes the current version of Visual Recognition, 2018-03-19. In some cases, differences in earlier versions are noted in the descriptions of parameters and response models.
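As a sketch in Python, pinning the version date that this documentation describes (the API key is a placeholder):

from ibm_watson import VisualRecognitionV3

# Pin the version your app was tested against rather than today's date; the
# service uses this version or the most recent version before it.
visual_recognition = VisualRecognitionV3(
    version='2018-03-19',
    iam_apikey='{apikey}'
)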

Error handling

The Visual Recognition service uses standard HTTP response codes to indicate whether a method completed successfully. HTTP response codes in the 2xx range indicate success. A response in the 4xx range is some sort of failure, and a response in the 5xx range usually indicates an internal system error that cannot be resolved by the user. Response codes are listed with the method.

ErrorResponse

Name Description
code (integer) The HTTP response code.
error (string) General description of an error.

ErrorHTML

Name Description
Error (string) HTML description of the error.

ErrorInfo

Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.

Name Description
code (integer) HTTP response code.
description (string) Human-readable error description. For example, File size limit exceeded.
error_id (string) Codified error string. For example, limit_exceeded.

The Java SDK generates an exception for any unsuccessful method invocation. All methods that accept an argument can also throw an IllegalArgumentException.

Exception Description
IllegalArgumentException An illegal argument was passed to the method.

When the Java SDK receives an error response from the Visual Recognition service, it generates an exception from the com.ibm.watson.developer_cloud.service.exception package. All service exceptions contain the following fields.

Field Description
statusCode The HTTP response code that is returned.
message A message that describes the error.

When the Node SDK receives an error response from the Visual Recognition service, it creates an Error object with information that describes the error that occurred. This error object is passed as the first parameter to the callback function for the method. The contents of the error object are as shown in the following table.

Error

Field Description
code The HTTP response code that is returned.
message A message that describes the error.

The Python SDK generates an exception for any unsuccessful method invocation. When the Python SDK receives an error response from the Visual Recognition service, it generates an ApiException that contains the following fields.

Field Description
code The HTTP response code that is returned.
message A message that describes the error.
info A dictionary of additional information about the error.

When the Ruby SDK receives an error response from the Visual Recognition service, it generates an ApiException that contains the following fields.

Field Description
code The HTTP response code that is returned.
message A message that describes the error.
info A dictionary of additional information about the error.

The Go SDK generates an error for any unsuccessful service instantiation and method invocation. You can check for the error immediately. The contents of the error object are as shown in the following table.

Error

Field Description
code The HTTP response code that is returned.
message A message that describes the error.

The Swift SDK returns a WatsonError in the completionHandler for any unsuccessful method invocation. This error type is an enum that conforms to LocalizedError and contains an errorDescription property that returns an error message. Some of the WatsonError cases contain associated values that reveal more information about the error.

Field Description
errorDescription A message that describes the error.

When the .NET Standard SDK receives an error response from the Visual Recognition service, it generates a ServiceResponseException that contains the following fields.

Field Description
Message A message that describes the error.
CodeDescription The HTTP response code that is returned.

When the Unity SDK receives an error response from the Visual Recognition service, it generates an IBMError that contains the following fields.

Field Description
Url The URL that generated the error.
StatusCode The HTTP response code returned.
ErrorMessage A message that describes the error.
Response The contents of the response from the server.
ResponseHeaders A dictionary of headers returned by the request.

Example error handling

try {
  // Invoke a Visual Recognition method
} catch (NotFoundException e) {
  // Handle Not Found (404) exception
} catch (RequestTooLargeException e) {
  // Handle Request Too Large (413) exception
} catch (ServiceResponseException e) {
  // Base class for all exceptions caused by error responses from the service
  System.out.println("Service returned status code "
    + e.getStatusCode() + ": " + e.getMessage());
}

Example error handling

visualRecognition.method(params)
  .catch(err => {
    console.log('error:', err);
  });

Example error handling

from ibm_watson import ApiException
try:
    # Invoke a Visual Recognition method
    pass
except ApiException as ex:
    print("Method failed with status code " + str(ex.code) + ": " + ex.message)

Example error handling

require "ibm_watson"
begin
  # Invoke a Visual Recognition method
rescue IBMWatson::ApiException => ex
  print "Method failed with status code #{ex.code}: #{ex.error}"
end

Example error handling

import "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"

// Instantiate a service
visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(&visualrecognitionv3.VisualRecognitionV3Options{})

// Check for errors
if visualRecognitionErr != nil {
  panic(visualRecognitionErr)
}

// Call a method
response, responseErr := visualRecognition.methodName(&methodOptions)

// Check for errors
if responseErr != nil {
  panic(responseErr)
}

Example error handling

visualRecognition.method() {
  response, error in

  if let error = error {
    switch error {
    case let .http(statusCode, message, metadata):
      switch statusCode {
      case .some(404):
        // Handle Not Found (404) exception
        print("Not found")
      case .some(413):
        // Handle Request Too Large (413) exception
        print("Payload too large")
      default:
        if let statusCode = statusCode {
          print("Error - code: \(statusCode), \(message ?? "")")
        }
      }
    default:
      print(error.localizedDescription)
    }
    return
  }

  guard let result = response?.result else {
    print(error?.localizedDescription ?? "unknown error")
    return
  }

  print(result)
}

Example error handling

try
{
    // Invoke a Watson visualRecognition method
}
catch(ServiceResponseException e)
{
    Console.WriteLine("Error: " + e.Message);
}
catch (Exception e)
{
    Console.WriteLine("Error: " + e.Message);
}

Example error handling

// Invoke a method
visualRecognition.MethodName(Callback, Parameters);

// Check for errors
private void Callback(DetailedResponse<ExampleResponse> response, IBMError error)
{
    if (error == null)
    {
        Log.Debug("ExampleCallback", "Response received: {0}", response.Response);
    }
    else
    {
        Log.Debug("ExampleCallback", "Error received: {0}, {1}, {3}", error.StatusCode, error.ErrorMessage, error.Response);
    }
}

Data handling

Additional headers

Some Watson services accept special parameters in headers that are passed with the request.

You can pass request header parameters in all requests or in a single request to the service.

To pass a request header, use the --header (-H) option with a curl request.

To pass header parameters with every request, use the setDefaultHeaders method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, use the addHeader method as a modifier on the request before you execute it.

To pass header parameters with every request, specify the headers parameter when you create the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, include a headers object in the parameters that you pass to the method.

To pass header parameters with every request, specify the set_default_headers method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, include headers as a dict in the request.

To pass header parameters with every request, specify the add_default_headers method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, specify the headers method as a chainable method in the request.

To pass header parameters with every request, specify the SetDefaultHeaders method of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, specify the Headers as a map in the request.

To pass header parameters with every request, add them to the defaultHeaders property of the service object. See Data collection for an example use of this method.

To pass header parameters in a single request, pass the headers parameter to the request method.

To pass header parameters in a single request, use the WithHeader() method as a modifier on the request before you execute it.

Example header parameter in a request

curl -u "apikey:{apikey}" -X {request_method} --header "Request-Header: {header_value}" "{url}/{method}"

Example header parameter in a request

ReturnType returnValue = visualRecognition.methodName(parameters)
  .addHeader("Custom-Header", "{header_value}")
  .execute();

Example header parameter in a request

const parameters = {
  {parameters},
  headers: {
    'Custom-Header': '{header_value}'
  }
};

visualRecognition.methodName(parameters)
  .then(result => {
    console.log(result);
  })
  .catch(err => {
    console.log('error:', err);
  });

Example header parameter in a request

response = visual_recognition.methodName(
    parameters,
    headers = {
        'Custom-Header': '{header_value}'
    })

Example header parameter in a request

response = visual_recognition.headers(
  "Custom-Header" => "{header_value}"
).methodName(parameters)

Example header parameter in a request

response, _ := visualRecognition.methodName(
  &methodOptions{
    Headers: map[string]string{
      "Accept": "application/json",
    },
  },
)

Example header parameter in a request

let customHeader: [String: String] = ["Custom-Header": "{header_value}"]
visualRecognition.methodName(parameters, headers: customHeader) {
  response, error in
}

Example header parameter in a request

IamConfig iamConfig = new IamConfig(
    apikey: "{apikey}"
    );

VisualRecognition visualRecognition = new VisualRecognition("{version}", iamConfig);
visualRecognition.WithHeader("Custom-Header", "header_value");

Example header parameter in a request

IamTokenOptions tokenOptions = new IamTokenOptions()
{
    IamApiKey = "{apikey}"
};
Credentials credentials = new Credentials(tokenOptions);
while (!credentials.HasTokenData())
{
    yield return null;
}

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", credentials);
visualRecognition.WithHeader("Custom-Header", "header_value");

Response details

The Visual Recognition service might return information to the application in response headers.

To access all response headers that the service returns, include the --include (-i) option with a curl request. To see detailed response data for the request, including request headers, response headers, and additional debugging information, include the --verbose (-v) option with the request.

Example request to access response headers

curl -u "apikey:{apikey}" -X {request_method} --include "{url}/{method}"

To access information in the response headers, use one of the request methods that returns details with the response: executeWithDetails(), enqueueWithDetails(), or rxWithDetails(). These methods return a Response<T> object, where T is the expected response model. Use the getResult() method to access the response object for the method, and use the getHeaders() method to access information in response headers.

Example request to access response headers

Response<ReturnType> response = visualRecognition.methodName(parameters)
  .executeWithDetails();
// Access response from methodName
ReturnType returnValue = response.getResult();
// Access information in response headers
Headers responseHeaders = response.getHeaders();

To access information in the response headers, add the return_response parameter set to true and specify the headers attribute on the response object that is returned by the method. To access information in the response object, use the following properties.

Property Description
result Returns the response for the service-specific method.
headers Returns the response header information.
status Returns the HTTP status code.

Example request to access response headers

const parameters = {
  {parameters}
};

parameters.return_response = true;

visualRecognition.methodName(parameters)
  .then(response => {
    console.log(response.headers);
  })
  .catch(err => {
    console.log('error:', err);
  });

The return value from all service methods is a DetailedResponse object. To access information in the result object or response headers, use the following methods.

DetailedResponse

Method Description
get_result() Returns the response for the service-specific method.
get_headers() Returns the response header information.
get_status_code() Returns the HTTP status code.

Example request to access response headers

visual_recognition.set_detailed_response(True)
response = visual_recognition.methodName(parameters)
# Access response from methodName
print(json.dumps(response.get_result(), indent=2))
# Access information in response headers
print(response.get_headers())
# Access HTTP response status
print(response.get_status_code())

The return value from all service methods is a DetailedResponse object. To access information in the response object, use the following properties.

DetailedResponse

Property Description
result Returns the response for the service-specific method.
headers Returns the response header information.
status Returns the HTTP status code.

Example request to access response headers

response = visual_recognition.methodName(parameters)
# Access response from methodName
print response.result
# Access information in response headers
print response.headers
# Access HTTP response status
print response.status

The return value from all service methods is a DetailedResponse object. To access information in the response object or response headers, use the following methods.

DetailedResponse

Method Description
GetResult() Returns the response for the service-specific method.
GetHeaders() Returns the response header information.
GetStatusCode() Returns the HTTP status code.

Example request to access response headers

import "github.com/IBM/go-sdk-core/core"
response, _ := visualRecognition.methodName(&methodOptions{})

// Access result
core.PrettyPrint(response.GetResult(), "Result ")

// Access response headers
core.PrettyPrint(response.GetHeaders(), "Headers ")

// Access status code
core.PrettyPrint(response.GetStatusCode(), "Status Code ")

All response data is available in the WatsonResponse<T> object that is returned in each method's completionHandler.

Example request to access response headers

visualRecognition.methodName(parameters) {
  response, error in

  guard let result = response?.result else {
    print(error?.localizedDescription ?? "unknown error")
    return
  }
  print(result) // The data returned by the service
  print(response?.statusCode)
  print(response?.headers)
}

The response contains fields for response headers, response JSON, and the status code.

DetailedResponse

Property Description
Result Returns the result for the service-specific method.
Response Returns the raw JSON response for the service-specific method.
Headers Returns the response header information.
StatusCode Returns the HTTP status code.

Example request to access response headers

var results = visualRecognition.MethodName(parameters);

var result = results.Result;            //  The result object
var responseHeaders = results.Headers;  //  The response headers
var responseJson = results.Response;    //  The raw response JSON
var statusCode = results.StatusCode;    //  The response status code

The response contains fields for response headers, response JSON, and the status code.

DetailedResponse

Property Description
Result Returns the result for the service-specific method.
Response Returns the raw JSON response for the service-specific method.
Headers Returns the response header information.
StatusCode Returns the HTTP status code.

Example request to access response headers

private void Example()
{
    visualRecognition.MethodName(Callback, Parameters);
}

private void Callback(DetailedResponse<ResponseType> response, IBMError error)
{
    var result = response.Result;                 //  The result object
    var responseHeaders = response.Headers;       //  The response headers
    var responseJson = response.Response;         //  The raw response JSON
    var statusCode = response.StatusCode;         //  The response status code
}

Data labels

You can remove customer data if you associate the customer and the data when you send the information to a service. First, you label the data with a customer ID, and then you can delete the data by the ID.

  • Use the X-Watson-Metadata header to associate a customer ID with the data. By adding a customer ID to a request, you indicate that it contains data that belongs to that customer.

    Specify a random or generic string for the customer ID. Do not include personal data, such as an email address. Pass the string customer_id={id} as the argument of the header.

  • Use the Delete labeled data method to remove data that is associated with a customer ID.

Labeling data is used only by methods that accept customer data. For more information about Visual Recognition and labeling data, see Information security.

For more information about how to pass headers, see Additional headers.
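A sketch with the Python SDK, assuming a visual_recognition instance configured as shown in Authentication; the customer ID and image URL are placeholders, and the delete_user_data method name is assumed from the SDK.

# Label the data in this request with a customer ID.
response = visual_recognition.classify(
    url='https://example.com/image.jpg',
    headers={'X-Watson-Metadata': 'customer_id=my_customer_ID'}
)

# Later, delete all data that is associated with that customer ID.
visual_recognition.delete_user_data(customer_id='my_customer_ID')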

Data collection

By default, all Watson services log requests and their results. Logging is done only to improve the services for future users. The logged data is not shared or made public.

To prevent IBM usage of your data for an API request, set the X-Watson-Learning-Opt-Out header parameter to true.

If you set the parameter to true when you create a classifier, training images are not stored. Save your training images locally.

You must set the header on each request that you do not want IBM to access for general service improvements.

You can set the header by using the setDefaultHeaders method of the service object.

You can set the header by using the headers parameter when you create the service object.

You can set the header by using the set_default_headers method of the service object.

You can set the header by using the add_default_headers method of the service object.

You can set the header by using the SetDefaultHeaders method of the service object.

You can set the header by adding it to the defaultHeaders property of the service object.

You can set the header by using the WithHeader() method of the service object.

Example request

curl -u "apikey:{apikey}" -H "X-Watson-Learning-Opt-Out: true" "{url}/{method}"

Example request

Map<String, String> headers = new HashMap<String, String>();
headers.put("X-Watson-Learning-Opt-Out", "true");

visualRecognition.setDefaultHeaders(headers);

Example request

const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');

const visualRecognition = new VisualRecognitionV3({
  version: '{version}',
  iam_apikey: '{apikey}',
  headers: {
    'X-Watson-Learning-Opt-Out': 'true'
  }
});

Example request

visual_recognition.set_default_headers({'x-watson-learning-opt-out': "true"})

Example request

visual_recognition.add_default_headers(headers: {"x-watson-learning-opt-out" => "true"})

Example request

import "net/http"

headers := http.Header{}
headers.Add("x-watson-learning-opt-out", "true")
visualRecognition.Service.SetDefaultHeaders(headers)

Example request

visualRecognition.defaultHeaders["X-Watson-Learning-Opt-Out"] = "true"

Example request

IamConfig config = new IamConfig(
    apikey: "{apikey}"
    );

VisualRecognition visualRecognition = new VisualRecognition("{version}", config);
visualRecognition.WithHeader("X-Watson-Learning-Opt-Out", "true");

Example request

IamTokenOptions tokenOptions = new IamTokenOptions()
{
    IamApiKey = "{apikey}"
};
Credentials credentials = new Credentials(tokenOptions);
while (!credentials.HasTokenData())
{
    yield return null;
}

VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", credentials);
visualRecognition.WithHeader("X-Watson-Learning-Opt-Out", "true");

Synchronous and asynchronous requests

The Java SDK supports both synchronous (blocking) and asynchronous (non-blocking) execution of service methods. All service methods implement the ServiceCall interface.

  • To call a method synchronously, use the execute method of the ServiceCall interface. You can call the execute method directly from an instance of the service.
  • To call a method asynchronously, use the enqueue method of the ServiceCall interface to receive a callback when the response arrives. The ServiceCallback interface of the method's argument provides onResponse and onFailure methods that you override to handle the callback.

The Ruby SDK supports both synchronous (blocking) and asynchronous (non-blocking) execution of service methods. All service methods implement the Concurrent::Async module. When you use the synchronous or asynchronous methods, an IVar object is returned. You access the DetailedResponse object by calling ivar_object.value.

For more information about the IVar object, see the IVar class docs.

  • To call a method synchronously, either call the method directly or use the .await chainable method of the Concurrent::Async module.

    Calling a method directly (without .await) returns a DetailedResponse object.

  • To call a method asynchronously, use the .async chainable method of the Concurrent::Async module.

You can call the .await and .async methods directly from an instance of the service.

Example synchronous request

ReturnType returnValue = visualRecognition.method(parameters).execute();

Example asynchronous request

visualRecognition.method(parameters).enqueue(new ServiceCallback<ReturnType>() {
  @Override public void onResponse(ReturnType response) {
    . . .
  }
  @Override public void onFailure(Exception e) {
    . . .
  }
});

Example synchronous request

response = visual_recognition.method_name(parameters)

or

response = visual_recognition.await.method_name(parameters)

Example asynchronous request

response = visual_recognition.async.method_name(parameters)

Methods

Classify an image

Classify an image with the built-in or custom classifiers.

GET /v3/classify
Request

Custom Headers

  • Accept-Language: The desired language of parts of the response. See the response for details.

    Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

    Default: en

Query Parameters

  • version (required): The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

  • url: The URL of an image (.gif, .jpg, .png, .tif). The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

  • owners: The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty if you specify owners.

    • Use IBM to classify against the default general classifier. You get the same result if both the classifier_ids and owners parameters are empty.
    • Use me to classify against all your custom classifiers. However, for better performance, use classifier_ids to specify the specific custom classifiers to apply.
    • Use both IBM and me to analyze the image against both classifier categories.

    Allowable values: [IBM,me]

  • classifier_ids: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

    The following built-in classifier IDs require no training:

    • default: Returns classes from thousands of general tags.
    • food: Enhances specificity and accuracy for images of food items.
    • explicit: Evaluates whether the image might be pornographic.
  • threshold: The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.

    Constraints: 0 ≤ value ≤ 1

    Default: 0.5
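For example, a hedged sketch with the Python SDK (its classify signature appears later in this reference); the image URL is a placeholder, and the client is assumed to be configured as shown in Authentication.

import json

# Classify an image by URL against the built-in food classifier, keeping
# only classes that score at least 0.6.
response = visual_recognition.classify(
    url='https://example.com/fruitbowl.jpg',
    threshold=0.6,
    classifier_ids=['food']
)
print(json.dumps(response.get_result(), indent=2))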

Response

Results for all images.

Status Code

  • 200: Success.

  • 400: Invalid request due to user input, for example:

    • Bad header parameter
    • Invalid output language
    • No input images
    • The size of the image file in the request is larger than the maximum supported size
  • 401: No API key, or the key is not valid.

Example responses

Classify images

Classify images with built-in or custom classifiers.

POST /v3/classify

Go
(visualRecognition *VisualRecognitionV3) Classify(classifyOptions *ClassifyOptions) (*core.DetailedResponse, error)

Java
ServiceCall<ClassifiedImages> classify(ClassifyOptions classifyOptions)

Node
classify(params, [callback()])

Python
classify(self, images_file=None, images_filename=None, images_file_content_type=None, url=None, threshold=None, owners=None, classifier_ids=None, accept_language=None, **kwargs)

Ruby
classify(images_file: nil, images_filename: nil, images_file_content_type: nil, url: nil, threshold: nil, owners: nil, classifier_ids: nil, accept_language: nil)

Swift
func classify(
    imagesFile: Data? = nil,
    imagesFilename: String? = nil,
    imagesFileContentType: String? = nil,
    url: String? = nil,
    threshold: Double? = nil,
    owners: [String]? = nil,
    classifierIDs: [String]? = nil,
    acceptLanguage: String? = nil,
    headers: [String: String]? = nil,
    completionHandler: @escaping (WatsonResponse<ClassifiedImages>?, WatsonError?) -> Void)

.NET
Classify(System.IO.MemoryStream imagesFile = null, string imagesFilename = null, string imagesFileContentType = null, string url = null, float? threshold = null, List<string> owners = null, List<string> classifierIds = null, string acceptLanguage = null)

Unity
Classify(Callback<ClassifiedImages> callback, System.IO.MemoryStream imagesFile = null, string imagesFilename = null, string imagesFileContentType = null, string url = null, float? threshold = null, List<string> owners = null, List<string> classifierIds = null, string acceptLanguage = null)

Request

Instantiate the ClassifyOptions struct and set the fields to provide parameter values for the Classify method.

Use the ClassifyOptions.Builder to create a ClassifyOptions object that contains the parameter values for the classify method.

Custom Headers

  • Accept-Language: The desired language of parts of the response. See the response for details.

    Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

    Default: en

Query Parameters

  • version (required): The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

Form Parameters

  • images_file: An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

    You can also include an image with the url parameter.

  • url: The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.

    You can also include images with the images_file parameter.

  • threshold: The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.

    Default: 0.5

  • owners: The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty if you specify owners.

    • Use IBM to classify against the default general classifier. You get the same result if both the classifier_ids and owners parameters are empty.
    • Use me to classify against all your custom classifiers. However, for better performance, use classifier_ids to specify the specific custom classifiers to apply.
    • Use both IBM and me to analyze the image against both classifier categories.
  • classifier_ids: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

    The following built-in classifier IDs require no training:

    • default: Returns classes from thousands of general tags.
    • food: Enhances specificity and accuracy for images of food items.
    • explicit: Evaluates whether the image might be pornographic.
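A similar hedged sketch in Python for the form upload; the file path and classifier IDs are placeholders, and the client is assumed to be configured as shown in Authentication.

import json

# Classify a local image file (or a .zip of images) against a built-in and
# a custom classifier.
with open('./fruitbowl.jpg', 'rb') as images_file:
    response = visual_recognition.classify(
        images_file=images_file,
        threshold=0.6,
        classifier_ids=['food', '{classifier_id}']
    )
print(json.dumps(response.get_result(), indent=2))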

The classify options.

parameters

  • images_file: An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

    You can also include an image with the url parameter.

  • images_filename: The filename for images_file.

  • images_file_content_type: The content type of images_file.

  • url: The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.

    You can also include images with the images_file parameter.

  • threshold: The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.

  • owners: The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty if you specify owners.

    • Use IBM to classify against the default general classifier. You get the same result if both the classifier_ids and owners parameters are empty.
    • Use me to classify against all your custom classifiers. However, for better performance, use classifier_ids to specify the specific custom classifiers to apply.
    • Use both IBM and me to analyze the image against both classifier categories.
  • classifier_ids: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

    The following built-in classifier IDs require no training:

    • default: Returns classes from thousands of general tags.
    • food: Enhances specificity and accuracy for images of food items.
    • explicit: Evaluates whether the image might be pornographic.
  • accept_language: The desired language of parts of the response. See the response for details.

    Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

    Default: en

parameters

  • imagesFile: An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

    You can also include an image with the url parameter.

  • imagesFilename: The filename for imagesFile.

  • imagesFileContentType: The content type of imagesFile.

  • url: The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.

    You can also include images with the imagesFile parameter.

  • threshold: The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.

  • owners: The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty if you specify owners.

    • Use IBM to classify against the default general classifier. You get the same result if both the classifier_ids and owners parameters are empty.
    • Use me to classify against all your custom classifiers. However, for better performance, use classifier_ids to specify the specific custom classifiers to apply.
    • Use both IBM and me to analyze the image against both classifier categories.
  • classifierIds: Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both the classifier_ids and owners parameters are empty.

    The following built-in classifier IDs require no training:

    • default: Returns classes from thousands of general tags.
    • food: Enhances specificity and accuracy for images of food items.
    • explicit: Evaluates whether the image might be pornographic.
  • acceptLanguage: The desired language of parts of the response. See the response for details.

    Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

    Default: en

Response

Results for all images.

Status Code

  • 200: Success.

  • 400: Invalid request due to user input, for example:

    • Bad JSON input
    • Bad query parameter or header
    • Invalid output language
    • No input images
    • The size of the image file in the request is larger than the maximum supported size
    • Corrupt .zip file
  • 401: No API key, or the key is not valid.

  • 413: The .zip file is too large.

Example responses

            Detect faces in an image

            Important: On April 2, 2018, the identity information in the response to calls to the Face model was removed. The identity information refers to the name of the person, score, and type_hierarchy knowledge graph. For details about the enhanced Face model, see the Release notes.

            Analyze and get data about faces in images. Responses can include estimated age and gender. This feature uses a built-in model, so no training is necessary. The Detect faces method does not support general biometric facial recognition.

Supported image formats include .gif, .jpg, .png, and .tif. The maximum image size is 10 MB. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels.

            GET /v3/detect_faces
            Request

            Custom Headers

            • The desired language of parts of the response. See the response for details.

              Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

              Default: en

            Query Parameters

            • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

• The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.
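
The SDKs wrap this endpoint, so you do not issue the GET request yourself. As a hedged Python sketch, analyzing a remote image by URL might look like the following; the image URL and the {apikey} placeholder are hypothetical.

import json
from ibm_watson import VisualRecognitionV3

visual_recognition = VisualRecognitionV3('2018-03-19', iam_apikey='{apikey}')

# Hypothetical public image URL; redirects are followed, so a shortened URL also works.
faces = visual_recognition.detect_faces(
    url='https://example.com/team-photo.jpg',
    accept_language='en').get_result()
print(json.dumps(faces, indent=2))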

            Response

Results for all faces.

            Status Code

            • success

            • Invalid request

            Example responses

            Detect faces in images

            Important: On April 2, 2018, the identity information in the response to calls to the Face model was removed. The identity information refers to the name of the person, score, and type_hierarchy knowledge graph. For details about the enhanced Face model, see the Release notes.

            Analyze and get data about faces in images. Responses can include estimated age and gender. This feature uses a built-in model, so no training is necessary. The Detect faces method does not support general biometric facial recognition.

Supported image formats include .gif, .jpg, .png, and .tif. The maximum image size is 10 MB. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels.

            POST /v3/detect_faces
            (visualRecognition *VisualRecognitionV3) DetectFaces(detectFacesOptions *DetectFacesOptions) (*core.DetailedResponse, error)
            ServiceCall<DetectedFaces> detectFaces(DetectFacesOptions detectFacesOptions)
            detectFaces(params, [callback()])
            detect_faces(self, images_file=None, images_filename=None, images_file_content_type=None, url=None, accept_language=None, **kwargs)
            detect_faces(images_file: nil, images_filename: nil, images_file_content_type: nil, url: nil, accept_language: nil)
            func detectFaces(
                imagesFile: Data? = nil,
                imagesFilename: String? = nil,
                imagesFileContentType: String? = nil,
                url: String? = nil,
                acceptLanguage: String? = nil,
                headers: [String: String]? = nil,
                completionHandler: @escaping (WatsonResponse<DetectedFaces>?, WatsonError?) -> Void)
            DetectFaces(System.IO.MemoryStream imagesFile = null, string imagesFilename = null, string imagesFileContentType = null, string url = null, string acceptLanguage = null)
            DetectFaces(Callback<DetectedFaces> callback, System.IO.MemoryStream imagesFile = null, string imagesFilename = null, string imagesFileContentType = null, string url = null, string acceptLanguage = null)
            Request

            Instantiate the DetectFacesOptions struct and set the fields to provide parameter values for the DetectFaces method.

            Use the DetectFacesOptions.Builder to create a DetectFacesOptions object that contains the parameter values for the detectFaces method.

            Custom Headers

            • The desired language of parts of the response. See the response for details.

              Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

              Default: en

            Query Parameters

            • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

            Form Parameters

• An image file (.gif, .jpg, .png, .tif) or .zip file with images. Limit the .zip file to 100 MB. You can include a maximum of 15 images in a request.

  Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

  You can also include an image with the url parameter.

• The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

              You can also include images with the images_file parameter.

            The DetectFaces options.

            The detectFaces options.

            parameters

• An image file (.gif, .jpg, .png, .tif) or .zip file with images. Limit the .zip file to 100 MB. You can include a maximum of 15 images in a request.

  Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

  You can also include an image with the url parameter.

• The filename for images_file.

• The content type of images_file.

• The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

              You can also include images with the images_file parameter.

            • The desired language of parts of the response. See the response for details.

              Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

              Default: en

            parameters

• An image file (.gif, .jpg, .png, .tif) or .zip file with images. Limit the .zip file to 100 MB. You can include a maximum of 15 images in a request.

  Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

  You can also include an image with the url parameter.

• The filename for imagesFile.

• The content type of imagesFile.

• The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.

              You can also include images with the images_file parameter.

            • The desired language of parts of the response. See the response for details.

              Allowable values: [en,ar,de,es,fr,it,ja,ko,pt-br,zh-cn,zh-tw]

              Default: en
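
As a sketch of the file-upload variant, assuming the Python client and a hypothetical local image; the keyword arguments match the detect_faces signature listed above.

import json
from ibm_watson import VisualRecognitionV3

visual_recognition = VisualRecognitionV3('2018-03-19', iam_apikey='{apikey}')

# Hypothetical local file; a .zip with up to 15 images is sent the same way.
with open('./faces.jpg', 'rb') as images_file:
    faces = visual_recognition.detect_faces(
        images_file,
        images_filename='faces.jpg',
        images_file_content_type='image/jpeg').get_result()
print(json.dumps(faces, indent=2))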

            Response

Results for all faces.

            Status Code

            • success

            • Invalid request

            Example responses

            Create a classifier

            Train a new multi-faceted classifier on the uploaded image data. Create your custom classifier with positive or negative example training images. Include at least two sets of examples, either two positive example files or one positive and one negative file. You can upload a maximum of 256 MB per call.

            Tips when creating:

            • If you set the X-Watson-Learning-Opt-Out header parameter to true when you create a classifier, the example training images are not stored. Save your training images locally. For more information, see Data collection.

            • Encode all names in UTF-8 if they contain non-ASCII characters (.zip and image file names, and classifier and class names). The service assumes UTF-8 encoding if it encounters non-ASCII characters.

            POST /v3/classifiers
            (visualRecognition *VisualRecognitionV3) CreateClassifier(createClassifierOptions *CreateClassifierOptions) (*core.DetailedResponse, error)
            ServiceCall<Classifier> createClassifier(CreateClassifierOptions createClassifierOptions)
            createClassifier(params, [callback()])
            create_classifier(self, name, positive_examples, negative_examples=None, negative_examples_filename=None, **kwargs)
            create_classifier(name:, positive_examples:, negative_examples: nil, negative_examples_filename: nil)
            func createClassifier(
                name: String,
                positiveExamples: [String: Data],
                negativeExamples: Data? = nil,
                negativeExamplesFilename: String? = nil,
                headers: [String: String]? = nil,
                completionHandler: @escaping (WatsonResponse<Classifier>?, WatsonError?) -> Void)
            CreateClassifier(string name, Dictionary<string, System.IO.MemoryStream> positiveExamples, System.IO.MemoryStream negativeExamples = null, string negativeExamplesFilename = null)
            CreateClassifier(Callback<Classifier> callback, string name, Dictionary<string, System.IO.MemoryStream> positiveExamples, System.IO.MemoryStream negativeExamples = null, string negativeExamplesFilename = null)
            Request

            Instantiate the CreateClassifierOptions struct and set the fields to provide parameter values for the CreateClassifier method.

            Use the CreateClassifierOptions.Builder to create a CreateClassifierOptions object that contains the parameter values for the createClassifier method.

            Query Parameters

            • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

            Form Parameters

            • The name of the new classifier. Encode special characters in UTF-8.

            • A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.

              Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

  Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32 x 32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.

              Encode special characters in the file name in UTF-8.

            • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

              Encode special characters in the file name in UTF-8.

            The CreateClassifier options.

            The createClassifier options.

            parameters

            • The name of the new classifier. Encode special characters in UTF-8.

            • A dictionary that contains the value for each classname. The value is a .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.

              Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

  Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32 x 32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.

              Encode special characters in the file name in UTF-8.

            • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

              Encode special characters in the file name in UTF-8.

            • The filename for negative_examples.

            parameters

            • The name of the new classifier. Encode special characters in UTF-8.

            • A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.

              Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

  Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32 x 32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.

              Encode special characters in the file name in UTF-8.

            • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

              Encode special characters in the file name in UTF-8.

            • The filename for negative_examples.

            parameters

            • The name of the new classifier. Encode special characters in UTF-8.

            • A dictionary that contains the value for each classname. The value is a .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.

              Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever.

  Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32 x 32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.

              Encode special characters in the file name in UTF-8.

            • A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.

              Encode special characters in the file name in UTF-8.

            • The filename for negativeExamples.
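
To make the positive-examples convention concrete, here is a hedged Python sketch. The .zip paths and the classifier name are hypothetical, and the opt-out header call assumes the SDK's default-headers support.

from ibm_watson import VisualRecognitionV3

visual_recognition = VisualRecognitionV3('2018-03-19', iam_apikey='{apikey}')

# Optional: ask the service not to store the example images (see the tips above).
visual_recognition.set_default_headers({'x-watson-learning-opt-out': 'true'})

# The dictionary key becomes the class name, mirroring the
# goldenretriever_positive_examples form-parameter convention.
with open('./goldenretriever.zip', 'rb') as golden, \
     open('./not_dogs.zip', 'rb') as negatives:
    classifier = visual_recognition.create_classifier(
        'dogs',
        positive_examples={'goldenretriever': golden},
        negative_examples=negatives).get_result()
print(classifier)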

            Response

            Information about a classifier.

            Status Code

            • success

            • Invalid request due to user input, for example:

              • Bad query parameter or header
              • No input images
              • The size of the image file in the request is larger than the maximum supported size
              • Corrupt .zip file
              • Cannot find the classifier
            • No API key or the key is not valid.

            • The .zip file is too large.

            Example responses

            Retrieve a list of classifiers

            GET /v3/classifiers
            (visualRecognition *VisualRecognitionV3) ListClassifiers(listClassifiersOptions *ListClassifiersOptions) (*core.DetailedResponse, error)
            ServiceCall<Classifiers> listClassifiers(ListClassifiersOptions listClassifiersOptions)
            listClassifiers(params, [callback()])
            list_classifiers(self, verbose=None, **kwargs)
            list_classifiers(verbose: nil)
            func listClassifiers(
                verbose: Bool? = nil,
                headers: [String: String]? = nil,
                completionHandler: @escaping (WatsonResponse<Classifiers>?, WatsonError?) -> Void)
            ListClassifiers(bool? verbose = null)
            ListClassifiers(Callback<Classifiers> callback, bool? verbose = null)
            Request

            Instantiate the ListClassifiersOptions struct and set the fields to provide parameter values for the ListClassifiers method.

            Use the ListClassifiersOptions.Builder to create a ListClassifiersOptions object that contains the parameter values for the listClassifiers method.

            Query Parameters

            • The release date of the version of the API you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.

            • Specify true to return details about the classifiers. Omit this parameter to return a brief list of classifiers.

            The ListClassifiers options.

            The listClassifiers options.

            parameters

            • Specify true to return details about the classifiers. Omit this parameter to return a brief list of classifiers.
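
A short Python sketch of the verbose listing, assuming the same client setup as the earlier examples.

import json
from ibm_watson import VisualRecognitionV3

visual_recognition = VisualRecognitionV3('2018-03-19', iam_apikey='{apikey}')

# verbose=True returns full details for each classifier instead of the brief list.
classifiers = visual_recognition.list_classifiers(verbose=True).get_result()
print(json.dumps(classifiers, indent=2))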
