Introduction
IBM Watson™ Visual Recognition is discontinued. Existing instances are supported until 1 December 2021, but as of 7 January 2021, you can't create new instances. Any instance that still exists on 1 December 2021 will be deleted.
The IBM Watson Visual Recognition service uses deep learning algorithms to identify scenes and objects in images that you upload to the service. You can create and train a custom classifier to identify subjects that suit your needs.
This documentation describes Java SDK major version 9. For more information about how to update your code from the previous version, see the migration guide.
This documentation describes Node SDK major version 6. For more information about how to update your code from the previous version, see the migration guide.
This documentation describes Python SDK major version 5. For more information about how to update your code from the previous version, see the migration guide.
This documentation describes Ruby SDK major version 2. For more information about how to update your code from the previous version, see the migration guide.
This documentation describes .NET Standard SDK major version 5. For more information about how to update your code from the previous version, see the migration guide.
This documentation describes Go SDK major version 2. For more information about how to update your code from the previous version, see the migration guide.
This documentation describes Swift SDK major version 4. For more information about how to update your code from the previous version, see the migration guide.
This documentation describes Unity SDK major version 5. For more information about how to update your code from the previous version, see the migration guide.
The IBM Watson Unity SDK has the following requirements.
- The SDK requires Unity version 2018.2 or later to support Transport Layer Security (TLS) 1.2.
- Set the project settings for both the Scripting Runtime Version and the Api Compatibility Level to .NET 4.x Equivalent. For more information, see TLS 1.0 support.
- The SDK doesn't support WebGL projects. Change your build settings to any platform except WebGL.
For more information about how to install and configure the SDK and SDK Core, see https://github.com/watson-developer-cloud/unity-sdk.
The code examples on this tab use the client library that is provided for Java.
Maven
<dependency>
<groupId>com.ibm.watson</groupId>
<artifactId>ibm-watson</artifactId>
<version>9.0.1</version>
</dependency>
Gradle
compile 'com.ibm.watson:ibm-watson:9.0.1'
GitHub
The code examples on this tab use the client library that is provided for Node.js.
Installation
npm install ibm-watson@^6.0.0
GitHub
The code examples on this tab use the client library that is provided for Python.
Installation
pip install --upgrade "ibm-watson>=5.0.0"
GitHub
The code examples on this tab use the client library that is provided for Ruby.
Installation
gem install ibm_watson
GitHub
The code examples on this tab use the client library that is provided for Go.
go get -u github.com/watson-developer-cloud/go-sdk@v2.0.0
GitHub
The code examples on this tab use the client library that is provided for Swift.
Cocoapods
pod 'IBMWatsonVisualRecognitionV3', '~> 4.0.0'
Carthage
github "watson-developer-cloud/swift-sdk" ~> 4.0.0
Swift Package Manager
.package(url: "https://github.com/watson-developer-cloud/swift-sdk", from: "4.0.0")
GitHub
The code examples on this tab use the client library that is provided for .NET Standard.
Package Manager
Install-Package IBM.Watson.VisualRecognition.v3 -Version 5.0.0
.NET CLI
dotnet add package IBM.Watson.VisualRecognition.v3 --version 5.0.0
PackageReference
<PackageReference Include="IBM.Watson.VisualRecognition.v3" Version="5.0.0" />
GitHub
The code examples on this tab use the client library that is provided for Unity.
GitHub
Authentication
You authenticate to the API by using IBM Cloud Identity and Access Management (IAM).
You can pass either a bearer token in an authorization header or an API key. Tokens support authenticated requests without embedding service credentials in every call. API keys use basic authentication. For more information, see Authenticating to Watson services.
- For testing and development, you can pass an API key directly.
- For production use, unless you use the Watson SDKs, use an IAM token.
If you pass in an API key, use apikey for the username and the value of the API key as the password. For example, if the API key is f5sAznhrKQyvBFFaZbtF60m5tzLbqWhyALQawBg5TjRI in the service credentials, include the credentials in your call like this:
curl -u "apikey:f5sAznhrKQyvBFFaZbtF60m5tzLbqWhyALQawBg5TjRI"
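The -u option is curl shorthand for an HTTP Basic Authorization header. As an illustration (not part of the service API), this sketch builds the same header value that curl sends; the key below is a made-up placeholder, not a real credential:

```python
import base64

def basic_auth_header(api_key: str) -> str:
    """Build the Authorization header value that curl's -u "apikey:..." option produces."""
    token = base64.b64encode(f"apikey:{api_key}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Placeholder key for illustration only; use your own service credentials.
print(basic_auth_header("example-key"))
```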
IBM Cloud Dedicated instances for Visual Recognition authenticate by providing the username and password for the service instance.
For IBM Cloud instances, the SDK provides initialization methods for each form of authentication.
- Use the API key to have the SDK manage the lifecycle of the access token. The SDK requests an access token, ensures that the access token is valid, and refreshes it if necessary.
- Use the access token to manage the lifecycle yourself. You must periodically refresh the token.
For more information, see IAM authentication with the SDK.
Replace {apikey} and {url} with your service credentials.
curl -X {request_method} -u "apikey:{apikey}" "{url}/v3/{method}"
SDK managing the IAM token. Replace {apikey}, {version}, and {url}.
IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("{version}", authenticator);
visualRecognition.setServiceUrl("{url}");
SDK managing the IAM token. Replace {apikey}, {version}, and {url}.
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const visualRecognition = new VisualRecognitionV3({
version: '{version}',
authenticator: new IamAuthenticator({
apikey: '{apikey}',
}),
serviceUrl: '{url}',
});
SDK managing the IAM token. Replace {apikey}, {version}, and {url}.
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV3(
version='{version}',
authenticator=authenticator
)
visual_recognition.set_service_url('{url}')
SDK managing the IAM token. Replace {apikey}, {version}, and {url}.
require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v3"
include IBMWatson
authenticator = Authenticators::IamAuthenticator.new(
apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV3.new(
version: "{version}",
authenticator: authenticator
)
visual_recognition.service_url = "{url}"
SDK managing the IAM token. Replace {apikey}, {version}, and {url}.
import (
"github.com/IBM/go-sdk-core/core"
"github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"
)
func main() {
authenticator := &core.IamAuthenticator{
ApiKey: "{apikey}",
}
options := &visualrecognitionv3.VisualRecognitionV3Options{
Version: "{version}",
Authenticator: authenticator,
}
visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options)
if visualRecognitionErr != nil {
panic(visualRecognitionErr)
}
visualRecognition.SetServiceURL("{url}")
}
SDK managing the IAM token. Replace {apikey}, {version}, and {url}.
let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "{version}", authenticator: authenticator)
visualRecognition.serviceURL = "{url}"
SDK managing the IAM token. Replace {apikey}, {version}, and {url}.
IamAuthenticator authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");
SDK managing the IAM token. Replace {apikey}, {version}, and {url}.
var authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
while (!authenticator.CanAuthenticate())
yield return null;
var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");
Access between services
Your application might use more than one Watson service. You can grant access between services and you can grant access to more than one service for your applications.
For IBM Cloud services, the method to grant access between Watson services varies depending on the type of API key. For more information, see IAM access.
- To grant access between IBM Cloud services, create an authorization between the services. For more information, see Granting access between services.
- To grant access to your services by applications without using user credentials, create a service ID, add an API key, and assign access policies. For more information, see Creating and working with service IDs.
Make sure that you use an endpoint URL that includes the service instance ID (for example, https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/6bbda3b3-d572-45e1-8c54-22d6ed9e52c2). You can find the instance ID in two places:
- By clicking the service instance row in the Resource list. The instance ID is the GUID in the details pane.
- By clicking the name of the service instance in the list and looking at the credentials URL.
If you don't see the instance ID in the URL, you can add new credentials from the Service credentials page.
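As an illustrative sketch (not an SDK function), the instance ID is the path segment that follows /instances/ in the endpoint URL, so it can be pulled out of a credentials URL like this; the GUID below is the example instance ID used in this documentation:

```python
from typing import Optional
from urllib.parse import urlparse

def instance_id_from_url(url: str) -> Optional[str]:
    """Return the GUID that follows /instances/ in a service endpoint URL, if present."""
    parts = urlparse(url).path.split("/")
    if "instances" in parts:
        idx = parts.index("instances")
        if idx + 1 < len(parts) and parts[idx + 1]:
            return parts[idx + 1]
    return None

# Example endpoint URL from this documentation.
url = "https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/6bbda3b3-d572-45e1-8c54-22d6ed9e52c2"
print(instance_id_from_url(url))  # 6bbda3b3-d572-45e1-8c54-22d6ed9e52c2
```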
IBM Cloud URLs
The base URLs come from the service instance. To find the URL, view the service credentials by clicking the name of the service in the Resource list. Use the value of the URL. Add the method to form the complete API endpoint for your request.
The following example URL represents a Visual Recognition instance that is hosted in Frankfurt:
https://api.eu-de.visual-recognition.watson.cloud.ibm.com/instances/6bbda3b3-d572-45e1-8c54-22d6ed9e52c2
The following URLs represent the base URLs for Visual Recognition. When you call the API, use the URL that corresponds to the location of your service instance.
- Dallas:
https://api.us-south.visual-recognition.watson.cloud.ibm.com
- Frankfurt:
https://api.eu-de.visual-recognition.watson.cloud.ibm.com
- Seoul:
https://api.kr-seo.visual-recognition.watson.cloud.ibm.com
Set the correct service URL by calling the setServiceUrl() method of the service instance.
Set the correct service URL by specifying the serviceUrl parameter when you create the service instance.
Set the correct service URL by calling the set_service_url() method of the service instance.
Set the correct service URL by specifying the service_url property of the service instance.
Set the correct service URL by calling the SetServiceURL() method of the service instance.
Set the correct service URL by setting the serviceURL property of the service instance.
Set the correct service URL by calling the SetServiceUrl() method of the service instance.
Set the correct service URL by calling the SetServiceUrl() method of the service instance.
Dallas API endpoint example for services managed on IBM Cloud
curl -X {request_method} -u "apikey:{apikey}" "https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/{instance_id}"
Your service instance might not use this URL.
Default URL
https://api.us-south.visual-recognition.watson.cloud.ibm.com
Example for the Frankfurt location
IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("{version}", authenticator);
visualRecognition.setServiceUrl("https://api.eu-de.visual-recognition.watson.cloud.ibm.com");
Default URL
https://api.us-south.visual-recognition.watson.cloud.ibm.com
Example for the Frankfurt location
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const visualRecognition = new VisualRecognitionV3({
version: '{version}',
authenticator: new IamAuthenticator({
apikey: '{apikey}',
}),
serviceUrl: 'https://api.eu-de.visual-recognition.watson.cloud.ibm.com',
});
Default URL
https://api.us-south.visual-recognition.watson.cloud.ibm.com
Example for the Frankfurt location
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV3(
version='{version}',
authenticator=authenticator
)
visual_recognition.set_service_url('https://api.eu-de.visual-recognition.watson.cloud.ibm.com')
Default URL
https://api.us-south.visual-recognition.watson.cloud.ibm.com
Example for the Frankfurt location
require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v3"
include IBMWatson
authenticator = Authenticators::IamAuthenticator.new(
apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV3.new(
version: "{version}",
authenticator: authenticator
)
visual_recognition.service_url = "https://api.eu-de.visual-recognition.watson.cloud.ibm.com"
Default URL
https://api.us-south.visual-recognition.watson.cloud.ibm.com
Example for the Frankfurt location
visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options)
if visualRecognitionErr != nil {
panic(visualRecognitionErr)
}
visualRecognition.SetServiceURL("https://api.eu-de.visual-recognition.watson.cloud.ibm.com")
Default URL
https://api.us-south.visual-recognition.watson.cloud.ibm.com
Example for the Frankfurt location
let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "{version}", authenticator: authenticator)
visualRecognition.serviceURL = "https://api.eu-de.visual-recognition.watson.cloud.ibm.com"
Default URL
https://api.us-south.visual-recognition.watson.cloud.ibm.com
Example for the Frankfurt location
IamAuthenticator authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("https://api.eu-de.visual-recognition.watson.cloud.ibm.com");
Default URL
https://api.us-south.visual-recognition.watson.cloud.ibm.com
Example for the Frankfurt location
var authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
while (!authenticator.CanAuthenticate())
yield return null;
var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("https://api.eu-de.visual-recognition.watson.cloud.ibm.com");
Disabling SSL verification
All Watson services use Secure Sockets Layer (SSL) (or Transport Layer Security (TLS)) for secure connections between the client and server. The connection is verified against the local certificate store to ensure authentication, integrity, and confidentiality.
If you use a self-signed certificate, you need to disable SSL verification to make a successful connection.
Enabling SSL verification is highly recommended. Disabling SSL jeopardizes the security of the connection and data. Disable SSL only if necessary, and take steps to enable SSL as soon as possible.
To disable SSL verification for a curl request, use the --insecure (-k) option with the request.
To disable SSL verification, create an HttpConfigOptions object and set the disableSslVerification property to true. Then, pass the object to the service instance by using the configureClient method.
To disable SSL verification, set the disableSslVerification parameter to true when you create the service instance.
To disable SSL verification, pass True to the set_disable_ssl_verification method of the service instance.
To disable SSL verification, set the disable_ssl_verification parameter to true in the configure_http_client() method for the service instance.
To disable SSL verification, call the DisableSSLVerification method on the service instance.
To disable SSL verification, call the disableSSLVerification() method on the service instance. You cannot disable SSL verification on Linux.
To disable SSL verification, call the DisableSslVerification method with true on the service instance.
To disable SSL verification, set the DisableSslVerification property to true on the service instance.
Example to disable SSL verification. Replace {apikey} and {url} with your service credentials.
curl -k -X {request_method} -u "apikey:{apikey}" "{url}/{method}"
Example to disable SSL verification
IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("{version}", authenticator);
visualRecognition.setServiceUrl("{url}");
HttpConfigOptions configOptions = new HttpConfigOptions.Builder()
.disableSslVerification(true)
.build();
visualRecognition.configureClient(configOptions);
Example to disable SSL verification
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const visualRecognition = new VisualRecognitionV3({
version: '{version}',
authenticator: new IamAuthenticator({
apikey: '{apikey}',
}),
serviceUrl: '{url}',
disableSslVerification: true,
});
Example to disable SSL verification
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV3(
version='{version}',
authenticator=authenticator
)
visual_recognition.set_service_url('{url}')
visual_recognition.set_disable_ssl_verification(True)
Example to disable SSL verification
require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v3"
include IBMWatson
authenticator = Authenticators::IamAuthenticator.new(
apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV3.new(
version: "{version}",
authenticator: authenticator
)
visual_recognition.service_url = "{url}"
visual_recognition.configure_http_client(disable_ssl_verification: true)
Example to disable SSL verification
visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options)
if visualRecognitionErr != nil {
panic(visualRecognitionErr)
}
visualRecognition.SetServiceURL("{url}")
visualRecognition.DisableSSLVerification()
Example to disable SSL verification
let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "{version}", authenticator: authenticator)
visualRecognition.serviceURL = "{url}"
visualRecognition.disableSSLVerification()
Example to disable SSL verification
IamAuthenticator authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");
visualRecognition.DisableSslVerification(true);
Example to disable SSL verification
var authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
while (!authenticator.CanAuthenticate())
yield return null;
var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");
visualRecognition.DisableSslVerification = true;
Versioning
API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. When we change the API in a backwards-incompatible way, we release a new version date.
Send the version parameter with every API request. The service uses the API version for the date you specify, or the most recent version before that date. Don't default to the current date. Instead, specify a date that matches a version that is compatible with your app, and don't change it until your app is ready for a later version.
Specify the version to use on API requests with the version parameter when you create the service instance. The service uses the API version for the date you specify, or the most recent version before that date. Don't default to the current date. Instead, specify a date that matches a version that is compatible with your app, and don't change it until your app is ready for a later version.
This documentation describes the current version of Visual Recognition, 2018-03-19. In some cases, differences in earlier versions are noted in the descriptions of parameters and response models.
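The version date travels as a query parameter on every request. As a minimal sketch (the base URL and /v3/classify method path are examples; this helper is not part of the SDK), a request URL can be assembled like this:

```python
from urllib.parse import urlencode

def build_request_url(base_url: str, method_path: str, version: str) -> str:
    """Append the required version date to a request URL as a query parameter."""
    return f"{base_url}{method_path}?{urlencode({'version': version})}"

# Example: the Dallas endpoint with the version described in this documentation.
url = build_request_url(
    "https://api.us-south.visual-recognition.watson.cloud.ibm.com",
    "/v3/classify",
    "2018-03-19",
)
print(url)
```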
Error handling
Visual Recognition uses standard HTTP response codes to indicate whether a method completed successfully. Response codes in the 2xx range indicate success. A response code in the 4xx range indicates a failure, and a response code in the 5xx range usually indicates an internal system error that the user cannot resolve. Response codes are listed with the method.
ErrorResponse
| Name | Description |
| --- | --- |
| code (integer) | The HTTP response code. |
| error (string) | General description of an error. |
ErrorHTML
| Name | Description |
| --- | --- |
| Error (string) | HTML description of the error. |
ErrorInfo
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
| Name | Description |
| --- | --- |
| code (integer) | HTTP response code. |
| description (string) | Human-readable error description. For example, File size limit exceeded. |
| error_id (string) | Codified error string. For example, limit_exceeded. |
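For illustration, an error body shaped like the ErrorResponse model can be parsed into its fields like this; the JSON values below are made up to match the field descriptions above, not output from a real request:

```python
import json

# Hypothetical error body following the ErrorResponse model (code, error).
body = '''{
  "code": 413,
  "error": "File size limit exceeded"
}'''

err = json.loads(body)
if err.get("code", 200) >= 400:
    print(f"Request failed ({err['code']}): {err.get('error', 'unknown error')}")
```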
The Java SDK generates an exception for any unsuccessful method invocation. All methods that accept an argument can also throw an IllegalArgumentException.
| Exception | Description |
| --- | --- |
| IllegalArgumentException | An invalid argument was passed to the method. |
When the Java SDK receives an error response from the Visual Recognition service, it generates an exception from the com.ibm.watson.developer_cloud.service.exception package. All service exceptions contain the following fields.

| Field | Description |
| --- | --- |
| statusCode | The HTTP response code that is returned. |
| message | A message that describes the error. |
When the Node SDK receives an error response from the Visual Recognition service, it creates an Error object with information that describes the error that occurred. This error object is passed as the first parameter to the callback function for the method. The contents of the error object are as shown in the following table.
Error
| Field | Description |
| --- | --- |
| code | The HTTP response code that is returned. |
| message | A message that describes the error. |
The Python SDK generates an exception for any unsuccessful method invocation. When the Python SDK receives an error response from the Visual Recognition service, it generates an ApiException with the following fields.
| Field | Description |
| --- | --- |
| code | The HTTP response code that is returned. |
| message | A message that describes the error. |
| info | A dictionary of additional information about the error. |
When the Ruby SDK receives an error response from the Visual Recognition service, it generates an ApiException with the following fields.
| Field | Description |
| --- | --- |
| code | The HTTP response code that is returned. |
| message | A message that describes the error. |
| info | A dictionary of additional information about the error. |
The Go SDK generates an error for any unsuccessful service instantiation and method invocation. You can check for the error immediately. The contents of the error object are as shown in the following table.
Error
| Field | Description |
| --- | --- |
| code | The HTTP response code that is returned. |
| message | A message that describes the error. |
The Swift SDK returns a WatsonError in the completionHandler for any unsuccessful method invocation. This error type is an enum that conforms to LocalizedError and contains an errorDescription property that returns an error message. Some of the WatsonError cases contain associated values that reveal more information about the error.
| Field | Description |
| --- | --- |
| errorDescription | A message that describes the error. |
When the .NET Standard SDK receives an error response from the Visual Recognition service, it generates a ServiceResponseException with the following fields.
| Field | Description |
| --- | --- |
| Message | A message that describes the error. |
| CodeDescription | The HTTP response code that is returned. |
When the Unity SDK receives an error response from the Visual Recognition service, it generates an IBMError with the following fields.
| Field | Description |
| --- | --- |
| Url | The URL that generated the error. |
| StatusCode | The HTTP response code returned. |
| ErrorMessage | A message that describes the error. |
| Response | The contents of the response from the server. |
| ResponseHeaders | A dictionary of headers returned by the request. |
Example error handling
try {
// Invoke a method
} catch (NotFoundException e) {
// Handle Not Found (404) exception
} catch (RequestTooLargeException e) {
// Handle Request Too Large (413) exception
} catch (ServiceResponseException e) {
// Base class for all exceptions caused by error responses from the service
System.out.println("Service returned status code "
+ e.getStatusCode() + ": " + e.getMessage());
}
Example error handling
visualRecognition.method(params)
.catch(err => {
console.log('error:', err);
});
Example error handling
from ibm_watson import ApiException
try:
# Invoke a method
except ApiException as ex:
    print("Method failed with status code " + str(ex.code) + ": " + ex.message)
Example error handling
require "ibm_watson"
begin
# Invoke a method
rescue IBMWatson::ApiException => ex
print "Method failed with status code #{ex.code}: #{ex.error}"
end
Example error handling
import "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"
// Instantiate a service
visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options)
// Check for errors
if visualRecognitionErr != nil {
panic(visualRecognitionErr)
}
// Call a method
result, response, responseErr := visualRecognition.MethodName(&methodOptions)
// Check for errors
if responseErr != nil {
panic(responseErr)
}
Example error handling
visualRecognition.method() {
response, error in
if let error = error {
switch error {
case let .http(statusCode, message, metadata):
switch statusCode {
case .some(404):
// Handle Not Found (404) exception
print("Not found")
case .some(413):
// Handle Request Too Large (413) exception
print("Payload too large")
default:
if let statusCode = statusCode {
print("Error - code: \(statusCode), \(message ?? "")")
}
}
default:
print(error.localizedDescription)
}
return
}
guard let result = response?.result else {
print(error?.localizedDescription ?? "unknown error")
return
}
print(result)
}
Example error handling
try
{
// Invoke a method
}
catch(ServiceResponseException e)
{
Console.WriteLine("Error: " + e.Message);
}
catch (Exception e)
{
Console.WriteLine("Error: " + e.Message);
}
Example error handling
// Invoke a method
visualRecognition.MethodName(Callback, Parameters);
// Check for errors
private void Callback(DetailedResponse<ExampleResponse> response, IBMError error)
{
if (error == null)
{
Log.Debug("ExampleCallback", "Response received: {0}", response.Response);
}
else
{
Log.Debug("ExampleCallback", "Error received: {0}, {1}, {2}", error.StatusCode, error.ErrorMessage, error.Response);
}
}
Additional headers
Some Watson services accept special parameters in headers that are passed with the request.
You can pass request header parameters in all requests or in a single request to the service.
To pass a request header, use the --header (-H) option with a curl request.
To pass header parameters with every request, use the setDefaultHeaders method of the service object. See Data collection for an example use of this method.
To pass header parameters in a single request, use the addHeader method as a modifier on the request before you execute it.
To pass header parameters with every request, specify the headers parameter when you create the service object. See Data collection for an example use of this method.
To pass header parameters in a single request, use the headers method as a modifier on the request before you execute it.
To pass header parameters with every request, specify the set_default_headers method of the service object. See Data collection for an example use of this method.
To pass header parameters in a single request, include headers as a dict in the request.
To pass header parameters with every request, specify the add_default_headers method of the service object. See Data collection for an example use of this method.
To pass header parameters in a single request, specify the headers method as a chainable method in the request.
To pass header parameters with every request, specify the SetDefaultHeaders method of the service object. See Data collection for an example use of this method.
To pass header parameters in a single request, specify the Headers as a map in the request.
To pass header parameters with every request, add them to the defaultHeaders property of the service object. See Data collection for an example use of this method.
To pass header parameters in a single request, pass the headers parameter to the request method.
To pass header parameters in a single request, use the WithHeader() method as a modifier on the request before you execute it. See Data collection for an example use of this method.
To pass header parameters in a single request, use the WithHeader() method as a modifier on the request before you execute it.
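Conceptually, default headers set on the service object combine with any per-request headers. A minimal sketch of that merge, assuming per-request values override defaults with the same name (the header names below are examples; X-Watson-Learning-Opt-Out is the data-collection header mentioned above, Custom-Header is a placeholder):

```python
def merge_headers(default_headers: dict, request_headers: dict) -> dict:
    """Combine default and per-request headers; per-request values win on conflicts."""
    merged = dict(default_headers)
    merged.update(request_headers)
    return merged

defaults = {"X-Watson-Learning-Opt-Out": "1"}
per_request = {"Custom-Header": "{header_value}"}
print(merge_headers(defaults, per_request))
```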
Example header parameter in a request
curl -X {request_method} -H "Request-Header: {header_value}" "{url}/v3/{method}"
Example header parameter in a request
ReturnType returnValue = visualRecognition.methodName(parameters)
.addHeader("Custom-Header", "{header_value}")
.execute();
Example header parameter in a request
const parameters = {
{parameters}
};
visualRecognition.methodName({
  ...parameters,
  headers: {
    'Custom-Header': '{header_value}'
  }
})
  .then(result => {
    console.log(result);
  })
})
.catch(err => {
console.log('error:', err);
});
Example header parameter in a request
response = visual_recognition.methodName(
parameters,
headers = {
'Custom-Header': '{header_value}'
})
Example header parameter in a request
response = visual_recognition.headers(
"Custom-Header" => "{header_value}"
).methodName(parameters)
Example header parameter in a request
result, response, responseErr := visualRecognition.MethodName(
&methodOptions{
Headers: map[string]string{
"Accept": "application/json",
},
},
)
Example header parameter in a request
let customHeader: [String: String] = ["Custom-Header": "{header_value}"]
visualRecognition.methodName(parameters, headers: customHeader) {
response, error in
}
Example header parameter in a request
IamAuthenticator authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");
visualRecognition.WithHeader("Custom-Header", "header_value");
Example header parameter in a request
var authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
while (!authenticator.CanAuthenticate())
yield return null;
var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");
visualRecognition.WithHeader("Custom-Header", "header_value");
Response details
The Visual Recognition service might return information to the application in response headers.
To access all response headers that the service returns, include the --include (-i) option with a curl request. To see detailed response data for the request, including request headers, response headers, and extra debugging information, include the --verbose (-v) option with the request.
Example request to access response headers
curl -X {request_method} {authentication_method} --include "{url}/v3/{method}"
To access information in the response headers, use one of the request methods that returns details with the response: executeWithDetails(), enqueueWithDetails(), or rxWithDetails(). These methods return a Response<T> object, where T is the expected response model. Use the getResult() method to access the response object for the method, and use the getHeaders() method to access information in response headers.
Example request to access response headers
Response<ReturnType> response = visualRecognition.methodName(parameters)
.executeWithDetails();
// Access response from methodName
ReturnType returnValue = response.getResult();
// Access information in response headers
Headers responseHeaders = response.getHeaders();
All response data is available in the Response<T>
object that is returned by each method. To access information in the response
object, use the following properties.
Property | Description
---|---
result | Returns the response for the service-specific method.
headers | Returns the response header information.
status | Returns the HTTP status code.
Example request to access response headers
visualRecognition.methodName(parameters)
.then(response => {
console.log(response.headers);
})
.catch(err => {
console.log('error:', err);
});
The return value from all service methods is a DetailedResponse
object. To access information in the result object or response headers, use the following methods.
DetailedResponse
Method | Description
---|---
get_result() | Returns the response for the service-specific method.
get_headers() | Returns the response header information.
get_status_code() | Returns the HTTP status code.
Example request to access response headers
visual_recognition.set_detailed_response(True)
response = visual_recognition.methodName(parameters)
# Access response from methodName
print(json.dumps(response.get_result(), indent=2))
# Access information in response headers
print(response.get_headers())
# Access HTTP response status
print(response.get_status_code())
The return value from all service methods is a DetailedResponse
object. To access information in the response
object, use the following properties.
DetailedResponse
Property | Description
---|---
result | Returns the response for the service-specific method.
headers | Returns the response header information.
status | Returns the HTTP status code.
Example request to access response headers
response = visual_recognition.methodName(parameters)
# Access response from methodName
print response.result
# Access information in response headers
print response.headers
# Access HTTP response status
print response.status
The return value from all service methods is a DetailedResponse
object. To access information in the response
object or response headers, use the following methods.
DetailedResponse
Method | Description
---|---
GetResult() | Returns the response for the service-specific method.
GetHeaders() | Returns the response header information.
GetStatusCode() | Returns the HTTP status code.
Example request to access response headers
import (
"github.com/IBM/go-sdk-core/core"
"github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"
)
result, response, responseErr := visualRecognition.MethodName(
&methodOptions{})
// Access result
core.PrettyPrint(response.GetResult(), "Result ")
// Access response headers
core.PrettyPrint(response.GetHeaders(), "Headers ")
// Access status code
core.PrettyPrint(response.GetStatusCode(), "Status Code ")
All response data is available in the WatsonResponse<T>
object that is returned in each method's completionHandler
.
Example request to access response headers
visualRecognition.methodName(parameters) {
response, error in
guard let result = response?.result else {
print(error?.localizedDescription ?? "unknown error")
return
}
print(result) // The data returned by the service
print(response?.statusCode)
print(response?.headers)
}
The response contains fields for response headers, response JSON, and the status code.
DetailedResponse
Property | Description
---|---
Result | Returns the result for the service-specific method.
Response | Returns the raw JSON response for the service-specific method.
Headers | Returns the response header information.
StatusCode | Returns the HTTP status code.
Example request to access response headers
var results = visualRecognition.MethodName(parameters);
var result = results.Result; // The result object
var responseHeaders = results.Headers; // The response headers
var responseJson = results.Response; // The raw response JSON
var statusCode = results.StatusCode; // The response status code
The response contains fields for response headers, response JSON, and the status code.
DetailedResponse
Property | Description
---|---
Result | Returns the result for the service-specific method.
Response | Returns the raw JSON response for the service-specific method.
Headers | Returns the response header information.
StatusCode | Returns the HTTP status code.
Example request to access response headers
private void Example()
{
visualRecognition.MethodName(Callback, Parameters);
}
private void Callback(DetailedResponse<ResponseType> response, IBMError error)
{
var result = response.Result; // The result object
var responseHeaders = response.Headers; // The response headers
var responseJson = response.Response; // The raw response JSON
var statusCode = response.StatusCode; // The response status code
}
Data labels
You can remove customer data if you associate the customer and the data when you send the information to a service. First, you label the data with a customer ID, and then you can delete the data by the ID.
- Use the X-Watson-Metadata header to associate a customer ID with the data. By adding a customer ID to a request, you indicate that it contains data that belongs to that customer. Specify a random or generic string for the customer ID. Do not include personal data, such as an email address. Pass the string customer_id={id} as the argument of the header.
- Use the Delete labeled data method to remove data that is associated with a customer ID.
Labeling data is used only by methods that accept customer data. For more information about Visual Recognition and labeling data, see Information security.
For more information about how to pass headers, see Additional headers.
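As a quick illustration of the labeling pattern above, a minimal sketch of building the X-Watson-Metadata header value. The helper function and the customer ID "my-customer-42" are made-up examples, and the commented-out SDK call is illustrative only:

```python
# Sketch (not official SDK code): the header value is the literal string
# "customer_id={id}" with your own random or generic ID substituted.
def metadata_header(customer_id):
    """Build the X-Watson-Metadata header for a given customer ID."""
    return {"X-Watson-Metadata": "customer_id={}".format(customer_id)}

headers = metadata_header("my-customer-42")
print(headers)  # {'X-Watson-Metadata': 'customer_id=my-customer-42'}
# The dict can then be passed as the per-request headers, for example:
# visual_recognition.classify(url="...", headers=headers)
```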
Data collection
By default, Visual Recognition service instances that are not part of Premium plans log requests and their results. Logging is done only to improve the services for future users. The logged data is not shared or made public. Logging is disabled for services that are part of Premium plans.
To prevent IBM usage of your data for an API request, set the X-Watson-Learning-Opt-Out header parameter to true
.
You must set the header on each request that you do not want IBM to access for general service improvements.
You can set the header by using the setDefaultHeaders
method of the service object.
You can set the header by using the headers
parameter when you create the service object.
You can set the header by using the set_default_headers
method of the service object.
You can set the header by using the add_default_headers
method of the service object.
You can set the header by using the SetDefaultHeaders
method of the service object.
You can set the header by adding it to the defaultHeaders
property of the service object.
You can set the header by using the WithHeader()
method of the service object.
Example request
curl -u "apikey:{apikey}" -H "X-Watson-Learning-Opt-Out: true" "{url}/{method}"
Example request
Map<String, String> headers = new HashMap<String, String>();
headers.put("X-Watson-Learning-Opt-Out", "true");
visualRecognition.setDefaultHeaders(headers);
Example request
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const visualRecognition = new VisualRecognitionV3({
version: '{version}',
authenticator: new IamAuthenticator({
apikey: '{apikey}',
}),
serviceUrl: '{url}',
headers: {
'X-Watson-Learning-Opt-Out': 'true'
}
});
Example request
visual_recognition.set_default_headers({'x-watson-learning-opt-out': "true"})
Example request
visual_recognition.add_default_headers(headers: {"x-watson-learning-opt-out" => "true"})
Example request
import "net/http"
headers := http.Header{}
headers.Add("x-watson-learning-opt-out", "true")
visualRecognition.SetDefaultHeaders(headers)
Example request
visualRecognition.defaultHeaders["X-Watson-Learning-Opt-Out"] = "true"
Example request
IamAuthenticator authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
VisualRecognitionService visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");
visualRecognition.WithHeader("X-Watson-Learning-Opt-Out", "true");
Example request
var authenticator = new IamAuthenticator(
apikey: "{apikey}"
);
while (!authenticator.CanAuthenticate())
yield return null;
var visualRecognition = new VisualRecognitionService("{version}", authenticator);
visualRecognition.SetServiceUrl("{url}");
visualRecognition.WithHeader("X-Watson-Learning-Opt-Out", "true");
Synchronous and asynchronous requests
The Java SDK supports both synchronous (blocking) and asynchronous (non-blocking) execution of service methods. All service methods implement the ServiceCall interface.
- To call a method synchronously, use the execute method of the ServiceCall interface. You can call the execute method directly from an instance of the service.
- To call a method asynchronously, use the enqueue method of the ServiceCall interface to receive a callback when the response arrives. The ServiceCallback interface of the method's argument provides onResponse and onFailure methods that you override to handle the callback.
The Ruby SDK supports both synchronous (blocking) and asynchronous (non-blocking) execution of service methods. All service methods implement the Concurrent::Async module. When you use the synchronous or asynchronous methods, an IVar object is returned. You access the DetailedResponse object by calling ivar_object.value.
For more information about the IVar object, see the IVar class docs.
- To call a method synchronously, either call the method directly or use the .await chainable method of the Concurrent::Async module. Calling a method directly (without .await) returns a DetailedResponse object.
- To call a method asynchronously, use the .async chainable method of the Concurrent::Async module.
You can call the .await and .async methods directly from an instance of the service.
Example synchronous request
ReturnType returnValue = visualRecognition.method(parameters).execute();
Example asynchronous request
visualRecognition.method(parameters).enqueue(new ServiceCallback<ReturnType>() {
@Override public void onResponse(ReturnType response) {
. . .
}
@Override public void onFailure(Exception e) {
. . .
}
});
Example synchronous request
response = visual_recognition.method_name(parameters)
or
response = visual_recognition.await.method_name(parameters)
Example asynchronous request
ivar = visual_recognition.async.method_name(parameters)
response = ivar.value
Related information
- Visual Recognition docs
- Release notes
- Javadoc for VisualRecognition
- Javadoc for sdk-core
Methods
Request
Custom Headers
The desired language of parts of the response. See the response for details.
Allowable values: [ en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw ]
Default: en
Query Parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
The URL of an image (.gif, .jpg, .png, .tif). The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Allowable values: [ IBM, me ]
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
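The precedence between classifier_ids and owners can be summarized with a small Python sketch. This is a local illustration of the documented rules, not service or SDK code:

```python
# Illustration of the documented precedence: classifier_ids overrides owners,
# and the built-in "default" classifier applies when both are empty.
def effective_classifiers(classifier_ids=None, owners=None):
    if classifier_ids:
        return classifier_ids          # classifier_ids always wins
    if owners:
        return owners                  # "IBM", "me", or both
    return ["default"]                 # neither set: built-in general classifier

print(effective_classifiers())                                        # ['default']
print(effective_classifiers(owners=["me"]))                           # ['me']
print(effective_classifiers(classifier_ids=["food"], owners=["me"]))  # ['food']
```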
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
Constraints: 0 ≤ value ≤ 1
Default: 0.5
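The effect of the threshold parameter can be pictured with a local sketch. This is my own illustration of the filtering behavior described above, not code run by the service:

```python
# Local illustration: only classes whose score is >= threshold are returned.
def apply_threshold(classes, threshold=0.5):
    return [c for c in classes if c["score"] >= threshold]

classes = [
    {"class": "fruit", "score": 0.825},
    {"class": "banana", "score": 0.518},
    {"class": "pear", "score": 0.12},
]

print(apply_threshold(classes))       # default 0.5 keeps fruit and banana
print(apply_threshold(classes, 0.0))  # 0.0 returns all identified classes
```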
curl -u "apikey:{apikey}" "{url}/v3/classify?url=https://watson-developer-cloud.github.io/doc-tutorial-downloads/visual-recognition/fruitbowl.jpg&version=2018-03-19"
Response
Results for all images.
Classified images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Status Code
success
Invalid request due to user input, for example:
- Bad header parameter
- Invalid output language
- No input images
- The size of the image file in the request is larger than the maximum supported size
No API key or the key is not valid.
{ "images": [ { "classifiers": [ { "classifier_id": "default", "name": "default", "classes": [ { "class": "diet (food)", "score": 0.571, "type_hierarchy": "/food/diet (food)" }, { "class": "food", "score": 0.571 }, { "class": "fruit", "score": 0.825 }, { "class": "banana", "score": 0.518, "type_hierarchy": "/fruit/banana" }, { "class": "Granny Smith", "score": 0.5, "type_hierarchy": "/fruit/pome/apple/eating apple/Granny Smith" }, { "class": "eating apple", "score": 0.64 }, { "class": "apple", "score": 0.655 }, { "class": "pome", "score": 0.669 }, { "class": "Golden Delicious", "score": 0.5, "type_hierarchy": "/fruit/pome/apple/eating apple/Golden Delicious" }, { "class": "olive color", "score": 0.942 }, { "class": "lemon yellow color", "score": 0.9 } ] } ], "source_url": "https://watson-developer-cloud.github.io/doc-tutorial-downloads/visual-recognition/fruitbowl.jpg", "resolved_url": "https://watson-developer-cloud.github.io/doc-tutorial-downloads/visual-recognition/fruitbowl.jpg" } ], "images_processed": 1, "custom_classes": 0 }
{ "code": "400", "error": "Error: Too many images in collection" }
{ "code": "401", "error": "Unauthorized" }
Classify images
Classify images with built-in or custom classifiers.
POST /v3/classify
(visualRecognition *VisualRecognitionV3) Classify(classifyOptions *ClassifyOptions) (result *ClassifiedImages, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV3) ClassifyWithContext(ctx context.Context, classifyOptions *ClassifyOptions) (result *ClassifiedImages, response *core.DetailedResponse, err error)
ServiceCall<ClassifiedImages> classify(ClassifyOptions classifyOptions)
classify(params)
classify(self,
*,
images_file: BinaryIO = None,
images_filename: str = None,
images_file_content_type: str = None,
url: str = None,
threshold: float = None,
owners: List[str] = None,
classifier_ids: List[str] = None,
accept_language: str = None,
**kwargs
) -> DetailedResponse
classify(images_file: nil, images_filename: nil, images_file_content_type: nil, url: nil, threshold: nil, owners: nil, classifier_ids: nil, accept_language: nil)
func classify(
imagesFile: Data? = nil,
imagesFilename: String? = nil,
imagesFileContentType: String? = nil,
url: String? = nil,
threshold: Double? = nil,
owners: [String]? = nil,
classifierIDs: [String]? = nil,
acceptLanguage: String? = nil,
headers: [String: String]? = nil,
completionHandler: @escaping (WatsonResponse<ClassifiedImages>?, WatsonError?) -> Void)
Classify(System.IO.MemoryStream imagesFile = null, string imagesFilename = null, string imagesFileContentType = null, string url = null, float? threshold = null, List<string> owners = null, List<string> classifierIds = null, string acceptLanguage = null)
Classify(Callback<ClassifiedImages> callback, System.IO.MemoryStream imagesFile = null, string imagesFilename = null, string imagesFileContentType = null, string url = null, float? threshold = null, List<string> owners = null, List<string> classifierIds = null, string acceptLanguage = null)
Request
Instantiate the ClassifyOptions
struct and set the fields to provide parameter values for the Classify
method.
Use the ClassifyOptions.Builder
to create a ClassifyOptions
object that contains the parameter values for the classify
method.
Custom Headers
The desired language of parts of the response. See the response for details.
Allowable values: [ en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw ]
Default: en
Query Parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
Form Parameters
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
Default: 0.5
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
WithContext method only
A context.Context instance that you can use to specify a timeout for the operation or to cancel an in-flight request.
The Classify options.
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The filename for imagesFile.
The content type of imagesFile.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
The desired language of parts of the response. See the response for details.
Allowable values: [ en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw ]
Default: en
The classify options.
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The filename for imagesFile.
The content type of imagesFile. Values for this parameter can be obtained from the HttpMediaType class.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
The desired language of parts of the response. See the response for details.
Allowable values: [ en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw ]
Default: en
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The filename for imagesFile.
The content type of imagesFile.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
The desired language of parts of the response. See the response for details.
Allowable values: [ en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw ]
Default: en
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The filename for images_file.
The content type of images_file.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
The desired language of parts of the response. See the response for details.
Allowable values: [ en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw ]
Default: en
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The filename for images_file.
The content type of images_file.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
The desired language of parts of the response. See the response for details.
Allowable values: [ en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw ]
Default: en
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The filename for imagesFile.
The content type of imagesFile.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
The desired language of parts of the response. See the response for details.
Allowable values: [en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw]
Default: en
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The filename for imagesFile.
The content type of imagesFile.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
The desired language of parts of the response. See the response for details.
Allowable values: [en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw]
Default: en
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
You can also include an image with the url parameter.
The filename for imagesFile.
The content type of imagesFile.
The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density is 32 x 32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The maximum image size is 10 MB.
You can also include images with the images_file parameter.
The minimum score a class must have to be displayed in the response. Set the threshold to 0.0 to return all identified classes.
The categories of classifiers to apply. The classifier_ids parameter overrides owners, so make sure that classifier_ids is empty.
- Use IBM to classify against the default general classifier. You get the same result if both classifier_ids and owners parameters are empty.
- Use me to classify against all your custom classifiers. However, for better performance use classifier_ids to specify the specific custom classifiers to apply.
- Use both IBM and me to analyze the image against both classifier categories.
Which classifiers to apply. Overrides the owners parameter. You can specify both custom and built-in classifier IDs. The built-in default classifier is used if both classifier_ids and owners parameters are empty.
The following built-in classifier IDs require no training:
- default: Returns classes from thousands of general tags.
- food: Enhances specificity and accuracy for images of food items.
- explicit: Evaluates whether the image might be pornographic.
The desired language of parts of the response. See the response for details.
Allowable values: [en, ar, de, es, fr, it, ja, ko, pt-br, zh-cn, zh-tw]
Default: en
curl -X POST -u "apikey:{apikey}" -F "images_file=@fruitbowl.jpg" -F "threshold=0.6" -F "owners=me" "{url}/v3/classify?version=2018-03-19"
Download example image fruitbowl.jpg
curl -X POST -u "apikey:{apikey}" -F "images_file=@fruitbowl.jpg" -F "classifier_ids=food" "{url}/v3/classify?version=2018-03-19"
Download example image fruitbowl.jpg
IamAuthenticator authenticator = new IamAuthenticator( apikey: "{apikey}" ); VisualRecognitionService visualRecognition = new VisualRecognitionService("2018-03-19", authenticator); visualRecognition.SetServiceUrl("{url}"); DetailedResponse<ClassifiedImages> result; using (FileStream fs = File.OpenRead("./fruitbowl.jpg")) { using (MemoryStream ms = new MemoryStream()) { fs.CopyTo(ms); result = visualRecognition.Classify( imagesFile: ms, imagesFilename: "fruitbowl.jpg", threshold: 0.6f, owners: new List<string>() { "me" } ); } } Console.WriteLine(result.Response);
IamAuthenticator authenticator = new IamAuthenticator( apikey: "{apikey}" ); VisualRecognitionService visualRecognition = new VisualRecognitionService("2018-03-19", authenticator); visualRecognition.SetServiceUrl("{url}"); DetailedResponse<ClassifiedImages> result; using (FileStream fs = File.OpenRead("./fruitbowl.jpg")) { using (MemoryStream ms = new MemoryStream()) { fs.CopyTo(ms); result = visualRecognition.Classify( imagesFile: ms, imagesFilename: "fruitbowl.jpg", classifierIds: new List<string>() { "food" } ); } } Console.WriteLine(result.Response);
package main import ( "encoding/json" "fmt" "github.com/IBM/go-sdk-core/core" "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3" "os" ) func main() { authenticator := &core.IamAuthenticator{ ApiKey: "{apikey}", } options := &visualrecognitionv3.VisualRecognitionV3Options{ Version: "2018-03-19", Authenticator: authenticator, } visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options) if visualRecognitionErr != nil { panic(visualRecognitionErr) } visualRecognition.SetServiceURL("{url}") imageFile, imageFileErr := os.Open("./fruitbowl.jpg") if imageFileErr != nil { panic(imageFileErr) } defer imageFile.Close() result, _, responseErr := visualRecognition.Classify( &visualrecognitionv3.ClassifyOptions{ ImagesFile: imageFile, Threshold: core.Float32Ptr(0.6), Owners: []string{"me"}, }, ) if responseErr != nil { panic(responseErr) } b, _ := json.MarshalIndent(result, "", " ") fmt.Println(string(b)) }
Download example image fruitbowl.jpg
package main import ( "encoding/json" "fmt" "github.com/IBM/go-sdk-core/core" "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3" "os" ) func main() { authenticator := &core.IamAuthenticator{ ApiKey: "{apikey}", } options := &visualrecognitionv3.VisualRecognitionV3Options{ Version: "2018-03-19", Authenticator: authenticator, } visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options) if visualRecognitionErr != nil { panic(visualRecognitionErr) } visualRecognition.SetServiceURL("{url}") imageFile, imageFileErr := os.Open("./fruitbowl.jpg") if imageFileErr != nil { panic(imageFileErr) } defer imageFile.Close() result, _, responseErr := visualRecognition.Classify( &visualrecognitionv3.ClassifyOptions{ ImagesFile: imageFile, ClassifierIds: []string{"food"}, }, ) if responseErr != nil { panic(responseErr) } b, _ := json.MarshalIndent(result, "", " ") fmt.Println(string(b)) }
IamAuthenticator authenticator = new IamAuthenticator("{apikey}"); VisualRecognition visualRecognition = new VisualRecognition("2018-03-19", authenticator); visualRecognition.setServiceUrl("{url}"); InputStream imagesStream = new FileInputStream("./fruitbowl.jpg"); ClassifyOptions classifyOptions = new ClassifyOptions.Builder() .imagesFile(imagesStream) .imagesFilename("fruitbowl.jpg") .threshold((float) 0.6) .owners(Arrays.asList("me")) .build(); ClassifiedImages result = visualRecognition.classify(classifyOptions).execute().getResult(); System.out.println(result);
Download example image fruitbowl.jpg
IamAuthenticator authenticator = new IamAuthenticator("{apikey}"); VisualRecognition visualRecognition = new VisualRecognition("2018-03-19", authenticator); visualRecognition.setServiceUrl("{url}"); InputStream imagesStream = new FileInputStream("./fruitbowl.jpg"); ClassifyOptions classifyOptions = new ClassifyOptions.Builder() .imagesFile(imagesStream) .classifierIds(Arrays.asList("food")) .build(); ClassifiedImages result = visualRecognition.classify(classifyOptions).execute().getResult(); System.out.println(result);
const fs = require('fs'); const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3'); const { IamAuthenticator } = require('ibm-watson/auth'); const visualRecognition = new VisualRecognitionV3({ version: '2018-03-19', authenticator: new IamAuthenticator({ apikey: '{apikey}', }), serviceUrl: '{url}', }); const classifyParams = { imagesFile: fs.createReadStream('./fruitbowl.jpg'), owners: ['me'], threshold: 0.6, }; visualRecognition.classify(classifyParams) .then(response => { const classifiedImages = response.result; console.log(JSON.stringify(classifiedImages, null, 2)); }) .catch(err => { console.log('error:', err); });
Download example image fruitbowl.jpg
const fs = require('fs'); const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3'); const { IamAuthenticator } = require('ibm-watson/auth'); const visualRecognition = new VisualRecognitionV3({ version: '2018-03-19', authenticator: new IamAuthenticator({ apikey: '{apikey}', }), serviceUrl: '{url}', }); const classifyParams = { imagesFile: fs.createReadStream('./fruitbowl.jpg'), classifierIds: ['food'], }; visualRecognition.classify(classifyParams) .then(response => { const classifiedImages = response.result; console.log(JSON.stringify(classifiedImages, null, 2)); }) .catch(err => { console.log('error:', err); });
import json from ibm_watson import VisualRecognitionV3 from ibm_cloud_sdk_core.authenticators import IAMAuthenticator authenticator = IAMAuthenticator('{apikey}') visual_recognition = VisualRecognitionV3( version='2018-03-19', authenticator=authenticator ) visual_recognition.set_service_url('{url}') with open('./fruitbowl.jpg', 'rb') as images_file: classes = visual_recognition.classify( images_file=images_file, threshold='0.6', owners=["me"]).get_result() print(json.dumps(classes, indent=2))
Download example image fruitbowl.jpg
import json from ibm_watson import VisualRecognitionV3 from ibm_cloud_sdk_core.authenticators import IAMAuthenticator authenticator = IAMAuthenticator('{apikey}') visual_recognition = VisualRecognitionV3( version='2018-03-19', authenticator=authenticator ) visual_recognition.set_service_url('{url}') with open('./fruitbowl.jpg', 'rb') as images_file: classes = visual_recognition.classify( images_file=images_file, classifier_ids=["food"]).get_result() print(json.dumps(classes, indent=2))
require "json" require "ibm_watson/authenticators" require "ibm_watson/visual_recognition_v3" include IBMWatson authenticator = Authenticators::IamAuthenticator.new( apikey: "{apikey}" ) visual_recognition = VisualRecognitionV3.new( version: "2018-03-19", authenticator: authenticator ) visual_recognition.service_url = "{url}" File.open("./fruitbowl.jpg") do |images_file| classes = visual_recognition.classify( images_file: images_file, threshold: "0.6", owners: ["me"] ) puts JSON.pretty_generate(classes.result) end
Download example image fruitbowl.jpg
require "json" require "ibm_watson/authenticators" require "ibm_watson/visual_recognition_v3" include IBMWatson authenticator = Authenticators::IamAuthenticator.new( apikey: "{apikey}" ) visual_recognition = VisualRecognitionV3.new( version: "2018-03-19", authenticator: authenticator ) visual_recognition.service_url = "{url}" File.open("./fruitbowl.jpg") do |images_file| classes = visual_recognition.classify( images_file: images_file, classifier_ids: ["food"] ) puts JSON.pretty_generate(classes.result) end
let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}") let visualRecognition = VisualRecognition(version: "2018-03-19", authenticator: authenticator) visualRecognition.serviceURL = "{url}" let url = Bundle.main.url(forResource: "fruitbowl", withExtension: "jpg") let fruitbowl = try? Data(contentsOf: url!) visualRecognition.classify(imagesFile: fruitbowl, threshold: 0.6, owners: ["me"]) { response, error in guard let result = response?.result else { print(error?.localizedDescription ?? "unknown error") return } print(result) }
Download example image fruitbowl.jpg
let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}") let visualRecognition = VisualRecognition(version: "2018-03-19", authenticator: authenticator) visualRecognition.serviceURL = "{url}" let url = Bundle.main.url(forResource: "fruitbowl", withExtension: "jpg") let fruitbowl = try? Data(contentsOf: url!) visualRecognition.classify(imagesFile: fruitbowl, classifierIDs: ["food"]) { response, error in guard let result = response?.result else { print(error?.localizedDescription ?? "unknown error") return } print(result) }
var authenticator = new IamAuthenticator( apikey: "{apikey}" ); while (!authenticator.CanAuthenticate()) yield return null; var visualRecognition = new VisualRecognitionService("2018-03-19", authenticator); visualRecognition.SetServiceUrl("{url}"); ClassifiedImages classifyResponse = null; using (FileStream fs = File.OpenRead("./fruitbowl.jpg")) { using (MemoryStream ms = new MemoryStream()) { fs.CopyTo(ms); visualRecognition.Classify( callback: (DetailedResponse<ClassifiedImages> response, IBMError error) => { Log.Debug("VisualRecognitionServiceV3", "Classify result: {0}", response.Response); classifyResponse = response.Result; }, imagesFile: ms, imagesFilename: "fruitbowl.jpg", threshold: 0.6f, owners: new List<string>() { "me" } ); while (classifyResponse == null) yield return null; } }
var authenticator = new IamAuthenticator( apikey: "{apikey}" ); while (!authenticator.CanAuthenticate()) yield return null; var visualRecognition = new VisualRecognitionService("2018-03-19", authenticator); visualRecognition.SetServiceUrl("{url}"); ClassifiedImages classifyResponse = null; using (FileStream fs = File.OpenRead("./fruitbowl.jpg")) { using (MemoryStream ms = new MemoryStream()) { fs.CopyTo(ms); visualRecognition.Classify( callback: (DetailedResponse<ClassifiedImages> response, IBMError error) => { Log.Debug("VisualRecognitionServiceV3", "Classify result: {0}", response.Response); classifyResponse = response.Result; }, imagesFile: ms, imagesFilename: "fruitbowl.jpg", classifierIds: new List<string>() { "food" } ); while (classifyResponse == null) yield return null; } }
Response
Results for all images.
Classified images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Results for all images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Classified images.
Source of the image before any redirects. Not returned when the image is uploaded.
Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.
Relative path of the image file if uploaded directly. Not returned when the image is passed by URL.
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
HTTP status code.
Human-readable error description. For example, File size limit exceeded.
Codified error string. For example, limit_exceeded.
Error
The classifiers.
Name of the classifier.
ID of a classifier identified in the image.
Classes within the classifier.
Name of the class.
Class names are translated in the language defined by the Accept-Language request header for the built-in classifier IDs (default, food, and explicit). Class names of custom classifiers are not translated. The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.
Confidence score for the property in the range of 0 to 1. A higher score indicates greater likelihood that the class is depicted in the image. The default threshold for returning scores from a classifier is 0.5.
Constraints: 0 ≤ value ≤ 1
Knowledge graph of the property. For example, /fruit/pome/apple/eating apple/Granny Smith. Included only if identified.
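The type_hierarchy value is a slash-delimited path from the most general label to the most specific, so the last segment is the most specific label. A trivial sketch of splitting it (the helper name is ours, not part of any SDK):

```python
def parse_type_hierarchy(path):
    """Split a knowledge-graph path into its levels, most general first."""
    return [part for part in path.split("/") if part]

levels = parse_type_hierarchy("/fruit/pome/apple/eating apple/Granny Smith")
# levels[0] is the most general label; levels[-1] is the most specific
```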
Classes
Classifiers
Images
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Codified warning string, such as limit_reached.
Information about the error.
Warnings
Results for all images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Classified images.
Source of the image before any redirects. Not returned when the image is uploaded.
Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.
Relative path of the image file if uploaded directly. Not returned when the image is passed by URL.
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
HTTP status code.
Human-readable error description. For example, File size limit exceeded.
Codified error string. For example, limit_exceeded.
error
The classifiers.
Name of the classifier.
ID of a classifier identified in the image.
Classes within the classifier.
Name of the class.
Class names are translated in the language defined by the Accept-Language request header for the built-in classifier IDs (default, food, and explicit). Class names of custom classifiers are not translated. The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.
Confidence score for the property in the range of 0 to 1. A higher score indicates greater likelihood that the class is depicted in the image. The default threshold for returning scores from a classifier is 0.5.
Constraints: 0 ≤ value ≤ 1
Knowledge graph of the property. For example, /fruit/pome/apple/eating apple/Granny Smith. Included only if identified.
classes
classifiers
images
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Codified warning string, such as limit_reached.
Information about the error.
warnings
Results for all images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Classified images.
Source of the image before any redirects. Not returned when the image is uploaded.
Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.
Relative path of the image file if uploaded directly. Not returned when the image is passed by URL.
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
HTTP status code.
Human-readable error description. For example, File size limit exceeded.
Codified error string. For example, limit_exceeded.
error
The classifiers.
Name of the classifier.
ID of a classifier identified in the image.
Classes within the classifier.
Name of the class.
Class names are translated in the language defined by the Accept-Language request header for the built-in classifier IDs (default, food, and explicit). Class names of custom classifiers are not translated. The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.
Confidence score for the property in the range of 0 to 1. A higher score indicates greater likelihood that the class is depicted in the image. The default threshold for returning scores from a classifier is 0.5.
Constraints: 0 ≤ value ≤ 1
Knowledge graph of the property. For example, /fruit/pome/apple/eating apple/Granny Smith. Included only if identified.
classes
classifiers
images
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Codified warning string, such as limit_reached.
Information about the error.
warnings
Results for all images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Classified images.
Source of the image before any redirects. Not returned when the image is uploaded.
Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.
Relative path of the image file if uploaded directly. Not returned when the image is passed by URL.
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
HTTP status code.
Human-readable error description. For example, File size limit exceeded.
Codified error string. For example, limit_exceeded.
error
The classifiers.
Name of the classifier.
ID of a classifier identified in the image.
Classes within the classifier.
Name of the class.
Class names are translated in the language defined by the Accept-Language request header for the built-in classifier IDs (default, food, and explicit). Class names of custom classifiers are not translated. The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.
Confidence score for the property in the range of 0 to 1. A higher score indicates greater likelihood that the class is depicted in the image. The default threshold for returning scores from a classifier is 0.5.
Constraints: 0 ≤ value ≤ 1
Knowledge graph of the property. For example, /fruit/pome/apple/eating apple/Granny Smith. Included only if identified.
classes
classifiers
images
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Codified warning string, such as limit_reached.
Information about the error.
warnings
Results for all images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Classified images.
Source of the image before any redirects. Not returned when the image is uploaded.
Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.
Relative path of the image file if uploaded directly. Not returned when the image is passed by URL.
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
HTTP status code.
Human-readable error description. For example, File size limit exceeded.
Codified error string. For example, limit_exceeded.
error
The classifiers.
Name of the classifier.
ID of a classifier identified in the image.
Classes within the classifier.
Name of the class.
Class names are translated in the language defined by the Accept-Language request header for the built-in classifier IDs (default, food, and explicit). Class names of custom classifiers are not translated. The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.
Confidence score for the property in the range of 0 to 1. A higher score indicates greater likelihood that the class is depicted in the image. The default threshold for returning scores from a classifier is 0.5.
Constraints: 0 ≤ value ≤ 1
Knowledge graph of the property. For example, /fruit/pome/apple/eating apple/Granny Smith. Included only if identified.
classes
classifiers
images
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Codified warning string, such as limit_reached.
Information about the error.
warnings
Results for all images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Classified images.
Source of the image before any redirects. Not returned when the image is uploaded.
Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.
Relative path of the image file if uploaded directly. Not returned when the image is passed by URL.
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
HTTP status code.
Human-readable error description. For example, File size limit exceeded.
Codified error string. For example, limit_exceeded.
error
The classifiers.
Name of the classifier.
ID of a classifier identified in the image.
Classes within the classifier.
Name of the class.
Class names are translated in the language defined by the Accept-Language request header for the built-in classifier IDs (default, food, and explicit). Class names of custom classifiers are not translated. The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.
Confidence score for the property in the range of 0 to 1. A higher score indicates greater likelihood that the class is depicted in the image. The default threshold for returning scores from a classifier is 0.5.
Constraints: 0 ≤ value ≤ 1
Knowledge graph of the property. For example, /fruit/pome/apple/eating apple/Granny Smith. Included only if identified.
classes
classifiers
images
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Codified warning string, such as limit_reached.
Information about the error.
warnings
Results for all images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Classified images.
Source of the image before any redirects. Not returned when the image is uploaded.
Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.
Relative path of the image file if uploaded directly. Not returned when the image is passed by URL.
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
HTTP status code.
Human-readable error description. For example, File size limit exceeded.
Codified error string. For example, limit_exceeded.
Error
The classifiers.
Name of the classifier.
ID of a classifier identified in the image.
Classes within the classifier.
Name of the class.
Class names are translated in the language defined by the Accept-Language request header for the built-in classifier IDs (default, food, and explicit). Class names of custom classifiers are not translated. The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.
Confidence score for the property in the range of 0 to 1. A higher score indicates greater likelihood that the class is depicted in the image. The default threshold for returning scores from a classifier is 0.5.
Constraints: 0 ≤ value ≤ 1
Knowledge graph of the property. For example, /fruit/pome/apple/eating apple/Granny Smith. Included only if identified.
Classes
Classifiers
Images
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Codified warning string, such as limit_reached.
Information about the error.
Warnings
Results for all images.
Number of custom classes identified in the images.
Number of images processed for the API call.
Classified images.
Source of the image before any redirects. Not returned when the image is uploaded.
Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.
Relative path of the image file if uploaded directly. Not returned when the image is passed by URL.
Information about what might have caused a failure, such as an image that is too large. Not returned when there is no error.
HTTP status code.
Human-readable error description. For example, File size limit exceeded.
Codified error string. For example, limit_exceeded.
Error
The classifiers.
Name of the classifier.
ID of a classifier identified in the image.
Classes within the classifier.
Name of the class.
Class names are translated in the language defined by the Accept-Language request header for the built-in classifier IDs (default, food, and explicit). Class names of custom classifiers are not translated. The response might not be in the specified language when the requested language is not supported or when there is no translation for the class name.
Confidence score for the property in the range of 0 to 1. A higher score indicates greater likelihood that the class is depicted in the image. The default threshold for returning scores from a classifier is 0.5.
Constraints: 0 ≤ value ≤ 1
Knowledge graph of the property. For example, /fruit/pome/apple/eating apple/Granny Smith. Included only if identified.
Classes
Classifiers
Images
Information about what might cause less than optimal output. For example, a request sent with a corrupt .zip file and a list of image URLs will still complete, but does not return the expected output. Not returned when there is no warning.
Codified warning string, such as limit_reached.
Information about the error.
Warnings
Status Code
success
Invalid request due to user input, for example:
- Bad JSON input
- Bad query parameter or header
- Invalid output language
- No input images
- The size of the image file in the request is larger than the maximum supported size
- Corrupt .zip file
No API key or the key is not valid.
The .zip file is too large.
{ "images": [ { "classifiers": [ { "classifier_id": "roundPlusBanana_1758279329", "name": "roundPlusBanana", "classes": [ { "class": "fruit", "score": 0.788 }, { "class": "olive color", "score": 0.973 }, { "class": "lemon yellow color", "score": 0.789 } ] } ], "image": "fruitbowl.jpg" } ], "images_processed": 1, "custom_classes": 6 }
{ "images": [ { "classifiers": [ { "classifier_id": "roundPlusBanana_1758279329", "name": "roundPlusBanana", "classes": [ { "class": "fruit", "score": 0.788 }, { "class": "olive color", "score": 0.973 }, { "class": "lemon yellow color", "score": 0.789 } ] } ], "image": "fruitbowl.jpg" } ], "images_processed": 1, "custom_classes": 6 }
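A successful response nests classes inside classifiers inside images, so a common first step is walking that structure to pull out the top-scoring class per classifier. A minimal sketch over the example response above, using only the standard library:

```python
import json

# The example classify response shown above
response_json = """
{ "images": [ { "classifiers": [ { "classifier_id": "roundPlusBanana_1758279329",
  "name": "roundPlusBanana", "classes": [
    { "class": "fruit", "score": 0.788 },
    { "class": "olive color", "score": 0.973 },
    { "class": "lemon yellow color", "score": 0.789 } ] } ],
  "image": "fruitbowl.jpg" } ],
  "images_processed": 1, "custom_classes": 6 }
"""

result = json.loads(response_json)
for image in result["images"]:
    for clf in image["classifiers"]:
        # Highest-confidence class for this classifier
        top = max(clf["classes"], key=lambda c: c["score"])
        print(f'{image["image"]}: {clf["name"]} -> {top["class"]} ({top["score"]})')
```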
{
  "code": "400",
  "error": "Error: Too many images in collection"
}
{
  "code": "401",
  "error": "Unauthorized"
}
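The error bodies above share one simple shape, a JSON object with "code" and "error" fields, so a client can branch on them uniformly. A hedged sketch; the helper name is illustrative and not part of any SDK, and the 413 branch for the oversized-.zip case is an assumption about the status code rather than something this section states:

```python
import json

def describe_error(status_code, body_text):
    """Turn a Visual Recognition error response into a short message.
    Falls back gracefully when the body is not the documented JSON shape."""
    try:
        detail = json.loads(body_text).get("error", "unknown error")
    except (ValueError, TypeError):
        detail = body_text or "unknown error"
    if status_code == 401:
        return f"authentication failed: {detail}"
    if status_code == 400:
        return f"invalid request: {detail}"
    if status_code == 413:  # assumed code for "The .zip file is too large"
        return f"payload too large: {detail}"
    return f"HTTP {status_code}: {detail}"

print(describe_error(400, '{"code": "400", "error": "Error: Too many images in collection"}'))
```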
Create a classifier
Train a new multi-faceted classifier on the uploaded image data. Create your custom classifier with positive or negative example training images. Include at least two sets of examples, either two positive example files or one positive and one negative file. You can upload a maximum of 256 MB per call.
Tips when creating:
- If you set the X-Watson-Learning-Opt-Out header parameter to true when you create a classifier, the example training images are not stored. Save your training images locally. For more information, see Data collection.
- Encode all names in UTF-8 if they contain non-ASCII characters (.zip and image file names, and classifier and class names). The service assumes UTF-8 encoding if it encounters non-ASCII characters.
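The opt-out behavior in the first tip is controlled by a request header. As a sketch of what any HTTP client would send (the helper name is illustrative; SDKs typically expose a way to set default headers instead):

```python
def with_opt_out(headers=None):
    """Return request headers including X-Watson-Learning-Opt-Out,
    so example training images are not stored by the service."""
    merged = dict(headers or {})
    merged["X-Watson-Learning-Opt-Out"] = "true"
    return merged

print(with_opt_out({"Accept": "application/json"}))
```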
POST /v3/classifiers
(visualRecognition *VisualRecognitionV3) CreateClassifier(createClassifierOptions *CreateClassifierOptions) (result *Classifier, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV3) CreateClassifierWithContext(ctx context.Context, createClassifierOptions *CreateClassifierOptions) (result *Classifier, response *core.DetailedResponse, err error)
ServiceCall<Classifier> createClassifier(CreateClassifierOptions createClassifierOptions)
createClassifier(params)
create_classifier(self,
name: str,
positive_examples: Dict[str, BinaryIO],
*,
negative_examples: BinaryIO = None,
negative_examples_filename: str = None,
**kwargs
) -> DetailedResponse
create_classifier(name:, positive_examples:, negative_examples: nil, negative_examples_filename: nil)
func createClassifier(
name: String,
positiveExamples: [String: Data],
negativeExamples: Data? = nil,
negativeExamplesFilename: String? = nil,
headers: [String: String]? = nil,
completionHandler: @escaping (WatsonResponse<Classifier>?, WatsonError?) -> Void)
CreateClassifier(string name, Dictionary<string, System.IO.MemoryStream> positiveExamples, System.IO.MemoryStream negativeExamples = null, string negativeExamplesFilename = null)
CreateClassifier(Callback<Classifier> callback, string name, Dictionary<string, System.IO.MemoryStream> positiveExamples, System.IO.MemoryStream negativeExamples = null, string negativeExamplesFilename = null)
Request
Instantiate the CreateClassifierOptions
struct and set the fields to provide parameter values for the CreateClassifier
method.
Use the CreateClassifierOptions.Builder
to create a CreateClassifierOptions
object that contains the parameter values for the createClassifier
method.
Query Parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
Form Parameters
The name of the new classifier. Encode special characters in UTF-8.
A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
^(?![-"$'*/[\\]`{|}])[\p{L}\p{N}#%&()+,:;<=>?@^~]*$
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
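The class-name rule above can be checked client-side before uploading. A minimal sketch; the helper names are illustrative, and the check mirrors the documented forbidden-character list rather than the full regular expression:

```python
# Characters the documentation forbids in class names: $ * - { } \ | / ' " ` [ ]
FORBIDDEN = set('$*-{}\\|/\'"`[]')

def is_valid_class_name(name):
    """Reject class names containing characters the service disallows."""
    return bool(name) and not (set(name) & FORBIDDEN)

def positive_examples_field(class_name):
    """Build the multipart field name for a class's positive-example .zip."""
    if not is_valid_class_name(class_name):
        raise ValueError(f"invalid class name: {class_name!r}")
    return class_name + "_positive_examples"

print(positive_examples_field("goldenretriever"))  # goldenretriever_positive_examples
```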
WithContext method only
A context.Context instance that you can use to specify a timeout for the operation or to cancel an in-flight request.
The CreateClassifier options.
The name of the new classifier. Encode special characters in UTF-8.
A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
/^(?![-\"$'*\/[\\\\]`{|}])[\\p{L}\\p{N}#%&()+,:;<=>?@^~]*$/
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
The filename for negativeExamples.
The createClassifier options.
The name of the new classifier. Encode special characters in UTF-8.
A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
/^(?![-\"$'*\/[\\\\]`{|}])[\\p{L}\\p{N}#%&()+,:;<=>?@^~]*$/
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
The filename for negativeExamples.
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
The name of the new classifier. Encode special characters in UTF-8.
A dictionary that contains the value for each classname. The value is a .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
/^(?![-\"$'*\/[\\\\]`{|}])[\\p{L}\\p{N}#%&()+,:;<=>?@^~]*$/
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
The filename for negativeExamples.
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
The name of the new classifier. Encode special characters in UTF-8.
A dictionary that contains the value for each classname. The value is a .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
/^(?![-\"$'*\/[\\\\]`{|}])[\\p{L}\\p{N}#%&()+,:;<=>?@^~]*$/
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
The filename for negative_examples.
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
The name of the new classifier. Encode special characters in UTF-8.
A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
/^(?![-\"$'*\/[\\\\]`{|}])[\\p{L}\\p{N}#%&()+,:;<=>?@^~]*$/
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
The filename for negative_examples.
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
The name of the new classifier. Encode special characters in UTF-8.
A dictionary that contains the value for each classname. The value is a .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
/^(?![-\"$'*\/[\\\\]`{|}])[\\p{L}\\p{N}#%&()+,:;<=>?@^~]*$/
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
The filename for negativeExamples.
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
The name of the new classifier. Encode special characters in UTF-8.
A dictionary that contains the value for each classname. The value is a .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
/^(?![-\"$'*\/[\\\\]`{|}])[\\p{L}\\p{N}#%&()+,:;<=>?@^~]*$/
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
The filename for negativeExamples.
parameters
Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
The name of the new classifier. Encode special characters in UTF-8.
A dictionary that contains the value for each classname. The value is a .zip file of images that depict the visual subject of a class in the new classifier. You can include more than one positive example file in a call.
Specify the parameter name by appending _positive_examples to the class name. For example, goldenretriever_positive_examples creates the class goldenretriever. The string cannot contain the following characters: $ * - { } \ | / ' " ` [ ].
Include at least 10 images in .jpg or .png format. The minimum recommended image resolution is 32x32 pixels. The maximum number of images is 10,000 images or 100 MB per .zip file.
Encode special characters in the file name in UTF-8.
Constraints: Value must match regular expression
/^(?![-\"$'*\/[\\\\]`{|}])[\\p{L}\\p{N}#%&()+,:;<=>?@^~]*$/
A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must contain a minimum of 10 images.
Encode special characters in the file name in UTF-8.
The filename for negativeExamples.
curl -X POST -u "apikey:{apikey}" -F "beagle_positive_examples=@beagle.zip" -F "goldenretriever_positive_examples=@golden-retriever.zip" -F "husky_positive_examples=@husky.zip" -F "negative_examples=@cats.zip" -F "name=dogs" "{url}/v3/classifiers?version=2018-03-19"
Download positive examples files beagle.zip, golden-retriever.zip, husky.zip
Download negative examples file cats.zip
IamAuthenticator authenticator = new IamAuthenticator(
    apikey: "{apikey}"
    );

VisualRecognitionService visualRecognition = new VisualRecognitionService("2018-03-19", authenticator);
visualRecognition.SetServiceUrl("{url}");

DetailedResponse<Classifier> result = null;
using (FileStream beagle = File.OpenRead("./beagle.zip"),
    goldenRetriever = File.OpenRead("./golden-retriever.zip"),
    husky = File.OpenRead("./husky.zip"),
    cats = File.OpenRead("./cats.zip"))
{
    using (MemoryStream beagleMemoryStream = new MemoryStream(),
        goldenRetrieverMemoryStream = new MemoryStream(),
        huskyMemoryStream = new MemoryStream(),
        catsMemoryStream = new MemoryStream())
    {
        beagle.CopyTo(beagleMemoryStream);
        goldenRetriever.CopyTo(goldenRetrieverMemoryStream);
        husky.CopyTo(huskyMemoryStream);
        cats.CopyTo(catsMemoryStream);

        Dictionary<string, MemoryStream> positiveExamples = new Dictionary<string, MemoryStream>();
        positiveExamples.Add("beagle_positive_examples", beagleMemoryStream);
        positiveExamples.Add("goldenretriever_positive_examples", goldenRetrieverMemoryStream);
        positiveExamples.Add("husky_positive_examples", huskyMemoryStream);

        result = visualRecognition.CreateClassifier(
            name: "dogs",
            positiveExamples: positiveExamples,
            negativeExamples: catsMemoryStream
            );
    }
}

Console.WriteLine(result.Response);
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"

	"github.com/IBM/go-sdk-core/core"
	"github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"
)

func main() {
	authenticator := &core.IamAuthenticator{
		ApiKey: "{apikey}",
	}
	options := &visualrecognitionv3.VisualRecognitionV3Options{
		Version:       "2018-03-19",
		Authenticator: authenticator,
	}
	visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options)
	if visualRecognitionErr != nil {
		panic(visualRecognitionErr)
	}
	visualRecognition.SetServiceURL("{url}")

	beagle, beagleErr := os.Open("./beagle.zip")
	if beagleErr != nil {
		panic(beagleErr)
	}
	defer beagle.Close()

	goldenRetriever, goldenRetrieverErr := os.Open("./golden-retriever.zip")
	if goldenRetrieverErr != nil {
		panic(goldenRetrieverErr)
	}
	defer goldenRetriever.Close()

	husky, huskyErr := os.Open("./husky.zip")
	if huskyErr != nil {
		panic(huskyErr)
	}
	defer husky.Close()

	cats, catsErr := os.Open("./cats.zip")
	if catsErr != nil {
		panic(catsErr)
	}
	defer cats.Close()

	positiveExamples := make(map[string]io.ReadCloser)
	positiveExamples["beagle"] = beagle
	positiveExamples["goldenretriever"] = goldenRetriever
	positiveExamples["husky"] = husky

	result, _, responseErr := visualRecognition.CreateClassifier(
		&visualrecognitionv3.CreateClassifierOptions{
			Name:             core.StringPtr("dogs"),
			PositiveExamples: positiveExamples,
			NegativeExamples: cats,
		},
	)
	if responseErr != nil {
		panic(responseErr)
	}
	b, _ := json.MarshalIndent(result, "", "  ")
	fmt.Println(string(b))
}
IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("2018-03-19", authenticator);
visualRecognition.setServiceUrl("{url}");

CreateClassifierOptions createClassifierOptions = new CreateClassifierOptions.Builder()
    .name("dogs")
    .addPositiveExamples("beagle", new FileInputStream("./beagle.zip"))
    .addPositiveExamples("goldenretriever", new FileInputStream("./golden-retriever.zip"))
    .addPositiveExamples("husky", new FileInputStream("./husky.zip"))
    .negativeExamples(new FileInputStream("./cats.zip"))
    .negativeExamplesFilename("cats")
    .build();

Classifier dogs = visualRecognition.createClassifier(createClassifierOptions).execute().getResult();
System.out.println(dogs);
const fs = require('fs');
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');

const visualRecognition = new VisualRecognitionV3({
  version: '2018-03-19',
  authenticator: new IamAuthenticator({
    apikey: '{apikey}',
  }),
  serviceUrl: '{url}',
});

const createClassifierParams = {
  name: 'dogs',
  negativeExamples: fs.createReadStream('./cats.zip'),
  positiveExamples: {
    beagle: fs.createReadStream('./beagle.zip'),
    husky: fs.createReadStream('./husky.zip'),
    goldenretriever: fs.createReadStream('./golden-retriever.zip'),
  },
};

visualRecognition.createClassifier(createClassifierParams)
  .then(response => {
    const classifier = response.result;
    console.log(JSON.stringify(classifier, null, 2));
  })
  .catch(err => {
    console.log('error:', err);
  });
import json
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV3(
    version='2018-03-19',
    authenticator=authenticator
)
visual_recognition.set_service_url('{url}')

with open('./beagle.zip', 'rb') as beagle, open(
        './golden-retriever.zip', 'rb') as goldenretriever, open(
        './husky.zip', 'rb') as husky, open(
        './cats.zip', 'rb') as cats:
    model = visual_recognition.create_classifier(
        'dogs',
        positive_examples={'beagle': beagle,
                           'goldenretriever': goldenretriever,
                           'husky': husky},
        negative_examples=cats).get_result()
print(json.dumps(model, indent=2))
require "json"
require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v3"
include IBMWatson

authenticator = Authenticators::IamAuthenticator.new(
  apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV3.new(
  version: "2018-03-19",
  authenticator: authenticator
)
visual_recognition.service_url = "{url}"

beagle = File.open("./beagle.zip")
goldenretriever = File.open("./golden-retriever.zip")
husky = File.open("./husky.zip")
cats = File.open("./cats.zip")

model = visual_recognition.create_classifier(
  name: "dogs",
  positive_examples: {
    beagle: beagle,
    goldenretriever: goldenretriever,
    husky: husky
  },
  negative_examples: cats,
  negative_examples_filename: "cats"
)

beagle.close
goldenretriever.close
husky.close
cats.close
puts JSON.pretty_generate(model.result)
let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "2018-03-19", authenticator: authenticator)
visualRecognition.serviceURL = "{url}"

let beagleURL = Bundle.main.url(forResource: "beagle", withExtension: "zip")
let beagle = try! Data(contentsOf: beagleURL!)
let goldenRetrieverURL = Bundle.main.url(forResource: "golden-retriever", withExtension: "zip")
let goldenRetriever = try! Data(contentsOf: goldenRetrieverURL!)
let huskyURL = Bundle.main.url(forResource: "husky", withExtension: "zip")
let husky = try! Data(contentsOf: huskyURL!)
let catsURL = Bundle.main.url(forResource: "cats", withExtension: "zip")
let cats = try! Data(contentsOf: catsURL!)

visualRecognition.createClassifier(
    name: "dogs",
    positiveExamples: [
        "beagle": beagle,
        "goldenretriever": goldenRetriever,
        "husky": husky
    ],
    negativeExamples: cats) { response, error in

    guard let classifier = response?.result else {
        print(error?.localizedDescription ?? "unknown error")
        return
    }
    print(classifier)
}
var authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

while (!authenticator.CanAuthenticate())
    yield return null;

var visualRecognition = new VisualRecognitionService("2018-03-19", authenticator);
visualRecognition.SetServiceUrl("{url}");

Classifier createClassifierResponse = null;
using (FileStream beagle = File.OpenRead("./beagle.zip"),
    goldenRetriever = File.OpenRead("./golden-retriever.zip"),
    husky = File.OpenRead("./husky.zip"),
    cats = File.OpenRead("./cats.zip"))
{
    using (MemoryStream beagleMemoryStream = new MemoryStream(),
        goldenRetrieverMemoryStream = new MemoryStream(),
        huskyMemoryStream = new MemoryStream(),
        catsMemoryStream = new MemoryStream())
    {
        beagle.CopyTo(beagleMemoryStream);
        goldenRetriever.CopyTo(goldenRetrieverMemoryStream);
        husky.CopyTo(huskyMemoryStream);
        cats.CopyTo(catsMemoryStream);

        Dictionary<string, MemoryStream> positiveExamples = new Dictionary<string, MemoryStream>();
        positiveExamples.Add("beagle_positive_examples", beagleMemoryStream);
        positiveExamples.Add("goldenretriever_positive_examples", goldenRetrieverMemoryStream);
        positiveExamples.Add("husky_positive_examples", huskyMemoryStream);

        visualRecognition.CreateClassifier(
            callback: (DetailedResponse<Classifier> response, IBMError error) =>
            {
                Log.Debug("VisualRecognitionServiceV3", "CreateClassifier result: {0}", response.Response);
                createClassifierResponse = response.Result;
                classifierId = createClassifierResponse.ClassifierId;
            },
            name: "dogs",
            positiveExamples: positiveExamples,
            negativeExamples: catsMemoryStream
        );
    }
}

while (createClassifierResponse == null)
    yield return null;
Response
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account that owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [ready, training, retraining, failed]
Whether the classifier can be downloaded as a Core ML model after the training status is ready.
If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to updated and retained for backward compatibility.
Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either retrained or created. Might not be returned by some requests.
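Because the status field moves from training to a terminal value (ready or failed), callers typically poll the classifier until training settles. A sketch of the polling loop only, with the HTTP fetch abstracted as a callable so the loop is independent of any SDK; the function and parameter names are illustrative:

```python
import time

TERMINAL = {"ready", "failed"}

def wait_for_training(get_status, interval=10.0, max_checks=60):
    """Poll a status-returning callable until training settles.

    get_status() should return the classifier's current status string,
    e.g. by fetching the classifier and reading its "status" field.
    Returns the final status, or raises TimeoutError if the classifier
    is still training after max_checks polls.
    """
    for _ in range(max_checks):
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)
    raise TimeoutError("classifier did not finish training in time")

# Simulated run: two "training" responses, then "ready".
responses = iter(["training", "training", "ready"])
print(wait_for_training(lambda: next(responses), interval=0.0))  # ready
```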
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account that owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [ready, training, retraining, failed]
Whether the classifier can be downloaded as a Core ML model after the training status is ready.
If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
Classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to updated and retained for backward compatibility.
Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either retrained or created. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account that owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [ready, training, retraining, failed]
Whether the classifier can be downloaded as a Core ML model after the training status is ready.
If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to updated and retained for backward compatibility.
Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either retrained or created. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account that owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [ready, training, retraining, failed]
Whether the classifier can be downloaded as a Core ML model after the training status is ready.
If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to updated and retained for backward compatibility.
Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either retrained or created. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to
updated
and retained for backward compatibility.Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either
retrained
orcreated
. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to
updated
and retained for backward compatibility.Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either
retrained
orcreated
. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to
updated
and retained for backward compatibility.Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either
retrained
orcreated
. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
Classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to
updated
and retained for backward compatibility.Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either
retrained
orcreated
. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
Classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to
updated
and retained for backward compatibility.Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either
retrained
orcreated
. Might not be returned by some requests.
Status Code
success
Invalid request due to user input, for example:
- Bad query parameter or header
- No input images
- The size of the image file in the request is larger than the maximum supported size
- Corrupt .zip file
- Cannot find the classifier
No API key or the key is not valid.
The .zip file is too large.
{
  "classifier_id": "dogs_1477088859",
  "name": "dogs",
  "status": "training",
  "owner": "b2a3c43c-f1ef-4186-a3d3-71073e4142c5",
  "created": "2018-03-17T19:01:30.536Z",
  "updated": "2018-03-17T19:01:30.536Z",
  "classes": [
    { "class": "husky" },
    { "class": "goldenretriever" },
    { "class": "beagle" }
  ],
  "core_ml_enabled": true
}
{ "code": "400", "error": "Error: Too many images in collection" }
{ "code": "401", "error": "Unauthorized" }
Retrieve a list of classifiers
GET /v3/classifiers
(visualRecognition *VisualRecognitionV3) ListClassifiers(listClassifiersOptions *ListClassifiersOptions) (result *Classifiers, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV3) ListClassifiersWithContext(ctx context.Context, listClassifiersOptions *ListClassifiersOptions) (result *Classifiers, response *core.DetailedResponse, err error)
ServiceCall<Classifiers> listClassifiers(ListClassifiersOptions listClassifiersOptions)
listClassifiers(params)
list_classifiers(self,
*,
verbose: bool = None,
**kwargs
) -> DetailedResponse
list_classifiers(verbose: nil)
func listClassifiers(
verbose: Bool? = nil,
headers: [String: String]? = nil,
completionHandler: @escaping (WatsonResponse<Classifiers>?, WatsonError?) -> Void)
ListClassifiers(bool? verbose = null)
ListClassifiers(Callback<Classifiers> callback, bool? verbose = null)
Request
Instantiate the ListClassifiersOptions
struct and set the fields to provide parameter values for the ListClassifiers
method.
Use the ListClassifiersOptions.Builder
to create a ListClassifiersOptions
object that contains the parameter values for the listClassifiers
method.
Query Parameters
version: Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
verbose: Specify true to return details about the classifiers. Omit this parameter to return a brief list of classifiers.
WithContext method only
A context.Context instance that you can use to specify a timeout for the operation or to cancel an in-flight request.
The ListClassifiers options.
Specify true to return details about the classifiers. Omit this parameter to return a brief list of classifiers.
The listClassifiers options.
Specify true to return details about the classifiers. Omit this parameter to return a brief list of classifiers.
parameters
version: Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
verbose: Specify true to return details about the classifiers. Omit this parameter to return a brief list of classifiers.
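For clients that call the REST endpoint directly rather than through an SDK, the two query parameters combine into the request URL as in the curl example that follows. A small Python sketch of that URL construction; the base_url argument is a placeholder for your service URL:

```python
from urllib.parse import urlencode

def list_classifiers_url(base_url, version="2018-03-19", verbose=None):
    """Build the GET /v3/classifiers URL. The required version parameter
    is always sent; verbose is included only when explicitly set."""
    params = {"version": version}
    if verbose is not None:
        params["verbose"] = "true" if verbose else "false"
    return f"{base_url}/v3/classifiers?{urlencode(params)}"
```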
curl -u "apikey:{apikey}" "{url}/v3/classifiers?verbose=true&version=2018-03-19"
IamAuthenticator authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

VisualRecognitionService visualRecognition = new VisualRecognitionService("2018-03-19", authenticator);
visualRecognition.SetServiceUrl("{url}");

var result = visualRecognition.ListClassifiers(
    verbose: true
);

Console.WriteLine(result.Response);
package main

import (
    "encoding/json"
    "fmt"

    "github.com/IBM/go-sdk-core/core"
    "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"
)

func main() {
    authenticator := &core.IamAuthenticator{
        ApiKey: "{apikey}",
    }

    options := &visualrecognitionv3.VisualRecognitionV3Options{
        Version:       "2018-03-19",
        Authenticator: authenticator,
    }

    visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options)
    if visualRecognitionErr != nil {
        panic(visualRecognitionErr)
    }

    visualRecognition.SetServiceURL("{url}")

    result, _, responseErr := visualRecognition.ListClassifiers(
        &visualrecognitionv3.ListClassifiersOptions{
            Verbose: core.BoolPtr(true),
        },
    )
    if responseErr != nil {
        panic(responseErr)
    }

    b, _ := json.MarshalIndent(result, "", "  ")
    fmt.Println(string(b))
}
IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("2018-03-19", authenticator);
visualRecognition.setServiceUrl("{url}");

ListClassifiersOptions listClassifiersOptions = new ListClassifiersOptions.Builder()
    .verbose(true)
    .build();

Classifiers classifiers = visualRecognition.listClassifiers(listClassifiersOptions).execute().getResult();
System.out.println(classifiers);
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');

const visualRecognition = new VisualRecognitionV3({
  version: '2018-03-19',
  authenticator: new IamAuthenticator({
    apikey: '{apikey}',
  }),
  serviceUrl: '{url}',
});

const listClassifiersParams = {
  verbose: true,
};

visualRecognition.listClassifiers(listClassifiersParams)
  .then(response => {
    const classifiers = response.result;
    console.log(JSON.stringify(classifiers, null, 2));
  })
  .catch(err => {
    console.log('error:', err);
  });
import json
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV3(
    version='2018-03-19',
    authenticator=authenticator
)
visual_recognition.set_service_url('{url}')

classifiers = visual_recognition.list_classifiers(verbose=True).get_result()
print(json.dumps(classifiers, indent=2))
require "json"
require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v3"
include IBMWatson

authenticator = Authenticators::IamAuthenticator.new(
  apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV3.new(
  version: "2018-03-19",
  authenticator: authenticator
)
visual_recognition.service_url = "{url}"

classifiers = visual_recognition.list_classifiers(
  verbose: true
)
puts JSON.pretty_generate(classifiers.result)
let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "2018-03-19", authenticator: authenticator)
visualRecognition.serviceURL = "{url}"

visualRecognition.listClassifiers(verbose: true) { response, error in
    guard let classifiers = response?.result else {
        print(error?.localizedDescription ?? "unknown error")
        return
    }
    print(classifiers)
}
var authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

while (!authenticator.CanAuthenticate())
    yield return null;

var visualRecognition = new VisualRecognitionService("2018-03-19", authenticator);
visualRecognition.SetServiceUrl("{url}");

Classifiers listClassifiersResponse = null;
visualRecognition.ListClassifiers(
    callback: (DetailedResponse<Classifiers> response, IBMError error) =>
    {
        Log.Debug("VisualRecognitionServiceV3", "ListClassifiers result: {0}", response.Response);
        listClassifiersResponse = response.Result;
    },
    verbose: true
);

while (listClassifiersResponse == null)
    yield return null;
Response
A container for the list of classifiers.
List of classifiers. Each classifier includes the following information.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account that owns the classifier. Might not be returned by some requests.
Training status of the classifier. Possible values: [ready, training, retraining, failed].
Whether the classifier can be downloaded as a Core ML model after the training status is ready.
If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier. Each class includes the name of the class.
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to updated and retained for backward compatibility.
Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either retrained or created. Might not be returned by some requests.
Status Code
success
Invalid request due to user input, such as a bad parameter.
No API key or the key is not valid.
{
  "classifiers": [
    {
      "classifier_id": "dogs_1477088859",
      "name": "dogs",
      "status": "ready",
      "owner": "99d0114d-9959-4071-b06f-654701909be4",
      "created": "2018-03-17T19:01:30.536Z",
      "updated": "2018-03-17T19:42:19.906Z",
      "classes": [
        { "class": "husky" },
        { "class": "goldenretriever" },
        { "class": "beagle" }
      ],
      "core_ml_enabled": true
    },
    {
      "classifier_id": "CarsvsTrucks_1479118188",
      "name": "Cars vs Trucks",
      "status": "ready",
      "owner": "99d0114d-9959-4071-b06f-654701909be4",
      "created": "2016-07-19T15:24:08.743Z",
      "updated": "2016-07-19T15:24:08.743Z",
      "classes": [
        { "class": "cars" }
      ],
      "core_ml_enabled": false
    }
  ]
}
{ "code": "400", "error": "Error: Too many images in collection" }
{ "code": "401", "error": "Unauthorized" }
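Because core_ml_enabled and status are reported per classifier, a verbose listing can be filtered for models that are trained and downloadable as Core ML. A sketch over the response dict shape shown in the example above:

```python
def core_ml_ready(list_response):
    """Return IDs of classifiers that are trained ("ready") and flagged as
    downloadable Core ML models, given a list_classifiers response dict."""
    return [
        c["classifier_id"]
        for c in list_response.get("classifiers", [])
        if c.get("status") == "ready" and c.get("core_ml_enabled")
    ]
```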
Retrieve classifier details
Retrieve information about a custom classifier.
GET /v3/classifiers/{classifier_id}
(visualRecognition *VisualRecognitionV3) GetClassifier(getClassifierOptions *GetClassifierOptions) (result *Classifier, response *core.DetailedResponse, err error)
(visualRecognition *VisualRecognitionV3) GetClassifierWithContext(ctx context.Context, getClassifierOptions *GetClassifierOptions) (result *Classifier, response *core.DetailedResponse, err error)
ServiceCall<Classifier> getClassifier(GetClassifierOptions getClassifierOptions)
getClassifier(params)
get_classifier(self,
classifier_id: str,
**kwargs
) -> DetailedResponse
get_classifier(classifier_id:)
func getClassifier(
classifierID: String,
headers: [String: String]? = nil,
completionHandler: @escaping (WatsonResponse<Classifier>?, WatsonError?) -> Void)
GetClassifier(string classifierId)
GetClassifier(Callback<Classifier> callback, string classifierId)
Request
Instantiate the GetClassifierOptions
struct and set the fields to provide parameter values for the GetClassifier
method.
Use the GetClassifierOptions.Builder
to create a GetClassifierOptions
object that contains the parameter values for the getClassifier
method.
Path Parameters
The ID of the classifier.
Query Parameters
version: Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
WithContext method only
A context.Context instance that you can use to specify a timeout for the operation or to cancel an in-flight request.
The GetClassifier options.
The ID of the classifier.
The getClassifier options.
The ID of the classifier.
parameters
version: Release date of the API version you want to use. Specify dates in YYYY-MM-DD format. The current version is 2018-03-19.
classifier_id: The ID of the classifier.
curl -u "apikey:{apikey}" "{url}/v3/classifiers/dogs_1477088859?version=2018-03-19"
IamAuthenticator authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

VisualRecognitionService visualRecognition = new VisualRecognitionService("2018-03-19", authenticator);
visualRecognition.SetServiceUrl("{url}");

var result = visualRecognition.GetClassifier(
    classifierId: "dogs_1477088859"
);

Console.WriteLine(result.Response);
package main

import (
    "encoding/json"
    "fmt"

    "github.com/IBM/go-sdk-core/core"
    "github.com/watson-developer-cloud/go-sdk/visualrecognitionv3"
)

func main() {
    authenticator := &core.IamAuthenticator{
        ApiKey: "{apikey}",
    }

    options := &visualrecognitionv3.VisualRecognitionV3Options{
        Version:       "2018-03-19",
        Authenticator: authenticator,
    }

    visualRecognition, visualRecognitionErr := visualrecognitionv3.NewVisualRecognitionV3(options)
    if visualRecognitionErr != nil {
        panic(visualRecognitionErr)
    }

    visualRecognition.SetServiceURL("{url}")

    result, _, responseErr := visualRecognition.GetClassifier(
        &visualrecognitionv3.GetClassifierOptions{
            ClassifierID: core.StringPtr("dogs_1477088859"),
        },
    )
    if responseErr != nil {
        panic(responseErr)
    }

    b, _ := json.MarshalIndent(result, "", "  ")
    fmt.Println(string(b))
}
IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
VisualRecognition visualRecognition = new VisualRecognition("2018-03-19", authenticator);
visualRecognition.setServiceUrl("{url}");

GetClassifierOptions getClassifierOptions = new GetClassifierOptions.Builder("dogs_1477088859").build();

Classifier classifier = visualRecognition.getClassifier(getClassifierOptions).execute().getResult();
System.out.println(classifier);
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');

const visualRecognition = new VisualRecognitionV3({
  version: '2018-03-19',
  authenticator: new IamAuthenticator({
    apikey: '{apikey}',
  }),
  serviceUrl: '{url}',
});

const getClassifierParams = {
  classifierId: 'dogs_1477088859',
};

visualRecognition.getClassifier(getClassifierParams)
  .then(response => {
    const classifier = response.result;
    console.log(JSON.stringify(classifier, null, 2));
  })
  .catch(err => {
    console.log('error:', err);
  });
import json
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('{apikey}')
visual_recognition = VisualRecognitionV3(
    version='2018-03-19',
    authenticator=authenticator
)
visual_recognition.set_service_url('{url}')

classifier = visual_recognition.get_classifier(
    classifier_id='dogs_1477088859').get_result()
print(json.dumps(classifier, indent=2))
require "json"
require "ibm_watson/authenticators"
require "ibm_watson/visual_recognition_v3"
include IBMWatson

authenticator = Authenticators::IamAuthenticator.new(
  apikey: "{apikey}"
)
visual_recognition = VisualRecognitionV3.new(
  version: "2018-03-19",
  authenticator: authenticator
)
visual_recognition.service_url = "{url}"

classifier = visual_recognition.get_classifier(
  classifier_id: "dogs_1477088859"
)
puts JSON.pretty_generate(classifier.result)
let authenticator = WatsonIAMAuthenticator(apiKey: "{apikey}")
let visualRecognition = VisualRecognition(version: "2018-03-19", authenticator: authenticator)
visualRecognition.serviceURL = "{url}"

visualRecognition.getClassifier(classifierID: "{classifier_id}") { response, error in
    guard let classifier = response?.result else {
        print(error?.localizedDescription ?? "unknown error")
        return
    }
    print(classifier)
}
var authenticator = new IamAuthenticator(
    apikey: "{apikey}"
);

while (!authenticator.CanAuthenticate())
    yield return null;

var visualRecognition = new VisualRecognitionService("2018-03-19", authenticator);
visualRecognition.SetServiceUrl("{url}");

Classifier getClassifierResponse = null;
visualRecognition.GetClassifier(
    callback: (DetailedResponse<Classifier> response, IBMError error) =>
    {
        getClassifierResponse = response.Result;
        Log.Debug("VisualRecognitionServiceV3", "CheckClassifierStatus: {0}", getClassifierResponse.Status);
    },
    classifierId: "dogs_1477088859"
);

while (getClassifierResponse == null)
    yield return null;
Response
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account that owns the classifier. Might not be returned by some requests.
Training status of the classifier. Possible values: [ready, training, retraining, failed].
Whether the classifier can be downloaded as a Core ML model after the training status is ready.
If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier. Each class includes the name of the class.
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to updated and retained for backward compatibility.
Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either retrained or created. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to
updated
and retained for backward compatibility.Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either
retrained
orcreated
. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to
updated
and retained for backward compatibility.Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either
retrained
orcreated
. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.If classifier training has failed, this field might explain why.
Date and time in Coordinated Universal Time (UTC) that the classifier was created.
Classes that define a classifier.
The name of the class.
classes
Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some requests. Identical to
updated
and retained for backward compatibility.Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches either
retrained
orcreated
. Might not be returned by some requests.
Information about a classifier.
ID of a classifier identified in the image.
Name of the classifier.
Unique ID of the account who owns the classifier. Might not be returned by some requests.
Training status of classifier.
Possible values: [
ready
,training
,retraining
,failed
]Whether the classifier can be downloaded as a Core ML model after the training status is
ready
.