Using the machine learning model
Leverage a machine learning model that you trained with Knowledge Studio by making it available to other Watson applications.
You can deploy or export a machine learning model. A dictionary or Natural Language Understanding pre-annotator can only be used to pre-annotate documents within Knowledge Studio.
Before you can deploy a model for use by a service, you must have a subscription to the service. IBM Watson services are hosted on IBM Cloud®, which is the cloud platform for IBM. For more information about the platform, see What is IBM Cloud?. To subscribe to one of the IBM Watson services, create an account from the IBM Cloud website.
For some of the services, you must know details about the service instance that you plan to deploy to, such as the IBM Cloud space name and service instance name. The space and instance name information is available from the IBM Cloud Services page.
You can also pre-annotate new documents with the machine learning model. See Pre-annotating documents with the machine learning model for details.
Deploying a machine learning model to IBM Watson Discovery
When you are satisfied with the performance of the model, you can deploy a version of it to IBM Watson Discovery. This feature enables your applications to use the deployed machine learning model to enrich the insights that you get from your data to include the recognition of concepts and relations that are relevant to your domain.
About this task
When you deploy the machine learning model, you select the version of it that you want to deploy.
Procedure
To deploy a machine learning model, complete the following steps:
- Log in as a Knowledge Studio administrator or project manager, and select your workspace.
- Select Machine Learning Model > Versions.
- Choose the version of the model that you want to deploy.
If there is only one working version of the model, create a snapshot of the current model. This versions the model, which enables you to deploy one version while you continue to improve the current version. The option to deploy does not appear until you create at least one version.
Each version can be deployed to any number of service instances. Each deployed instance of a model version is given a unique Model ID, but is identical in all other ways.
- Click Deploy, choose to deploy it to Discovery, and then click Next.
- Select the IBM Cloud space and instance. If necessary, select a different region.
- Click Deploy.
- The deployment process might take a few minutes. To check the status of the deployment, click Status on the Versions tab next to the version that you deployed.
If the model is still being deployed, the status indicates "deploying". After deployment completes, the status changes to "available" or "deployed" if the deployment was successful, or "error" if problems occurred.
Once available, make a note of the model ID (model_id).
What to do next
To use the model, you must export the model, and then import it into Discovery.
- Select Machine Learning Model > Versions.
- Click Export current model.
If you have a Lite plan subscription, no export option is available.
The model is saved as a ZIP file, and you are prompted to download the file.
- Download the file to your local system.
- From the Discovery service, follow the steps to create a Machine Learning enrichment, which include uploading the ZIP file. For more details, see Machine Learning models in the Discovery v2 documentation.
If you're using a Discovery v1 service instance, you must provide the model ID when it is requested during the Discovery service enrichment configuration process. For more information, see Integrating your custom model with the Discovery tooling in the Discovery v1 documentation.
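If you prefer to create the Machine Learning enrichment for Discovery v2 programmatically rather than through the tooling, the Discovery v2 API includes a create enrichment request that accepts the exported ZIP file. The following curl command is a rough sketch only: {url}, {apikey}, and {project_id} are placeholders for your own Discovery v2 service details, the enrichment name and file name are examples, and the exact metadata fields and version date can vary by release, so verify the request against the Discovery v2 API reference before you rely on it.
# Sketch: create a Machine Learning enrichment from the exported Knowledge Studio model (placeholders in braces)
curl -X POST --user "apikey:{apikey}" \
  "{url}/v2/projects/{project_id}/enrichments?version=2020-08-30" \
  --form 'enrichment={"name": "My Knowledge Studio model", "type": "watson_knowledge_studio_model"}' \
  --form "file=@your-exported-model.zip"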
Deploying a machine learning model to IBM Watson Natural Language Understanding
When you are satisfied with the performance of the model, you can deploy a version of it to IBM Watson Natural Language Understanding. This feature enables your applications to use the deployed machine learning model to analyze semantic features of text input, including entities and relations.
Before you begin
You must have a Natural Language Understanding service instance to deploy to, and you must know the IBM Cloud space and instance names that are associated with the service. If you do not remember the space or instance names, find them by logging in to IBM Cloud. If you do not have an IBM Cloud account, sign up for one.
About this task
When you deploy the machine learning model, you select the version of it that you want to deploy.
Procedure
To deploy a machine learning model to the Natural Language Understanding service, complete the following steps:
- Log in as a Knowledge Studio administrator or project manager, and select your workspace.
- Select Machine Learning Model > Versions.
- Choose the version of the model that you want to deploy.
If there is only one working version of the model, create a snapshot of the current model. This versions the model, which enables you to deploy a version while you continue to improve the current version. The option to deploy does not appear until you create at least one version.
Each version can be deployed to any number of service instances. Each deployed instance of a model version is given a unique Model ID, but is identical in all other ways.
- Click Deploy, choose to deploy it to Natural Language Understanding, and then click Next.
- Select the IBM Cloud space and instance. If necessary, select a different region.
- Click Deploy.
- The deployment process might take a few minutes. To check the status of the deployment, click Status on the Versions tab next to the version that you deployed. If the model is still being deployed, the status indicates "publishing". After deployment completes, the status changes to "available" if the deployment was successful, or "error" if problems occurred.
Once available, make a note of the model ID (model_id). You will provide this ID to the Natural Language Understanding service to enable the service to use your custom model.
What to do next
You can list the models that are deployed in the Natural Language Understanding service instance by calling the following API method:
curl --user "apikey:{apikey}" "{url}/v1/models?version=2018-11-16"
Any deployed models will be returned in an array similar to the following one:
{
"models": [
{
"workspace_id": "{workspace_id}",
"version_description": "{version_description}",
"version": "{version}",
"status": "available",
"name": null,
"model_id": "10:7abc4c2f-5846-3334-b8f7-af5a6fad3398",
"language": "en",
"description": null,
"created": "2018-11-28T17:08:00.000000Z"
}
]
}
The {workspace_id}, {version_description}, and {version} will all match the information listed on the Versions page of your Knowledge Studio service instance.
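If you have the jq command-line utility installed, you can pull just the model IDs out of that response, for example:
curl --user "apikey:{apikey}" "{url}/v1/models?version=2018-11-16" | jq -r '.models[].model_id'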
To use the deployed model, you must specify the model ID of your custom model in the entities.model or relations.model parameter of an analyze call. You can use the model with the Natural Language Understanding /v1/analyze method to extract the following features:
- entities
The following command finds the entities that are present in the sentence that is passed by using the text parameter:
curl --user "apikey":"{apikey}" "{url}/v1/analyze?version=2018-09-21" \
  --request POST \
  --header "Content-Type: application/json" \
  -d '{"text": "Vehicle 1, a 1995 Honda Civic was traveling north on a two lane undivided roadway, negotiating a curve to the left on an upgrade.", "features": { "entities": { "model": "your-model-id-here" } } }'
The service returns a JSON object of instances that it finds of entity types that are defined in the custom model:
{ "language": "en", "entities": [ { "type": "MANUFACTURER", "text": "Honda", "count": 1 }, { "type": "MODEL", "text": "Civic", "count": 1 }, { "type": "VEHICLE", "text": "Vehicle 1", "count": 1 }, { "type": "STRUCTURE", "text": "two lane undivided roadway", "count": 1 }, { "type": "STRUCTURE", "text": "curve", "count": 1 }, { "type": "MODEL_YEAR", "text": "1995", "count": 1 }, { "type": "CONDITION", "text": "negotiating", "count": 1 } ], "language": "en" }
- relations
The following command finds the relationships that are present in the sentence that is passed by using the text parameter:
curl --user "apikey":"{apikey}" "{url}/v1/analyze?version=2018-09-21" \
  --request POST \
  --header "Content-Type: application/json" \
  -d '{"text": "Vehicle 1, a 1995 Honda Civic was traveling north on a two lane undivided roadway, negotiating a curve to the left on an upgrade.", "features": { "relations": { "model": "your-model-id-here" } } }'
The service returns a JSON object of instances that it finds of relation types that are defined in the custom model:
{ "relations": [ { "type": "timeOf", "sentence": "Vehicle 1, a 1995 Honda Civic was traveling north on a two lane undivided roadway, negotiating a curve to the left on an upgrade.", "score": 0.954254, "arguments": [ { "text": "1995", "entities": [ { "type": "Date", "text": "1995" } ] }, { "text": "Honda Civic", "entities": [ { "type": "SportingEvent", "text": "Honda Civic" } ] } ] }, { "type": "locatedAt", "sentence": "Vehicle 1, a 1995 Honda Civic was traveling north on a two lane undivided roadway, negotiating a curve to the left on an upgrade.", "score": 0.40592, "arguments": [ { "text": "negotiating", "entities": [ { "type": "EventMeeting", "text": "negotiating" } ] }, { "text": "roadway", "entities": [ { "type": "Facility", "text": "roadway" } ] } ] } ], "language": "en" }
For more information, see the Natural Language Understanding documentation.
Deploying the same model version to multiple services
If you wish to deploy a specific version of the same machine learning model to multiple IBM Watson service instances, navigate to the Versions page and click the Deploy link on the row of the version that you want to deploy to an additional service.
Undeploying models
If you want to undeploy a model or find a model ID, view the Deployed Models page.
Procedure
To undeploy models or find model IDs:
- Launch Knowledge Studio.
- From the Settings menu in the top right menu bar, select Manage deployed models.
- From the list of deployed models, find the model you want to view or undeploy.
- To undeploy the model, from the last column of that row, click Undeploy model.
- To find the model ID, see the Model ID column.
Alternatively, you can undeploy models from the Versions pages for rule-based models and machine learning models.
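A custom model that was deployed to Natural Language Understanding can also be removed directly from the service instance: the Natural Language Understanding API provides a delete model method that takes the model ID. The following command is a sketch that reuses the {apikey}, {url}, and {model_id} placeholders from the earlier examples; substitute your own values before you run it:
curl -X DELETE --user "apikey:{apikey}" "{url}/v1/models/{model_id}?version=2018-11-16"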
Deleting a version
If you want to delete a specific version of a machine learning model, navigate to the Versions page and click the Delete link on the row of the version that you want to delete. Note: The Delete model version link is active only if there are no deployed models associated with that version. Undeploy all associated models before you delete a version.
Leveraging a machine learning model in IBM Watson Explorer
Export the trained machine learning model so it can be used in IBM Watson Explorer.
Before you begin
If you choose to identify relation types and annotate them, then you must define at least two relation types, and annotate instances of the relationships in the ground truth before you export the model. Defining and annotating only one relation type can cause subsequent issues in IBM Watson Explorer, release 11.0.1.0.
About this task
Now that the machine learning model is trained to recognize entities and relationships for a specific domain, you can leverage it in IBM Watson Explorer.
Watch a brief video that illustrates how to export a model and use it in IBM Watson Explorer.
Procedure
To leverage a machine learning model in IBM Watson Explorer, complete the following steps.
- Log in as a Knowledge Studio administrator or project manager, and select your workspace.
- Select Machine Learning Model > Versions.
- Click Export current model.
If you have a Lite plan subscription, no export option is available.
The model is saved as a ZIP file, and you are prompted to download the file.
- Download the file to your local system.
- From the IBM Watson Explorer application, import the model.
You can then map the model to a machine learning model in Watson Explorer Content Analytics. After you perform the mapping step, when you crawl documents, the model finds instances of the entities and relations that your model understands. For more information about how to import and configure the model in IBM Watson Explorer, see the technical document that describes the integration: Using machine-learning annotators from Knowledge Studio in Watson Explorer.