IBM Cloud Docs
Using the rule-based model

Make a rule-based model that you created in Knowledge Studio available to other Watson applications by deploying it.

You can deploy a rule-based model to make it available, as an experimental feature, for use in the following services: IBM Watson Discovery, IBM Watson Natural Language Understanding, and IBM Watson Explorer.

Before a model can be deployed for use by a service, you must have a subscription to that service. IBM Watson services are hosted on IBM Cloud, IBM's cloud platform. See What is IBM Cloud? for more information about the platform. To subscribe to one of the IBM Watson services, create an account from the IBM Cloud website.

For some of the services, you must know details about the service instance that you plan to deploy to, such as the IBM Cloud space name and service instance name. The space and instance name information is available from the IBM Cloud services page.

You can also pre-annotate new documents with the rule-based model. See Pre-annotating documents with the rule-based model for details.

Deploying a rule-based model to IBM Watson Discovery

Deploy the model to enable an application that uses the Discovery service to use the rule-based model to find and extract entities during document enrichment.

Attention: This is currently an experimental feature of the service.

Before you begin

You must have administrative access to a Watson Discovery service instance, and know the IBM Cloud space and instance names that are associated with it.

Procedure

To deploy a rule-based model to Watson Discovery, complete the following steps:

  1. Log in as a Knowledge Studio administrator or project manager, and select your workspace.

  2. Select the Rule-based Model > Versions > Rule-based Model tab.

  3. Choose the version of the model that you want to deploy.

    If there is only one working version of the model, save the current model for deployment by clicking Save for Deployment. This creates a version of the model, which enables you to deploy one version while you continue to improve the current one. Saving the version might take a few minutes. The option to deploy does not appear until the version is created.

    Note: Each version can be deployed to only one service instance. If you want to deploy the same model to more than one instance, create a version for each instance.

  4. Click Deploy, choose to deploy it to Discovery, and then click Next.

  5. Provide the IBM Cloud space and instance. If necessary, select a different region.

  6. Click Deploy.

  7. The deployment process might take a few minutes. To check the status of the deployment, click Status on the Versions tab next to the version that you deployed.

    If the model is still being deployed, the status indicates "publishing". After deployment completes, the status changes to "available" if the deployment was successful, or "error" if problems occurred.

    After the model becomes available, make a note of the model ID (model_id).

What to do next

To use the model, you must export the model, and then import it into Discovery.

  1. Select the Rule-based Model > Versions > Rule-based Model tab.

  2. Click Export current model.

    If you have a Lite plan subscription, no export option is available.

    The model is saved as a PEAR file, and you are prompted to download the file. A PEAR (Processing Engine ARchive) file is the standard packaging format for UIMA components. The model is saved in PEAR format so that it can be distributed and reused within UIMA applications.

  3. Download the file to your local system.

  4. From the Discovery service, follow the steps to create a Machine Learning enrichment, which include uploading the PEAR file. For more details, see Machine Learning models in the Discovery v2 documentation.

If you're using a Discovery v1 service instance, you must provide the model ID when it is requested during the Discovery service enrichment configuration process. For more information, see Integrating your custom model with the Discovery tooling in the Discovery v1 documentation.
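Before you upload the exported model, you can sanity-check the download locally. A PEAR file is a ZIP archive, so a short script can list what it contains (a well-formed PEAR typically includes a metadata/install.xml descriptor). This is an optional, illustrative sketch; the file name model.pear is a placeholder for whatever you downloaded:

```python
import os
import zipfile


def list_pear_contents(pear_path):
    """Return the names of the files packaged inside a PEAR (ZIP) archive."""
    with zipfile.ZipFile(pear_path) as archive:
        return archive.namelist()


# "model.pear" is a placeholder for the exported file on your local system.
if os.path.exists("model.pear"):
    for name in list_pear_contents("model.pear"):
        print(name)
```

If the file cannot be opened as a ZIP archive, the download is likely incomplete or corrupted, and you should export and download it again before configuring the enrichment.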

Deploying a rule-based model to IBM Watson Natural Language Understanding

Uploading an advanced rules model to Natural Language Understanding is deprecated. As of June 10, 2021, you can no longer deploy advanced rules models to Natural Language Understanding.

Deploy the rule-based model to enable an application that uses the Natural Language Understanding service to use the model to find and extract entities that are relevant to your domain.

Attention: This is currently an experimental feature of the service.

Before you begin

You must have administrative access to a Natural Language Understanding service instance, and know the IBM Cloud space and instance names that are associated with it.

Procedure

To deploy a rule-based model to Natural Language Understanding, complete the following steps:

  1. Log in as a Knowledge Studio administrator or project manager, and select your workspace.

  2. Select the Rule-based Model > Versions > Rule-based Model tab.

  3. Choose the version of the model that you want to deploy.

    If there is only one working version of the model, save the current model for deployment by clicking Save for Deployment. This creates a version of the model, which enables you to deploy one version while you continue to improve the current one. Saving the version might take a few minutes. The option to deploy does not appear until the version is created.

    Note: Each version can be deployed to only one service instance. If you want to deploy the same model to more than one instance, create a version for each instance.

  4. Click Deploy, choose to deploy it to Natural Language Understanding, and then click Next.

  5. Provide the IBM Cloud space and instance. If necessary, select a different region.

  6. Click Deploy.

  7. The deployment process might take a few minutes. To check the status of the deployment, click Status on the Versions tab next to the version that you deployed.

    If the model is still being deployed, the status indicates "publishing". After deployment completes, the status changes to "available" if the deployment was successful, or "error" if problems occurred.

    After the model becomes available, make a note of the model ID (model_id). You provide this ID to the Natural Language Understanding service so that it can use your custom model.

What to do next

To use the deployed model, you must specify the model ID of your custom model in the entities.model parameter.

You can use the model with the Natural Language Understanding GET /analyze request to extract entities.

See the Natural Language Understanding documentation for more details.
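As an illustration of how the model ID fits into an analyze request, the sketch below builds the request payload for the Natural Language Understanding analyze endpoint, pointing the entities feature at a custom model. The text and model ID shown are placeholders; this is a minimal sketch, not a complete client:

```python
import json

# Placeholder: substitute the model_id you noted on the Versions tab.
MODEL_ID = "your-model-id"


def build_analyze_body(text, model_id):
    """Build the payload for an NLU analyze request that extracts entities
    with a custom model (the entities.model parameter) instead of the
    built-in entity model."""
    return {
        "text": text,
        "features": {
            "entities": {
                "model": model_id,  # entities.model
            }
        },
    }


body = build_analyze_body("IBM opened a new office in Austin.", MODEL_ID)
print(json.dumps(body, indent=2))
```

You would then send this payload to your service instance's analyze endpoint with your service credentials; for a GET request, the same values are passed as query parameters (text, features=entities, entities.model).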

Undeploying models

If you want to undeploy a model or find a model ID, view the Deployed Models page.

Procedure

To undeploy models or find model IDs:

  1. Launch Knowledge Studio.
  2. From the Settings menu in the top right menu bar, select Manage deployed models.
  3. From the list of deployed models, find the model you want to view or undeploy.
  4. To undeploy the model, from the last column of that row, click Undeploy model.
  5. To find the model ID, see the Model ID column.

Alternatively, you can undeploy models from the Versions pages for rule-based models and machine learning models.

Leveraging a rule-based model in IBM Watson Explorer

Export the PEAR file that is produced when the rule-based model is created so that it can be used in IBM Watson Explorer.

Procedure

To leverage a rule-based model in IBM Watson Explorer, complete the following steps.

  1. Log in as a Knowledge Studio administrator or project manager, and select your workspace.

  2. Select the Rule-based Model > Versions > Rule-based Model tab.

  3. Click Export current model.

    If you have a Lite plan subscription, no export option is available.

    The model is saved as a PEAR file, and you are prompted to download the file. A PEAR (Processing Engine ARchive) file is the standard packaging format for UIMA components. The model is saved in PEAR format so that it can be distributed and reused within UIMA applications.

  4. Download the file to your local system.

  5. From the IBM Watson Explorer application, import the PEAR file.