IBM Cloud Docs
Configure AI model for Retrieval Service

The Retrieval Service in watsonx.data lets administrators configure which foundation model powers retrieval-based tasks such as text-to-SQL, question answering, and retrieval-augmented generation (RAG). At the instance level, you can choose among Granite (default), Llama, and GPT models, depending on your licensing and workload requirements.

Procedure

  1. Log in to the watsonx.data console.

  2. From the navigation menu, select Configurations, and then click the Retrieval service model tile.

  3. In the Retrieval service section, choose one of the following available AI models:

    • granite-3-8b-instruct
    • llama-3-3-70b-instruct
    • gpt-oss-120b


    To use gpt-oss-120b with the Retrieval Service, you must first deploy the model in the Toronto region. For detailed instructions, see Deploying foundation models on demand (fast path).

  4. A confirmation dialog appears. Click Select.

  5. Under Text to SQL, choose one of the following available AI models:

    • granite-3-8b-instruct
    • llama-3-3-70b-instruct
  6. A confirmation dialog appears. Click Select.
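
The model choices in the two sections above can be summarized programmatically. The following Python sketch encodes the allowed model lists from this procedure and checks a selection against them; the function and its usage are illustrative assumptions for scripting around this configuration, not part of any watsonx.data API:

```python
# Illustrative sketch: the model lists below come from this procedure;
# the validate_selection helper is a hypothetical convenience, not a
# watsonx.data product API.

RETRIEVAL_SERVICE_MODELS = {
    "granite-3-8b-instruct",   # default
    "llama-3-3-70b-instruct",
    "gpt-oss-120b",            # requires on-demand deployment in the Toronto region
}

TEXT_TO_SQL_MODELS = {
    "granite-3-8b-instruct",
    "llama-3-3-70b-instruct",
}

def validate_selection(service: str, model: str) -> str:
    """Return the model name if it is valid for the given service.

    service is either "retrieval" or "text_to_sql"; raises ValueError
    when the model is not offered for that service.
    """
    allowed = {
        "retrieval": RETRIEVAL_SERVICE_MODELS,
        "text_to_sql": TEXT_TO_SQL_MODELS,
    }[service]
    if model not in allowed:
        raise ValueError(
            f"{model!r} is not available for {service}; "
            f"choose one of {sorted(allowed)}"
        )
    return model
```

For example, `validate_selection("text_to_sql", "gpt-oss-120b")` raises a ValueError, because gpt-oss-120b is offered only for the Retrieval service section, not for Text to SQL.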