Tuning the IBM base code model for watsonx Code Assistant for Red Hat Ansible Lightspeed

If you purchased a watsonx Code Assistant for Red Hat Ansible Lightspeed Standard plan, you can tune the IBM base code model on your data so that it generates code suggestions that are customized for your enterprise standards. You can use the watsonx Code Assistant for Red Hat Ansible Lightspeed tuning studio to create model experiments and deploy your models to shared spaces so you and your team can quickly generate reliable and accurate code.

Create a tuning experiment and upload your tuning data

Before you can tune the model on your Ansible data, you must convert your Ansible files to JSONL format by using the Red Hat Ansible content parser tool. This tool analyzes Ansible files in a local directory, a GitHub repository, or an archive file and generates a JSONL file that serves as the data set for tuning your model. For more information, see Configuring custom models.
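For example, an invocation might look like the following sketch. The package and command names shown are the usual ones for the content parser, but verify them, along with the available options, against the tool's documentation for your version.

```sh
# Install the Red Hat Ansible content parser (package name as published;
# verify against the content parser documentation for your version).
pip install ansible-content-parser

# Analyze a local directory of Ansible content and write the results,
# including the JSONL tuning data set, to an output directory.
ansible-content-parser ./my-ansible-repo ./parser-output
```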

To improve your model's accuracy, provide at least 1000 samples in your JSONL file. A sample consists of an input (the context and the task name) and an output (the expected model output). For more information about verifying that your samples are well-formed, click example of a sample in Prepare your data.
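For illustration only, a minimal well-formed sample might look like the following single JSONL line. Real samples that the content parser generates can carry additional metadata fields beyond input and output.

```json
{"input": "---\n- hosts: all\n  tasks:\n  - name: Install nginx", "output": "    ansible.builtin.package:\n      name: nginx\n      state: present"}
```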

  1. On the welcome page for your watsonx Code Assistant instance, click Tune a model and select a project from the menu.

    This option opens a simplified version of the watsonx Tuning Studio that is customized for watsonx Code Assistant for Red Hat Ansible Lightspeed.

  2. Provide a meaningful name and description for your experiment so you can easily identify the model after you deploy it.

  3. Click Create a tuning experiment. The data upload page opens.

  4. Upload your tuning data in JSONL format.

  5. Compare your data with the training data for the IBM base code model.

    After the file uploads, you can compare your data with the training data for the IBM base code model. This comparison shows which modules from your data are not present in the base model data. Your model is tuned on these modules to improve the accuracy of code suggestions.

    • Click the eye icon by your JSONL file name to view your raw JSONL data.
    • Click the linked number of your Ansible module count to view metric details about your modules and samples. You can also see the differences and similarities between your experiment and the IBM base code model data.
    • Click the linked number of your Unique Ansible modules count to view the unique modules that are not represented in the IBM base code model. This page also shows what percentage of the overall unique module count each module represents.

    All of this information helps you understand how code suggestions might improve after the model is tuned. Tuning data that includes many unique modules has the potential to substantially improve code suggestions for modules that the IBM base code model was not initially trained on.
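If you want a rough local approximation of these counts before you upload, a sketch like the following can help. It assumes that each JSONL sample carries a module field that names the Ansible module and that your data is in a file named tuning_data.jsonl; both are assumptions, so adjust them to match your actual schema and file name.

```python
import json
from collections import Counter

# Count how often each Ansible module appears in the tuning data.
# Assumes each sample has a "module" field; adjust if your schema differs.
counts = Counter()
with open("tuning_data.jsonl", encoding="utf-8") as f:
    for line in f:
        sample = json.loads(line)
        counts[sample["module"]] += 1

total = sum(counts.values())
print(f"{len(counts)} unique modules across {total} samples")
for module, n in counts.most_common(10):
    print(f"{module}: {n} samples ({n / total:.1%})")
```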

Tune your model

  1. Click Start tuning.

    The tuning process starts. The progress indicator shows the elapsed time of your tuning job.

    Customization takes time, especially with large quantities of samples. This step might take hours, not minutes.

    When your tuning job completes, you can see an assessment of the training loss for your tuning run. The training loss measures how much a generated code suggestion diverges from the expected code suggestion. Typically, the training loss decreases as the number of tuning cycles increases. Look for a downward-sloping curve, which indicates that the model improved at generating the expected outputs across successive training cycles.

Training loss graph for a tuned model
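Conceptually, the check you perform on the graph amounts to something like the following sketch; the loss values are hypothetical and stand in for the points you read off your own curve.

```python
# Hypothetical per-epoch training losses read off a tuning run.
losses = [2.41, 1.87, 1.52, 1.33, 1.21, 1.15, 1.12]

# A healthy run trends downward and flattens as tuning converges:
# each epoch's loss is at or below the previous epoch's loss.
deltas = [b - a for a, b in zip(losses, losses[1:])]
print("downward trend:", all(d <= 0 for d in deltas))
print("latest change per epoch:", deltas[-1])
```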

Deploy your model and obtain your model ID

Now that you can see the difference that your experiment makes, you can deploy the tuned model and obtain the corresponding model ID for use in Visual Studio Code.

  1. Click Deploy tuned model.

  2. Specify a meaningful name and description for your deployment.

  3. Choose the deployment space for your model.

  4. Click Create.

    After your deployment is complete, the overview page for your model opens.

  5. Click the copy icon for your Model ID to copy the value.

Optional: Test your model in your local Visual Studio Code instance

You can test your model locally before you make it available to others in your organization.

  1. Open the settings for your Ansible Lightspeed for Visual Studio Code extension.

  2. Copy your Model ID into the Ansible > Lightspeed: Model ID Override field.

    You can now get code recommendations from your tuned model and test its accuracy before you roll it out to your organization. If you manage extension settings in a file rather than through the settings UI, see the sketch after this list.
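The UI field above usually maps to a settings.json entry like the following. The key name shown is the typical mapping for this label, but confirm it against the settings of your installed extension version; the model ID value is a placeholder.

```json
{
  "ansible.lightspeed.enabled": true,
  "ansible.lightspeed.modelIdOverride": "<your-model-id>"
}
```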

Make your tuned model available to others in your organization

When you're satisfied with your tuned model, you can add the Model ID to the Ansible Lightspeed Admin Portal to make it available to authorized users in your organization.

  1. On the overview page for your tuned model, click Open Ansible Lightspeed Admin Portal.

  2. Activate your model by pasting your Model ID value into the specified field in the Ansible Lightspeed Admin Portal.

    Activating your model enables it for your authorized users in the Ansible Lightspeed for VS Code extension.