Generating data from a taxonomy for InstructLab
Data generation is the process of automatically creating synthetic questions and answers that are based on the questions and answers in your QNA files. The data generation process that Red Hat AI InstructLab uses focuses on content quality and relevance.
Learn more about the data generation process from Red Hat.
Prerequisites
Generating data by using the console
- In the console, open the Red Hat AI InstructLab service.
- Click InstructLab Projects > your project > Training data > Generate.
- Enter an alphanumeric name for the training data and select the taxonomy to use.
- Optional: Review the estimated cost that is provided before you start the data generation process.
- Click Generate. The state is queued, then running.
- Wait for the state to be completed. When the data generation is completed, a synthetic_data directory that contains log files is created in your Object Storage bucket. You can review these logs for troubleshooting or verification.
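If you prefer to watch the run from a terminal, you can use the CLI commands that are documented later in this topic. The following sketch assumes that the ilab CLI plugin is installed; DATA_ID is a placeholder for the ID that the list command returns.

# List your data generation runs and their states.
ibmcloud ilab data list

# Check the state and status of a single run.
ibmcloud ilab data get --id DATA_ID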
Importing your own training data in the console
You can import your own previously generated data to supplement data generation in InstructLab. You might want to import your own data if you need to do one or more of the following.
- Import one or many knowledge and skills documents when generating data.
- Combine multiple training data runs into one.
- Generate data, download it, then manipulate a subsection of your data and regenerate.
- Combine your previously generated data with newly imported data.
- Import data, generate training data, then combine that data with another data generation run.
- Combine replay buffer and imported data from your taxonomy. This feature is available only by using the API or CLI.
- Import data and generate training data from your taxonomy. This feature is available only by using the API or CLI.
To import your own data, you can reference previous data generation runs, import files from your Object Storage bucket, or upload files from your local machine.
Complete the following steps to import your own training data.
- In the console, open the Red Hat AI InstructLab service.
- Click InstructLab Projects > your project > Training data > Import.
- Enter an alphanumeric name for your data.
- Select one of the following options.
  - Object Storage: Select existing files in your Object Storage bucket.
    - Select the instance and bucket where your existing data is stored.
    - Click Next.
    - Select the files that you want to import.
  - Upload Files: Select files from your local machine. Note that there is a 40 MB limit for uploaded files.
    - Select an Object Storage instance and bucket or create a new instance and bucket to store your data.
    - Grant InstructLab Writer permissions for the bucket.
    - Optional: Storage settings. Specify the following additional details for how to store your data.
      - Bucket file path.
      - Directory within the bucket.
      - Object Storage instance name.
      - Resource group.
      - Object Storage bucket name.
      - Bucket resiliency.
    - Click Next.
    - Select the knowledge and skills files that you want to upload from your local machine.
- Click Import.
Merging training data in the console
You might generate data in smaller, manageable chunks, so that you can avoid timeouts or system limits. You can then merge these smaller data sets into a single data set for training.
Complete the following steps to merge data in the console.
- In the console, open the Red Hat AI InstructLab service.
- Click InstructLab Projects > your project > Training data.
- Select up to 20 of your training data entries from the list and click Merge.
- Enter an alphanumeric name for your data.
- Select the Object Storage instance and bucket where you want to store the merged data.
- Click Create.
Generating data by using the CLI
- List your taxonomies and make a note of the taxonomy you want to use.

  ibmcloud ilab taxonomy list

  Example output.

  id                         name       taxonomy_path
  669a88c9488ee7b95ce8fe05   test-tax   taxonomy.tar.gz

- Generate data from your taxonomy. Note the ID for the data to use in the next step. Use alphanumeric characters in the name.

  ibmcloud ilab data generate [--name NAME] [--taxonomy-id TAXONOMY-ID]

  Example command.

  ibmcloud ilab data generate --name testdata --taxonomy-id 669a88c9488ee7b95ce8fe05

  Example output.

  id            66a268c170dcb21150050e8e
  name          test-data
  state         queued
  status
  created_at    2024-07-19T15:40:29.000Z
  taxonomy_id   669a88c9488ee7b95ce8fe05

- Check the details of your data generation. Include the ID for the data. The state is queued, then running. Wait for the state to be completed. When the state is completed, a synthetic_data directory is created in the Object Storage bucket with logs for troubleshooting.

  ibmcloud ilab data get --id DATA_ID

  Example data get command.

  ibmcloud ilab data get --id 66a268c170dcb21150050e8e

  Example output.

  id            66a268c170dcb21150050e8e
  name          test-data
  state         running
  status        Generating data for taxonomy path compositional_skills->STEM->math->area: 12% 12/100 (total qna processed 1/147)
  created_at    2024-07-19T15:40:29.000Z
  taxonomy_id   669a88c9488ee7b95ce8fe05

- Optional: When the state is completed, you can review metrics, such as token estimates, to calculate the estimated cost (see the jq sketch after these steps).

  Example data get command with the --output json option.

  ibmcloud ilab data get --id 66a268c170dcb21150050e8e --output json

  Example JSON output.

  {
    "created_at": "2025-12-08T15:40:29.000Z",
    "data_metrics": {
      "samples": {
        "knowledge": 30,
        "skills": 70,
        "total": 100
      },
      "tokens": {
        "data_leaf_nodes": {
          "compositional_<taxonomy_path>": 26196,
          "knowledge_<taxonomy_path>": 1228930
        },
        "data_tokens_total": 5993486,
        "training_estimated": 411435913,
        "training_phases": {
          "phase_1_knowledge": 1389992,
          "phase_2_skills": 410045921
        }
      }
    },
    "id": "66a268c170dcb21150050e8e",
    "last_signal_at": "2025-12-08T17:20:32.000Z",
    "name": "test-data",
    "state": "completed",
    "status": "completed",
    "taxonomy_id": "669a88c9488ee7b95ce8fe05"
  }
Importing your own training data by using the CLI
You might want to import your own data for one or more of the following reasons.
- Import one or many knowledge and skills documents when generating data.
- Combine multiple training data runs into one.
- Generate data, download it, then manipulate a subsection of your data and regenerate.
- Combine your previously generated data with newly imported data.
- Import data, generate training data, then combine that data with another data generation run.
- Combine replay buffer and imported data from your taxonomy. This feature is available only by using the API or CLI.
- Import data and generate training data from your taxonomy. This feature is available only by using the API or CLI.
You can import your own training data to supplement data generation in InstructLab. To import your own previously generated data, specify one or more of the following:
- The data generation IDs of previous runs.
- An Object Storage bucket that contains .json or .jsonl knowledge and skills files. For illustration, a hypothetical sample line follows this list.
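A .jsonl file contains one JSON object per line. The exact record schema that InstructLab expects is not documented in this topic, so treat the following line as a hypothetical, messages-style illustration only; compare it against the knowledge_train_msgs.jsonl and skills_train_msgs.jsonl files from a previous run for the authoritative format.

{"messages": [{"role": "user", "content": "What is the area of a 3 x 4 rectangle?"}, {"role": "assistant", "content": "The area is 12 square units."}]}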
Complete the following steps to import your data.
- List your taxonomies and make a note of the taxonomy you want to use.

  ibmcloud ilab taxonomy list

  Example output.

  id                         name       taxonomy_path
  669a88c9488ee7b95ce8fe05   test-tax   taxonomy.tar.gz

- If you have previously generated data that you want to use, list your data and make a note of the UUIDs you want to use. You can include up to 20 data sources.

  ibmcloud ilab data list

- Generate data from your taxonomy. Note the ID for the data to use in the next step. Use alphanumeric characters in the name.

  ibmcloud ilab data generate [--name NAME] [--taxonomy-id TAXONOMY-ID] [--internal-ids INTERNAL-IDs]

  Example command to include multiple internal IDs (data sources) for data generation.

  ibmcloud ilab data generate --name testdata --taxonomy-id 65005b67-7de4-4216-b23c-ed4342f99c88 --internal-ids 8c6b9224-a4f1-4649-907c-0f11d14cfc59,299ee20c-0b04-4d8e-ad12-a3d98feece40
For more examples, see the next section: Example commands for importing your own training data.
Example commands for importing your own training data
Review the following example commands for importing your own training data or adding knowledge and skills files to a data generation job.
Example command to include multiple internal IDs.
ibmcloud ilab data generate --name testdata --taxonomy-id 65005b67-7de4-4216-b23c-ed4342f99c88 --internal-ids 8c6b9224-a4f1-4649-907c-0f11d14cfc59,299ee20c-0b04-4d8e-ad12-a3d98feece40
Example command to combine multiple previously generated data sources (internal IDs) as well as .jsonl or .json files from an Object Storage bucket.
ibmcloud ilab data generate \
--name testdata \
--taxonomy-id 669a88c9488ee7b95ce8fe05 \
--internal-ids 8c6b9224-a4f1-4649-907c-0f11d14cfc59,299ee20c-0b04-4d8e-ad12-a3d98feece40 \
--knowledge-paths PATH \
--skills-paths PATH \
--skills-knowledge-cos-bucket STRING \
--skills-knowledge-cos-bucket-endpoint ENDPOINT
Example command to combine previously generated data with .json or .jsonl knowledge and skills files stored in an Object Storage bucket. You can also optionally specify an output Object Storage bucket to store the
generated data (SDG) output.
ibmcloud ilab data generate \
--name testdata \
--taxonomy-id 669a88c9488ee7b95ce8fe05 \
--internal-ids 8c6b9224-a4f1-4649-907c-0f11d14cfc59 \
--knowledge-paths PATH \
--skills-paths PATH \
--skills-knowledge-cos-bucket STRING \
--skills-knowledge-cos-bucket-endpoint ENDPOINT \
--output-cos-bucket-string STRING \
--output-cos-bucket-endpoint ENDPOINT
Example command to merge previously generated data sets by specifying their internal IDs.
ibmcloud ilab data generate \
--name testdata \
--internal-ids 8c6b9224-a4f1-4649-907c-0f11d14cfc59,299ee20c-0b04-4d8e-ad12-a3d98feece40 \
--output-cos-bucket-string STRING \
--output-cos-bucket-endpoint ENDPOINT
Example command to merge previously generated data by specifying their internal IDs and to include skills and knowledge files from an Object Storage bucket.
ibmcloud ilab data generate \
--name testdata \
--internal-ids 8c6b9224-a4f1-4649-907c-0f11d14cfc59 \
--knowledge-paths PATH \
--skills-paths PATH \
--skills-knowledge-cos-bucket STRING \
--skills-knowledge-cos-bucket-endpoint ENDPOINT \
--output-cos-bucket-string STRING \
--output-cos-bucket-endpoint ENDPOINT
Example command to merge skills and knowledge from an Object Storage bucket.
ibmcloud ilab data generate \
--name testdata \
--knowledge-paths PATH \
--skills-paths PATH \
--skills-knowledge-cos-bucket STRING \
--skills-knowledge-cos-bucket-endpoint ENDPOINT \
--output-cos-bucket-string STRING \
--output-cos-bucket-endpoint ENDPOINT
Generating data by using the API
- List your taxonomies and make a note of the taxonomy you want to use.

  Example command.

  curl -X 'GET' \
    'https://us-east.instructlab.ibm.com/v1/taxonomies' \
    -H 'accept: application/json'

  Example output.

  {
    "taxonomies": [
      {
        "id": "202a03c4-dcf1-432a-82b7-abecb2e019f7",
        "name": "example-taxonomy-name-1",
        "taxonomy_path_cos": "taxonomies/taxonomy.tar.gz",
        "created_at": "2024-10-23T02:58:50.000Z"
      }
    ]
  }

- Generate data from your taxonomy. Note the ID for the data to use in the next step. Use alphanumeric characters in the name.

  Example command.

  curl -X 'POST' \
    'https://us-east.instructlab.ibm.com/v1/data' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
      "name": "example-data-1",
      "taxonomy_id": "202a03c4-dcf1-432a-82b7-abecb2e019f7"
    }'

  Example output.

  {
    "id": "add785e6-a8c3-4f5f-ab89-c506a3f115da",
    "name": "example-data-1",
    "state": "",
    "status": "queued",
    "created_at": "2024-10-23T02:58:50.000Z",
    "last_signal_at": "2025-12-08T17:20:32.000Z",
    "taxonomy_id": "202a03c4-dcf1-432a-82b7-abecb2e019f7",
    "data_metrics": {
      "samples": {
        "additionalProp1": 1,
        "additionalProp2": 2,
        "additionalProp3": 3
      }
    }
  }

- Check the details of your data generation. Include the ID for the data. The state is queued, then running. Wait for the state to be completed (see the polling sketch after these steps).

  Example command.

  curl -X 'GET' \
    'https://us-east.instructlab.ibm.com/v1/data/add785e6-a8c3-4f5f-ab89-c506a3f115da' \
    -H 'accept: application/json'

  Example output.

  {
    "id": "add785e6-a8c3-4f5f-ab89-c506a3f115da",
    "name": "example-data-1",
    "state": "",
    "status": "queued",
    "created_at": "2024-10-23T02:58:50.000Z",
    "last_signal_at": "2025-12-08T17:20:32.000Z",
    "taxonomy_id": "202a03c4-dcf1-432a-82b7-abecb2e019f7",
    "data_metrics": {
      "samples": {
        "additionalProp1": 1,
        "additionalProp2": 2,
        "additionalProp3": 3
      }
    }
  }

When the state is completed, a synthetic_data directory is created in the Object Storage bucket with logs for troubleshooting.
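If you want to script the wait, a minimal polling sketch such as the following checks the state field until the run completes. It assumes that jq is installed and that you add any authentication headers that your environment requires to the curl request.

# Hypothetical polling loop; DATA_ID is the ID returned by the POST request.
DATA_ID=add785e6-a8c3-4f5f-ab89-c506a3f115da
while true; do
  STATE=$(curl -s "https://us-east.instructlab.ibm.com/v1/data/${DATA_ID}" \
    -H 'accept: application/json' | jq -r '.state')
  echo "state: ${STATE}"
  if [ "${STATE}" = "completed" ]; then break; fi
  sleep 60
done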
What's in my Object Storage bucket after generating data?
After you generate data, your Object Storage bucket contains a synthetic_data directory with the following files.
- Artifacts: These files contain the samples on each leaf node. They are not used for training the model, but are provided for readability and can be used to check whether a QNA is generating the expected number of samples.
- Logs: These files contain the Red Hat AI InstructLab execution logs and system details.
- knowledge_train_msgs.jsonl and skills_train_msgs.jsonl: These are the Phase 1 and Phase 2 training files and contain the samples that are used for training the model.
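To browse these files from a terminal, you can list the bucket contents with the IBM Cloud Object Storage CLI plugin. This is a sketch, assuming that the cloud-object-storage plugin is installed; BUCKET_NAME and FILE_NAME are placeholders.

# List everything under the synthetic_data directory.
ibmcloud cos objects --bucket BUCKET_NAME --prefix synthetic_data

# Download one file, for example a log, to your local machine.
ibmcloud cos download --bucket BUCKET_NAME --key synthetic_data/FILE_NAME ./FILE_NAME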
To understand why and how your data is generated, see the SDG FAQs community doc.
Next steps
After you generate data from your taxonomy, you can begin training your model.