Ingesting data through the command line - Spark REST API
You can run the ibm-lh tool to ingest data into IBM® watsonx.data through the command-line interface (CLI) by using the IBM Analytics Engine (Spark) REST API. This CLI-based ingestion uses a REST endpoint to run the ingestion job and is the default mode of ingestion. The commands to run an ingestion job are listed in this topic.
Before you begin
- You must have the Administrator role and privileges in the catalog to run ingestion jobs.
- Add and register IBM Analytics Engine (Spark). See Provisioning a Spark engine.
- Add storage for the target catalog. See Adding a storage-catalog pair.
- Create the schema and table in the catalog for the data to be ingested. See Creating schemas and Creating tables.
Procedure
- Set the mandatory environment variable ENABLED_INGEST_MODE to SPARK before starting an ingestion job by running the following command:

```
export ENABLED_INGEST_MODE=SPARK
```
- Set the following optional environment variables as required before starting an ingestion job:

```
export IBM_LH_SPARK_EXECUTOR_CORES=1
export IBM_LH_SPARK_EXECUTOR_MEMORY=2G
export IBM_LH_SPARK_EXECUTOR_COUNT=1
export IBM_LH_SPARK_DRIVER_CORES=1
export IBM_LH_SPARK_DRIVER_MEMORY=2G
```

If an IBM Analytics Engine Serverless instance on IBM Cloud is registered as external Spark on watsonx.data, the Spark driver and executor vCPU and memory combinations must be in a 1:2, 1:4, or 1:8 ratio. See Default limits and quotas for Analytics Engine instances.
Table 1
| Environment variable name | Description |
| --- | --- |
| IBM_LH_SPARK_EXECUTOR_CORES | Optional Spark engine configuration setting for executor cores |
| IBM_LH_SPARK_EXECUTOR_MEMORY | Optional Spark engine configuration setting for executor memory |
| IBM_LH_SPARK_EXECUTOR_COUNT | Optional Spark engine configuration setting for executor count |
| IBM_LH_SPARK_DRIVER_CORES | Optional Spark engine configuration setting for driver cores |
| IBM_LH_SPARK_DRIVER_MEMORY | Optional Spark engine configuration setting for driver memory |
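For example, if the engine is an IBM Analytics Engine Serverless instance, a minimal sketch of settings that keep the driver and executors at a 1:4 vCPU-to-memory ratio looks like the following (the specific values are illustrative, not recommendations):

```
# Driver and executors each use 1 vCPU and 4 GB of memory (a 1:4 ratio)
export IBM_LH_SPARK_DRIVER_CORES=1
export IBM_LH_SPARK_DRIVER_MEMORY=4G
export IBM_LH_SPARK_EXECUTOR_CORES=1
export IBM_LH_SPARK_EXECUTOR_MEMORY=4G
export IBM_LH_SPARK_EXECUTOR_COUNT=2
```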
- Run the following command to ingest data from one or more source data files:
```
ibm-lh data-copy --target-table iceberg_data.ice_schema.ytab \
--source-data-files "s3://lh-ingest/hive/warehouse/folder_ingestion/" \
--user someuser@us.ibm.com \
--password **** \
--url https://us-south.lakehouse.dev.cloud.ibm.com/ \
--instance-id crn:v1:staging:public:lakehouse:us-south:a/fd160ae2ce454503af0d051dfadf29f3:25fdad6d-1576-4d98-8768-7c31e2452597:: \
--schema /home/nz/config/schema.cfg \
--engine-id spark214 \
--log-directory /tmp/mylogs \
--partition-by "<columnname1>, <columnname2>"
```

Where the parameters used are listed as follows:
Table 2
| Parameter | Description |
| --- | --- |
| --engine-id | Engine ID of the Spark engine when using REST API based Spark ingestion. |
| --instance-id | Identifies a unique instance. In the SaaS environment, the CRN is the instance ID. |
| --log-directory | Specifies the location of log files. |
| --partition-by | Supports the year, month, day, and hour functions for timestamps in the partition-by list. If the target table already exists, partition-by has no effect on the data. |
| --password | Password of the user connecting to the instance. In SaaS, the API key of the instance is used. |
| --schema | Use this option with a value in the format path/to/csvschema/config/file. Use the path to a schema.cfg file that specifies the header and delimiter values for the CSV source file or folder. |
| --source-data-files | Path to an S3 parquet or CSV file or folder. Folder paths must end with "/". File names are case-sensitive. |
| --target-table | Target table in the format <catalogname>.<schemaname>.<tablename>. |
| --user | User name of the user connecting to the instance. |
| --url | Base URL of the IBM® watsonx.data cluster. |

ibm-lh data-copy returns the value 0 when the ingestion job completes successfully. When the ingestion job fails, ibm-lh data-copy returns a nonzero value.
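For reference, a schema.cfg for a CSV source declares the delimiter and whether a header row is present. The following is a minimal sketch; the path and values are illustrative:

```
# Write a minimal schema.cfg for a CSV source (illustrative values)
cat > /home/nz/config/schema.cfg <<'EOF'
[CSV]
DELIMITER=,
HEADER=1
EOF
```

Because ibm-lh data-copy reports success through its exit status, a wrapper script can branch on it. A minimal sketch, where all argument values are placeholders:

```
# Run an ingestion job and check its exit status
if ibm-lh data-copy --target-table <catalogname>.<schemaname>.<tablename> \
    --source-data-files "<s3-path>" --user <user> --password <password> \
    --url <url> --instance-id <instance-id> --engine-id <engine-id>; then
  echo "Ingestion job completed successfully"
else
  # ibm-lh data-copy returns a nonzero value when the job fails
  echo "Ingestion job failed" >&2
  exit 1
fi
```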
- Run the following command to get the status of an ingestion job:

```
ibm-lh get-status --job-id <Job-id> --instance-id <instance-id> --url <url> --user <user> --password <password>
```

Where the parameter used is listed as follows:
Table 3
| Parameter | Description |
| --- | --- |
| --job-id <Job-id> | The job ID is generated when a REST API or UI based ingestion is initiated and is used to get the status of the ingestion job. This parameter is used only with the ibm-lh get-status command. The short form of this parameter is -j. |
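A simple way to watch a job is to call get-status repeatedly. The following sketch polls every 30 seconds; the job ID and connection values are placeholders and the interval is arbitrary:

```
# Check the status of an ingestion job at 30-second intervals.
# Stop the loop once the reported status is a terminal state.
while true; do
  ibm-lh get-status -j <Job-id> --instance-id <instance-id> \
    --url <url> --user <user> --password <password>
  sleep 30
done
```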
- Run the following command to get the history of all ingestion jobs:

```
ibm-lh get-status --all-jobs --instance-id <instance-id> --url <url> --user <user> --password <password>
```

Where the parameter used is listed as follows:
Table 4
| Parameter | Description |
| --- | --- |
| --all-jobs | Returns the history of all ingestion jobs. This parameter is used only with the ibm-lh get-status command. |

Note: get-status is supported with ibm-lh only in the interactive mode of ingestion.