Edge environments for AI, IoT, and machine learning

Create an IBM Cloud Satellite® location with Red Hat OpenShift clusters on compute infrastructure that is deployed at the edge near your Internet of Things (IoT) devices. Then, through your Satellite location, your apps can access a suite of IBM Cloud artificial intelligence (AI) and machine learning services to maximize the value of your data wherever the data is located.

Solving common edge workload challenges with IBM Cloud

Common challenges for edge workloads include training machine learning models and running predictive model inferencing. With Satellite, you can access the IBM Cloud services that address these challenges where your edge workloads actually run.

Training a machine learning model
Training your machine learning model typically requires significant compute resources for memory, GPU, and storage. Instead of installing and managing model training software on your compute infrastructure, you can add the compute infrastructure to a Satellite location. Then, you can access IBM Cloud Pak® for Data, which includes tools such as Watson Studio and IBM Watson® Machine Learning for data analysis and model training. By accessing these tools as cloud services, you simplify the installation and management of the software. You can also use the same cloud services across all your edge infrastructure, no matter the underlying infrastructure provider.
Model inferencing
Model inferencing is the task of using a trained model to make predictions, detect anomalies, and categorize data from your edge environment. Because of memory, storage, and latency requirements, model inferencing is most effective when it runs as close to your IoT sensors and other data sources as possible. You can create a Satellite location with managed Red Hat OpenShift clusters where your data is located in your edge environments. Then, you can set up a serverless framework such as Red Hat OpenShift Serverless to provide a simplified programming model with a REST interface that queries your trained model for a prediction.
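
For example, after the trained model is deployed behind a Knative service (see Step 3), an edge app can request a prediction with a single HTTP call. The following Python sketch assumes a hypothetical service URL and a /predict route that accepts JSON; the actual route and payload format depend on how you package your model.

    import requests

    # Hypothetical URL of the Knative service that serves your trained model.
    # Replace it with the route that the Red Hat OpenShift web console or the
    # kn command-line tool reports for your deployed service.
    SERVICE_URL = "https://model-inference.example-satellite-cluster.example.com"

    # Example reading from an IoT sensor; the feature layout is an assumption.
    payload = {"features": [21.7, 0.93, 140.2]}

    # Query the REST interface of the trained model and print the prediction.
    response = requests.post(f"{SERVICE_URL}/predict", json=payload, timeout=10)
    response.raise_for_status()
    print(response.json())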

Setting up your edge solution with Satellite

You can address the challenges of your edge environment in many ways, but Satellite gives you a consistent, scalable experience across environments. An example setup follows.

  1. Set up machine learning and model training for your data.
  2. Deploy Satellite with a serverless component to your edge environment.
  3. Run model inferencing at the edge.

Step 1: Set up machine learning and model training for your data

As an AI model developer, you prepare your edge data with machine learning and AI tools in IBM Cloud. Before you begin, you must have access to Watson Studio and Machine Learning instances, such as in an IBM Cloud account or through IBM Cloud Pak for Data.

  1. Upload the training data to IBM Cloud Object Storage.
  2. Use Watson Studio and Machine Learning to pull the training data from IBM Cloud Object Storage, analyze the data, and train a model with TensorFlow, Keras, scikit-learn, or another popular machine learning framework. A sketch of these steps follows.

The trained model is saved back to IBM Cloud Object Storage, so that the data does not take up storage space in your edge environment.
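
To illustrate this step, the following Python sketch uploads a training data set to IBM Cloud Object Storage with the ibm_boto3 SDK, trains a simple scikit-learn classifier, and writes the serialized model back to the bucket. The endpoint, credentials, bucket, and file names are placeholders, and you typically run this kind of code from a Watson Studio notebook.

    import ibm_boto3
    import joblib
    import pandas as pd
    from ibm_botocore.client import Config
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder connection details for IBM Cloud Object Storage.
    COS_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"
    COS_API_KEY = "<your-api-key>"
    COS_INSTANCE_CRN = "<your-service-instance-crn>"
    BUCKET = "edge-training-data"

    cos = ibm_boto3.client(
        "s3",
        ibm_api_key_id=COS_API_KEY,
        ibm_service_instance_id=COS_INSTANCE_CRN,
        config=Config(signature_version="oauth"),
        endpoint_url=COS_ENDPOINT,
    )

    # 1. Upload the training data that was collected at the edge.
    cos.upload_file("sensor_readings.csv", BUCKET, "sensor_readings.csv")

    # 2. Pull the training data into the training environment and train a model.
    cos.download_file(BUCKET, "sensor_readings.csv", "sensor_readings.csv")
    df = pd.read_csv("sensor_readings.csv")
    X, y = df.drop(columns=["label"]), df["label"]
    model = RandomForestClassifier(n_estimators=100).fit(X, y)

    # 3. Save the trained model back to Object Storage so that it does not
    #    take up storage space in the edge environment.
    joblib.dump(model, "model.joblib")
    cos.upload_file("model.joblib", BUCKET, "model.joblib")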

Step 2: Deploy Satellite with a serverless component to your edge environment

As the edge environment system administrator, you enable a serverless tool to simplify model inferencing at the edge.

  1. Create a Satellite location on your edge computing infrastructure.
  2. Create a managed Red Hat OpenShift cluster in the Satellite location.
  3. Access the Red Hat OpenShift web console.
  4. Using the OperatorHub, install the Red Hat OpenShift Serverless operator.
  5. Use the Red Hat OpenShift Serverless operator to create a Knative Serving instance.

You deployed Satellite with a serverless component.
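
Before you hand the cluster over to AI developers, you might want to confirm that Knative Serving reports a ready status. The following sketch uses the open source Kubernetes Python client and assumes the usual defaults of a KnativeServing resource that is named knative-serving in the knative-serving namespace; adjust the API group version to match your operator release.

    from kubernetes import client, config

    # Load cluster credentials from your local kubeconfig, for example after
    # you log in to the cluster with the oc command-line tool.
    config.load_kube_config()
    custom_api = client.CustomObjectsApi()

    # The Knative Serving installation is represented by a KnativeServing
    # custom resource that the Red Hat OpenShift Serverless operator manages.
    serving = custom_api.get_namespaced_custom_object(
        group="operator.knative.dev",
        version="v1beta1",
        namespace="knative-serving",
        plural="knativeservings",
        name="knative-serving",
    )

    # All conditions report the status "True" when the serverless component
    # is ready to run workloads.
    for condition in serving.get("status", {}).get("conditions", []):
        print(f"{condition['type']}: {condition['status']}")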

Step 3: Run model inferencing at the edge

As the AI developer, run model inferencing on your edge data by using the serverless processing that the edge administrator set up.

  1. Download the trained model from IBM Cloud Object Storage to your local development environment.
  2. Create a Knative-compliant container image that serves the model over HTTP, as in the sketch after this list.
  3. Deploy the image as a Knative service to Red Hat OpenShift Serverless in your Satellite cluster. You can use the Developer perspective in the Red Hat OpenShift web console, or use the kn command-line tool.
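
A container image is Knative-compliant when it listens for HTTP traffic on the port that the PORT environment variable names. As a minimal sketch, the following Flask app serves the model from Step 1 behind a hypothetical /predict route; the model file name and feature layout are assumptions that depend on your own model.

    import os

    import joblib
    from flask import Flask, jsonify, request

    # Load the model that you downloaded from IBM Cloud Object Storage and
    # copied into the container image at build time (hypothetical file name).
    model = joblib.load("model.joblib")

    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body such as {"features": [21.7, 0.93, 140.2]} that
        # matches the feature layout the model was trained on.
        features = request.get_json(force=True)["features"]
        prediction = model.predict([features])[0]
        return jsonify({"prediction": str(prediction)})

    if __name__ == "__main__":
        # Knative routes HTTP traffic to the port that is named in the PORT
        # environment variable, so the app must listen on that port.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))

After you build and push the image to a registry that your cluster can reach, a command such as kn service create model-inference --image <registry>/<image> deploys it as an autoscaled Knative service.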

Now you have a managed Satellite location that runs in your edge environment and performs on-demand model inferencing on your edge data through your trained model and Red Hat OpenShift Serverless.