Getting started with Databases for Elasticsearch

This tutorial guides you through the steps to quickly start using an IBM Cloud® Databases for Elasticsearch deployment by provisioning an instance, setting your Admin password and connecting to it.

Follow these steps to complete the tutorial:

  1. Choose your plan.

  2. Provision an instance.

  3. Set the admin password.

  4. Connect to your Databases for Elasticsearch instance.

Before you begin

You need an IBM Cloud account to provision a Databases for Elasticsearch instance.

Step 1: Choose your plan

Databases for Elasticsearch offers two different plans:

  • Databases for Elasticsearch Enterprise deploys the Basic version of Elasticsearch.

  • Databases for Elasticsearch Platinum deploys the Platinum version of Elasticsearch.

Both plans provide you with a fully managed and scalable Elasticsearch service, allowing you to focus on your applications and data rather than the underlying infrastructure.

Using APIs

Use the Cloud Databases API to work with your Databases for Elasticsearch instance. The Resource Controller API is used to provision an instance.

You will need an API key to perform actions via the API. Follow these steps to create an IBM Cloud API key that enables you to use the API to provision infrastructure into your account. You can create up to 20 API keys.

For security reasons, the API key is only available to be copied or downloaded at the time of creation. If the API key is lost, you must create a new API key.
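
For example, one way to create an API key is with the IBM Cloud CLI; in this sketch, the key name, description, and output file are placeholders that you choose:

ibmcloud iam api-key-create <KEY_NAME> \
     -d "API key for provisioning Databases for Elasticsearch" \
     --file <KEY_FILE>.json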

Step 2: Provision through the console

  1. Log in to the IBM Cloud console.

  2. Click the Databases for Elasticsearch service in the catalog.

  3. Follow these steps to provision a Databases for Elasticsearch instance.

  4. When your instance is provisioned, click the instance name to view more information.

Step 2: Provision through the CLI

You can provision a Databases for Elasticsearch instance by using the CLI. If you don't already have it, install the IBM Cloud CLI.

Follow these steps to provision a Databases for Elasticsearch instance.
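
As a minimal sketch, a provisioning command looks like the following; the instance name and region are placeholders, and the plan value is assumed to match your choice from Step 1:

ibmcloud resource service-instance-create <INSTANCE_NAME> databases-for-elasticsearch <PLAN> <REGION>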

Step 2: Provision through the resource controller API

Follow these steps to provision a Databases for Elasticsearch instance by using the Resource Controller API.
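
As a sketch, a provisioning request to the Resource Controller API looks like the following; the plan ID and resource group ID are values that you look up for your account, and the remaining fields are placeholders:

curl -X POST "https://resource-controller.cloud.ibm.com/v2/resource_instances" \
     -H "Authorization: Bearer <TOKEN>" \
     -H "Content-Type: application/json" \
     -d '{
       "name": "<INSTANCE_NAME>",
       "target": "<REGION>",
       "resource_group": "<RESOURCE_GROUP_ID>",
       "resource_plan_id": "<PLAN_ID>"
     }'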

Step 2: Provision through Terraform

You need an API key to perform actions via Terraform. Follow these steps to create an IBM Cloud API key that enables Terraform to provision infrastructure into your account. You can create up to 20 API keys.

For security reasons, the API key is only available to be copied or downloaded at the time of creation. If the API key is lost, you must create a new API key.

Once you have an API key, follow these steps to provision a Databases for Elasticsearch instance by using Terraform.
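
As a minimal sketch, assuming the IBM Cloud provider is already configured with your API key, a deployment can be described with the ibm_database resource; the instance name, plan, region, and resource group values are placeholders:

resource "ibm_database" "es_example" {
  name              = "<INSTANCE_NAME>"
  service           = "databases-for-elasticsearch"
  plan              = "<PLAN>"               # the plan that you chose in Step 1
  location          = "<REGION>"
  resource_group_id = "<RESOURCE_GROUP_ID>"
}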

Step 3: Set the admin password

The admin user

When you provision a Databases for Elasticsearch deployment, an admin user is automatically created.

Set the admin password before you use the admin user to connect to your deployment.

Set the admin password through the UI

Set your admin password through the UI by selecting your instance from the IBM Cloud Resource list. Then, select Settings. Next, select Change Database Admin password.

Set the admin password through the CLI

Use the cdb user-password command from the IBM Cloud CLI Cloud Databases plug-in to set the admin password.

For example, to set the admin password for your deployment, use the following command:

ibmcloud cdb user-password <INSTANCE_NAME_OR_CRN> admin <NEWPASSWORD>

Set the admin password through the API

You can use the id parameter obtained in the response to Step 2 above with the Set specified user's password endpoint to set the admin password.

curl -X PATCH -H "Authorization: Bearer <TOKEN>" \
     -H 'Content-Type: application/json' \
     -d '{"password":"newrootpasswordsupersecure21"}' \
      "https://api.<REGION>.databases.cloud.ibm.com/v5/ibm/deployments/<DEPLOYMENT_ID>/users/database/admin"

The id parameter needs to be URL-encoded for the above API call to work.
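
For example, one way to URL-encode the deployment ID (a CRN that contains colons and slashes) is with jq, assuming you have it installed:

jq -rn --arg id "<DEPLOYMENT_ID>" '$id|@uri'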

Setting the admin password through Terraform

The admin password is passed in as one of the database resource parameters in the Terraform script. There is no need for any further action.
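
For example, with the ibm_database resource used in the provisioning sketch, the password can be supplied at creation time through the adminpassword argument (placeholder value shown):

resource "ibm_database" "es_example" {
  # ... provisioning arguments as before ...
  adminpassword = "<NEWPASSWORD>"
}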

Step 4: Connect to your Databases for Elasticsearch instance

Connect to your deployment using Kibana, an open source tool that adds visualization capabilities to your Elasticsearch database. This tutorial runs Kibana in a Docker container by using the Kibana image from the Docker image repository.

Before you begin

To connect, Kibana needs the username, password, URL, and port for your Elasticsearch deployment. It also needs the Elasticsearch TLS certificate to access the database. To get this, copy the certificate information from the Endpoints section on the Overview page of your created Elasticsearch instance. Then, download the certificate to a local folder. You can use the name that is provided in the download, or your own file name.

Remember where you save the certificate on your file system. If you are running Kibana locally, not in Docker, then the certificate goes in $KIBANA_HOME/config/<filename>.

Set up Kibana

Before you run the Docker container that includes Kibana, create a configuration file in the same folder as the Elasticsearch certificate that you downloaded earlier. The configuration file contains some basic Kibana settings, as follows.

Create a YAML file called kibana.yml. Inside the file, you need the following Kibana configuration settings:

elasticsearch.ssl.certificateAuthorities: "/usr/share/kibana/config/cacert"
elasticsearch.username: "admin"
elasticsearch.password: "<password>"
elasticsearch.hosts: ["https://<hostname:port>"]
server.name: "kibana"
server.host: "0.0.0.0"

The first setting, elasticsearch.ssl.certificateAuthorities, is the location where Docker stores the Elasticsearch certificate. The certificate is placed in this location when you first run the container. You can change it to a location of your choice, but the example path is Kibana's configuration directory. Ensure that the certificate name within kibana.yml (in our example "cacert") matches the file name of the certificate that you downloaded earlier.

Next are elasticsearch.username and elasticsearch.password. Use the deployment's admin username and password. Be sure that you set the admin password before trying to connect. For elasticsearch.hosts, enter the deployment's hostname and port, separated by a colon (:).

Lastly, server.name is a human-readable name for the Kibana instance and server.host is the host of the backend server where you can connect to Kibana in your web browser.

These settings are just a simplified example to get started. For more information, see Configure Kibana.

If you are running Kibana locally, not in Docker, then the YAML file goes in $KIBANA_HOME/config/kibana.yml, where Kibana reads its configuration.

Run the Kibana Container

Now that the kibana.yml file is set up, use Docker to attach the YAML file and your certificate file to the Docker container, while pulling the <kibana_version> image from the Docker image repository.

Use an image with a version of Kibana that is compatible with the version of Elasticsearch that your deployment is running. Retrieve the Elasticsearch version from the https_endpoint API endpoint by using your preferred http client. For more information, see the Elasticsearch compatibility matrix.

Here is an example with curl. If you don't have the certificate installed, use the --insecure flag to disable peer verification. The <https_endpoint> can be found in your instance's Endpoints UI:

curl --cacert <path-to-cert> <https_endpoint>

Next, run the Docker command in your terminal to start the Kibana container.

docker container run -it --name kibana \
-v <path_to_config_folder_created_in_step_1>:/usr/share/kibana/config \
-p 5601:5601 docker.elastic.co/kibana/kibana:<kibana_version>

The Docker command attaches one volume with the -v flag. The configuration folder is mounted to the Kibana container at the path /usr/share/kibana/config/, which is the directory where Kibana looks for its configuration files.

  • The -p flag specifies the port that is exposed from the container, which is the port you use to access Kibana.
  • The Kibana version must correspond to the version of Elasticsearch that your deployment is running.

When you run the command from your terminal, it downloads the Kibana Docker image and runs Kibana. Once Kibana has connected to your Databases for Elasticsearch deployment and is running successfully, you see output similar to the following in your terminal.

log   [01:19:31.839] [info][status][plugin:<kibana_version>] Status changed from uninitialized to green - Ready
log   [01:19:31.925] [info][status][plugin:elasticsearch@<kibana_version>] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log   [01:19:32.120] [info][status][plugin:timelion@<kibana_version>] Status changed from uninitialized to green - Ready
log   [01:19:32.134] [info][status][plugin:console@<kibana_version>] Status changed from uninitialized to green - Ready
log   [01:19:32.147] [info][status][plugin:metrics@<kibana_version>] Status changed from uninitialized to green - Ready
log   [01:19:33.132] [info][status][plugin:elasticsearch@<kibana_version>] Status changed from yellow to green - Ready
log   [01:19:33.378] [info][listening] Server running at http://0.0.0.0:5601

If you don't want to see the output of Kibana in your terminal, use the -d flag to detach the container.

Visit http://0.0.0.0:5601 in your browser to see Kibana. 0.0.0.0 is the server.host in kibana.yml and 5601 is the port that is exposed from the container. Once you go to the URL, a pop-up window prompts you for your username and password. Use the admin credentials, or any other credentials that you created, to access your deployment. The credentials don't have to be the same username and password that you provided in the kibana.yml file.

Next steps

For more information, see the Elasticsearch documentation.

Looking for more tools for managing your databases and data? You can connect to your deployment with the IBM Cloud CLI, the Cloud Databases CLI plug-in, or the Cloud Databases API.

If you plan to use Databases for Elasticsearch for your applications, check out Connecting an external application and Connecting an IBM Cloud application.

To ensure the stability of your applications and your database, check out High-availability and Performance.