Performance
IBM Cloud® Databases for Redis deployments can be manually scaled to your usage or configured to autoscale under certain resource conditions. If you are tuning the performance of your deployment, consider a few factors.
Monitoring your deployment
Databases for Redis deployments offer an integration with the IBM Cloud® Monitoring service for basic monitoring of resource usage on your deployment. Many of the available metrics, like disk usage and IOPS, are presented to help you configure autoscaling on your deployment. Observing trends in your usage and configuring the autoscaling to respond to them can help alleviate performance problems before your databases become unstable due to resource exhaustion.
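As a supplement to the Monitoring integration, you can spot-check live resource usage with the metrics that Redis itself reports through the INFO command. The following is a minimal sketch using the redis-py client; the hostname, port, and credentials are placeholders for your deployment's connection details.

```python
import redis

# Placeholder connection details; substitute the values from your
# deployment's service credentials.
r = redis.Redis(
    host="example.databases.appdomain.cloud",
    port=30000,
    password="REDIS_PASSWORD",
    ssl=True,
)

# INFO reports live usage that complements the Monitoring metrics.
info = r.info()
print("used_memory_human:", info["used_memory_human"])
print("instantaneous_ops_per_sec:", info["instantaneous_ops_per_sec"])
print("connected_clients:", info["connected_clients"])
```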
Memory policies
By default, deployments are configured with a noeviction policy. All data is kept in memory until the maxmemory limit is reached, and Redis returns an error if the limit is exceeded. The maxmemory setting is set to 80% of a data node's available memory, so the node doesn't run out of system resources.
You can scale the amount of memory to accommodate more data, and you can configure the maxmemory setting to tune memory usage. The Redis documentation has good information on memory behavior and on tuning maxmemory.
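If the CONFIG command is permitted on your deployment, you can confirm the current limit and eviction policy and compare them with actual usage. This sketch assumes the same placeholder connection details as above.

```python
import redis

# Placeholder connection details for your deployment.
r = redis.Redis(host="example.databases.appdomain.cloud", port=30000,
                password="REDIS_PASSWORD", ssl=True)

# Read the configured memory limit and eviction policy.
maxmemory = int(r.config_get("maxmemory")["maxmemory"])
policy = r.config_get("maxmemory-policy")["maxmemory-policy"]

# Compare with current memory usage from INFO.
used = r.info("memory")["used_memory"]

print(f"maxmemory: {maxmemory} bytes, policy: {policy}")
if maxmemory:
    print(f"used_memory: {used} bytes ({used / maxmemory:.1%} of the limit)")
```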
You can also configure your deployment to use Redis as a cache, allowing Redis to evict data from memory when the memory limit is reached.
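Whether a full database rejects writes or evicts older keys depends on the configured policy. The following is a minimal sketch of handling the default noeviction case in an application, again using redis-py and placeholder connection details.

```python
import redis

r = redis.Redis(host="example.databases.appdomain.cloud", port=30000,
                password="REDIS_PASSWORD", ssl=True)  # placeholder details

try:
    r.set("session:1234", "payload")
except redis.exceptions.ResponseError as err:
    # Under the default noeviction policy, writes fail once maxmemory
    # is reached, typically with an "OOM command not allowed" error.
    # Under a cache policy such as allkeys-lru, Redis evicts keys instead.
    print(f"Write rejected: {err}")
```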
Disk IOPS
The number of Input-Output Operations per Second (IOPS) is limited by the type of storage volume. Storage volumes for Databases for Redis deployments are provisioned on Block Storage Endurance Volumes in the 10 IOPS per GB tier. By default, a deployment starts with persistence enabled, so writes to the database also generate disk I/O. Very busy databases can exceed the IOPS available for their disk size, and increasing the disk size can alleviate the performance bottleneck.
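Because provisioned IOPS scales linearly with allocated disk at that tier, a rough sizing estimate is straightforward. A small worked example, assuming the 10 IOPS per GB figure above:

```python
# Rough estimate of IOPS available to a deployment, based on the
# 10 IOPS per GB Endurance tier described above.
IOPS_PER_GB = 10

def provisioned_iops(disk_gb: int) -> int:
    return disk_gb * IOPS_PER_GB

for disk_gb in (10, 128, 512):
    print(f"{disk_gb} GB of disk -> {provisioned_iops(disk_gb)} IOPS")
```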