Getting started with File Storage for Classic

IBM Cloud® File Storage for Classic is persistent, fast, and flexible NFS-based network-attached storage. In this network-attached storage (NAS) environment, you have total control over how your file shares function and perform. File Storage for Classic shares can be connected to up to 64 authorized devices over routed TCP/IP connections for resiliency.

For more information about using File Storage for Classic with the IBM Cloud® Kubernetes Service, see Storing data on classic IBM Cloud File Storage.

Before you begin

File Storage for Classic volumes can be provisioned from 20 GB to 12 TB with two options:

  • Provision Endurance tiers that feature pre-defined performance levels and other features like snapshots and replication.
  • Build a high-powered Performance environment with allocated input/output operations per second (IOPS).

For more information about the File Storage for Classic offering, see What is IBM Cloud File Storage.

Provisioning considerations

IO size

The IOPS value for both Endurance and Performance is based on a 16-KB IO size with a 50/50 read/write, 50/50 random/sequential workload. A 16-KB block is the equivalent of one write to the volume.

The IO size that is used by your application directly impacts storage performance. If the IO size that your application uses is smaller than 16 KB, the IOPS limit is reached before the throughput limit. Conversely, if the IO size is larger than 16 KB, the throughput limit is reached before the IOPS limit.

Table 1 shows examples of how IO size and IOPS affect the throughput. Average IO size x IOPS = Throughput in MB/s.
Table 1. IO size, IOPS, and resulting throughput

IO Size (KB)    IOPS     Throughput (MB/s)
4               1,000    4
8               1,000    8
16              1,000    16
32              500      16
64              250      16
128             128      16
512             32       16
1,024           16       16
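
To make the formula concrete, the following minimal Python sketch reproduces Table 1, treating 1 MB as 1,000 KB; rows with IO sizes larger than 16 KB round to the table's 16 MB/s. The workload pairs are the table's illustrative values, not output from any IBM Cloud API.

```python
# A minimal sketch that reproduces Table 1 by applying
# Average IO size x IOPS = Throughput (treating 1 MB as 1,000 KB).
# The workload rows are the table's illustrative values, not API output.

def throughput_mbps(io_size_kb: float, iops: int) -> float:
    """Approximate throughput in MB/s for a given IO size and IOPS."""
    return io_size_kb * iops / 1000

workloads = [(4, 1000), (8, 1000), (16, 1000), (32, 500),
             (64, 250), (128, 128), (512, 32), (1024, 16)]

for io_size_kb, iops in workloads:
    mbps = throughput_mbps(io_size_kb, iops)
    print(f"{io_size_kb:>5} KB x {iops:>5} IOPS ~= {mbps:.0f} MB/s")
```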

Authorized hosts

Another factor to consider is the number of hosts that are using your volume. When only a single host is accessing the volume, it can be difficult to realize the maximum available IOPS, especially at extreme IOPS counts (in the tens of thousands).

The maximum IOPS for a file storage share is 48,000. If your workload requires an IOPS count that high, configure at least two servers to access your volume to avoid a single-server bottleneck.

You can authorize up to 64 servers to access the file share. This limit includes all subnet, host, and IP authorizations combined. For more information about increasing this limit, see the FAQs.

Network connection

The speed of your Ethernet connection must be faster than the expected maximum throughput of your volume. Generally, don't expect to saturate your Ethernet connection beyond 70% of the available bandwidth. For example, a volume with 6,000 IOPS at a 16-KB IO size can handle approximately 94 MBps of throughput. If you have a 1-Gbps Ethernet connection to your volume, that connection becomes the bottleneck when your servers attempt to use the maximum available throughput, because 70% of the theoretical limit of a 1-Gbps connection (125 MBps) allows for only about 88 MBps.
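
To make the arithmetic explicit, here is a minimal Python sketch of the same check. The 70% usable-bandwidth rule of thumb and the 16-KB IO size come from the guidance above; the IOPS count and link speed are the example's illustrative values.

```python
# A minimal sketch of the bandwidth check described above. The 70%
# usable-bandwidth rule of thumb comes from this section's guidance;
# the IOPS count and link speed are the example's illustrative values.

def required_throughput_mbps(iops: int, io_size_kb: float) -> float:
    """Throughput the volume can drive, in MB/s (1 MB = 1,024 KB)."""
    return iops * io_size_kb / 1024

def usable_link_mbps(link_gbps: float, utilization: float = 0.70) -> float:
    """Realistic usable bandwidth: 1 Gbps = 125 MB/s theoretical."""
    return link_gbps * 125 * utilization

need = required_throughput_mbps(iops=6000, io_size_kb=16)  # ~94 MB/s
have = usable_link_mbps(link_gbps=1.0)                     # ~88 MB/s

print(f"Volume peak: ~{need:.0f} MB/s; usable link: ~{have:.0f} MB/s")
if need > have:
    print("The Ethernet connection is the bottleneck.")
```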

To achieve the maximum IOPS, adequate network resources need to be in place. Other considerations include private network usage outside of storage, host-side and application-specific tunings, and other settings.

Isolate storage traffic from other traffic types, and do not direct it through firewalls or routers. Keeping storage traffic in a dedicated VLAN also helps prevent MTU mismatches when Jumbo frames are enabled. For more information, see Enabling Jumbo Frames.

Storage traffic is included in the total network usage of Public Virtual Servers. For more information about the limits that might be imposed by the service, see the Virtual Server Documentation.

NFS version

Both NFSv3 and NFSv4.1 are supported in the IBM Cloud® environment. Network File System (NFS) is a networking protocol for distributed file sharing. It allows remote hosts to mount file systems over a network and interact with them as if they were mounted locally.

Use the NFSv3 protocol when possible. NFSv3 supports safe asynchronous writes and is more robust at error handling than the earlier NFSv2. It supports 64-bit file sizes and offsets, which allows clients to access more than 2 GB of file data. NFSv3 also natively supports no_root_squash, which allows root clients to retain root permissions on the NFS share.

When File Storage for Classic is used in a VMware® deployment, NFSv4.1 might be the better choice for your implementation. For more information about the different features of each version and what is supported by VMware®, see NFS Protocols and ESXi.

Submitting your order

When you're ready to submit your order, you can place it in the console, from the CLI, with the API, or with Terraform. For more information about provisioning File Storage for VMware® deployments, see the architecture guide.
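
If you script your orders, one option is the community SoftLayer Python client (pip install SoftLayer), which wraps the classic infrastructure API. The following is a minimal sketch, assuming your credentials are exported as SL_USERNAME and SL_API_KEY; the data center, size, and tier are illustrative, and you should verify parameter names against the client's documentation.

```python
# A minimal ordering sketch using the community SoftLayer Python client
# (pip install SoftLayer), which wraps the classic infrastructure API.
# Assumes SL_USERNAME and SL_API_KEY are exported in the environment.
# The data center, size, and tier below are illustrative values only.
import SoftLayer

client = SoftLayer.create_client_from_env()
file_mgr = SoftLayer.FileStorageManager(client)

receipt = file_mgr.order_file_volume(
    storage_type='endurance',  # or 'performance' with an iops= argument
    location='dal09',          # illustrative target data center
    size=100,                  # size in GB (20 GB to 12 TB)
    tier_level=4,              # Endurance IOPS-per-GB tier
)
print('Order placed:', receipt.get('orderId'))
```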

By default, you can provision a combined total of 700 Block and File Storage for Classic volumes. For more information, see Managing storage limits.

Connecting and configuring your new storage

When your provisioning request is complete, authorize your hosts to access the new storage and configure your connection. Depending on your host's operating system, follow the appropriate link.

Mounting File Storage for Classic shares on Windows OS is not supported.
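
Host authorization can also be scripted. The sketch below uses the same community SoftLayer Python client as the ordering example; the volume and virtual server IDs are placeholders for your own resources.

```python
# A minimal sketch that authorizes a classic virtual server to access a
# file volume, using the same community SoftLayer Python client as the
# ordering example. Both IDs below are placeholders for your resources.
import SoftLayer

client = SoftLayer.create_client_from_env()
file_mgr = SoftLayer.FileStorageManager(client)

file_mgr.authorize_host_to_volume(
    123456789,                   # placeholder file volume ID
    virtual_guest_ids=[987654],  # placeholder virtual server ID(s)
)
```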

Managing your storage

You can manage various aspects of your File Storage for Classic, such as host authorizations and cancellations, in the console, from the CLI, with the API, or with Terraform. For more information, see Managing File Storage for Classic.
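
As a starting point for scripted management, the same community SoftLayer Python client can list your shares and fetch a share's details; the volume ID below is a placeholder.

```python
# A minimal management sketch with the same community SoftLayer Python
# client: list your file volumes, then fetch one volume's details.
# The volume ID is a placeholder.
import SoftLayer

client = SoftLayer.create_client_from_env()
file_mgr = SoftLayer.FileStorageManager(client)

for vol in file_mgr.list_file_volumes():
    print(vol['id'], vol.get('username'), vol.get('capacityGb'))

details = file_mgr.get_file_volume_details(123456789)  # placeholder ID
print(details.get('serviceResourceBackendIpAddress'))
```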

You can keep your data in sync in two different locations by using replication. Replication uses one of your snapshot schedules to automatically copy snapshots to a destination volume in a remote data center. The copies can be recovered in the remote site if a catastrophic event occurs or your data becomes corrupted. For more information, see Replication and Disaster Recovery – Replicating Data.
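
If you manage snapshots and replication programmatically, the same client exposes both. The schedule, retention count, volume ID, and target data center below are illustrative, so verify the calls against the client's documentation before relying on them.

```python
# A hedged sketch of scripted snapshots and replication with the same
# community SoftLayer Python client. The schedule, retention, volume ID,
# and target data center are illustrative; verify the parameters against
# the client's documentation before relying on them.
import SoftLayer

client = SoftLayer.create_client_from_env()
file_mgr = SoftLayer.FileStorageManager(client)

# Keep 7 daily snapshots, taken at 02:00.
file_mgr.enable_snapshots(123456789, 'DAILY', retention_count=7,
                          minute=0, hour=2, day_of_week='SUNDAY')

# Replicate to a remote data center using the DAILY snapshot schedule.
file_mgr.order_replicant_volume(123456789, 'DAILY', 'wdc04')
```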