FAQs for File Storage for Classic
How can I tell which of my File Storage for Classic volumes are encrypted?
Look at your list of File Storage for Classic volumes in the console. Encrypted volumes display a lock icon next to the volume name.
How can I find the correct mount point for my File Storage for Classic?
All encrypted File Storage for Classic volumes that are provisioned in the enhanced data centers have a different mount point than nonencrypted volumes. To ensure that you're using the correct mount point, view the mount point information on the Volume Details page in the console. You can also retrieve the correct mount point through an API call: `SoftLayer_Network_Storage::getNetworkMountAddress()`.
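For example, you can invoke that method through the SoftLayer REST API. The following sketch assumes a placeholder volume ID (12345678) and that your user name and API key are in the `SL_USER` and `SL_APIKEY` environment variables:

```sh
# Retrieve the NFS mount address for a volume through the SoftLayer REST API.
curl -u "$SL_USER:$SL_APIKEY" \
  "https://api.softlayer.com/rest/v3/SoftLayer_Network_Storage/12345678/getNetworkMountAddress"
```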
How many volumes can I provision?
By default, you can provision a combined total of 700 Block and File Storage for Classic volumes. To increase your limit, contact Support. For more information, see Managing storage limits.
How many server instances can share the use of a provisioned File Storage for Classic volume?
The default limit for number of authorizations per file volume is 64. The limit includes all subnet, host, and IP authorizations combined. To increase this limit, contact Support. For more information, see Creating support cases.
How many File Storage for Classic volumes can be attached to a single host?
That depends on what the host operating system can handle; it's not something that IBM Cloud® limits. Refer to your operating system documentation for limits on the number of file shares that can be mounted.
How many files and directories are allowed for specific file volume sizes? What is the maximum number of inodes allowed per volume size?
The number of files that a volume can contain is determined by how many inodes it has. An inode is a data structure that contains information about a file. Volumes have both private and public inodes: public inodes are used for files that are visible to the customer, and private inodes are used for files that the storage system uses internally.

The maximum number of files setting is 2 billion. However, this maximum value can be configured only on volumes of 7.8 TB or larger. The maximum number of inodes that can be configured on a volume is calculated by taking the total allocated volume size in KB and dividing it by 4; see the sketch after the following table. Any volume of 9,000 GB or larger reaches the maximum limit of 2,040,109,451 inodes.
| Volume Size | Inodes |
|---|---|
| 20 GB | 4,980,731 |
| 40 GB | 9,961,461 |
| 80 GB | 19,922,935 |
| 100 GB | 24,903,679 |
| 250 GB | 62,259,189 |
| 500 GB | 124,518,391 |
| 1,000 GB | 249,036,795 |
| 2,000 GB | 498,073,589 |
| 3,000 GB | 747,110,397 |
| 4,000 GB | 996,147,191 |
| 8,000 GB | 1,992,294,395 |
| 12,000 GB | 2,040,109,451 |
| 16,000 GB | 2,040,109,451 |
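The following sketch illustrates the sizing rule. The assumption that the gap between the raw division and the table entries comes from privately reserved inodes is an inference from the description, not a documented figure.

```sh
# Theoretical maximum inodes for a 1,000 GB volume: allocated size in KB, divided by 4.
SIZE_GB=1000
echo $(( SIZE_GB * 1024 * 1024 / 4 ))  # prints 262144000; the table lists 249,036,795
                                       # because part of the pool is reserved as private inodes
```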
I ordered a File Storage for Classic volume in the wrong data center. Is it possible to move or migrate it to another data center?
You need to order a new File Storage for Classic share in the correct data center, and then cancel the share that you ordered in the incorrect location. You can create a duplicate of your share and cancel the parent share. For more information, see Creating and managing duplicate volumes.
When the share is canceled, the request is followed by a 24-hour reclaim wait period. You can still see the storage volume in the console during those 24 hours. Billing for the volume stops immediately. When the reclaim period expires, the data is destroyed and the volume is removed from the console, too.
How is IOPS measured?
IOPS is measured based on a load profile of 16-KB blocks with random 50 percent reads and 50 percent writes. Workloads that differ from this profile might experience poor performance. To improve performance, you can try adjusting the host settings or enabling Jumbo frames.
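You can approximate this load profile with a benchmarking tool such as `fio`. In the following sketch, the mount point `/mnt/classic`, the file size, and the queue depth are assumptions; adjust them to your environment:

```sh
# Random 50/50 read/write workload with 16 KB blocks against a mounted share.
fio --name=classic-profile --directory=/mnt/classic \
    --ioengine=libaio --direct=1 --rw=randrw --rwmixread=50 \
    --bs=16k --iodepth=64 --size=1G --runtime=60 --time_based
```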
What happens when I use a smaller IO size for measuring performance?
Maximum IOPS can be reached even if you use smaller IO sizes. However, the throughput is lower in that case. For example, a volume with 6000 IOPS has the following throughput at various IO sizes:
- 16 KB * 6000 IOPS == ~93.75 MB/sec
- 8 KB * 6000 IOPS == ~46.88 MB/sec
- 4 KB * 6000 IOPS == ~23.44 MB/sec
Is the allocated IOPS enforced by instance or by volume?
IOPS is enforced at the volume level. Said differently, two hosts connected to a volume with 6000 IOPS share that 6000 IOPS.
Does the volume need to be pre-warmed to achieve the expected throughput?
Pre-warming is not needed. You can observe the specified throughput immediately upon provisioning the volume.
Can more throughput be achieved if a faster Ethernet connection is used?
Throughput limits are set at the volume level. That limit cannot be increased by using a faster Ethernet connection. However, with a slower Ethernet connection, your bandwidth can be a potential bottleneck.
Do firewalls and security groups impact performance?
It's best to run storage traffic on a VLAN, which bypasses the firewall. Running storage traffic through software firewalls increases latency and adversely affects storage performance.
How do I route File Storage for Classic traffic to its own VLAN interface and bypass a firewall?
To enact this good practice, complete the following steps. A command-level sketch of the Linux® case follows the list.

1. Provision a VLAN in the same data center as the host and the File Storage for Classic device.
2. Provision a secondary private subnet on the new VLAN.
3. Trunk the new VLAN to the private interface of the host. This action momentarily disrupts network traffic on the host while the VLAN is being trunked.
4. Create a network interface.
   - On a Linux® host, create an 802.1Q VLAN interface. Choose one of the unused secondary IP addresses from the newly trunked VLAN and assign that IP address, subnet mask, and gateway to the new 802.1Q interface.
   - In VMware®, create a new VMkernel network interface (vmk) and assign an unused secondary IP address, subnet mask, and gateway IP from the newly trunked VLAN to the new vmk interface.
5. Add a new persistent static route on the host to the target NFS subnet.
6. Authorize the new IP address to access the storage.
7. For mounting instructions, follow the appropriate link for your host's operating system.
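The following is a minimal Linux® sketch of steps 4 and 5. The interface name `eth0`, VLAN ID 1234, and all IP addresses are placeholders; substitute the values from your own VLAN and subnet:

```sh
# Step 4: create the 802.1Q VLAN interface and assign the unused secondary IP.
ip link add link eth0 name eth0.1234 type vlan id 1234
ip addr add 10.123.45.6/26 dev eth0.1234
ip link set dev eth0.1234 up

# Step 5: route traffic for the target NFS subnet through the VLAN's gateway.
# Persist the route with your distribution's network configuration so that it
# survives reboots.
ip route add 10.200.30.0/24 via 10.123.45.1 dev eth0.1234
```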
What performance latency can be expected from the File Storage for Classic?
Target latency within the storage is less than 1 ms. Because the storage is connected to compute instances over a shared network, the exact latency depends on the network traffic during the operation.
What happens to the data when File Storage for Classic shares are deleted?
IBM Cloud® File Storage for Classic presents file shares to customers on physical storage that is wiped before any reuse.
When you delete a File Storage for Classic volume, that data immediately becomes inaccessible. All pointers to the data on the physical disk are removed. If you later create a new volume in the same or another account, a new set of pointers is assigned. The account can't access any data that was on the physical storage because those pointers are deleted. When new data is written to the disk, any inaccessible data from the deleted volume is overwritten.
IBM guarantees that deleted data cannot be accessed and that it is eventually overwritten and eradicated. Further, when you delete a storage volume, the share must be overwritten before the storage is made available again, either to you or to another customer.
When IBM decommissions a physical drive, the drive is destroyed before disposal. The decommissioned drives are unusable and any data on them is inaccessible.
Customers with special compliance requirements, such as NIST 800-88 Guidelines for Media Sanitization, can perform the data sanitization procedure before they delete their storage.
Why is the Cancel action unavailable in the console?
The cancellation process for this storage device is in progress, so the Cancel action is no longer available. The volume remains visible for at least 24 hours until it is reclaimed. The UI shows that the volume is inactive and displays the status "Cancellation pending". The minimum 24-hour waiting period gives you a chance to void the cancellation request if needed.
Which NFS versions are supported?
Both NFSv3 and NFSv4.1 are supported in the IBM Cloud® environment. NFSv4.2 is not supported.
Use the NFSv3 protocol when possible. NFSv3 supports safe asynchronous writes and is more robust at error handling than NFSv2. It also supports 64-bit file sizes and offsets, so clients can access files larger than 2 GB.
NFSv3 natively supports `no_root_squash`, which allows root clients to retain root permissions on the NFS share. You can enable this feature in NFSv4.1 by editing the domain information and running the `rpcidmapd` service or a similar service. For more information, see Implementing no_root_squash for NFS.
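A typical NFSv3 mount looks like the following sketch. The mount target host name and share path are placeholders; use the values from your volume's details page:

```sh
# Mount the share over NFSv3 with commonly used options.
mkdir -p /mnt/classic
mount -t nfs -o vers=3,hard,rsize=65536,wsize=65536 \
  fsf-example.service.softlayer.com:/IBM01SEV1234567_1 /mnt/classic
```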
When File Storage for Classic is used in a VMware® deployment, NFSv4.1 might be the better choice for your implementation. For more information about the different features of each version and what is supported by VMware®, see NFS Protocols and ESXi.
Can VAAI and HW acceleration be enabled in our VMware deployments?
No. Currently, vStorage APIs for Array Integration (VAAI) and hardware acceleration are not supported.
What happens to the drives that are decommissioned from the cloud data center?
When drives are decommissioned, IBM destroys them before they are disposed of. The drives become unusable. Any data that was written to that drive becomes inaccessible.
What is the difference between Controlled Failover and Immediate Failover?
Controlled Failover performs one last sync before it breaks the mirror relationship. Immediate Failover breaks the mirror immediately and activates the replica volume.
My storage appears offline or read-only. Why did it happen and how do I fix it?
In a couple of scenarios, a host (bare metal or VM) might briefly lose its connection to the storage; to avoid data corruption, the host then treats that storage as read-only. Most of the time, the loss of connectivity is network-related, but the storage remains read-only from the host's perspective even after the network connection is restored.
This issue can be observed with virtual drives of VMs on a network-attached VMware® datastore (NFS protocol). To resolve the issue, confirm that the network path between the storage and the host is clear and that no maintenance or outage is in progress. Then, unmount and remount the storage volume. If the volume is still read-only, restart the host.
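On a Linux® host, the remount step looks like the following sketch. The mount point is an assumption, and the commands rely on an existing `/etc/fstab` entry for the share:

```sh
# Unmount and remount the share that went read-only.
umount /mnt/classic
mount /mnt/classic
# Verify that the share is writable again.
touch /mnt/classic/.rw-test && rm /mnt/classic/.rw-test
```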
For mounting instructions, see the following topics.
- Mounting File Storage for Classic in CentOS
- Mounting File Storage for Classic on ESXi hosts
- Mounting File Storage for Classic on Red Hat Linux®
- Mounting File Storage for Classic on Ubuntu
To prevent this situation from recurring, consider the following actions:
- Increasing disk timeout values. For more information, see VMware® KB - Increasing the disk timeout values for a Linux® 2.6 virtual machine.
- Adding guest OS tunings. For more information, see NetApp's recommendations for guest OS tunings for a VMware® vSphere deployment.
- Reconfiguring host systems that use NFSv4.1 to use NFSv3 for increased resilience during maintenance operations.
- Discontinuing session trunking on host systems that run VMware® ESXi. Session trunking is not supported and is known to cause disruptions.
I expanded the volume size of my File Storage for Classic by using the Cloud console, but the size on my server is still the same. How do I fix it?
To see the expanded volume size, unmount and remount your existing File Storage for Classic share on your server. In a VMware® implementation, rescan storage to refresh the VMware® datastore and show the new volume size.
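After you remount the share on a Linux® host, you can confirm the new capacity as in the following sketch (the mount point is an assumption):

```sh
df -h /mnt/classic   # reports the expanded capacity of the share
```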
How do I reconnect storage after a chassis swap?
Complete the following tasks to reconnect storage after a chassis swap.
- Remove the authorization (revoke access) from the storage device, and then authorize the host again.
- Discover the storage devices again, with the new credentials that were gained from the reauthorization.
For more information, see Managing File Storage for Classic.
How do I disconnect my storage device from a host?
Complete the following steps to disconnect a volume from a host. A command-level sketch follows the list.
- Unmount the device.
- Revoke the host's access to the storage device in the Cloud console.
- Remove the auto mount entries for the NFS connection.
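A minimal Linux® sketch, assuming that the share is mounted at `/mnt/classic` and auto-mounted through an `/etc/fstab` entry:

```sh
# Unmount the share.
umount /mnt/classic
# Comment out the share's /etc/fstab entry so that it is not remounted at boot
# (sed keeps a backup of the file as /etc/fstab.bak).
sed -i.bak '\#/mnt/classic#s/^/#/' /etc/fstab
```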
How do endurance and performance storage differ?
Endurance and Performance are provisioning options that you can select for storage devices. In short, the Endurance option offers predefined IOPS tiers, whereas the Performance option lets you fine-tune the IOPS level. The same devices are used but are delivered with different options. For more information, see File Storage Features.
Can I connect a File Storage for Classic share to Windows?
No. You cannot mount IBM Cloud® File Storage for Classic shares on Microsoft Windows. NFS in a Windows environment is not supported by IBM Cloud®.
File Storage for Classic shares can be mounted on Linux® operating systems or as a VMware® datastore on ESXi hosts. For mounting instructions, see the topics that are listed earlier in these FAQs.
Can I mount a single storage device to multiple hosts within IBM Cloud?
Yes. NFS is a file-aware protocol, so multiple hosts can mount and use the same volume at the same time.
Can I increase inodes for my NFS volume?
Typically, when volumes are provisioned, they are allotted the maximum inode count for the size that you ordered. The maximum inode count grows automatically as the volume grows. If the inode count does not increase after you expand a volume, submit a support case.
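To check the inode usage of a mounted share, you can use `df` as in the following sketch (the mount point is an assumption):

```sh
df -i /mnt/classic   # shows total, used, and free inodes for the share
```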
I am unable to upgrade storage. What can affect the ability to upgrade or expand storage?
The following situations can affect the ability to upgrade or expand storage.
- If the original volume is on the Endurance 0.25 IOPS/GB tier, the IOPS tier can't be updated.
- The permissions that you have in the Cloud console can be a factor. For more information, see the topics within User roles and permissions.
Are File Storage for Classic volumes thin or thick provisioned?
All Block and File Storage for Classic volumes are thin-provisioned. This behavior cannot be changed.
My billing ID changed. What does this mean?
You might notice that your storage volumes are now billed as "Endurance Storage Service" or "Performance Storage Service" instead of "Enterprise Storage". You might also have new options in the console, such as the ability to adjust IOPS or increase capacity. IBM Cloud® strives to continuously improve storage capabilities. As hardware gets upgraded in the data centers, storage volumes that reside in those data centers are also upgraded to use all enhanced features. The price that you pay for your storage volume does not change with this upgrade.
How durable is File Storage for Classic?
When you store your data in File Storage for Classic, it's durable, highly available, and encrypted. The durability target for a single Availability zone is 99.999999999% (11 9's). For more information, see Availability and Durability of File Storage for Classic.
What's the average uptime for File Storage for Classic?
When you store your data in File Storage for Classic, it's durable, highly available, and encrypted. File Storage is built upon best-in-class, proven, enterprise-grade hardware and software to ensure high availability and uptime. To meet the availability target of 99.999% (five 9's), the data is stored redundantly across multiple physical disks on HA-paired nodes. Each storage node has multiple paths to its own solid-state drives and to its partner node's SSDs. This setup protects against path failure and also against controller failure because a node can seamlessly access its partner's disks. For more information, see Availability and Durability of File Storage for Classic.
Can I get storage performance metrics (IOPS or latency) from the Support team?
IBM Cloud® does not provide storage performance IOPS and latency metrics. Customers are expected to monitor their own File Storage for Classic devices by using their choice of third-party monitoring tools.
The following utilities are examples of tools that you can use to check performance statistics.
- `sysstat` - System performance tools for the Linux® operating system.
- `typeperf` - Windows command that writes performance data to the command window or to a log file.
- `esxtop` - Command-line tool that gives administrators real-time information about resource usage in a VMware® vSphere environment. It can monitor and collect data for all system resources: CPU, memory, disk, and network.
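For example, with the sysstat tools installed on a Linux® host, you might sample NFS activity as in the following sketch. The 5-second interval is arbitrary:

```sh
# Report network file system (NFS) activity every 5 seconds.
# (nfsiostat, from the nfs-utils package, is an alternative that reports
# per-mount NFS operation rates and average round-trip times.)
iostat -n 5
```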
What is the difference between a replica volume, a dependent duplicate volume, and an independent duplicate volume?
You can create a replica or a duplicate volume by using a snapshot of your volume. Replication and cloning use one of your snapshots to copy data to a destination volume. However, that is where the similarities end.
Replication keeps your data in sync in two different locations. Only one volume of the pair (primary volume or replica volume) can be active at a time. The replication process automatically copies information from the active volume to the inactive volume based on the replication schedule. For more information about replica volumes, see Replicating data.
Duplication creates a copy of your volume based on a snapshot in the same availability zone as the parent volume. The duplicate volume inherits the capacity and performance options of the original volume by default and has a copy of the data up to the point-in-time of a snapshot. The duplicate volume can be dependent or independent from the original volume, and it can be manually refreshed with data from the parent volume. You can adjust the IOPS or increase the volume size of the duplicate without any effect on the parent volume.
- A dependent duplicate does not go through the conversion to become independent, and it can be refreshed at any time after it is created. It locks the original snapshot so that the snapshot cannot be deleted while the dependent duplicate exists. The parent volume cannot be canceled while the dependent duplicate volume exists. If you want to cancel the parent volume, you must first either cancel the dependent duplicate or convert it to an independent duplicate.
- An independent duplicate is superior to a dependent duplicate in most regards, but it cannot be refreshed immediately after creation because of the lengthy conversion process, which can take several hours based on the size of the volume. For example, it might take up to a day for a 12-TB volume. However, after the separation process is complete, the data can be manually refreshed by using another snapshot of the original parent volume.
For more information about duplicates, see Creating and managing duplicate volumes.
| Feature | Replica | Dependent duplicate | Independent duplicate |
|---|---|---|---|
| Created from a snapshot | Yes | Yes | Yes |
| Location of copied volume | Remote Availability Zone | Same Availability Zone | Same Availability Zone |
| Supports failover | Yes | No | No |
| Different size and IOPS | No | Yes | Yes |
| Auto-synced with parent volume | Yes | No | No |
| On-demand refresh from parent volume | No | Yes [1] | Yes [2] |
| Separated from parent volume | No | No | Yes |
How long does it take to convert a dependent duplicate into an independent volume?
The conversion process can take some time to complete; the bigger the volume, the longer the conversion takes. For a 12-TB volume, it might take 24 hours. You can check on the progress in the console or from the CLI.
- In the console, go to Classic Infrastructure, click Storage > File Storage for Classic, and locate the volume in the list. The conversion status is displayed on the Overview page.
- From the CLI, use the following command.

```sh
slcli file duplicate-convert-status <dependent-vol-id>
```
The output looks similar to the following example.

```
$ slcli file duplicate-convert-status 370597202
Username           Active Conversion Start Timestamp   Completed Percentage
SL02SEVC307608_74  2022-06-13 14:59:17                 90
```