Introduction

Storage is an integral part of any infrastructure; it is used to store the files backing your virtual machines. Several types of storage can be incorporated into a vSphere infrastructure, distinguished by factors such as the type of disks used, the storage protocol, and the connection type. The most common way to refer to a type of storage presented to a VMware environment is by the protocol and connection type in use.

VMware supports the following types of storage based on the protocol and connection type in use:

  • Fibre Channel (FC) storage: This connects over the FC SAN fabric network. It uses the FC protocol to encapsulate SCSI commands. Hosts connect to the FC network using an FC Host Bus Adapter (FC-HBA). At the core of the FC network are the fabric switches that connect the hosts and storage arrays to the Fibre Channel network.
  • FC over Ethernet (FCoE): This connects over an Ethernet network. Hosts connect using a Converged Network Adapter (CNA). FC frames are encapsulated in Ethernet frames. FCoE does not use TCP/IP for transporting FC frames. FCoE is gaining prominence in most modern data centers implementing a converged infrastructure.
  • Network Attached Storage (NAS): This connects over the IP network and hence is easier to implement in an existing infrastructure. Unlike FC and FCoE, this is not a lossless implementation. As the SCSI commands are sent over the TCP/IP network, they are prone to packet loss for various reasons. Although this behavior does not break anything, it can impact performance when compared to FC or FCoE.
  • iSCSI: Internet SCSI allows you to send SCSI commands over an IP network to a storage system that supports the use of this protocol.
  • Network File System (NFS): NFS is a distributed filesystem protocol that allows you to share access to files over the IP network. Unlike iSCSI, FC, FCoE, or DAS, this is not a block storage protocol. VMware supports NFS versions 3 and 4.1.
  • Direct Attached Storage (DAS): This is used for local storage.

Keep in mind that FC, FCoE, and iSCSI present block storage devices to ESXi, whereas NFS presents file storage. The key difference here is that block storage is presented in a raw format with no filesystem on it, while file storage is simply a network share mounted from an already existing filesystem.
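The transport and presentation differences described above can be summarized in a small lookup table. The following Python sketch is purely illustrative (it is not a VMware API; the names are made up for this example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageType:
    transport: str   # network the protocol runs over
    is_block: bool   # True = raw block device, False = file share

# Summary of the storage types covered in this section.
STORAGE_TYPES = {
    "FC":    StorageType(transport="FC SAN fabric", is_block=True),
    "FCoE":  StorageType(transport="Ethernet (no TCP/IP)", is_block=True),
    "iSCSI": StorageType(transport="TCP/IP", is_block=True),
    "NFS":   StorageType(transport="TCP/IP", is_block=False),
    "DAS":   StorageType(transport="local bus", is_block=True),
}

def needs_vmfs(name: str) -> bool:
    """Block storage is formatted with VMFS; file storage is mounted as-is."""
    return STORAGE_TYPES[name].is_block
```

Note how the table captures the rule from the paragraph above: everything except NFS is block storage, and only block storage gets a VMFS filesystem.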

There are four other common terms that we use when dealing with storage in a VMware environment, namely LUN, datastore, VMFS, and NFS. The following points will introduce you to these terms and what they represent in a VMware environment:

  • LUN: When storage is presented to an ESXi host, the space for it is carved from a pool in the storage array. Each of these carved-up containers of disk blocks is called a logical unit and is uniquely identified by a Logical Unit Number (LUN). The concept of a LUN applies when you present block storage; it is on this LUN that you create a filesystem, such as VMFS. vSphere 6.5 supports up to 512 LUNs per ESXi host, twice as many as the previous version. Also, the highest possible LUN ID is now 16383, compared to 1023 with vSphere 6.0. However, the maximum supported size of a LUN is still capped at 64 TB.
  • Datastore: This is the vSphere term for a storage volume presented to an ESXi host. The volume can be a VMFS volume on a LUN or an NFS mount. All files that make up a virtual machine are stored in a datastore. Because the datastore is a managed object, the most common file operations, such as create, delete, upload, and download, are possible. Keep in mind that you can't edit a configuration file directly from the datastore browser, as it doesn't integrate with a text editor. For instance, if you need to edit a configuration file such as the .vmx, the file should be downloaded, edited, and re-uploaded. You can create up to 512 VMFS datastores and 256 NFS datastores (mounts) per ESXi host.
  • VMFS volume: A block LUN presented from an FC/iSCSI/DAS array can be formatted with VMware's proprietary filesystem, called VMFS (Virtual Machine File System). The current version of VMFS is version 6. VMFS lets more than one host have simultaneous read/write access to the volume. To make sure that a VM or its files are not simultaneously accessed by more than one ESXi host, VMFS uses an on-disk locking mechanism called distributed locking. To place a lock on a VMFS volume, vSphere uses either a SCSI-2 reservation or, if the array supports VAAI, the Atomic Test and Set (ATS) primitive.
Like the previous version, VMFS version 6 supports the following:
  • A maximum volume size of 64 TB
  • A uniform block size of 1 MB
  • Smaller subblocks of 8 KB
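To see what the 1 MB file blocks and 8 KB subblocks mean for on-disk space usage, here is some back-of-the-envelope arithmetic. This is a sketch of the sizing math only; the real VMFS allocator is more sophisticated than this:

```python
MB = 1024 * 1024
KB = 1024
TB = 1024 ** 4

BLOCK_SIZE = 1 * MB      # uniform VMFS 6 file block size
SUBBLOCK_SIZE = 8 * KB   # subblock used for small files
MAX_VOLUME = 64 * TB     # maximum VMFS 6 volume size

def blocks_needed(file_size: int) -> int:
    """Number of 1 MB file blocks a file of this size would occupy."""
    return -(-file_size // BLOCK_SIZE)  # ceiling division

def on_disk_footprint(file_size: int) -> int:
    """Rough on-disk footprint: a tiny file (e.g. a 3 KB .vmx) can sit
    in a single 8 KB subblock instead of consuming a full 1 MB block."""
    if file_size <= SUBBLOCK_SIZE:
        return SUBBLOCK_SIZE
    return blocks_needed(file_size) * BLOCK_SIZE
```

For example, a 3 KB configuration file occupies one 8 KB subblock, while a 1.5 MB log file rounds up to two 1 MB blocks.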

An interesting improvement in VMFS 6 is automatic space reclamation using VAAI UNMAP. This can be configured on any VMFS 6 datastore.

  • NFS volume: Unlike a VMFS volume, an NFS volume is not created by formatting a raw LUN with VMFS. NFS volumes are simply mounts created to access shared folders on an NFS server. The filesystem backing these volumes depends on the type of NFS server. You can configure up to 256 NFS mounts per ESXi host.
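The per-host configuration maximums quoted throughout this section can be gathered into one place. The following helper is hypothetical, written only to make the vSphere 6.5 numbers concrete:

```python
# vSphere 6.5 per-host storage maximums quoted in this section.
MAX_LUNS_PER_HOST = 512
MAX_LUN_ID = 16383
MAX_LUN_SIZE_TB = 64
MAX_VMFS_DATASTORES = 512
MAX_NFS_MOUNTS = 256

def lun_is_supported(lun_id: int, size_tb: float) -> bool:
    """Check a single LUN against the 6.5 LUN ID and size limits."""
    return 0 <= lun_id <= MAX_LUN_ID and size_tb <= MAX_LUN_SIZE_TB

def host_within_limits(num_luns: int, num_vmfs: int, num_nfs: int) -> bool:
    """Check per-host counts against the 6.5 maximums."""
    return (num_luns <= MAX_LUNS_PER_HOST
            and num_vmfs <= MAX_VMFS_DATASTORES
            and num_nfs <= MAX_NFS_MOUNTS)
```

For instance, a LUN with ID 16383 at 64 TB is at the supported ceiling, while a 65 TB LUN or a host with 513 LUNs would exceed the documented maximums.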