Managing Storage
This chapter covers the following topics:
Configuring and Managing vSAN
Managing Datastores
Storage DRS and SIOC
NVMe and PMem
Multipathing, Storage Policies, and vVols
This chapter contains information related to Professional VMware vSphere 7.x (2V0-21.20), exam objectives 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.4, 1.6.5, 1.9.1, 5.5, 7.4, 7.4.1, 7.4.2, and 7.4.3.
This chapter provides information on configuring and managing storage in a vSphere environment.
The “Do I Know This Already?” quiz allows you to assess whether you should study this entire chapter or move quickly to the “Exam Preparation Tasks” section. In any case, the authors recommend that you read the entire chapter at least once. Table 11-1 outlines the major headings in this chapter and the corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.”
Table 11-1 “Do I Know This Already?” Section-to-Question Mapping
| Foundation Topics Section | Questions |
|---|---|
| Configuring and Managing vSAN | 1, 2 |
| Managing Datastores | 3, 4 |
| Storage DRS and SIOC | 5, 6 |
| NVMe and PMem | 7, 8 |
| Multipathing, Storage Policies, and vVols | 9, 10 |
1. You are configuring a hybrid vSAN cluster in a vSphere 7.0 environment. By default, what percentage of the flash space is used as a write buffer?
100%
70%
30%
0%
2. You are configuring vSAN in a vSphere 7.0 environment. Which of the following is supported when using Quickstart to configure a vSAN cluster?
ESXi 6.5.0
Network I/O Control (NIOC) Version 2
Hosts with dissimilar network configurations
Fibre Channel storage
3. You want to increase the size of a VMFS 6 datastore. Which one of the following statements is true?
You can extend the datastore by using available space on the storage device that is backing the datastore.
You can expand the datastore by using a separate storage device.
If the datastore is 100% full, you cannot increase its capacity.
You can expand the datastore by using available space on the storage device that is backing the datastore.
4. You are configuring NFS datastores for your vSphere 7.0 environment. Which one of the following statements is true?
You can use multiple IP addresses with any NFS datastore.
You can use multiple IP addresses with NFS Version 4.1 but not with NFS Version 3.
You can use multiple IP addresses with NFS Version 3 but not with NFS Version 4.1.
You cannot use multiple IP addresses with any version of NFS.
5. You are configuring SIOC and want to change the threshold it uses to begin prioritizing I/O based on shares. Which of the following options is the acceptable range?
1 to 100 ms
10 to 100 ms
30 to 100 ms
10 to 50 ms
6. You want to perform maintenance on a datastore that is a member of a datastore cluster. Which of the following actions should you take?
Right-click the host and choose Enter Maintenance Mode.
Right-click the datastore and choose Enter Maintenance Mode.
Right-click the host and choose Enter SDRS Maintenance Mode.
Right-click the datastore and choose Enter SDRS Maintenance Mode.
7. You need to configure an ESXi 7.0 host to access shared NVMe devices using RDMA over Converged Ethernet (RoCE) Version 2. Which steps should you take? (Choose three.)
Configure a VMkernel network adapter.
Add a software adapter to the host’s network adapters.
Navigate to Storage Adapters > RDMA Adapters and verify the VMkernel adapter bindings.
Navigate to Networking > RDMA Adapters and verify the VMkernel adapter bindings.
Add a software adapter to the host’s storage adapters.
8. In a vSphere 7.0 environment, you want to allow a virtual machine to use NVDIMMs as standard memory. What should you configure?
vPMemDisk
vPMem
NVMe-oF
RDMA
9. You want to set the path selection policy for a storage device managed by NMP such that it uses a preferred path. Which of the following policies should you choose?
FIXED
LB_RR
VMW_PSP_FIXED
VMW_PSP_RR
10. You are preparing to configure vVols in a vSphere 7.0 environment. Which of the following components should you configure in the storage system? (Choose two.)
Protocol endpoints
Storage containers
LUNs
Virtual volumes
This section provides information on configuring and managing vSAN clusters and vSAN datastores.
Before creating and configuring vSAN clusters, you should be aware of the following vSAN characteristics:
Multiple vSAN clusters can be configured in a single vCenter Server instance.
vSAN does not share devices with other vSphere features.
At a minimum, a vSAN cluster must include three hosts with capacity devices. Additional hosts can be added with or without capacity devices.
For best results, use uniformly configured hosts in each vSAN cluster.
If a host contributes capacity, it must have at least one flash cache device and one capacity device.
In hybrid clusters, magnetic disks are used for capacity, and flash devices serve as a read cache and a write buffer. In a hybrid cluster, 70% of the flash space is used for the read cache, and 30% is used for the write buffer.
In all-flash clusters, one designated flash device is used as a write cache, and additional flash devices are used for capacity. No read cache is used. All read requests come directly from the flash pool capacity.
Only local (or directly attached) devices can participate in a vSAN cluster.
Only ESXi 5.5 Update 1 or later hosts can join a vSAN cluster.
Before you move a host from a vSAN cluster to another cluster, you need to make sure the destination cluster is vSAN enabled.
To use a vSAN datastore, an ESXi host must be a member of the vSAN cluster.
It is important to ensure that you meet all the vSAN hardware, cluster, software, and network requirements described in Chapter 2, “Storage Infrastructure.”
Quickstart, which is described in Chapter 10, “Managing and Monitoring Clusters and Resources,” allows you to quickly create, configure, and expand a vSAN cluster, using recommended default settings for networking, storage, and services. It uses the vSAN health service to help you validate and correct configuration issues using a checklist consisting of green messages, yellow warnings, and red failures.
To use Quickstart to configure a vSAN cluster, the hosts must use ESXi 6.0 Update 2 or later. The hosts must have a similar network configuration to allow Quickstart to configure network settings based on cluster requirements. You can use Quickstart to configure vSAN on an existing cluster by using the following procedure:
Step 1. In the vSphere Client, select the cluster in the Hosts and Clusters inventory and click Configure > Configuration > Quickstart.
Step 2. On the Cluster Basics card, click Edit, select the vSAN service, and optionally select other services, such as DRS and vSphere HA. Then click Finish.
Step 3. Click Add Hosts > Add and use the wizard to add hosts to the cluster.
Step 4. On the Cluster Configuration card, click Configure and use the wizard to configure the following:
On the Configure the Distributed Switches page, enter networking settings, including distributed switches, port groups, and physical adapters.
On the vMotion Traffic page, enter the vMotion IP address information.
On the Storage Traffic page, enter the storage IP address information.
On the Advanced Options page, provide vSAN cluster settings. Optionally, provide settings for DRS, HA, and EVC.
On the Claim Disks page, select disks on each host to claim for vSAN cache and capacity.
Optionally, on the Create Fault Domains page, define fault domains for hosts that can fail together.
On the Ready to Complete page, verify the cluster settings and click Finish.
Note
If you are running vCenter Server on a host, the host cannot be placed into Maintenance Mode as you add it to a cluster using the Quickstart workflow. The same host also can be running a Platform Services Controller. All other virtual machines on the host must be powered off.
Note
Distributed switches with Network I/O Control (NIOC) 2 cannot be used with vSAN Quickstart.
While it is recommended that all of the ESXi hosts in a vSAN cluster contribute storage to that vSAN cluster, it is not required, and ESXi hosts without any capacity can be added to and make use of the vSAN cluster. This is possible provided that the following requirements are met:
At least three ESXi hosts in the vSAN cluster must contribute storage; otherwise, the cluster cannot tolerate host or device failures.
ESXi 5.5 Update 1 or higher must be used on all the hosts in the cluster.
If a host is being moved from one vSAN cluster to another, vSAN must be enabled on the destination cluster.
ESXi hosts must be members of the vSAN cluster to access the vSAN datastore (regardless of whether they contribute storage to the vSAN cluster).
You can use the following procedure to manually enable vSAN:
Step 1. Prepare a VMkernel network adapter on each participating host.
In the vSphere Client, select a host in the inventory pane and navigate to Networking > VMkernel Adapters.
Click the Add Networking icon.
Use the wizard to configure the adapter’s network settings and to enable vSAN.
Step 2. In the inventory pane, right-click a data center and select New Cluster.
Step 3. Provide a name for the cluster.
Step 4. Optionally, configure other cluster settings, such as DRS, vSphere HA, and EVC.
Step 5. Add hosts to the cluster.
Step 6. Navigate to Configure > vSAN > Services and click Configure.
Step 7. Select one of the following configuration types:
Single Site Cluster
Two Host Cluster
Stretched Cluster
Click Next.
Step 8. In the next wizard page, optionally configure the following:
Enable Deduplication and Compression on the cluster.
Enable Encryption and select a KMS.
Select the Allow Reduced Redundancy checkbox to enable encryption or deduplication and compression on a vSAN cluster that has limited resources.
Click Next.
Step 9. On the Claim Disks page, select the disks for use by the cluster and click Next.
Step 10. Follow the wizard to complete the configuration of the cluster, based on the fault tolerance mode:
For a two-host vSAN cluster: Choose a witness host for the cluster and claim disks for the witness host.
For a stretched cluster: Define fault domains for the cluster, choose a witness host, and claim disks for the witness host.
If you selected fault domains: Define the fault domains for the cluster.
Step 11. On the Ready to Complete page, click Finish.
Note
When claiming disks for each host that contributes storage to a vSAN cluster, select one flash device for the cache tier and one or more devices for the capacity tier.
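If you want to confirm the manual configuration from an individual host, the following ESXi Shell commands are a minimal sketch (output formats vary by release) showing whether a VMkernel adapter is tagged for vSAN traffic and whether the host has joined the cluster:

```
# List the VMkernel interfaces that carry vSAN traffic on this host
esxcli vsan network list

# Show this host's vSAN cluster membership, role, and health state
esxcli vsan cluster get
```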
You can modify the settings of an existing vSAN cluster by using the following procedure:
Step 1. In the vSphere Client, select the cluster in the inventory pane and navigate to Configure > vSAN > Services.
Step 2. Click Edit.
Step 3. Optionally modify the following:
Deduplication and compression
vSAN encryption
vSAN performance service
iSCSI target
Advanced Settings > Object Repair Timer
Advanced Settings > Site Read Locality for stretched clusters
Thin swap provisioning
Large cluster support for up to 64 hosts
Automatic rebalance
Step 4. Click Apply.
You need a vSAN license to use it beyond the evaluation period. The license capacity is based on the total number of CPUs in the hosts participating in the cluster. The vSAN license is recalculated whenever ESXi hosts are added to or removed from the vSAN cluster.
The Global.Licenses privilege is required on the vCenter Server. You can use the following procedure to assign a vSAN license to a cluster:
Step 1. In the vSphere Client, select the vSAN cluster in the inventory pane.
Step 2. On the Configure tab, right-click the vSAN cluster and choose Assign License.
Step 3. Select an existing license and click OK.
Note
You can use vSAN in Evaluation Mode to explore its features for 60 days. To continue using vSAN beyond the evaluation period, you must license the cluster. Some advanced features, such as all-flash configuration and stretched clusters, require a license that supports the feature.
When you enable vSAN on a cluster, a vSAN datastore is created. You can use the following procedure to review the capacity and other details of a vSAN datastore:
Step 1. In the vSphere Client, navigate to Home > Storage.
Step 2. Select the vSAN datastore.
Step 3. On the Configure tab, review the following:
Capacity (total capacity, provisioned space, and free space)
Datastore capabilities
Policies
A vSAN datastore’s capacity depends on the capacity devices per host and the number of hosts in the cluster. For example, if a cluster includes eight hosts, each having seven capacity drives, where each capacity drive is 2 TB, then the approximate storage capacity is 8 × 7 × 2 TB = 112 TB.
Some capacity is allocated for metadata, depending on the on-disk format version:
On-disk format Version 1.0 adds approximately 1 GB overhead per capacity device.
On-disk format Version 2.0 adds overhead that is approximately 1% to 2% of the total capacity.
On-disk format Version 3.0 and later adds overhead that is approximately 1% to 2% of the total capacity plus overhead for checksums used by deduplication and compression (approximately 6.2% of the total capacity).
You can enable vSphere HA and vSAN on the same cluster but with some restrictions. The following ESXi requirements apply when using vSAN and vSphere HA together:
ESXi Version 5.5 Update 1 or later must be used on all participating hosts.
The cluster must have a minimum of three ESXi hosts.
The following networking differences apply when using vSAN and vSphere HA together:
The vSphere HA traffic flows over the vSAN network rather than the management network.
vSphere HA uses the management network only when vSAN is disabled.
Before you enable vSAN on an existing vSphere HA cluster, you must first disable vSphere HA. After vSAN is enabled, you can re-enable vSphere HA.
Table 11-2 describes the vSphere HA networking differences between clusters where vSAN is enabled and is not enabled.
Table 11-2 Network Differences in vSAN and non-vSAN Clusters
| Factor | vSAN Is Enabled | vSAN Is Not Enabled |
|---|---|---|
| Network used by vSphere HA | vSAN network | Management network |
| Heartbeat datastores | Any datastore, other than a vSAN datastore, that is mounted to multiple hosts in the cluster | Any datastore that is mounted to multiple hosts in the cluster |
| Host isolation criteria | Isolation addresses not pingable and vSAN storage network inaccessible | Isolation addresses not pingable and management network inaccessible |
You must account for the vSAN rule set’s Primary Level of Failures to Tolerate setting when configuring the vSphere HA admission control policy. The Primary Level of Failures to Tolerate setting must not be lower than the capacity reserved by the vSphere HA admission control setting; otherwise, failover activity might be unpredictable. For example, in an eight-host cluster, if the vSAN rule set’s Primary Level of Failures to Tolerate is set to 2, you should not configure vSphere HA admission control to reserve more than 25% of the cluster resources (the equivalent of two hosts).
In response to events involving the failure of multiple hosts in a cluster where vSphere HA and vSAN are enabled, vSphere HA may not be able to restart some virtual machines where the most recent copy of an object is inaccessible. For example, consider the following scenario:
In a three-host cluster with vSAN and vSphere HA enabled, two hosts fail.
A VM continues to run on the third host.
The final host fails.
The first two hosts are recovered.
vSphere HA cannot restart the VM because the most recent copy of its object is on the third host, which is still unavailable.
Capacity can be reserved for failover in the vSphere HA admission control policies. Such a reservation must be coordinated with the vSAN policy Primary Level of Failures to Tolerate. The HA reserved capacity cannot be higher than the vSAN Primary Level of Failures to Tolerate setting.
For example, if you set vSAN Primary Level of Failures to Tolerate to 1, the HA admission control policy must reserve resources equal to those of one host. If you set the vSAN Primary Level of Failures to Tolerate to 2, the HA admission control policy must reserve resources equal to those of two ESXi hosts.
You can use the following procedure to disable vSAN for a host cluster, which causes all virtual machines located on the vSAN datastore to become inaccessible:
Step 1. In the vSphere Client, select the cluster in the inventory pane.
Step 2. Verify that the host in the cluster is in Maintenance Mode.
Step 3. Select Configure > vSAN > Services.
Step 4. Click Turn Off vSAN.
Step 5. In the dialog box that appears, confirm your selection.
Note
If you intend to use virtual machines while vSAN is disabled, make sure you migrate the virtual machines from a vSAN datastore to another datastore before disabling the vSAN cluster.
To shut down an entire vSAN cluster prior to performing some maintenance activities, vSAN does not have to be disabled. The following procedure details how you can shut down a vSAN cluster:
Step 1. Power off all virtual machines in the vSAN cluster except for the vCenter Server, if it is running in the cluster.
Step 2. In the vSphere Client, select the cluster and navigate to Monitor > vSAN > Resyncing Objects.
Step 3. When all resynchronization tasks are complete, on the Configure tab, turn off DRS and HA.
Step 4. On each host, use the following command to disable cluster member updates:
esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
Step 5. If vCenter Server runs in the vSAN cluster, shut it down. (This makes the vSphere Client unavailable.)
Step 6. On each host, use the following command to place the hosts in Maintenance Mode with no data migration:
esxcli system maintenanceMode set -e true -m noAction
Step 7. Shut down each host.
Note
When you plan to shut down a vSAN cluster, you do not need to disable vSAN on the cluster.
After you perform maintenance activities, you can restart a vSAN cluster by using the following procedure:
Step 1. Power on the hosts.
Step 2. Use the hosts’ consoles to monitor the ESXi startup.
Step 3. Optionally, use a web browser to connect directly to the ESXi host client to monitor the host’s status, events, and logs. You can ignore misconfiguration status messages that appear temporarily when fewer than three hosts have come online and joined the cluster.
Step 4. On each host, use the following commands to exit Maintenance Mode and to ensure that each host is available in the cluster:
esxcli system maintenanceMode set -e false
esxcli vsan cluster get
Step 5. Restart the vCenter Server VM.
Step 6. On each host, use the following command to re-enable updates:
esxcfg-advcfg -s 0 /VSAN/IgnoreClusterMemberListUpdates
Step 7. In the vSphere Client, select the vSAN cluster in the inventory pane.
Step 8. On the Configure tab, re-enable DRS and HA.
You can now start virtual machines in the cluster and monitor the vSAN health service.
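As a quick spot check from an individual host (available in vSAN 6.6 and later; output varies by release), you can also run the host-level health checks from the ESXi Shell as a lightweight alternative to the vSphere Client health view:

```
# Run and list the vSAN health checks from this host's perspective
esxcli vsan health cluster list
```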
You can simultaneously deploy a VCSA and create a vSAN cluster by using the vCenter Server installer and following these steps:
Step 1. Create a single-host vSAN cluster.
Step 2. Place the vCenter Server on the host in the cluster.
Step 3. Choose Install on a new vSAN cluster containing the target host.
This process deploys a one-host vSAN cluster. After the deployment, you can use the vSphere Client to configure the vSAN cluster and add additional nodes to the cluster.
You can expand a vSAN cluster by adding to the cluster ESXi hosts with storage. Keep in mind that ESXi hosts without local storage can also be added to a vSAN cluster. You can use the following procedure to expand a vSAN cluster by adding hosts:
Step 1. In the vSphere Client, right-click a cluster in the inventory pane and select Add Hosts
Step 2. Using the wizard, add hosts using one of the following options:
New Hosts: Provide the host name and credentials.
Existing Hosts: Select a host in the inventory that is not yet in the cluster.
Step 3. Complete the wizard and click Finish on the final page.
You can also use the following procedure to move multiple existing ESXi hosts into a vSAN cluster by using host profiles:
Step 1. In the vSphere Client, navigate to Host Profiles.
Step 2. Click the Extract Profile from a Host icon.
Step 3. Select a host in the vSAN cluster that you want to use as the reference host and click Next.
Step 4. Provide a name for the new profile and click Next.
Step 5. On the next wizard page, click Finish.
Step 6. In the Host Profiles list, select the new host profile and attach multiple hosts to the profile.
Step 7. Click the Attach/Detach Hosts and Clusters to a Host Profile icon.
Step 8. Detach the reference vSAN host from the host profile.
Step 9. In the Host Profiles list, select the new host profile and click the Check Host Profile Compliance icon.
Step 10. Select Monitor > Compliance.
Step 11. Right-click the host and select All vCenter Actions > Host Profiles > Remediate.
Step 12. When prompted, provide appropriate input parameters for each host and click Next.
Step 13. Review the remediation tasks and click Finish.
The hosts and their resources are now part of the vSAN cluster.
You can use the following procedure to add hosts to a vSAN cluster by using Quickstart:
Step 1. Verify that no network configuration that was previously performed through the Quickstart workflow has been modified from outside the Quickstart workflow.
Step 2. In the vSphere Client, select the vSAN cluster in the inventory and click Configure > Configuration > Quickstart.
Step 3. Click Add hosts > Launch.
Step 4. Use the wizard to provide information for new hosts or to select existing hosts from the inventory.
Step 5. Complete the wizard and click Finish on the last page.
Step 6. Click Cluster Configuration > Launch.
Step 7. Provide networking settings for the new hosts.
Step 8. On the Claim Disks page, select disks on each new host.
Step 9. On the Create Fault Domains page, move the new hosts into their corresponding fault domains.
Step 10. Complete the wizard and click Finish.
Note
When adding a host to a vSAN cluster by using Quickstart, the vCenter Server must not be running on the host.
Before shutting down, rebooting, or disconnecting a host that is a member of a vSAN cluster, you must put the ESXi host in Maintenance Mode. Consider the following guidelines for using Maintenance Mode for vSAN cluster member hosts:
When entering host Maintenance Mode, you must select a data evacuation mode, such as Ensure Accessibility or Full Data Migration.
When a vSAN cluster member host enters Maintenance Mode, the cluster capacity is automatically reduced.
Each impacted virtual machine may have compute resources, storage resources, or both on the host entering Maintenance Mode.
Ensure Accessibility Mode, which is faster than Full Data Migration Mode, migrates only the components from the host that are essential for running the virtual machines. It does not reprotect your data. When in this mode, if you encounter a failure, the availability of your virtual machine is affected, and you might experience unexpected data loss.
When you select Full Data Migration Mode, your data is automatically reprotected against a failure (if the resources are available and Primary Level of Failures to Tolerate is set to 1 or more). In this mode, your virtual machines can tolerate failures, even during planned maintenance.
When working with a three-host cluster, you cannot place a server in Maintenance Mode with Full Data Migration Mode.
Prior to placing a vSAN cluster member host in Maintenance Mode, you must do the following:
If using Full Data Migration Mode, ensure that the cluster has enough hosts and available capacity to meet the requirements of the Primary Level of Failures to Tolerate policy.
Verify that the remaining hosts have enough flash capacity to meet any flash read cache reservations. To analyze this, you can run the following VMware Ruby vSphere Console (RVC) command (a usage sketch follows this list):
vsan.whatif_host_failures
Verify that the remaining hosts have devices with sufficient capacity to handle stripe width policy requirements, if selected.
Make sure that you have enough free capacity on the remaining hosts to handle the data that must be migrated from the host entering Maintenance Mode.
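If you have not used RVC before, the following sketch shows one way to run the capacity check mentioned in the list above; the vCenter FQDN, SSO user, data center, and cluster names are placeholders, and path and option syntax can vary by RVC and vSAN release:

```
# Connect to RVC (bundled with the vCenter Server Appliance)
rvc administrator@vsphere.local@vcsa.example.com

# Navigate to the vSAN cluster object and run the what-if analysis
cd /vcsa.example.com/Datacenter/computers/vSAN-Cluster
vsan.whatif_host_failures .
```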
You can use the Confirm Maintenance Mode dialog box to determine how much data will be moved, the number of objects that will become noncompliant or inaccessible, and whether sufficient capacity is available to perform the operation. You can use the Data Migration Pre-check button to determine the impact of data migration options when placing a host into Maintenance Mode or removing it from the cluster.
To place a vSAN cluster member host in Maintenance Mode, you can use the following procedure:
Step 1. In the vSphere Client, select the cluster in the inventory pane.
Step 2. Optionally, use the following steps to run Data Migration Pre-check:
Click Data Migration Pre-check.
Select a host and a data migration option and click Pre-check.
View the test results and decide whether to proceed.
Step 3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
Step 4. Select one of the following data evacuation modes:
Ensure Accessibility: vSAN ensures that the virtual machines with data on this host remain accessible if the host is powered off or removed from the cluster. Only some of the virtual machine data is moved off the host, and the data is not reprotected. If you have a three-host cluster, this is the only evacuation mode available.
Full Data Migration: As its name implies, this mode moves all the VM data to other ESXi hosts in the cluster. This option makes sense if you are removing the host from the cluster permanently. If a virtual machine has data on the host and that data is not migrated off, the host cannot enter this mode.
No Data Migration: If this option is selected, vSAN does not move any data from this ESXi host.
Click OK.
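These three evacuation modes correspond to the -m option of the esxcli command shown earlier in the cluster shutdown procedure. The following is a sketch of the host-level equivalents, run from the ESXi Shell:

```
# Ensure Accessibility
esxcli system maintenanceMode set -e true -m ensureObjectAccessibility

# Full Data Migration
esxcli system maintenanceMode set -e true -m evacuateAllData

# No Data Migration
esxcli system maintenanceMode set -e true -m noAction

# Exit Maintenance Mode when the work is complete
esxcli system maintenanceMode set -e false
```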
Fault domains provide additional protection against outage in the event of a rack or blade chassis failure. A vSAN fault domain consists of one or more vSAN hosts grouped according to their physical location, such as a rack. With fault domains, vSAN can withstand a rack, blade chassis, host, disk, or network failure within one fault domain because the replica and witness data are stored in a different fault domain.
You can use the following procedure to create a new fault domain in a vSAN cluster:
Step 1. In the vSphere Client, examine each host in a vSAN cluster.
Step 2. Verify that each host is running ESXi 6.0 or later (to support fault domains) and is online.
Step 3. Select the vSAN cluster in the inventory pane and click Configure > vSAN > Fault Domains.
Step 4. Click the Add (plus sign) icon.
Step 5. In the wizard, provide a name for the fault domain.
Step 6. Select one or more hosts to add to the fault domain.
Step 7. Click Create.
You can use the vSphere Client to add hosts to an existing fault domain by selecting Configure > vSAN > Fault Domains and dragging the host to the appropriate fault domain. Likewise, you can drag a host out of a fault domain to remove the host from the fault domain and create a single-host fault domain.
A vSAN stretched cluster extends across two physical data center locations to provide availability in the event of site failure as well as provide load balancing between sites. With a stretched vSAN cluster, both sites are active, and if either site fails, vSAN uses storage on the site that is still up. One site must be designated as the preferred site, which makes the other site the secondary, or nonpreferred, site.
You can use the following procedure to leverage Quickstart to create a stretched cluster across two sites:
Step 1. Ensure that the following prerequisites are met:
A host is deployed outside any cluster for the witness host.
ESXi 6.0 Update 2 or later is used on each host.
The hosts in the cluster do not have any existing vSAN or networking configuration.
Step 2. Click Configure > Configuration > Quickstart.
Step 3. Click Cluster Configuration > Edit.
Step 4. In the wizard, provide a cluster name, enable vSAN, and optionally enable other features, such as DRS or vSphere HA.
Step 5. Click Finish.
Step 6. Click Add Hosts > Add.
Step 7. In the wizard, provide information for new hosts or select existing hosts from the inventory. Click Finish.
Step 8. Click Cluster Configuration > Configure.
Step 9. In the wizard, configure the following:
Configure settings for distributed switch port groups, physical adapters, and the IP configuration associated with vMotion and storage.
Set vSAN Deployment Type to Stretched Cluster.
On the Claim Disk page, select disks on each host for cache and capacity.
On the Create Fault Domains page, define fault domains for the hosts in the preferred site and the secondary site.
On the Select Witness Host page, select a host to use as a witness host. This host cannot be part of the cluster and can have only one VMkernel adapter configured for vSAN data traffic.
On the Claim Disks for Witness Host page, select disks on the witness host for cache and capacity.
On the Ready to Complete page, verify the cluster settings and click Finish.
When creating a vSAN stretched cluster, DRS must be enabled on the cluster. There are also several DRS requirements for stretched vSAN clusters:
Two host groups must be created: one for the preferred site and another for the secondary site.
Two VM groups must be created: one for the preferred site VMs and one for the VMs on the secondary site.
Two VM-to-host affinity rules must be created for the VMs on the preferred site and VMs on the secondary site.
VM-to-host affinity rules must be used to define the initial placement of virtual machines on ESXi hosts in the cluster.
In addition to the DRS requirements, there are also HA requirements for stretched vSAN clusters:
HA must be enabled.
HA rules should allow the VM-to-host affinity rules in the event of a failover.
HA datastore heartbeats should be disabled.
vSAN has numerous requirements for implementing stretched clusters:
Stretched clusters must use on-disk format Version 2.0 or higher. If your vSAN cluster is not using on-disk format Version 2.0, it must be upgraded before you configure the stretched vSAN cluster.
Failures to Tolerate must be set to 1.
Symmetric Multiprocessing Fault Tolerance (SMP-FT) VMs are supported only when PFTT is set to 0 and Data Locality is either Preferred or Secondary. SMP-FT VMs with PFTT set to 1 or higher are not supported.
If hosts are disconnected or are in a not-responding state, you cannot add or remove the witness host.
Adding ESXi hosts via esxcli commands on stretched clusters is not supported.
You can use the following procedure to create a disk group on a vSAN cluster member host:
Step 1. In the vSphere Client, select the cluster in the inventory pane and navigate to Configure > vSAN > Disk Management.
Step 2. Select the host and click Create Disk Group.
Step 3. Select the flash device to be used for the cache.
Step 4. Select the type of capacity disks to use (HDD for hybrid or Flash for all-flash).
Step 5. Select the devices you want to use for capacity.
Step 6. Click Create or OK.
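A disk group can also be created from the host’s ESXi Shell. The following is a minimal sketch; the naa device identifiers are placeholders that you would replace with values from your own host:

```
# Identify the local devices and note their naa identifiers
esxcli storage core device list

# Create a disk group: -s names the cache-tier flash device,
# -d names a capacity device (repeat -d for each capacity device)
esxcli vsan storage add -s naa.5000000000000001 -d naa.5000000000000002 -d naa.5000000000000003

# Verify the devices that this host has claimed for vSAN
esxcli vsan storage list
```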
You can use the following procedure to claim storage devices for a vSAN cluster:
Step 1. In the vSphere Client, select the cluster in the inventory pane and navigate to Configure > vSAN > Disk Management > Claim Unused Disks.
Step 2. Select a flash device to be used for the cache and click Claim for the cache tier.
Step 3. Select one or more devices (HDD for hybrid or Flash for all-flash) to be used as capacity and click Claim for the capacity tier.
Step 4. Click Create or OK.
To verify that the proper role (cache or capacity) has been assigned to each device in an all-flash disk group, examine the Disk Role column at the bottom of the Disk Management page. If the vSAN cluster is set to claim disks in manual mode, you can use the following procedure to add additional local devices to an existing disk group. The additional devices must be the same type (flash or HDD) as existing devices in the disk group:
Step 1. In the vSphere Client, select the vSAN cluster in the inventory pane and navigate to Configure > vSAN > Disk Management.
Step 2. Select the disk group and click Add Disks.
Step 3. Select the device and click Add.
Note
If you add a used device that contains residual data or partition information, you must first clean the device. For example, you can run the RVC command host_wipe_vsan_disks.
You can use the following procedure to remove specific devices from a disk group or remove an entire disk group. However, you should typically do so only when you are upgrading a device, replacing a failed device, or removing a cache device. Deleting a disk group permanently deletes the data stored on the devices. Removing one flash cache device or all capacity devices from a disk group removes the entire disk group. Follow these steps to remove specific devices from a disk group or remove an entire disk group:
Step 1. In the vSphere Client, select the vSAN cluster in the inventory pane.
Step 2. Click Configure > vSAN > Disk Management.
Step 3. To remove a disk group, select the disk group, click Remove, and select a data evacuation mode.
Step 4. To remove a device, select the disk group, select the device, click Remove, and select a data evacuation mode.
Step 5. Click Yes or Remove.
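For reference, the host-level equivalents are sketched below; the device identifiers are placeholders, and option names can vary slightly by release, so check esxcli vsan storage remove --help before relying on them:

```
# Remove a single capacity device from its disk group
esxcli vsan storage remove -d naa.5000000000000003

# Remove an entire disk group by naming its cache-tier flash device
esxcli vsan storage remove -s naa.5000000000000001
```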
If ESXi does not automatically identify your devices as being flash devices, you can use the following procedure to manually mark them as local flash devices. For example, flash devices that are enabled for RAID 0 Mode rather than Passthrough Mode may not be recognized as flash. Marking these devices as local flash makes them available for use as vSAN cache devices. Before starting this procedure, you should verify that the device is local and not in use.
Step 1. In the vSphere Client, select the vSAN cluster in the inventory pane and navigate to Configure > vSAN > Disk Management.
Step 2. Select a host to view the list of available devices.
Step 3. In the Show drop-down menu, select Not in Use.
Step 4. Select one or more devices from the list and click Mark as Flash Disk.
Step 5. Click Yes.
Likewise, you can use this procedure in other scenarios where you want to change how a device is identified. In step 4, you can choose Mark as HDD Disk, Mark as Local Disk, or Mark as Remote.
To increase space efficiency in a vSAN cluster, you can use SCSI Unmap, deduplication, compression, RAID 5 erasure coding, and RAID 6 erasure coding.
Unmap capability is disabled by default. To enable SCSI Unmap on a vSAN cluster, use the RVC command vsan.unmap_support --enable.
Note
Unmap capability is disabled by default. When you enable Unmap on a vSAN cluster, you must power off and then power on all VMs. VMs must use virtual hardware Version 13 or above to perform Unmap operations.
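As with the earlier RVC example, the unmap command is run from RVC on the vCenter Server Appliance against the cluster object. This is only a rough sketch; the cluster path is a placeholder, and the option syntax may differ by release:

```
rvc administrator@vsphere.local@vcsa.example.com
cd /vcsa.example.com/Datacenter/computers/vSAN-Cluster
vsan.unmap_support . --enable
```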
When you enable or disable deduplication and compression, vSAN performs a rolling reformat of every disk group on every host. Depending on the data stored on the vSAN datastore, this process might take a long time. Do not perform such operations frequently. If you plan to disable deduplication and compression, you must first verify that enough physical capacity is available to place your data.
You should consider the following when managing disks in a vSAN cluster where deduplication and compression are enabled:
For efficiency, consider adding a disk group to cluster capacity instead of incrementally adding disks to an existing disk group.
When you add a disk group manually, add all the capacity disks at the same time.
You cannot remove a single disk from a disk group. You must remove the entire disk group in order to make modifications.
A single disk failure causes an entire disk group to fail.
To enable deduplication and compression for an existing vSAN cluster, you can use the following procedure:
Step 1. Verify that the cluster is all-flash.
Step 2. In the vSphere Client, select the cluster in the inventory pane and navigate to Configure > vSAN > Services.
Step 3. Click Edit.
Step 4. Enable Deduplication and Compression.
Step 5. Optionally, select Allow Reduced Redundancy.
Step 6. Click Apply or OK.
When you enable deduplication and compression, vSAN updates the on-disk format of each disk group of the cluster by evacuating data from the disk group, removing the disk group, and re-creating it with a new format. This operation does not require virtual machine migration or DRS. If you choose the Allow Reduced Redundancy option, the virtual machines may continue to keep running even if the cluster does not have enough resources for the disk group to be fully evacuated. In this case, your virtual machines might be at risk of experiencing data loss during the operation.
You can use the vSphere Client to check the storage savings provided by deduplication and compression. To do so, select a cluster, navigate to Monitor > Capacity, and examine Capacity Overview.
To use RAID 5 erasure coding in a vSAN cluster, set the following options:
Set Failure Tolerance Method to RAID-5/6 (Erasure Coding)–Capacity.
Set Primary Level of Failures to Tolerate to 1.
To use RAID 6 erasure coding in a vSAN cluster, set the following options:
Set Failure Tolerance Method to RAID-5/6 (Erasure Coding)–Capacity.
Set Primary Level of Failures to Tolerate to 2.
To use RAID 1, set Failure Tolerance Method to RAID-1 (Mirroring)–Performance.
Note
RAID 5 and RAID 6 erasure coding do not support Primary Level of Failures set to a value higher than 2.
When planning to implement vSAN encryption, you should consider the following:
To provide the encryption keys for the vSAN datastore, you must implement a key management server (KMS) cluster that is KMIP 1.1 compliant and is listed in the vSphere compatibility matrices.
You should not deploy the KMS server on the same vSAN datastore that it will help encrypt.
Encryption is CPU intensive. Enable AES-NI in your BIOS.
In a stretched vSAN cluster, the Witness host only stores metadata and does not participate in encryption.
You should establish a policy regarding the encryption of coredumps because they contain sensitive information such as keys for hosts. In the policy, consider the following:
You can use a password when you collect a vm-support bundle.
The password re-encrypts coredumps that use internal keys based on the password.
You can later use the password to decrypt the coredumps in the bundle.
You are responsible for keeping track of the password. It is not saved anywhere in vSphere.
To use encryption in a vSAN datastore, you must add a KMS to the vCenter Server and establish trust with the KMS. You can use the following procedure to add a KMS to vCenter Server:
Step 1. Ensure that the user has the Cryptographer.ManageKeyServers privilege.
Step 2. In the vSphere Client, select the vCenter Server in the inventory pane and navigate to Configure > Key Management Servers.
Step 3. Click Add and specify the following KMS information in the wizard:
For KMS Cluster, select Create New Cluster.
Specify the cluster name, alias, and address (FQDN or IP address).
Specify the port, proxy, and proxy port.
Step 4. Click Add.
Note
Connecting to a KMS through a proxy server that requires a username or password is not supported. Connecting to a KMS by using only an IPv6 address is not supported.
You can use the following procedure to establish a trusted connection for a KMS:
Step 1. In the vSphere Client, select the vCenter Server in the inventory pane and navigate to Configure > Key Management Servers.
Step 2. Select the KMS instance and click Establish Trust with KMS.
Step 3. Select one of the following options, as appropriate for the selected KMS instance:
Root CA Certificate
Certificate
New Certificate Signing Request
Upload Certificate and Private Key
When multiple KMS clusters are used, you can use the following procedure to identify a default KMS cluster:
Step 1. In the vSphere Client, select the vCenter Server in the inventory pane and navigate to Configure > Key Management Servers.
Step 2. Select the KMS cluster and click Set KMS Cluster as Default.
Step 3. Click Yes.
Step 4. Verify that the word default appears next to the cluster name.
You can make vCenter Server trust the KMS by using the following procedure:
Step 1. In the vSphere Client, select the vCenter Server in the inventory pane and navigate to Configure > Key Management Servers.
Step 2. Select the KMS instance and do one of the following:
Select All Actions > Refresh KMS Certificate > Trust.
Select All Actions > Upload KMS Certificate > Upload File.
If you want to enable encryption on a vSAN cluster, you need the following privileges:
Host.Inventory.EditCluster
Cryptographer.ManageEncryptionPolicy
Cryptographer.ManageKMS
Cryptographer.ManageKeys
You can use the following procedure to enable encryption on a vSAN cluster:
Step 1. In the vSphere Client, select the cluster in the inventory pane and navigate to vSAN > Services.
Step 2. Click the Edit button.
Step 3. In the vSAN Services dialog, enable Encryption and select a KMS cluster.
Step 4. Optionally, select the Erase Disks Before Use checkbox, based on the following:
If this is a new cluster with no virtual machines, you can deselect the checkbox.
If it is an existing cluster with unwanted data, select the checkbox, which increases the processing time for each disk.
Step 5. Click Apply.
To generate new encryption keys, you can use the following procedure:
Step 1. Log on to the vSphere Client as a user with Host.Inventory.EditCluster and Cryptographer.ManageKeys privileges.
Step 2. In the vSphere Client, select the cluster in the inventory pane and navigate to Configure > vSAN > Services.
Step 3. Click Generate New Encryption Keys.
Step 4. To generate a new KEK, click Apply. Each host’s DEK is re-encrypted with the new KEK.
Step 5. Optionally, select Also Re-encrypt All Data on the Storage Using New Keys.
Step 6. Optionally, select the Allow Reduced Redundancy checkbox, which may put your data at risk during the disk reformatting operation.
If a host member of a vSAN cluster that uses encryption has an error, the resulting coredump is encrypted. Coredumps that are included in the vm-support package are also encrypted.
Virtual machine performance and availability requirements can be defined for vSAN, if required. Once virtual machines are created, their storage policy is enforced on the vSAN datastore. Underlying components of virtual disks are spread across the vSAN datastore to meet the requirements defined in the storage policy. Storage providers provide information about the physical storage to vSAN to assist with placement and monitoring.
The following procedure can be used to create a vSAN storage policy:
Step 1. In the vSphere Client, go to Policies and Profiles > VM Storage Policies.
Step 2. Click on the Create a New VM Storage Policy icon.
Step 3. On the Name and Description page, select an appropriate vCenter Server, enter a name and description for the policy, and click Next.
Step 4. On the Policy Structure page, select Enable Rules for “vSAN” Storage and click Next.
Step 5. On the vSAN page, set the policy:
On the Availability tab, set Site Disaster Tolerance and Failures to Tolerate.
On the Advanced Policy Rules tab, set Disk Stripes per Object and IOPS Limit.
On the Tags tab, click Add Tag Rule and configure its options.
Click Next.
Step 6. On the Storage Compatibility page, review the list of compatible datastores and click Next.
Step 7. On the Review and Finish page, review all the settings and click Finish.
vSAN datastore default policies can be changed, if desired, using the following procedure:
Step 1. In the vSphere Client storage inventory view, right-click the vSAN datastore and select Configure.
Step 2. Select General, click Edit next to the default storage policy, and select a storage policy to be defined as the new default.
Step 3. Click OK.
vSAN 6.7 and above register one storage provider for all vSAN clusters managed by vCenter. To access the storage providers, use the URL https://VCfqdn:VCport/vsanHealth/vsanvp/version.xml.
To view the vSAN storage providers, in the vSphere client, select a vCenter Server and navigate to Configure > Storage Providers.
Each ESXi host has a vSAN storage provider, but only one is active. Storage providers on other ESXi hosts are in standby. If an ESXi host with an active storage provider fails, a storage provider from another host activates.
You can use the following procedure to configure (enable) the vSAN file service on a vSAN cluster, which enables you to create file shares:
Step 1. Address the following prerequisites:
Identify a set of available IPv4 addresses, preferably one per host (for best performance), that are from the same subnet and are part of the forward and reverse lookup zones in the DNS server.
Create a dedicated distributed port group.
Ensure that vDS 6.6.0 or higher is in use.
Promiscuous Mode and forged transmits are enabled during file services configuration. If an NSX-based network is used, you must provide similar settings.
Step 2. In the vSphere Client, select the vSAN cluster and select Configure > vSAN > Services.
Step 3. In the File Service row, click Enable.
Step 4. In the wizard, click Next.
Step 5. On the next page, select either of the following options:
Automatic: Automatically searches for and downloads the OVF
Manual: Allows you to manually select an OVF and associated files (CERT, VMDK, and so on)
Step 6. Continue the wizard to provide file service domain, DNS, and networking information.
Step 7. On the IP Pool page, enter the set of available IPv4 addresses and assign one as the primary IP address. To simplify this process, you can use the Auto Fill or Look Up DNS options.
Note
vSAN stretched clusters do not support the file service.
You can use the following procedure to create a vSAN file share:
Step 1. In the vSphere Client, select the vSAN cluster in the inventory pane and navigate to Configure > vSAN > File Service Shares.
Step 2. Click Add.
Step 3. In the wizard, enter the following general information:
Protocol: Select either NFS Version 3 or NFS Version 4.1.
Name: Specify a name.
Storage Policy: Select the vSAN default storage policy.
Storage space quotas: Set the share warning threshold and the share hard quota.
Labels: Specify up to 50 labels (key-value pairs) per share. A label key can have up to 250 characters, and a label value fewer than 1,000 characters.
Click Next.
Step 4. In the Net Access Control page, select one of the following options:
No Access: Use this option to prevent access to the file share.
Allow Access from Any IP: Use this to allow access from any IP address.
Customize Net Access: Use this to control whether specific IP addresses can access, read, or modify the file share. You can configure Root Squash based on IP address.
Click Next.
Step 5. In the Review page, click Finish.
This section provides information on managing datastores in a vSphere 7.0 environment.
You can set up VMFS datastores on any SCSI-based storage device that is discovered by a host, such as a Fibre Channel device, an iSCSI device, or a local device. To view a host’s SCSI devices, you can use the following procedure:
Step 1. In the vSphere Client, select an ESXi host in the inventory pane and navigate to Configure > Storage > Storage Adapters.
Step 2. Select a storage adapter.
Step 3. Optionally, click the Rescan Adapter or Rescan Storage button.
Step 4. In the details pane, select the Devices tab and examine the details for each discovered SCSI device, including type, capacity, and assigned datastores.
Step 5. Optionally, to manipulate a specific device, select the device and click the Refresh, Attach, or Detach button.
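The same adapter and device inventory is available from the ESXi Shell; a brief sketch:

```
# List the host's storage adapters (HBAs)
esxcli storage core adapter list

# List the SCSI devices discovered through those adapters
esxcli storage core device list

# Rescan all adapters for new devices and VMFS volumes
esxcli storage core adapter rescan --all
```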
To create a VMFS 6 datastore on a SCSI device, you can use the following procedure:
Step 1. In the vSphere Client, right-click a host in the inventory pane and select Storage > New Datastore.
Step 2. For datastore type, select VMFS and click Next.
Step 3. Provide a name for the datastore, select an available SCSI device, and click Next.
Step 4. Select VMFS 6 and click Next.
Step 5. Keep the default Partition Configuration setting Use All Available Partitions. Alternatively, set the datastore size, block size, space reclamation granularity, and space reclamation priority.
Step 6. Click Next.
Step 7. On the Ready to Complete page, click Finish.
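A VMFS 6 datastore can also be created from the ESXi Shell with vmkfstools once a partition exists on the target device (creating the partition with partedUtil is not shown here). The device name, partition number, and datastore label below are placeholders:

```
# Create a VMFS 6 file system labeled Datastore01 on partition 1 of the device
vmkfstools -C vmfs6 -S Datastore01 /vmfs/devices/disks/naa.600000000000000000000001:1
```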
You can increase the size of a VMFS datastore by adding an extent or by expanding the datastore within its own extent. A VMFS datastore can span multiple devices. Adding an extent to a VMFS datastore means adding a storage device (LUN) to the datastore. A spanned VMFS datastore can use any extent at any time. It does not require filling up a specific extent before using the next one.
A datastore is expandable when the backing storage device has free space immediately after the datastore extent. You can use the following procedure to increase the size of a datastore:
Step 1. In the vSphere Client, right-click the datastore in the inventory pane and select Increase Datastore Capacity.
Step 2. Select a device from the list of storage devices, based on the following.
To expand the datastore, select a storage device whose Expandable column contains YES.
To add an extent to the datastore, select a storage device whose Expandable column contains NO.
Step 3. Review the available configurations in the partition layout.
Step 4. In the menu, select one of the following available configuration options, depending on your previous selections:
Use Free Space to Expand the Datastore: Select this option to expand the existing datastore and disk partition to use the adjacent disk space.
Use Free Space: Select this option to deploy an extent in the remaining free space.
Use All Available Partitions: Select this option to reformat a disk and deploy an extent using the entire disk. (This option is available only for non-blank disks.)
Step 5. Set the capacity. (The minimum extent size is 1.3 GB.) Click Next.
Step 6. Click Finish.
Note
If a shared datastore becomes 100% full and has powered-on virtual machines, you can increase the datastore capacity—but only from the host where the powered-on virtual machines are registered.
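After increasing the capacity, you can confirm the new size and the extents backing the datastore from any attached host; a quick sketch:

```
# Show mounted file systems with their total and free space
esxcli storage filesystem list

# List the device extents that back each VMFS datastore
esxcli storage vmfs extent list
```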
Each VMFS datastore is assigned a universally unique ID (UUID). A storage device operation, such as a LUN snapshot, LUN replication, or a LUN ID change, might produce a copy of the original datastore such that both the original and a copy device contain a VMFS datastore with identical signatures (UUID). When ESXi detects a VMFS datastore copy, it allows you to mount it with the original UUID or mount it with a new UUID. The process of changing the UUID is called resignaturing.
To allow a host to use the original datastore and the copy, you can choose to resignature the copy. If the host will only access the copy, you could choose to mount the copy without resignaturing.
You should consider the following:
When resignaturing a datastore, ESXi assigns a new UUID to the copy, mounts the copy as a datastore that is distinct from the original, and updates all corresponding UUID references in the virtual machine configuration files.
Datastore resignaturing is irreversible.
After resignaturing, the storage device is no longer treated as a replica.
A spanned datastore can be resignatured only if all its extents are online.
The resignaturing process is fault tolerant. If the process is interrupted, you can resume it later.
You can mount the new VMFS datastore without risk of its UUID conflicting with UUIDs of any other datastore from the hierarchy of device snapshots.
To mount a VMFS datastore copy on an ESX host, you can use the following procedure:
Step 1. In the vSphere Client, select the host in the inventory pane and navigate to Configure > Storage > Storage Adapters.
Step 2. Rescan storage.
Step 3. Unmount the original VMFS datastore, which has the same UUID as the VMFS copy.
Step 4. Right-click the host and select Storage > New Datastore.
Step 5. Select VMFS as the datastore type.
Step 6. Enter the datastore name and placement (if necessary).
Step 7. In the list of storage devices, select the device that contains the VMFS copy.
Step 8. Choose to mount the datastore and select one of the following options:
Mount Options > Assign a New Signature
Mount Options > Keep Existing Signature
Step 9. Click Finish.
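The same keep-signature-or-resignature decision can be made from the ESXi Shell by using the VMFS snapshot namespace; the volume label is a placeholder:

```
# List detected VMFS datastore copies (unresolved snapshots/replicas)
esxcli storage vmfs snapshot list

# Mount a copy and keep its existing signature
esxcli storage vmfs snapshot mount -l OriginalDatastore

# Or assign a new signature (resignature) to the copy
esxcli storage vmfs snapshot resignature -l OriginalDatastore
```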
Beginning with vSphere 7.0, you can use clustered virtual disks (VMDKs) on a VMFS 6 datastore to support Windows Server Failover Clustering (WSFC). To enable support for clustered VMDK, you should set Clustered VMDK Support to Yes when creating a VMFS 6 datastore. The datastores must only be used by ESXi 7.0 or later and must be managed by the same vCenter Server 7.0 or later. For a datastore that supports clustered VMDK, you must also enable clustered VMDK. In the vSphere Client, select the VMFS 6 datastore in the inventory pane, and set Datastore Capabilities > Clustered VMDK to Enable. After enabling this setting, you can place the clustered virtual disks on the datastore. To disable the setting, you need to first power off the virtual machines with clustered virtual disks.
Table 11-3 provides details for other administration operations that you can perform on VMFS datastores.
Table 11-3 VMFS Datastore Operations
| Operation | Steps | Notes |
|---|---|---|
| Change datastore name | Right-click the datastore in the inventory pane, select Rename, and enter the new name. | If the host is managed by vCenter Server, you must rename the datastore from vCenter Server, not from the vSphere Host Client. You can successfully rename a datastore that has running virtual machines. |
| Unmount datastore | Right-click the datastore, select Unmount Datastore, and select the hosts from which to unmount it. | When you unmount a datastore, it remains intact but can no longer be seen from the specified hosts. Do not perform other configuration operations on the datastore during the unmount operation. Ensure that the datastore is not used by vSphere HA heartbeating (which could trigger a host failure event and restart virtual machines). |
| Mount datastore | Right-click the datastore, select Mount Datastore, and select the hosts on which to mount it. | A VMFS datastore that is unmounted from all hosts is marked as inactive. You can mount the unmounted VMFS datastore. If you unmount an NFS or a vVols datastore from all hosts, the datastore disappears from the inventory. To mount an NFS or vVols datastore that has been removed from the inventory, use the New Datastore wizard. |
| Remove datastore | Right-click the datastore and select Delete Datastore. | Deleting a datastore permanently destroys it and all its data, including virtual machine files. You are not required to unmount the datastore prior to deletion, but you should. |
In the vSphere Client, you can use the Datastore Browser to examine and manage the datastore contents. To get started, right-click the datastore in the inventory pane and select Browse Files. In the Datastore browser, select any of the options listed in Table 11-4.
Table 11-4 Datastore Browser Options
| Option | Description |
|---|---|
| Upload Files | Upload a local file to the datastore. |
| Upload Folder | Upload a local folder to the datastore. |
| Download | Download a file from the datastore to the local machine. |
| New Folder | Create a folder on the datastore. |
| Copy to | Copy selected folders or files to a new location on the datastore or on another datastore. |
| Move to | Move selected folders or files to a new location on the datastore or on another datastore. |
| Rename to | Rename selected files. |
| Delete | Delete selected folders or files. |
| Inflate | Convert a selected thin virtual disk to thick. |
When you use the vSphere Client to perform VMFS datastore operations, vCenter Server uses default storage protection filters. The filters help you avoid data corruption by displaying only the storage devices that are suitable for an operation. In the rare scenario in which you want to turn off the storage filters, you can do so using the following procedure:
Step 1. In the vSphere Client, select the vCenter Server instance in the inventory pane and navigate to Configure > Settings > Advanced Settings > Edit Settings.
Step 2. Specify one of the filter names described in Table 11-5 and set its value to False.
Table 11-5 Storage Filters
| Filter | Description |
|---|---|
| config.vpxd.filter.vmfsFilter (VMFS filter) | Hides storage devices (LUNs) that are used by a VMFS datastore on any host managed by vCenter Server. |
| config.vpxd.filter.rdmFilter (RDM filter) | Hides storage devices (LUNs) that are used by an RDM on any host managed by vCenter Server. |
| config.vpxd.filter.sameHostsAndTransportsFilter (Same Hosts and Transports filter) | Hides storage devices (LUNs) that are ineligible for use as VMFS datastore extents because of incompatibility with the selected datastore. Hides LUNs that are not exposed to all hosts that share the original datastore. Hides LUNs that use a storage type (such as Fibre Channel, iSCSI, or local) that is different from the original datastore. |
| config.vpxd.filter.hostRescanFilter (Host Rescan filter) | Automatically rescans and updates VMFS datastores following datastore management operations. If you present a new LUN to a host or a cluster, the hosts automatically perform a rescan, regardless of this setting. |
Note
You should consult the VMware support team prior to changing device filters.
You can use the following procedure to add an RDM to a virtual machine:
Step 1. In the vSphere Client, open the settings for a virtual machine.
Step 2. Click Add New Device and select RDM Disk.
Step 3. Select a LUN and click OK.
Step 4. Click the New Hard Disk triangle to expand the RDM properties.
Step 5. Select a datastore to place the RDM, which can be the same as or different from where the virtual machine configuration file resides.
Step 6. Select either Virtual Compatibility Mode or Physical Compatibility Mode.
Step 7. If you selected Virtual Compatibility Mode, select a disk mode: Dependent, Independent–Persistent, or Independent–Nonpersistent.
Step 8. Click OK.
You can use the following procedure to manage paths for the storage devices used by RDMs:
Step 1. In the vSphere Client, right-click the virtual machine in the inventory pane and select Edit Settings.
Step 2. Select Virtual Hardware > Hard Disk.
Step 3. Click the device ID that appears next to Physical LUN to open the Edit Multipathing Policies dialog box.
Step 4. Use the Edit Multipathing Policies dialog box to enable or disable paths, set multipathing policy, and specify the preferred path.
If the guest OS in your virtual machine is known to have issues using the SCSI INQUIRY data cached by ESXi, you can either modify the virtual machine or the host to ignore the cached data. To modify the virtual machine, you can edit its VMX file and add the following parameter, where scsiX:Y represents the SCSI device:
scsiX:Y.ignoreDeviceInquiryCache = "true"
To modify the host, you can use the following command, where deviceID is the device ID of the SCSI device:
esxcli storage core device inquirycache set --device deviceID --ignore true
NFS Version 3 and Version 4.1 are supported by ESXi, which uses a different client for each protocol. When mounting NFS datastores on an ESXi host, the following best practices should be observed:
On ESXi, the NFS Version 3 and NFS Version 4.1 clients use different locking mechanisms. You cannot use different NFS versions to mount the same datastore on multiple hosts.
ESXi hosts can make use of both NFS Version 3 and Version 4.1 if the previous rule is observed.
ESXi hosts cannot automatically upgrade NFS Version 3 to NFS Version 4.1.
NFS datastores must have folders with identical names mounted on all ESXi hosts, or functions such as vMotion may not work.
If an NFS device does not support internationalization, you should use ASCII characters only.
How you configure an NFS storage device to use with VMware varies by vendor, so you should always refer to the vendor documentation for specifics.
The following is the general procedure to configure an NFS server; a sample export entry follows the procedure, but refer to your vendor documentation for the specifics of your storage system:
Step 1. Use the VMware Hardware Compatibility List to ensure that the NFS server is compatible. Pay attention to the ESXi version, the NFS server version, and the server firmware version.
Step 2. Configure the NFS volume and export it (by adding it to /etc/exports) using the following details:
NFS Version 3 or NFS Version 4.1 (only one protocol per share)
NFS over TCP
Step 3. For NFS Version 3 or non-Kerberos NFS Version 4.1, ensure that each host has root access to the volume. The typical method for this is to use the no_root_squash option.
Step 4. If you are using Kerberos, ensure that the NFS exports provide full access to the Kerberos user. In addition, if you are going to use Kerberos with NFS Version 4.1, you need to enable either AES256-CTS-HMAC-SHA1-96 or AES128-CTS-HMAC-SHA1-96 on the NFS storage device.
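For reference, on a Linux-based NFS server the export might be defined in /etc/exports with an entry similar to the following sketch. The export path and subnet are placeholders, and the exact options vary by vendor and array:
# Illustrative /etc/exports entry: read/write, synchronous writes, root squashing disabled
/nfs/vmware_ds  192.168.10.0/24(rw,sync,no_root_squash)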
To prepare an ESXi host to use NFS, you must configure a VMkernel virtual adapter to carry NFS storage traffic. If you are using Kerberos and NFS Version 4.1, you should take the following additional steps:
Step 1. Ensure that the DNS settings on the ESXi hosts are pointing to the DNS server that is used for DNS records for Kerberos Key Distribution Center (KDC). This will most likely be the Active Directory server if that is being used for name resolution.
Step 2. Configure NTP because Kerberos is sensitive to time drift.
Step 3. Configure Active Directory for Kerberos.
To create (mount) an NFS datastore in vSphere, you need the IP address or DNS name of the NFS server as well as the path to the share (folder name). When using Kerberos, you need to configure the ESXi hosts for Kerberos authentication prior to creating the NFS datastore.
Note
Multiple IP addresses or DNS names can be used with NFS Version 4.1 multipathing.
You can use the following procedure to create an NFS datastore (an equivalent ESXi shell example follows the procedure):
Step 1. In the vSphere Client, right-click a data center, cluster, or ESXi host object in the inventory pane and select Storage > New Datastore.
Step 2. Select NFS as the new datastore type.
Step 3. Select the correct NFS version (Version 3 or Version 4.1). Be sure to use the same version on all ESXi hosts that are going to mount this datastore.
Step 4. Define the datastore name (with a maximum of 42 characters).
Step 5. Provide the appropriate path for the folder to mount, which should start with a forward slash (/).
Step 6. Set Server to the appropriate IPv4 address, IPv6 address, or server name.
Step 7. Optionally, select the Mount NFS Read Only checkbox. (This can only be set when mounting an NFS device. To change it later, you must unmount and remount the datastore from the hosts.)
Step 8. If using Kerberos, select Kerberos and define the Kerberos model as one of the following:
Use Kerberos for Authentication Only (krb5): This method supports identity verification only.
Use Kerberos for Authentication and Data Integrity (krb5i): This method supports identity verification and also ensures that data packets have not been modified or tampered with.
Step 9. If you selected a cluster or a data center object in step 1, then select the ESXi hosts to mount this datastore.
Step 10. Verify the configuration and click Finish.
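As an alternative to the New Datastore wizard, you can mount NFS datastores from the ESXi shell. This is a sketch only; the server addresses, share paths, and datastore names are placeholders:
# Mount an NFS Version 3 datastore
esxcli storage nfs add --host=192.168.10.50 --share=/nfs/vmware_ds --volume-name=NFS3-DS01
# Mount an NFS Version 4.1 datastore (multiple server addresses can be supplied for multipathing)
esxcli storage nfs41 add --hosts=192.168.10.50,192.168.10.51 --share=/nfs/vmware_ds41 --volume-name=NFS41-DS01
# List the NFS datastores mounted on the host
esxcli storage nfs list
esxcli storage nfs41 list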
To rename or unmount an NFS datastore, you can use the same procedure as described for VMFS datastores in Table 11-3. To remove an NFS datastore from the vSphere inventory, you should unmount it from every host.
This section provides details on configuring and managing Storage DRS and Storage I/O Control (SIOC).
To create a datastore cluster using the vSphere Client, you can right-click a data center in the inventory pane, select New Datastore Cluster, and complete the wizard. You can use the following procedure to enable Storage DRS (SDRS) in a datastore cluster:
Step 1. In the vSphere Client, select the datastore cluster in the inventory pane and navigate to Configure > Services > Storage DRS.
Step 2. Click Edit.
Step 3. Select Turn ON vSphere Storage DRS and click OK.
You can use similar steps to set the SDRS Automation Mode to No Automation, Partially Automated, or Fully Automated; to set the space utilization and I/O latency values under SDRS Thresholds; to select or deselect Enable I/O Metric for SDRS Recommendations; and to configure the advanced options, which are Space Utilization Difference, I/O Load Balancing Invocation Interval, and I/O Imbalance Threshold.
You can add datastores to a datastore cluster by using drag and drop in the vSphere Client. Each datastore can only be attached to hosts with ESXi 5.0 or later. The datastores must not be associated with multiple data centers.
If you want to perform a maintenance activity on a datastore that is a member of an SDRS cluster, or on its underlying storage devices, you can place the datastore in Maintenance Mode. (Standalone datastores cannot be placed in Maintenance Mode.) SDRS makes recommendations for migrating the impacted virtual machine files, including virtual disk files. You can let SDRS apply the recommendations automatically, or you can apply them manually. To place a datastore in Maintenance Mode using the vSphere Client, right-click the datastore in the inventory pane, select Enter SDRS Maintenance Mode, and optionally apply any recommendations.
The Faults tab displays a list of the disks that cannot be migrated and the reasons.
If SDRS affinity or anti-affinity rules prevent a datastore from entering Maintenance Mode, you can select an option to ignore the rules. To do so, edit the settings of the datastore cluster by selecting SDRS Automation > Advanced Options and setting IgnoreAffinityRulesForMaintenance to 1.
When reviewing each SDRS recommendation on the Storage DRS tab in the vSphere Client, you can examine the information described in Table 11-6 and use it when deciding which recommendations to apply.
Table 11-6 SDRS Recommendations

Recommendation | Details
---|---
Priority | Priority level (1–5) of the recommendation. (This is hidden by default.)
Recommendation | Recommended action.
Reason | Why the action is needed.
Space Utilization % Before (source) and (destination) | Percentage of space used on the source and destination datastores before migration.
Space Utilization % After (source) and (destination) | Percentage of space used on the source and destination datastores after migration.
I/O Latency Before (source) | Value of I/O latency on the source datastore before migration.
I/O Latency Before (destination) | Value of I/O latency on the destination datastore before migration.
You can use the following procedure to override the SDRS datastore cluster automation level per virtual machine:
Step 1. In the vSphere Client, right-click a datastore cluster in the inventory pane and select Edit Settings.
Step 2. Select Virtual Machine Settings.
Step 3. Select one of the following automation levels:
Default (Manual)
Fully Automated
Disabled
Step 4. Optionally select or deselect the Keep VMDKs Together option.
Step 5. Click OK.
You can use the following procedure to create an inter-VM anti-affinity rule (that is, a rule specifying that two or more virtual machines are placed on separate datastores):
Step 1. In the vSphere Client, right-click a datastore cluster in the inventory pane and select Edit Settings.
Step 2. Select Rules > Add.
Step 3. Provide a name and set Type to VM Anti-affinity.
Step 4. Click Add.
Step 5. Click Select Virtual Machine.
Step 6. Select at least two virtual machines and click OK.
Step 7. Click OK to save the rule.
To create an intra-VM anti-affinity rule (that is, a rule specifying that the virtual disks of a specific virtual machine are placed on separate datastores), you use a similar procedure but set Type to VMDK Anti-Affinity and select the appropriate virtual machine and virtual disks.
Storage I/O Control (SIOC) allows you to prioritize storage access during periods of contention, ensuring that the more critical virtual machines obtain more I/O than less critical VMs. Once SIOC has been enabled on a datastore, ESXi hosts monitor the storage device latency. If the latency exceeds a predetermined threshold, the datastore is determined to be under contention, and the virtual machines that reside on that datastore are assigned I/O resources based on their individual share values. You can enable SIOC as follows:
Step 1. In the vSphere Client, select a datastore in the Storage inventory view and select Configuration > Properties.
Step 2. Click the Enabled checkbox under Storage I/O Control and click Close.
Note
SIOC is enabled automatically on Storage DRS–enabled datastore clusters.
In addition to share values, which are similar to the shares defined for CPU and memory, storage I/O limits can be defined on individual virtual machines to limit the number of I/O operations per second (IOPS). By default, just as with CPU and memory resources, no limits are set for virtual machines. In a virtual machine with more than one virtual disk, limits must be set on all of the virtual disks for that VM; if you do not set a limit on all the virtual disks, the limit is not enforced. To view the shares and limits assigned to virtual machines, select a datastore in the vSphere Client, select the Virtual Machines tab, and examine the associated virtual machines. The details for each virtual machine include its shares, its IOPS limit, and its percentage of the shares for that datastore.
As with CPU and memory shares, SIOC shares establish a relative priority in the event of contention. During storage contention, virtual machines with more shares receive more disk I/O than virtual machines with fewer shares. The following procedure outlines how to configure SIOC shares and limits for virtual machines:
Step 1. In the vSphere Client, right-click a virtual machine in the inventory pane and select Edit Settings.
Step 2. Expand one of the hard disks (for example, Hard disk 1).
Step 3. From the Shares drop-down menu, select High, Normal, Low, or Custom to define the share value.
Step 4. Set the Limit–IOPS drop-down to Low (500), Normal (1000), High (2000), or Custom (and enter a custom value for the IOPS limit).
Step 5. Click OK to save your changes.
To view the impact of shares on individual datastores, in the vSphere Client, select a datastore in the inventory pane, select the Performance tab, and select View > Performance. Here, you can observe the following data:
Average latency and aggregated IOPS
Host latency
Host queue depth
Host read/write IOPS
Virtual machine disk read/write latency
Virtual machine disk read/write IOPS
The default threshold for SIOC to begin prioritizing I/O based on shares is 30 ms and typically does not need to be modified. However, you can modify this threshold if you need to. Be aware that SIOC will not function properly unless all the datastores that share drive spindles have the same threshold defined. If you set the value too low, shares will enforce priority of resources sooner but could decrease aggregated throughput; if you set it too high, the result might be higher aggregated throughput but less prioritization of disk I/O.
The following procedure allows you to modify the threshold:
Step 1. In the vSphere Client Storage Inventory view, select a datastore and select the Configuration tab.
Step 2. Select Properties and under Storage I/O Control, select Enabled if it is not already.
Step 3. Click Advanced to modify the threshold for contention; this value must be between 10 ms and 100 ms.
Step 4. Click OK and then click Close.
The procedure to reset the threshold to the default is similar:
Step 1. In the vSphere Client Storage Inventory view, select a datastore and select the Configuration tab.
Step 2. Select Properties and under Storage I/O Control, select Advanced.
Step 3. Click Reset.
Step 4. Click OK and then click Close.
This section provides details on configuring and managing Non-Volatile Memory Express (NVMe) and persistent memory (PMem).
As described in Chapter 2, Non-Volatile Memory Express (NVMe) devices are a high-performance alternative to SCSI storage. There are three mechanisms for NVMe:
NVMe over PCIe: NVMe over PCIe is used for local storage. (The other two mechanisms are forms of NVMe over Fabrics, or NVMe-oF, which is used for connected shared storage.)
NVMe over Remote Direct Memory Access (RDMA): NVMe over RDMA is shared NVMe-oF storage using RDMA over Converged Ethernet (RoCE) Version 2 transport.
NVMe over Fibre Channel (FC-NVMe): FC-NVMe is shared NVMe-oF storage using Fibre Channel transport.
Chapter 2 describes the requirements for each of these mechanisms.
After you install the hardware for NVMe over PCIe, ESXi detects it as a storage adapter that uses PCIe. You can use the vSphere Client to view the storage adapter and storage device details. No other configuration is needed.
When using NVMe for shared storage, you must not mix transport types to access the same namespace. You should also ensure that active paths are presented to the host, because a namespace cannot be registered until its active path has been discovered.
Table 11-7 identifies additional information about NVMe over Fabric versus SCSI over Fabric storage.
Table 11-7 SCSI over Fabric and NVMe over Fabric Comparison

Shared Storage Capability | SCSI over Fabric | NVMe over Fabric
---|---|---
RDM | Supported | Not supported
Coredump | Supported | Not supported
SCSI-2 reservations | Supported | Not supported
Shared VMDK | Supported | Not supported
vVols | Supported | Not supported
Hardware acceleration with VAAI plug-ins | Supported | Not supported
Default MPP | NMP | HPP (NVMe-oF targets cannot be claimed by NMP.)
Limits | LUNs=1024, paths=4096 | Namespaces=32, paths=128 (maximum 4 paths per namespace in a host)
To use FC-NVMe, you must add an appropriate supported adapter and use the following procedure to add the controller to the host:
Step 1. In the vSphere Client, select the host in the inventory pane and navigate to Configure > Storage > Storage Adapters.
Step 2. Click Controllers > Add Controller.
Step 3. Select one of the following options:
Automatically Discover Controllers: Click Discover Controllers and select a controller.
Enter Controller Details Manually: Provide the subsystem NQN, the worldwide node name, and the worldwide port name. Optionally, provide an admin queue size and keepalive timeout.
You can configure an ESXi 7.0 host to access shared NVMe devices using RDMA over Converged Ethernet (RoCE) Version 2. The host must have a network adapter that supports RoCE Version 2, and you must configure a software NVMe over RDMA adapter.
For hosts with a NIC that supports RoCE Version 2, the vSphere Client shows both the network adapter component and the RDMA component. You can select the host in the inventory pane and navigate to Configure > Networking > RDMA Adapters. Here you can see the unique names assigned to the RDMA devices, such as vmrdma0. For each device, you can see its paired uplink (that is, its associated physical NIC, such as vmnic9). To complete the host configuration, you can use the following procedure (a short CLI verification example follows the procedure):
Step 1. Create a new VMkernel virtual network adapter on a vSphere standard or distributed switch and configure its uplink to use the RDMA paired uplink (for example, vmnic9).
Step 2. Select the host and navigate to Configure > Networking > RDMA Adapters.
Step 3. Select the appropriate RDMA device (for example, vmrdma0) and select VMkernel Adapters Bindings in the details pane.
Step 4. Verify that the new VMkernel adapter (for example, vmk2) appears.
Step 5. Select the host and navigate to Configure > Storage > Storage Adapters.
Step 6. Click the Add Software Adapter button.
Step 7. Select Add Software NVMe over RDMA Adapter.
Step 8. Select the appropriate RDMA adapter (for example, vmrdma0) and click OK.
Step 9. In the list of storage adapters, identify the new adapter in the category VMware NVME over RDMA Storage Adapter and make note of its assigned device number (for example, vmhba71).
Step 10. To identify the available storage devices, select the storage adapter (for example, vmhba71) and select Devices in the details pane. You can use these devices to create VMFS datastores.
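To verify the configuration from the ESXi shell, you can use commands such as the following sketch (adapter names such as vmrdma0 and vmhba71 vary by host):
# List RDMA-capable devices and their paired physical uplinks
esxcli rdma device list
# List all storage adapters, including the new NVMe over RDMA adapter
esxcli storage core adapter list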
As described in Chapter 2, High-Performance Plug-in (HPP) is the default plug-in that claims NVMe-oF targets. NVMe over PCIe targets default to the VMware Native Multipathing Plug-in (NMP). You can use the esxcli storage core claimrule add command to change the claiming plug-in in your environment. For example, to set a local device to be claimed by HPP, use the --pci-vendor-id parameter and set the --plugin parameter to HPP. To change the claim rule based on an NVMe controller model, use the --nvme-controller-model parameter.
To assign a specific HPP Path Selection Scheme (PSS) to a specific device, you can use the esxcli storage hpp device set command with the --pss parameter to specify the scheme and the --device parameter to specify the device. The available HPP PSS options are explained in Table 2-6 in Chapter 2. To create a claim rule that assigns the HPP PSS by vendor and model, you can use esxcli storage core claimrule add with the -V (vendor), -M (model), -P (plug-in), and --config-string parameters. In the value for --config-string, specify the PSS name and other settings, such as "pss=LB-Latency,latency-eval-time=40000". An example sketch appears later in this section.
Note
Enabling HPP on PXE-booted ESXi hosts is not supported.
After using these commands, you should reboot the hosts to apply the changes.
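The following sketch puts these commands together. The rule numbers, PCI vendor ID, controller model string, and device identifier are all placeholders; review the existing claim rules on your hosts before adding new ones:
# Claim local NVMe PCIe devices from a specific PCI vendor with HPP (vendor ID is a placeholder)
esxcli storage core claimrule add --rule 429 --type vendor --pci-vendor-id 8086 --plugin HPP
# Claim devices by NVMe controller model and assign a latency-based PSS via the config string
esxcli storage core claimrule add --rule 430 --type vendor --nvme-controller-model "ExampleModel" --plugin HPP --config-string "pss=LB-Latency,latency-eval-time=40000"
# Assign a PSS directly to a single HPP-claimed device (device identifier is a placeholder)
esxcli storage hpp device set --device eui.0000000000000001 --pss LB-Latency --latency-eval-time 40000
# Load the new rules; then reboot the host as noted above
esxcli storage core claimrule load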
PMem devices are non-volatile dual in-line memory modules (NVDIMMs) on the ESXi host that reside in normal memory slots. They are non-volatile and combine the performance of volatile memory with the persistence of storage. PMem devices are supported on ESXi 6.7 and later.
ESXi hosts detect local PMem devices and expose the devices as host-local PMem datastores to virtual machines. Virtual machines can directly access and utilize them as either memory (virtual NVDIMM) or storage (PMem hard disks). An ESXi host can have only one PMem datastore, but it can be made up of multiple PMem modules.
In vPMem mode, a virtual machine can directly access PMem resources and use the resources as regular memory. The virtual machine uses NVDIMMs that represent physical PMem regions. Each virtual machine can have up to 64 virtual NVDIMM devices, and each NVDIMM device is stored in the host-local PMem datastore. Virtual machines must be at hardware Version 14, and the guest OS must be PMem aware.
In vPMemDisk mode, a virtual machine cannot directly access the PMem resources. You must add a virtual PMem disk to the virtual machine. A virtual PMem disk is a regular virtual disk that is assigned a PMem storage policy, forcing it to be placed on a host-local PMem datastore. This mode has no virtual machine hardware or operating system requirements.
The following are components of the PMem structure on an ESXi host:
Modules: These are the physical NVDIMMs that reside on the motherboard.
Interleave sets: These are logical groupings of one or more modules. Data is interleaved across the modules in a set, so if an ESXi host has two modules in one interleave set, they are effectively read in parallel. You can identify the way the NVDIMMs are grouped into interleave sets via the vSphere Client.
Namespaces: PMem datastores are built on top of namespaces, which are regions of contiguously addressed memory ranges.
To view information about the PMem modules, interleave sets, and namespaces, you can follow this procedure:
Step 1. In the vSphere Host Client, select Storage from the inventory pane.
Step 2. Click on the Persistent Memory tab.
Step 3. Click Modules to see the NVDIMMs that contribute to the PMem datastore.
Step 4. Click Namespaces to see namespace information.
Step 5. Click Interleave Sets to see how the modules are grouped into interleave sets.
To delete namespaces that were created by an operating system previously installed on the host machine, you can use the same procedure to navigate to Namespaces, select the namespace, and click Delete. This frees up the PMem space, but you must reboot the host to access it.
This section provides information on managing storage multipathing, storage policies, and Virtual Volumes (vVols) in vSphere 7.0.
As explained in Chapter 2, ESXi uses the Pluggable Storage Architecture (PSA), which allows plug-ins to claim storage devices. The plug-ins include the Native Multipathing Plug-in (NMP), the High-Performance Plug-in (HPP), and third-party multipathing modules (MPPs).
You can use esxcli commands to manage the PSA plug-ins. For example, you can use the following command from an ESXi shell to view the multipathing modules (plug-ins):
esxcli storage core plugin list --plugin-class=MP
You can use the following command to list all devices controlled by the NMP module. For each device, you will find details such as the assigned Storage Array Type Plug-in (SATP) and the Path Selection Policy (PSP):
esxcli storage nmp device list
To see details for a specific device, you can provide the --device option with the previous command. For example, if you have a device that is identified by mpx.vmhba0:C0:T0:L0, you can use the following command to retrieve details for just that device:
esxcli storage nmp device list --device=mpx.vmhba0:C0:T0:L0
Table 11-8 provides information on some other esxcli commands that you can use with NMP.
Table 11-8 ESXCLI Commands for NMP

Command | Description
---|---
esxcli storage nmp satp list | Provides information for each available SATP, including the default PSP
esxcli storage nmp psp list | Provides a description for each available PSP
esxcli storage nmp satp set --default-psp=policy --satp=satpname | Changes the default PSP for an SATP named satpname, where the policy is VMW_PSP_MRU, VMW_PSP_FIXED, or VMW_PSP_RR, as explained in Table 2-11 in Chapter 2
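For example, the following sketch makes Round Robin the default PSP for the VMW_SATP_DEFAULT_AA storage array type; confirm which SATP your array actually uses before changing defaults:
# Show the available SATPs and their current default PSPs
esxcli storage nmp satp list
# Change the default PSP for one SATP
esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_DEFAULT_AA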
Note
In many cases, the storage system provides ESXi with the storage device names and identifiers, which are unique and based on storage standards. Each identifier uses a naa.xxx, eui.xxx, or t10.xxx format. Otherwise, the host generates an identifier in the form mpx.path, where path is the first path to the device, such as mpx.vmhba1:C0:T1:L3.
Table 11-9 provides information on some esxcli commands that you can use with HPP.
Table 11-9 ESXCLI Commands for HPP

Command | Description | Options
---|---|---
esxcli storage hpp path list | Lists which paths are claimed by HPP | -d, --device; -p, --path=<path>
esxcli storage hpp device list | Lists devices controlled by HPP | -d, --device=<device>
esxcli storage hpp device set | Configures HPP settings | -B, --bytes=<max_bytes_on_path>; -g, --cfgfile; -d, --device=<device>; -I, --iops=<max_iops_on_path>; -T, --latency-eval-time=<interval_in_ms>; -M, --mark-device-ssd=<value>; -p, --path=<path>; -S, --sampling-ios-per-path=<value>; -P, --pss=<FIXED, LB-Bytes, LB-IOPs, LB-Latency, LB-RR>
esxcli storage hpp device usermarkedssd list | Lists devices that were marked as SSD by a user | -d, --device=<device>
This section provides information on using the vSphere Client to manipulate the path selection policy and available paths for NMP. For information on selecting PSS for HPP, see the “Configuring NVMe and HPP” section, earlier in this chapter.
A path to a storage device is represented as the storage adapter, storage channel number, target number, and LUN number (the LUN position within the target) set that is used to connect to the device. For example, vmhba1:C0:T1:L3 indicates that the path uses storage adapter vmhba1, channel 0, target 1, and LUN 3. To view the storage paths for a specific device, you can use the following procedure:
Step 1. In the vSphere Client, select a host in the inventory pane and navigate to Configure > Storage > Storage Devices.
Step 2. Select the storage device.
Step 3. Click the Properties tab and review the details. For NMP devices, the details include the assigned SATP and PSP.
Step 4. Click Paths and review the available paths to the device. The status for each path can be Active (I/O), Standby, Disabled, or Dead. For devices using the Fixed path policy, an asterisk (*) represents the preferred path.
To disable a path to a storage device, you can follow the preceding procedure, select a path, and choose Disable.
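From the ESXi shell, you can list the paths to a device and change a path's state with commands such as the following sketch (the device identifier and path name are placeholders):
# List all paths to a specific device
esxcli storage core path list --device naa.60000000000000000000000000000001
# Disable a single path by its runtime name
esxcli storage core path set --path vmhba1:C0:T1:L3 --state off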
In the vSphere Client, you can select a VMFS datastore and navigate to Configure > Connectivity and Multipathing to review information on the paths to the storage devices backing the datastore.
To change the PSP that is assigned to a storage device, you can navigate to the device’s Properties page (see the previous procedure) and click Edit Multipathing. On the multipathing page, you can choose a policy, such as VMW_PSP_FIXED, VMW_PSP_RR, or VMW_PSP_MRU, as described in Table 2-11 in Chapter 2.
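Alternatively, you can assign the PSP from the ESXi shell, as in this sketch (the device identifier is a placeholder):
# Assign the Round Robin PSP to one NMP-claimed device
esxcli storage nmp device set --device naa.60000000000000000000000000000001 --psp VMW_PSP_RR
# Confirm the assignment
esxcli storage nmp device list --device=naa.60000000000000000000000000000001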
ESXi uses claim rules to determine which multipathing module owns the paths to a specific storage device and the type of multipathing support that is applied. Core claim rules determine which multipathing module (NMP, HPP, or a third-party module) claims a device. For NMP, SATP claim rules determine which SATP submodule claims the device. Table 11-10 describes a few commands for claim rules.
Table 11-10 Sample Claim Rules Commands

Command | Description
---|---
esxcli storage core claimrule list --claimrule-class=MP | Lists the claim rules on the host.
esxcli storage core claimrule add | Defines a new claim rule. The rule may contain multiple options, such as plug-in (-P), model (-M), and vendor (-V). The value for the plug-in can be NMP, HPP, MASK_PATH, or a third-party plug-in name.
esxcli storage core claimrule load | Loads new claim rules into the system.
esxcli storage core claimrule run | Applies the loaded claim rules.
In vSphere, Storage Policy Based Management (SPBM) can be used to align storage with the application demands of your virtual machines. With SPBM, you can assign a storage policy to a virtual machine to control the type of storage that can be used by the virtual machine and how the virtual machine is placed on the storage. You can apply a storage policy as you create, clone, or migrate a virtual machine.
Prior to creating virtual machine storage policies, you must populate the VM Storage Policy interface with information about storage entities and data services in your storage environment. When available, you can use a vSphere APIs for Storage Awareness (VASA) provider to provide the information. Alternatively, you can use datastore tags.
To use storage policies, multiple steps are required, and the particular steps depend on the type of storage or services you need. This section describes the major steps.
In many cases, VM storage policies populate automatically. However, you can manually assign tags to datastores as follows:
Step 1. Create a category for the storage tags:
In the vSphere Client, select Home > Tags & Custom Attributes.
Click Tags > Categories.
Click Add Category.
Define the following:
Category Name
Description
Tags per Object
Associable Object Types.
Click OK.
Step 2. Create a storage tag:
Click Tags on the Tags tab.
Click Add Tag.
Define the tag properties:
Name
Description
Category
Click OK.
Step 3. Apply the tag to the datastore:
In the storage inventory view, right-click the datastore and select Tags & Custom Attributes > Assign Tag.
Select a tag from the list and click Assign.
To create a VM storage policy, follow this procedure:
Step 1. For host-based services:
In the vSphere Client, select Home > Policies and Profiles.
Click VM Storage Policies.
Select Create VM Storage Policy.
Provide the following information:
vCenter Server
Name
Description
On the Policy Structure page:
Select the tab for the data service category.
Define custom rules.
Review the datastores that match the policy.
Review the settings and click Finish.
Step 2. For vVols:
In the vSphere Client, select Home > Policies and Profiles.
Click VM Storage Policies.
Select Create VM Storage Policy.
Provide the following information:
vCenter Server
Name
Description
On the Policy Structure page, enable rules.
On the Virtual Volumes Rules page, set storage placement rules:
Select Placement > Add Rule.
Use the Add Rule drop-down to define the available capacity.
Click Tags to define tag-based rules
Optionally, set rules for datastore-specific services.
On the Storage Compatibility page, review the datastores matching the policy.
On the Review and Finish page, verify the settings and click Finish.
Step 3. For tag-based placement:
In the vSphere Client, select Home > Policies and Profiles.
Click VM Storage Policies.
Select Create VM Storage Policy.
Provide the following information:
vCenter Server
Name
Description
On the Policy Structure page, select Add Tag Rule and define the tag category, usage option, and tags. Repeat as needed.
Review the datastores that match the policy.
Verify the policy settings on the Review and Finish page and click Finish.
To collect storage entities and data services information from a VASA storage provider, you can use the following procedure:
Step 1. In the vSphere Client, select vCenter in the inventory pane and navigate to Configure > Storage Providers.
Step 2. Click the Add icon.
Step 3. Provide the connection information for the provider, including the name, URL, and credentials.
Step 4. Select one of the following security methods:
Select the Use Storage Provider Certificate option and define the location of the certificate.
Review and accept the certificate thumbprint that is displayed.
Step 5. Note that the storage provider adds the vCenter Server certificate to its truststore when vCenter Server initially connects to the storage device.
Step 6. Click OK.
To perform management operations involving a storage provider, such as rescanning, you can use the following procedure:
Step 1. In the vSphere Client, select vCenter in the inventory pane and navigate to Configure > Storage Providers.
Step 2. Select a storage provider and choose one of the following options:
Synchronize Storage Providers: Synchronizes vCenter Server with information for all storage providers.
Rescan: Synchronizes vCenter Server with information from a specific storage provider.
Remove: Unregisters a specific storage provider. This is useful, for example, when upgrading a storage provider to a later VASA version requires you to unregister and reregister it.
Refresh Certificate: Refreshes a certificate before it retires.
When performing a provisioning, cloning, or migration operation for a virtual machine, you can assign a storage policy. For example, when using the New Virtual Machine wizard, on the Select Storage page, you can select a policy in the VM Storage Policy drop-down menu to assign a storage policy to the entire virtual machine. After selecting the policy, you should choose a datastore from the list of compatible datastores. If you use the replication service provided with vVols, you should either specify a preconfigured replication group or choose to have vVols create an automatic replication group.
Optionally, you can set different storage policies for each virtual disk. For example, on the Customize Hardware page, you can select New Hard Disk and set a VM storage policy for that disk.
To change the storage policy assigned to a virtual machine, you can use the following procedure:
Step 1. In the vSphere Client, navigate to Menu > Policies and Profiles > VM Storage Policies.
Step 2. Select a storage policy and click VM Compliance.
Step 3. Select a virtual machine that is currently assigned the selected policy.
Step 4. Click Configure > Policies.
Step 5. Click Edit VM Storage Policies.
Step 6. Assign the appropriate policy to the virtual machine or assign separate policies to different virtual disks.
Step 7. If you use the replication service provided with vVols, configure the replication group.
Step 8. Click OK.
To use vVols, you must ensure that your storage and vSphere environment are properly configured.
To work with vVols, the storage system must support vVols and must integrate with vSphere using VASA. You should consider the following guidelines:
The storage system must support thin provisioning and snapshotting.
You need to deploy the VASA storage provider.
You need to configure the following components on the storage side:
Protocol endpoints
Storage containers
Storage profiles
Replication configurations if you plan to use vVols with replication
You need to follow appropriate setup guidelines for the type of storage you use (Fibre Channel, FCoE, iSCSI, or NFS). If necessary, you should install and configure storage adapters on your ESXi hosts
You need to use NTP to ensure time synchronization among the storage system components and vSphere.
To configure vVols, you can use the following procedure (a short CLI verification example follows the procedure):
Step 1. Register the storage providers for vVols, as described previously.
Step 2. Create a vVols datastore by using the following steps:
In the vSphere Client, right-click a host, cluster, or data center in the inventory pane and select Storage > New Datastore.
Select vVol as the datastore type
Select the hosts that will access the datastore.
Click Finish.
Step 3. Navigate to Storage > Protocol Endpoints to examine and manage the protocol endpoints. Optionally, you can take the following steps:
Use the Properties tab to modify the multipathing policy.
Use the Paths tab to change the path selection policy, disable paths, and enable paths.
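From the ESXi shell, you can also confirm the vVols configuration with commands such as the following sketch (output depends on your array and VASA provider):
# List the VASA providers known to the host
esxcli storage vvol vasaprovider list
# List the storage containers that back vVols datastores
esxcli storage vvol storagecontainer list
# List the protocol endpoints discovered by the host
esxcli storage vvol protocolendpoint list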
As mentioned in the section “How to Use This Book” in the Introduction, you have some choices for exam preparation: the exercises here, Chapter 15, “Final Preparation,” and the exam simulation questions on the companion website.
Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 11-12 lists these key topics and the page number on which each is found.
Table 11-12 Key Topics for Chapter 11

Key Topic Element | Description
---|---
List | vSAN characteristics
List | Shutting down a vSAN cluster
List | Creating a stretched vSAN cluster
List | Increasing the size of a datastore
List | Creating an NFS datastore
List | Setting SIOC shares and limits
Table | Sample claim rules commands
List | Implementing storage policies
Print a copy of Appendix B, “Memory Tables” (found on the companion website), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, “Memory Tables Answer Key” (also on the companion website), includes completed tables and lists to check your work.
Define the following key terms from this chapter and check your answers in the glossary:
1. You are implementing encryption for a vSAN cluster in vSphere 7.0. Which of the following options is a requirement?
Deploy KMIP 1.0.
Deploy the KMS as a virtual machine in the vSAN datastore.
Ensure that the KMS is in the vSphere compatibility matrices.
Ensure that the witness host participates in encryption.
2. You want to save space in your vSAN cluster by removing redundant data blocks. Which of the following steps should you take?
Enable Compression.
Enable Deduplication.
Enable Deduplication and Compression.
Enable Allow Reduced Redundancy.
3. In your vSphere 7.0 environment, you are using the Datastore Browser to perform administrative tasks. Which of the following options is not available in the Datastore Browser?
Upload files
Download
Mount
Inflate
4. For your vSphere 7.0 environment, you are comparing NVMe-oF with SCSI over Fibre Channel. Which one of the following statements is true?
Virtual volumes are supported with NVMe-oF.
SCSI-2 reservations are supported with NVMe-oF.
RDMs are supported with NVMe-oF.
HPP is supported with NVMe-oF.
5. You are using the vSphere Client to manage the storage providers. Which one of the following is not an option?
Replace
Synchronize Storage Providers
Rescan
Refresh Certificate