Sharing data between users and hosts is a core feature of every corporate network. In Chapter 5, “Managing Storage,” you looked at managing disks and volumes/partitions. In this chapter, you examine the sharing of files between systems and some mechanisms for doing so.
The Server Message Block (SMB) file‐ and printer‐sharing protocol has been in use since the late 1980s as a mechanism for storing data on a server and accessing it from client systems. All recent versions of Windows include an SMB client and SMB server feature, allowing users to share and use shared files.
Security of shared files is important. The SMB protocol has been around for a long time, and the first version, SMB1, is not considered safe for use, so it should be disabled. (Disabling SMB1 on a host can mean that older systems may not be able to connect to that host.)
When you share files, you can provide additional access controls over the share itself. The user of any shared folder or file receives the lesser of the NTFS permissions and the share permissions. For example, a user with Full Control NTFS permissions on a folder but only Read permission on the share can only read files via that share. This can simplify the setting of permissions.
Windows Server comes with a number of file server features (each of which you can install using Install-WindowsFeature), which are organized as subfeatures of the FileAndStorage-Services role. The FileAndStorage-Services role, which enables some basic file sharing, is installed by default in Windows Server 2019. The subfeatures themselves are not installed by default.
All of these features can be used in any organization that needs to share files or data. Covering all these topics properly would take more space than is available.
This chapter looks at sharing files and data between systems. To manage shares, you use the cmdlets in the SmbShare module, augmented by the cmdlets in the downloadable NTFSSecurity module.

In this chapter, you use the following systems:
DC1.Reskit.Org, a domain controller in the Reskit.Org domain, which also provides a DNS service for the Reskit.Org domain; FS1.Reskit.Org and FS2.Reskit.Org, which you use as file servers; and SRV2.Reskit.Org, on which you create an iSCSI target. You set up FS1 as a file server and then use the iSCSI initiator on FS1 to connect to the iSCSI target on SRV2. Finally, you also use FS1 and FS2 to deploy an SOFS.

Figure 6.1 shows the systems in use in this chapter.
Figure 6.1: Systems used in this chapter
Note that all systems need PowerShell 7 loaded before starting. You can do that manually, using the scripts from Chapter 1, “Establishing a PowerShell 7 Administrative Environment.”
The SMB protocol is a network protocol that runs on top of TCP/IP and is used to share access to files, printers, and other resources on your network. All currently supported versions of Windows (server and client) contain an SMB client and an SMB server.
In Windows, the SMB server is implemented by the LanmanServer service. In Linux and Unix, the Samba project (www.samba.org) provides an SMB server and client that interoperate with Windows clients and servers. For more information about the SMB protocol, see docs.microsoft.com/en-us/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview.
The SMB protocol has evolved significantly since it was first introduced with Microsoft's LAN Manager in the late 1980s, and SMB Version 1 is no longer considered safe for use. An important step in securing your file servers is to ensure you disable this version of the SMB protocol.
The latest version of the SMB protocol, SMB3, contains a number of significant improvements. These include SMB Scale‐Out, SMB Multichannel, and SMB Direct, which all improve the performance and resilience of SMB and enable you to store SQL databases as well as Hyper‐V virtual hard drives on an SMB3 file server. All current versions of Windows contain SMB3 support. For more details on SMB3, see docs.microsoft.com/en-us/windows-server/storage/file-server/file-server-smb-overview.
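As a quick check (not one of this chapter's numbered snippets), you can see which SMB dialect each active connection negotiated by using the Get-SmbConnection command on any Windows host with open SMB connections:

```powershell
# View the negotiated SMB dialect for current client connections
# (run on a host that has active SMB connections; output varies)
Get-SmbConnection |
    Select-Object -Property ServerName, ShareName, Dialect
```

A Dialect value of 3.1.1 indicates the most recent SMB3 protocol revision.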
In this section, you use FS1.Reskit.Org, a domain-joined Windows Server 2019 host with no additional features installed (and with Internet access). To assist with DNS resolution, you must also have a DC in the Reskit.Org domain, DC1.Reskit.Org, online. Also, ensure you have installed PowerShell 7 on this host (and optionally VS Code). You can use the scripts in Chapter 1 to do this.
To create a file server using Windows Server 2019, you use the Server Manager module to add the necessary services and tools.
# 1. Add File Server features to FS1
Import-Module -Name ServerManager -WarningAction SilentlyContinue
$Features = 'FileAndStorage-Services',
'File-Services',
'FS-FileServer'
Install-WindowsFeature -Name $Features -IncludeManagementTools
You can view the output of this snippet in Figure 6.2.
Figure 6.2: Installing file server features
With this snippet, you import the ServerManager module and then install the file server features. Once this is complete, FS1 is capable of being a file server; you just need to configure the server and share folders.
You can use the Get-SmbServerConfiguration command to view the default configuration of the SMB service on FS1.
# 2. Get Default SMB Server Settings
Get-SmbServerConfiguration
The output, shown in Figure 6.3, shows the default property settings for the SMB service.
Before putting a file server into production, you should review the 43 properties of your file server. These default settings have changed over the different versions of Windows Server, so it's important that you check these properties and update them where needed.
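You can count those properties yourself with Get-Member; the following is a quick check, not one of the chapter's numbered snippets:

```powershell
# Count the configurable SMB server properties
# (the chapter reports 43 on Windows Server 2019)
$Props = Get-SmbServerConfiguration | Get-Member -MemberType Property
$Props.Count
```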
Version 1 of the SMB protocol contains a vulnerability that enables an intruder to run arbitrary code. See cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-0144 for details of the vulnerability and the systems affected. The WannaCry ransomware, for example, exploited this weakness.
Figure 6.3: Viewing SMB server settings
By default, Windows Server 2019 has SMB1 disabled. It is, though, a good idea to make sure this protocol is disabled explicitly.
# 3. Ensure SMB V1 is turned off
$CHT = @{
EnableSMB1Protocol = $false
Confirm = $false
}
Set-SmbServerConfiguration @CHT
In this snippet, you explicitly disable the SMB1 protocol. Alternatively, you could have tested whether SMB1 was enabled and only then explicitly disable SMB1.
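That conditional approach might look like the following sketch, which changes the configuration only when SMB1 is currently enabled:

```powershell
# Disable SMB1 only if it is currently enabled
$SmbConfig = Get-SmbServerConfiguration
if ($SmbConfig.EnableSMB1Protocol) {
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Confirm:$false
}
```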
With Windows PowerShell, you could have used Desired State Configuration (DSC) to ensure that SMB1 remains turned off. Unfortunately, as described in Chapter 2, "PowerShell 7 Compatibility with Windows PowerShell," DSC is not fully supported by PowerShell 7.
The SMB protocol by default transfers all data unencrypted. On an internal network this may not matter, but it does represent a potential vulnerability. Two things you can do to improve network security are to encrypt any transferred data and sign each packet.
# 4. Turn on SMB signing and encryption
$SHT1 = @{
RequireSecuritySignature = $true
EnableSecuritySignature = $true
EncryptData = $true
Confirm = $false
}
Set-SmbServerConfiguration @SHT1
Signing and encrypting packets takes additional CPU time. This could be an issue on a busy file server serving hundreds of users simultaneously. If you are implementing file servers as virtual machines, you might consider adding one or more virtual CPUs to any file server VM if the VM shows a high CPU load (a CPU usage of 80% or more over a sustained time).
Encryption on the client side should not provide any significant performance issue.
Windows servers and clients create a number of default shares, also known as administrative shares. These shares are hidden and enable IT pros to have remote access to each disk volume on a network‐connected system. You cannot delete these shares permanently, but you can disable them.
# 5. Turn off default server and workstations shares
$SHT2 = @{
AutoShareServer = $false
AutoShareWorkstation = $false
Confirm = $false
}
Set-SmbServerConfiguration @SHT2
By default, SMB servers announce themselves on the network. This could be a potential security risk, but it is easy to stop.
# 6. Turn off server announcements
$SHT3 = @{
ServerHidden = $true
AnnounceServer = $false
Confirm = $false
}
Set-SmbServerConfiguration @SHT3
The SMB server settings you have set do not take effect until you restart the service.
# 7. Restart the service with the new configuration
Restart-Service -Name LanmanServer
Once you have reconfigured the SMB server and restarted the service, you can review the SMB server settings and observe the updated configuration.
# 8. Review SMB Server Configuration
Get-SmbServerConfiguration
Figure 6.4 shows the results of the configuration changes.
As you can see from the figure, the settings you configured are now in operation on the running SMB service. There are many more settings you can configure for an SMB server, although many of them are not well documented.
SMB shares in Windows can be secured independently of any underlying filesystem security. Irrespective of the filesystem you implement, you can provide ACLs to SMB shares to control access to the underlying data. With the NTFS filesystem, you are able to set share access permissions as well as filesystem permissions.
Figure 6.4: Viewing the reconfigured SMB server settings
Managing permissions for files (in an NTFS volume) is not fully supported by in-the-box cmdlets. You can use the Get-Acl and Set-Acl commands to update an ACL, but you have to use .NET Framework objects to create the individual ACEs you want to add to any ACL. The NTFSSecurity module makes updating ACLs much easier.
This section uses FS1, the server you set up in "Setting Up and Securing an SMB File Server." You also need the domain controller, DC1. This section also uses the AD group Sales, created in "Managing NTFS Permissions" in Chapter 5.

This section creates a new folder, C:\Sales1. Later in this section, you also make use of commands in the NTFSSecurity module, so you need to ensure you have it installed. You can do both tasks as follows:
# 1. Ensure folder exists and install NTFS Security module
$EAHT = @{ErrorAction = 'SilentlyContinue'}
New-Item -Path C:\Sales1 -ItemType Directory @EAHT | Out-Null
Install-Module -Name NTFSSecurity -Force
This snippet ensures that the C:\Sales1 folder exists on FS1 and that the NTFSSecurity module is installed.

To discover the SMB shares available on your server, use the Get-SmbShare command.
# 2. Discover existing SMB shares on FS1
Get-SmbShare -Name *
You can see the output from this snippet in Figure 6.5.
Figure 6.5: Viewing shares on FS1
Notice that there is only one share on FS1, the IPC$ interprocess communication share. The IPC$ share is built into Windows and is used when performing remote administration of a computer or viewing a computer's shared resources. You can see more information about the IPC$ share at support.microsoft.com/en-us/help/3034016/ipc-share-and-null-session-behavior-in-windows.
To create a new SMB share for the C:\Sales1 folder on FS1, you use the New-SmbShare command.
# 3. Creating a new share Sales1
New-SmbShare -Name Sales1 -Path C:\Sales1
Figure 6.6 shows the output of this command.
Figure 6.6: Creating an SMB share
You could also have used the older Net.exe command, which has been part of Windows since Windows NT first shipped.
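For comparison, creating a similar share with Net.exe might look like the following sketch (the remark text is an example):

```powershell
# Create the same share using the legacy net.exe utility
net share Sales1=C:\Sales1 "/REMARK:Sales share on FS1"
```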
It is useful to add a description to the share, which you can do with the Set-SmbShare command.
# 4. Set the share's Description
$CHT = @{Confirm=$False}
Set-SmbShare -Name Sales1 -Description 'Sales share on FS1' @CHT
The description field can help users find the correct share. You could also have created the description at the same time you created the share.
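Creating the share and setting its description in a single step might look like this sketch (shown for illustration; the chapter creates the share first and sets the description afterward):

```powershell
# Create the share with a description in one step
New-SmbShare -Name Sales1 -Path C:\Sales1 -Description 'Sales share on FS1'
```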
You can set the folder enumeration mode for a share to AccessBased. This tells Windows to not display any folders within a share that the user does not have access to.
# 5. Setting folder enumeration mode
$CHT = @{Confirm = $false}
Set-SmbShare -Name Sales1 -FolderEnumerationMode AccessBased @CHT
This is a useful approach as it helps to avoid questions from curious users who see folders in a share to which they have no access.
As you saw in "Setting Up and Securing an SMB File Server," you can configure the SMB server to always encrypt data transmitted to and from any share. Alternatively, you can configure Windows to always encrypt the traffic for a specific share with this Set-SmbShare command:
# 6. Require encryption on data transmitted to/from the share
Set-SmbShare -Name Sales1 -EncryptData $true @CHT
This ensures that data transferred to/from this share is encrypted. For more details about SMB encryption, see docs.microsoft.com/en-us/windows-server/storage/file-server/smb-security#smb-encryption.
Requiring encryption increases CPU usage on a file server. As noted on the SMB Security page, “… there is a notable performance operating cost with any end‐to‐end encryption protection when compared to non‐encrypted.”
As always with settings like this, you should be measuring the CPU utilization of your file server and take appropriate actions to minimize the impact of any performance bottlenecks. In Chapter 10, “Reporting,” the section “Collecting Performance Data Using PLA” shows how to collect this information, and “Reporting on PLA Performance Data” shows how you can create a graph of server performance.
By default, when you create a new share, Windows gives the Everyone group read access to the share. To restrict access to the share, you first remove that universal access, using Revoke-SmbShareAccess.
# 7. Removing all access to Sales1 share for the Everyone group
$AHT1 = @{
Name = 'Sales1'
AccountName = 'Everyone'
Confirm = $false
}
Revoke-SmbShareAccess @AHT1 | Out-Null
This has the effect of, initially, denying everyone access to the data within the SMB share. Once you have revoked all access, you can set the specific permissions appropriate to the share to enable the security you need for the share.
To give administrators read access to the share, you can use the Grant-SmbShareAccess command.
# 8. Adding Reskit\Domain Admins to the share
$AHT2 = @{
Name = 'Sales1'
AccessRight = 'Read'
AccountName = 'Reskit\Domain Admins'
Confirm = $false
}
Grant-SmbShareAccess @AHT2 | Out-Null
This snippet gives the domain's Domain Admins group read access to the share. By default, a domain or enterprise administrator can take ownership of a file and then give themselves more permissions should that ever become necessary. Giving domain admins only basic read access, as in this example, may or may not be appropriate in day‐to‐day operations.
To ensure that Windows continues to have access to the folder, you can add another access control entry to the share's ACL.
# 9. Adding system full access
$AHT3 = @{
Name = 'Sales1'
AccessRight = 'Full'
AccountName = 'NT Authority\SYSTEM'
Confirm = $False
}
Grant-SmbShareAccess @AHT3 | Out-Null
You also need to enable the owner or creator of a file to have full access to the files/folders they create.
# 10. Set Creator/Owner to Full Access
$AHT4 = @{
Name = 'Sales1'
AccessRight = 'Full'
AccountName = 'CREATOR OWNER'
Confirm = $False
}
Grant-SmbShareAccess @AHT4 | Out-Null
You can also grant the Sales group change access to the share.
# 11. Granting Sales group change access
$AHT5 = @{
Name = 'Sales1'
AccessRight = 'Change'
AccountName = 'Sales'
Confirm = $false
}
Grant-SmbShareAccess @AHT5 | Out-Null
In this snippet, you give all members of the Sales group change access over data in the share. This is a simple share permission to set, but it does mean that any member can make changes to any file.
You can, of course, limit access to data in the share by changing the ACL on the underlying NTFS files or folders. Although a user might have full control at the share level, you can set more restrictive NTFS permissions where that is appropriate.
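For example, the following sketch uses the NTFSSecurity module's Add-NTFSAccess command to grant different NTFS rights on a subfolder; the Reports subfolder and the SalesLeads group shown here are hypothetical:

```powershell
# Hypothetical: tighter NTFS permissions beneath a more open share.
# SalesLeads members can modify files; other Sales members can only read.
Add-NTFSAccess -Path C:\Sales1\Reports -Account 'Reskit\SalesLeads' -AccessRights Modify
Add-NTFSAccess -Path C:\Sales1\Reports -Account 'Reskit\Sales' -AccessRights ReadAndExecute
```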
With the configuration of this share completed, you can view the new access rights on the share.
# 12. Review Access to Sales1 share
Get-SmbShareAccess -Name Sales1 |
Sort-Object AccessRight
Now that you have configured share access rights, you can view the share's resultant access rights, as shown in Figure 6.7.
Figure 6.7: Viewing share access
In this output, you can see that the share has an ACL consisting of just the explicit ACEs you set in the previous steps.
It is important to note that these steps have reconfigured only the share's ACL. The NTFS filesystem holds a separate set of permissions that you can set independently from the share permissions. When accessing shared data, the user's effective permissions are the lesser of the NTFS and the share permissions. Thus far in this section, the NTFS permissions remain based on default Windows permissions and are probably overly generous. An important step in securing a file server is managing the default ACLs set by Windows.
You can view the initial NTFS permissions on a folder by using the Get-NTFSAccess command from the NTFSSecurity module.
# 13. Review initial NTFS Permissions on the folder
Get-NTFSAccess -Path C:\Sales1
You can see the output in Figure 6.8, showing the current NTFS permissions on the C:\Sales1 folder.
Figure 6.8: Reviewing NTFS permissions
An advantage of using Get-NTFSAccess is that you can also view inherited permissions. As you can see, the ACL for the C:\Sales1 folder is made up entirely of inherited permissions, as is normal for newly created folders. In many, and probably most, cases, this is an overly permissive set of permissions. You can adjust it as needed.

A simple way to ensure that the share and the NTFS permissions are aligned is to use the Set-SmbPathAcl command, like this:
# 14. Setting the NTFS ACL to match share
Set-SmbPathAcl -ShareName 'Sales1'
This command makes the ACL for the C:\Sales1 folder match the share's ACL, copying the explicit permissions on the share to the NTFS permissions on the folder.

To complete securing the share and the underlying data, you can also remove unwanted inherited ACEs by disabling inheritance on the C:\Sales1 folder, like this:
# 15. Removing NTFS Inheritance
Set-NTFSInheritance -Path C:\Sales1 -AccessInheritanceEnabled:$False
Note that, currently, this command generates a spurious error stating "Nullable object must have a value." Despite the error, this snippet does turn off inheritance on the C:\Sales1 folder.
Now that you have configured the NTFS access to match only the share's access, you can view the NTFS access.
# 16. Viewing Folder ACL using Get-NTFSAccess
Get-NTFSAccess -Path C:\Sales1 |
Format-Table -AutoSize
You can see the output in Figure 6.9.
Figure 6.9: Viewing folder ACL
As you can see from the output, the only ACEs remaining in the ACL for the C:\Sales1 folder are those you set explicitly on the share and then copied to the NTFS folder.

In this section, you added and configured a new share on FS1. The share you created is on a single host on a single volume and thus is not highly fault tolerant.
For departmental file sharing, as long as regular backups are performed, this configuration is, in many cases, cost‐effective and generally acceptable, especially given the reliability of modern computer systems.
If you are less risk tolerant and have sufficient budget, you can improve the reliability and fault tolerance by ensuring the data volume is protected with some form of RAID and use failover clustering on your file server. “Setting Up a Clustered Scale‐Out File Server” later in this chapter looks at clustering and creating a highly reliable file server solution.
When you deploy a file server, you have a wide choice of storage technologies you can use to store your data. In "Creating and Securing SMB Shares," you deployed a new share, Sales1, based on a folder held locally on the FS1 host. That share pointed to a local disk, which means there is a potential single point of failure: if the disk fails and you don't have a good backup, you may lose user data.
Many organizations deploy a storage area network (SAN) to hold information. The SAN can provide great protection for your organization's data. One popular method of attaching a host to data held on the SAN is to use iSCSI.
By way of background, the Small Computer Systems Interface (SCSI) is a storage technology used to connect disk drives to host computers. SCSI provides faster bus speeds and supports larger numbers of disks than IDE/ATA. In larger enterprise servers, you typically use SCSI or serial attached SCSI (SAS) disks. These can include both spinning and solid-state drives.
iSCSI is a TCP/IP‐based protocol that enables you to access what appear to be SCSI disks across TCP/IP networks. iSCSI is a client‐server protocol. An iSCSI server effectively allows access to a disk (defined as a logical unit number, or LUN) on the server from the client. The virtual disk is known as an iSCSI target.
The iSCSI client, known as the iSCSI initiator, connects to the iSCSI target to use the data on the remote disk. The iSCSI initiator enables the client system (or systems) to access the iSCSI virtual disk as if it were local. After connecting to the iSCSI target, you could use the Disk Management application (diskmgmt.msc) to view the iSCSI disk as if it were a local disk.
Once you have connected to the iSCSI target, you can use the same commands you used in Chapter 5 to create a volume and manage the data on disk.
For a bit more background in iSCSI terminology, visit lazywinadmin.com/2013/07/create-iscsi-target-using-powershell-on.html.
To deploy an iSCSI target in Windows, you begin by adding some physical storage to your host and creating a local volume. Ideally, you should use hardware RAID to create a fault‐tolerant local volume for your storage server. Within this local volume, you create a virtual iSCSI disk. Then, you expose this disk as an iSCSI target.
If you deploy a physical host, you can implement hardware RAID, create a local volume (using, for example, RAID 5 or RAID 10), and then create the virtual iSCSI disk in that volume.
If you deploy your iSCSI target in a VM, you store the iSCSI virtual disk inside a volume that is held within a VHDX in the Hyper‐V host. This volume holding the VHDX file should also be protected using hardware RAID deployed on the VM host.
The iSCSI target in Windows Server has not been the subject of much development in recent times. It works and is a great solution for test labs or proof‐of‐concept deployments. In production, you may want to use other iSCSI vendors with more up‐to‐date and better‐performing products. With third‐party iSCSI products delivering your iSCSI targets, you should be able to use the iSCSI initiator in Windows to connect to any iSCSI target.
If you are to make heavy use of iSCSI, you might consider using TCP and/or iSCSI offload, moving some of the processing into hardware in your NICs. To check whether offloading is in operation on a host, you can use the netstat -t command to see which connections to your host are making use of any offload. TCP offloading has caused issues in some cases, so you need to check carefully that enabling a hardware-based offload solution works in your network. For more on performance tuning your network adapters, take a look at docs.microsoft.com/en-us/windows-server/networking/technologies/network-subsystem/net-sub-performance-tuning-nics.
In this section, you create an iSCSI target on SRV2 and then use it via the iSCSI initiator on FS1.
This demonstration makes use of two servers: FS1.Reskit.Org and SRV2.Reskit.Org. You create an iSCSI virtual hard disk on SRV2 and set up an iSCSI target for this virtual hard disk. You then use the iSCSI initiator on FS1 to connect to the iSCSI target. You also want DC1.Reskit.Org online to enable DNS name resolution.

This section also makes use of a new physical disk within the SRV2 host. Assuming you are using Hyper-V to host SRV2, adding a new disk is easy, as you saw in Chapter 5's "Managing Disks and Volumes" section. If you are using Hyper-V, you can run the following code on your Hyper-V host:
# 0. Add additional disk to hold iSCSI VHD to SRV2 VM
# Run this on the Hyper-V VM Host in an elevated console
# Stop the VM
Stop-VM -VMName SRV2 -Force
# Get File location for the disk in this VM
$VM = Get-VM -VMName SRV2
$Par = Split-Path -Path $VM.HardDrives[0].Path
# Create a new VHD for S drive
$NewPath3 = Join-Path -Path $Par -ChildPath SDrive.VHDX
$D4 = New-VHD -Path $NewPath3 -SizeBytes 128GB -Dynamic
# Work out next free slot on Controller 0
$Free = (Get-VMScsiController -VMName SRV2 |
Select-Object -First 1 |
Select-Object -ExpandProperty Drives).count
# Add new disk to VM
$HDHT = @{
Path = $NewPath3
VMName = 'SRV2'
ControllerType = 'SCSI'
ControllerNumber = 0
ControllerLocation = $Free
}
Add-VMHardDiskDrive @HDHT
# Start the VM
Start-VM -VMName SRV2
If you created your VM as a Generation 2 Hyper-V VM, there is no need to stop (or restart) it. If you created your VMs using the Reskit build scripts noted in Chapter 1, the VM is Generation 1 and does need to be turned off before you can add more disks.
Once you have added the virtual hard disk to the VM and restarted the VM, you need to log on to SRV2 and create a new volume on the new disk, as follows:
# Run on SRV2 once disk added
# Find the new disk
$NewDisk = Get-Disk |
Where-Object PartitionStyle -eq Raw
$NewDisk |
Initialize-Disk -PartitionStyle GPT
# Create a S: volume in newly added disk
$NVHT1 = @{
DiskNumber = $NewDisk.Number
FriendlyName = 'iSCSI'
FileSystem = 'NTFS'
DriveLetter = 'S'
}
New-Volume @NVHT1
With these steps, you have added a new (virtual) hard disk to the SRV2 VM. You are now ready to create an iSCSI target on SRV2.

Because you are setting up SRV2 to expose an iSCSI target, you need to install the FS-iSCSITarget-Server feature on SRV2, using the Install-WindowsFeature command.
# 1. Installing the iSCSI target feature on SRV2
Import-Module -Name ServerManager -WarningAction SilentlyContinue
$WFHT = @{
Name = 'FS-iSCSITarget-Server'
IncludeManagementTools = $true
}
Install-WindowsFeature @WFHT
You can see the output of this snippet in Figure 6.10.
Figure 6.10: Installing the iSCSI target feature
With the iSCSI target feature installed, you can view the iSCSI target server settings on SRV2 by using Get-IscsiTargetServerSetting.
# 2. Exploring default iSCSI target server settings
Import-Module -Name IscsiTarget
Get-IscsiTargetServerSetting
Figure 6.11 shows the output from these commands.
Figure 6.11: Viewing iSCSI target server settings
As you can see, the target is currently not clustered and can be reached via both an IPv4 address and an IPv6 address. With more virtual NICs in your VM and with IPv6 enabled, you may see more portal addresses.
You next need to create a folder on SRV2 to hold the iSCSI virtual disk, using the New-Item command.
# 3. Creating a folder on SRV2 to hold the iSCSI virtual disk
$NIHT = @{
Path = 'S:\iSCSI'
ItemType = 'Directory'
ErrorAction = 'SilentlyContinue'
}
New-Item @NIHT | Out-Null
To create the iSCSI virtual hard disk, you use the New-IscsiVirtualDisk command as follows:
# 4. Creating an iSCSI Virtual Disk
Import-Module -Name IscsiTarget
$LP = 'S:\iSCSI\SalesData.Vhdx'
$LN = 'SalesTarget'
$VDHT = @{
Path = $LP
Description = 'LUN For Sales'
SizeBytes = 500MB
}
New-IscsiVirtualDisk @VDHT
You can view the output from this snippet in Figure 6.12.
Figure 6.12: Creating an iSCSI virtual disk
With this snippet, you create a new iSCSI virtual disk of 500MB. In production, you would probably create much larger volumes and, in most cases, configure the virtual disk to use all the space on the physical drive. You would also probably implement hardware Redundant Array of Independent Disks (RAID) on the storage server to make the iSCSI virtual disk fault tolerant.
With the iSCSI virtual disk created, you create an iSCSI target on SRV2 by using New-IscsiServerTarget.
# 5. Creating the iSCSI target on SRV2
$THT = @{
TargetName = $LN
InitiatorIds = 'IQN:*'
}
New-IscsiServerTarget @THT
You can see the output of this command in Figure 6.13.
Figure 6.13: Creating an iSCSI target
This command creates an iSCSI target, which points to the iSCSI virtual disk. In creating the target, you specify a wildcard initiator ID for initiators allowed to connect to this target. This allows any initiator to connect to the disk, which simplifies deployment. You can specify DNS host names or IP addresses allowed to connect.
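If you later want to tighten that wildcard, you might restrict the target to named initiators with Set-IscsiServerTarget, as in this sketch (the DNS name and IP address shown are examples):

```powershell
# Limit which initiators may connect to the SalesTarget
$THT2 = @{
    TargetName   = 'SalesTarget'
    InitiatorIds = 'DNSName:FS1.Reskit.Org', 'IPAddress:10.10.10.50'
}
Set-IscsiServerTarget @THT2
```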
The final step in deploying the target is creating a mapping from the iSCSI target to the virtual iSCSI hard disk, using the Add-IscsiVirtualDiskTargetMapping command.
# 6. Creating iSCSI disk target mapping on SRV2
Add-IscsiVirtualDiskTargetMapping -TargetName $LN -Path $LP
With the mapping created, SRV2 is now configured as an iSCSI target server. It can allow any iSCSI initiator to connect to this new LUN. To demonstrate using this iSCSI target, you can use any iSCSI initiator.

To see the Windows Server iSCSI initiator in action, complete the remaining steps in this section on FS1.

With an iSCSI target created on SRV2, you can now access it using the built-in Windows iSCSI initiator on FS1. The iSCSI initiator service is installed in Windows Server 2019 by default, although the service is configured not to start. Enter the following commands to start the service and to set the service to start automatically after restarting the host:
# 7. Configuring the iSCSI service to auto start, then start the service
# Run on FS1
Set-Service -Name MSiSCSI -StartupType 'Automatic'
Start-Service -Name MSiSCSI
If your iSCSI initiator (client) is Windows 10, then feature updates can, and do, reset the MSiSCSI service's startup type to the default (not started). The startup type for servers does not change. Depending on your host, you may see the occasional warning message “Waiting for the Service ‘Microsoft iSCSI Initiator Service (MSiSCSI)' to start….”
To use the iSCSI target on FS1, you need to set up the iSCSI portal, which is the mechanism iSCSI uses to find iSCSI targets. To do this, use the New-IscsiTargetPortal command.
# 8. Setup portal to SRV2
Import-Module -Name Iscsi -WarningAction SilentlyContinue
$PHT = @{
TargetPortalAddress = 'SRV2.Reskit.Org'
TargetPortalPortNumber = 3260
}
New-IscsiTargetPortal @PHT
Creating the iSCSI target portal produces the output you can see in Figure 6.14.
An iSCSI initiator uses the portal to discover the iSCSI targets on the remote machine.
Figure 6.14: Creating the iSCSI target portal
Now that you have the iSCSI initiator set up, you can view the iSCSI target on SRV2.
# 9. Find and view the SalesTarget on portal
$Target = Get-IscsiTarget
$Target
The output from this snippet, in Figure 6.15, shows the SalesTarget LUN that is now available via the iSCSI portal.
Figure 6.15: Viewing the SalesTarget
Connecting to the target enables FS1 to have access to the iSCSI disk on SRV2.
# 10. Connecting to the target on SRV2
$CHT = @{
TargetPortalAddress = 'SRV2.Reskit.Org'
NodeAddress = $Target.NodeAddress
}
Connect-IscsiTarget @CHT
This snippet connects FS1 to the iSCSI target held on SRV2. The output, in Figure 6.16, shows details of the iSCSI connection.
Figure 6.16: Connecting to the SalesTarget
You can see the iSCSI initiator and target address in the figure. Before proceeding, it is useful to check to ensure you have set up the target (and initiator) correctly.
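One way to check is to view the session from the initiator side, as in this sketch:

```powershell
# Confirm the session to the target is connected (run on FS1)
Get-IscsiSession |
    Select-Object -Property TargetNodeAddress, IsConnected
```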
Although it's not shown in these snippets, another feature you could add to the solution is Multipath IO (MPIO), which enables you to create multiple paths between your file server and the underlying iSCSI file server. For more details on MPIO, see whatis.techtarget.com/definition/Multipath-I-O-MPIO. And for more detail on using MPIO with the Windows iSCSI initiator, see petri.com/using-mpio-windows-server-iscsi-initiator.
Now that you have connected to the iSCSI target on SRV2, you can use Get-Disk to view the iSCSI virtual disk.
# 11. Viewing iSCSI disk from FS1 on SRV2
$ISD = Get-Disk |
Where-Object BusType -eq 'iscsi'
$ISD |
Format-Table -AutoSize
Figure 6.17 shows the output of this snippet.
Figure 6.17: Viewing the disk
As you can see in the figure, this disk is raw (no partitions have yet been created on it). Additionally, the disk does not have a filesystem and is not online. This example demonstrates that the iSCSI virtual disk, exposed as an iSCSI target on SRV2, is seen on FS1 as just another disk.
You use the
Set‐Disk
command to ensure both that the disk is online and that it is read/write.
# 12. Turning disk online and make R/W
$ISD |
Set-Disk -IsOffline $False
$ISD |
Set-Disk -IsReadOnly $False
This snippet sets the disk to be online and ensures that it is read/write. To verify this, you could repeat the previous step to view the properties of this disk.
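For example, a quick re-check of the disk's status could look like this sketch; the properties shown are those returned by Get-Disk:

```powershell
# Re-checking the iSCSI disk status (optional verification)
Get-Disk |
  Where-Object BusType -eq 'iscsi' |
  Format-Table -Property Number, OperationalStatus, IsOffline, IsReadOnly
```

After step 12, both IsOffline and IsReadOnly should show as False.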
The disk, when viewed from FS1, is now online and writable but as yet has no partitions or volumes. Creating a volume is simple, as shown here:
# 13. Formatting the iSCSI volume on FS1
$NVHT = @{
FriendlyName = 'SalesData'
FileSystem = 'NTFS'
DriveLetter = 'S'
}
$ISD |
New-Volume @NVHT
You can see the output from this snippet in Figure 6.18.
Figure 6.18: Creating an
S:
drive
With this step completed, the iSCSI disk on
SRV2
is now formatted and available within
FS1
. From
FS1
, the iSCSI disk appears to be another disk on which you can format and create volumes. This disk is small: 500GB with a usable capacity of 467.78GB. In production, you would probably create much larger volumes.
With the steps so far, you have set up an
S:
drive on
FS1
, which is the iSCSI disk that you previously set up on
SRV2
. You can use this as if it were a locally attached disk on
FS1
.
# 14. Using the iSCSI drive on FS1
New-Item -Path S:\ -Name SalesData -ItemType Directory |
Out-Null
'Testing iSCSI 1-2-3' | Out-File -FilePath S:\SalesData\Test.Txt
Get-ChildItem -Path S:\SalesData
You can see in Figure 6.19 that the
S:
volume is available, and you can use it just as if it were locally attached.
In the snippet, you created a file on the
S:
drive and then used
Get‐ChildItem
to verify that the file now exists on the
S:
volume.
This completes the task of creating an iSCSI disk on
SRV2
and using it from
FS1
. This section created a single iSCSI client,
FS1
, for the iSCSI target held on
SRV2
.
Figure 6.19: Using the iSCSI
S:
drive
In this section, you leverage the iSCSI disk you created in “Creating and Using an iSCSI Target” and create a clustered scale‐out file server (SOFS) using both
FS1
and
FS2
. You also create a continuously available SMB3 file share on the SOFS cluster.
Once you have both
FS1
and
FS2
set up and are able to view the iSCSI target on SRV2, you can cluster the two hosts and create the SOFS based on the cluster.
The Scale‐Out File Server is based on Microsoft's Failover Clustering technology. Failover Clustering was first introduced with Windows NT4, where there was a very restricted Hardware Compatibility List (HCL). In later versions, Microsoft created a cluster validation wizard to check the servers. As long as the cluster validation test is successful, the cluster is eligible for Microsoft support; for large organizations, clustering and vendor support are both important.
For some background on the SOFS feature, see docs.microsoft.com/en-us/windows-server/failover-clustering/sofs-overview.
Once you create a failover cluster, you can build an SOFS on top. The SOFS relies on the clustering technology to deliver highly available and high‐performance storage across your network.
This example uses three systems: FS1 and FS2 (the cluster nodes) and SRV2 (which hosts the iSCSI target). You run the commands logged on as Reskit\Administrator, and you also need the domain controller, DC1, online.
To set up the failover cluster, you also need to create the iSCSI environment on
FS2
. The setup for
FS2
is similar to the setup of
FS1
carried out in “Creating and Using an iSCSI Target.”
To set up the cluster, you need to configure
FS2
to have access to the iSCSI shared disk. You set up
FS2
as follows:
# 1. Setup FS2 to support ISCSI
# Adjust the iSCSI service to auto start, then start the service.
Set-Service MSiSCSI -StartupType 'Automatic'
Start-Service MSiSCSI
This snippet ensures that the iSCSI service on
FS2
is started and configured to restart this service automatically whenever you restart the host.
With the iSCSI service started, configure the service on
FS2
, as follows:
# 2. Setup iSCSI portal to SRV2
$PHT = @{
TargetPortalAddress = 'SRV2.Reskit.Org'
TargetPortalPortNumber = 3260
}
New-IscsiTargetPortal @PHT
# Get the SalesTarget on portal
$Target = Get-IscsiTarget
# Connect to the target on SRV2
$CHT = @{
TargetPortalAddress = 'SRV2.Reskit.Org'
NodeAddress = $Target.NodeAddress
}
Connect-IscsiTarget @CHT
$ISD = Get-Disk |
Where-Object BusType -eq 'iscsi'
$ISD |
Set-Disk -IsOffline $False
$ISD |
Set-Disk -IsReadOnly $False
You can see the output of these commands in Figure 6.20. You may see different values for
TargetSideIdentifier
, but that is not significant.
Figure 6.20: Setting up the iSCSI portal
These commands establish the iSCSI portal for
FS2
and connect to the iSCSI disk, similarly to how you configured
FS1
in the “Creating and Using an iSCSI Target” section earlier in this chapter.
Note that you had to use
Set‐Disk
twice to ensure that the disk is both read/write and online. The
Set‐Disk
command does not allow you to specify both parameters at the same time.
In “Adding File Server Features to FS1,” you added key file server‐related Windows features to
FS1
. Because you are creating a clustered file server, you add those same features to
FS2
, like this:
# 3. Add File Server features to FS2
Import-Module -Name ServerManager -WarningAction SilentlyContinue
$Features = 'FileAndStorage-Services',
'File-Services',
'FS-FileServer'
Install-WindowsFeature -Name $Features -IncludeManagementTools |
Out-Null
When installing Windows Server, the setup process by default does not install the Failover Clustering feature. You use the Server Manager module's
Install‐WindowsFeature
command to install the feature on both
FS1
and
FS2
. Run this on
FS2
.
# 4. Adding clustering features to FS1/FS2
Import-Module -Name ServerManager -WarningAction SilentlyContinue
$IHT = @{
Name = 'Failover-Clustering'
IncludeManagementTools = $true
}
Install-WindowsFeature -ComputerName FS2 @IHT
Install-WindowsFeature -ComputerName FS1 @IHT
Figure 6.21 shows the output from this snippet.
Figure 6.21: Installing clustering on
FS1
and
FS2
As you can see from the output, adding the Failover Clustering feature to both hosts requires a reboot to complete the installation.
# 5. Restarting both FS1, FS2
Restart-Computer -ComputerName FS1 -Force
Restart-Computer -ComputerName FS2 -Force
Before you create a failover cluster (using
FS1
and
FS2
), you test the cluster members to determine whether the systems can be clustered in a supported fashion, using the
Test‐Cluster
command.
# 6. Testing the Cluster Nodes
Import-Module -Name FailoverClusters -WarningAction SilentlyContinue
$CheckOutput = 'C:\Foo\Clustercheck'
Test-Cluster -Node FS1, FS2 -ReportName $CheckOutput | Out-Null
This snippet, which produces no console output, tests the cluster. When setting up a cluster, you should ensure that the test is successful. If it is not, then you need to do some additional work to overcome any deficiencies that the
Test‐Cluster
output shows.
With this snippet, you import the
FailoverClusters
module manually. This module is not supported natively in PowerShell 7, but its commands work well using the Windows PowerShell compatibility feature described in Chapter 2.
Once the tests are complete, you can view the results generated by the
Test‐Cluster
command.
# 7. View the cluster validation test results
$COFILE = "$CheckOutput.htm"
Invoke-Item -Path $COFILE
You can view some of the output from this command in Figure 6.22.
Figure 6.22: Viewing cluster test results
The cluster validation report is long—there are a large number of tests that Microsoft has specified are necessary in order to support the cluster. So long as the tests are all successful, you can proceed to create the cluster.
This snippet also omits any explicit storage testing. Depending on the disks you plan to add to your cluster, storage testing can mean some downtime, particularly if you are updating an existing cluster. To create the scale‐out file server, you do not need explicit storage testing, so you can safely skip it here.
For more information about cluster validation, see docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/jj134244(v=ws.11).
As you can see, in this case, there were three tests carried out by the command, and all tests were successful. This means you are ready to create the cluster.
One issue that arises frequently when creating clusters is that the two nodes may not have the same set of updates installed. The Test‐Cluster command checks that the hosts are up‐to‐date. The PSWindowsUpdate module (available from the PowerShell Gallery; see www.powershellgallery.com/packages?q=pswindowsupdate) can help you ensure that both FS1 and FS2 are up‐to‐date before proceeding to create a cluster.
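A sketch of using that module might look like this, run on each node (assuming the node can reach the PowerShell Gallery):

```powershell
# A sketch of checking update status with PSWindowsUpdate (run on each node)
Install-Module -Name PSWindowsUpdate -Force
Get-WindowsUpdate   # lists any updates not yet installed on this host
```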
A tip to simplify successful testing is to disable the Windows Defender service temporarily. Doing so minimizes the chance of incompatible update issues occurring while you are creating the cluster. If you do disable this service while you are creating the SOFS, be sure to re‐enable it after you finish installing the cluster. As an alternative, you could use the ‐Ignore parameter to skip the Validate Software Update Levels test, but that might miss other important issues.
Once the cluster check is completed successfully, you can create the actual cluster, using the
New‐Cluster
command.
# 8. Creating the Cluster
$NCHT = @{
Name = 'FS'
Node = 'FS1.Reskit.Org', 'FS2.Reskit.Org'
StaticAddress = '10.10.10.100'
}
New-Cluster @NCHT | Out-Null
This snippet creates a cluster with a cluster name of
FS
and a cluster address of 10.10.10.100.
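Although not shown in the chapter's numbered snippets, you could verify the new cluster and its nodes using the FailoverClusters cmdlets, as in this sketch:

```powershell
# Checking the new cluster and its nodes (optional verification)
Get-Cluster -Name FS        | Format-Table -Property Name, Domain
Get-ClusterNode -Cluster FS | Format-Table -Property Name, State
```

Both FS1 and FS2 should show a State of Up.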
Windows Failover Clustering provides high availability for your workloads. Cluster resources remain available as long as at least one cluster node is running.
You can configure a quorum witness to avoid issues that can arise with multiple nodes. To understand more about failover clustering and quorums, see docs.microsoft.com
/windows‐server/storage/storage‐spaces/understand‐quorum
.
One approach is to configure your cluster with a file share witness. To do that, you must first create a new file share on
DC1
, as follows:
# 9. Configure a share on DC1 to act as quorum
$SBDC1 = {
New-Item -Path C:\Quorum -ItemType Directory
New-SmbShare -Name Quorum -Path C:\Quorum -FullAccess Everyone
}
Invoke-Command -ComputerName DC1 -ScriptBlock $SBDC1 | Out-Null
With the share created on DC1, you can configure the cluster to use the file share as the quorum witness, as follows:
# 10. Set the cluster Witness
Set-ClusterQuorum -NodeAndFileShareMajority \\DC1\quorum | Out-Null
After you reboot both servers, it is useful to ensure that the iSCSI disks are reconnected on both cluster nodes, like this:
# 11. Ensuring iSCSI disks are connected
$SB = {
Get-ISCSITarget |
Connect-IscsiTarget -ErrorAction SilentlyContinue
}
Invoke-Command -ComputerName FS1 -ScriptBlock $SB
Invoke-Command -ComputerName FS2 -ScriptBlock $SB
Note that this snippet uses the
Invoke‐Command
cmdlet. By default, the script block is executed using a Windows PowerShell 5.1 remoting endpoint. This works well for commands in the iSCSI module, since this module is not supported natively by PowerShell 7.
Now you can add the iSCSI disk to the failover cluster.
# 12. Adding the iSCSI disk to the cluster
Get-Disk |
Where-Object BusType -eq 'iSCSI' |
Add-ClusterDisk
You can see the results of adding this iSCSI disk to your cluster in Figure 6.23.
Figure 6.23: Adding an iSCSI disk to the cluster
For both nodes in the cluster to share data in the iSCSI disk, you must move the disk into the cluster shared volume (CSV), using the
Add‐ClusterSharedVolume
command.
# 13. Move disk into CSV
Add-ClusterSharedVolume -Name 'Cluster Disk 1'
You can see the result of this snippet in Figure 6.24.
Figure 6.24: Adding the new disk to the CSV
Once you add the disk to the CSV, the iSCSI volume (which is stored physically on
SRV2
) is available to the cluster, and you can use it from both nodes. The CSV is, in effect, a filesystem that coordinates I/O from any cluster member.
To create a clustered scale‐out file server, you add the Scale‐Out File Server role to the cluster, running the commands on FS2.
# 14. Add SOFS role to Cluster
Import-Module -Name ServerManager -WarningAction SilentlyContinue
Add-WindowsFeature File-Services -IncludeManagementTools | Out-Null
Add-ClusterScaleOutFileServerRole -Cluster FS
This snippet ensures that the File‐Services feature is installed and then adds the SOFS role to the FS cluster.
With the cluster set up and the iSCSI volume mounted in both nodes, you can use the storage as if it were local.
# 15. Create a folder and give Sales Access to the folder
Install-Module -Name NTFSSecurity -Force | Out-Null
$HvFolder = 'C:\ClusterStorage\Volume1\HVData'
New-Item -Path $HvFolder -ItemType Directory |
Out-Null
$ACCHT = @{
Path = $HvFolder
Account = 'Reskit\Sales'
AccessRights = 'FullControl'
}
Add-NTFSAccess @ACCHT
Note that you created the Sales domain security group in “Managing NTFS Permissions” in Chapter 5.
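To confirm the access you just granted, you could use the NTFSSecurity module's Get-NTFSAccess cmdlet, as in this sketch:

```powershell
# Checking the ACL on the HVData folder (optional verification)
Get-NTFSAccess -Path C:\ClusterStorage\Volume1\HVData |
  Format-Table -Property Account, AccessRights, IsInherited
```

You should see Reskit\Sales with FullControl in the output.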
With the SOFS set up, you can add a continuously available share.
# 16. Adding a Continuously Available share to the entire cluster
$SMBSHT2 = @{
Name = 'SalesHV'
Path = $HvFolder
Description = 'Sales HV (CA)'
FullAccess = 'Reskit\Sales'
ContinuouslyAvailable = $true
}
New-SmbShare @SMBSHT2
Figure 6.25 shows the output of this snippet.
Figure 6.25: Adding a continuously available share
With your SOFS set up and sharing a folder (held on the iSCSI target on SRV2), you can view the shares available on FS1 and FS2.
# 17. View Shares on FS1 and FS2
Get-SmbShare # FOR FS1
Invoke-Command -ComputerName FS2 -ScriptBlock {Get-SmbShare}
You can see the output of this command in Figure 6.26.
Figure 6.26: Viewing shares
As you can see in the output, the
SalesHV
share is set up on the cluster. You could view the shares using
Net View \\FSSOFS
, which would return the
SalesHV
share.
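As a sketch of verifying access from a client, you could test the continuously available share via the SOFS name (FSSOFS, as used above):

```powershell
# A sketch of testing client access to the CA share
Test-Path -Path \\FSSOFS\SalesHV
Get-ChildItem -Path \\FSSOFS\SalesHV
```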
Now that you have the SOFS set up, you can use the Failover Cluster Manager MMC console to pause the active node, and the file server continues to work. But note that the iSCSI target you created on
SRV2
is a single point of failure (SPOF). To avoid SPOF issues, you could ensure that the drive created on
SRV2
is based on hardware (or software) RAID.
In general, the iSCSI target is not a widely used feature of Windows Server 2019. However, many smaller organizations deploy low‐cost third‐party SANs that provide both fault tolerance in the box and an iSCSI interface.
Whether or not you are using a Windows Server iSCSI target, you may use the Windows iSCSI initiator to access an organization's SAN if it offers an iSCSI interface.
An example of such a SAN is from Synology; see www.synology.com/en-global/knowledgebase/DSM/tutorial/Virtualization/How_to_use_iSCSI_Targets_on_a_Windows_Server for details on how you set up iSCSI on this device. There are many other vendors that can offer lower‐cost networked storage based on iSCSI.
In this chapter, you have examined setting up and configuring an SMB file server and how you can deploy an SOFS. The SOFS made use of an iSCSI target, which you also set up. Although in this case the actual target that you built on
SRV2
was not fault tolerant, you could add a degree of fault tolerance (for example, by using RAID 5 on the underlying iSCSI partition in
SRV2
).
The use of the Windows‐based iSCSI target in this chapter shows how easy it is to share data using SMB‐based file services within Windows, controlled by PowerShell 7.