Chapter 10
Reporting

Reporting is the process of obtaining information and presenting it to an intended audience. Since audiences vary, the content and layout of a report vary too. Senior management, for example, might like a dashboard‐style report showing just key status items. The IT group could benefit from performance graphs or system diagnostics reports that help them spot issues as early as possible. And the team supporting a new virtualization project might want high‐level views of both stability and resource utilization.

PowerShell, along with Windows applications, provides IT pros with a variety of reporting options. You can use many PowerShell commands to retrieve information and report on it. You can use the AD cmdlets, for example, to retrieve the membership of a high‐security group and ensure that membership is appropriate. The Windows Performance Logs and Alerts (PLA) feature logs various performance counters, enabling you to review the performance of the system or of some application.

Some applications or Windows features, such as File System Resource Manager (FSRM), provide reports you can request using PowerShell.

This chapter shows you several approaches you can take when creating reports. All of the reports shown demonstrate ways you can use PowerShell to create rich and useful reports.

Systems Used in This Chapter

In this chapter, you use PowerShell 7 to report on the activities of many servers. The scripts in this chapter make use of the following servers:

  • DC1.Reskit.Org: This is a DC in the Reskit.Org domain. You have used this DC throughout this book.
  • SRV1.Reskit.Org: You implemented FSRM on this host in “Managing Filestore Quotas” in Chapter 5.
  • PSRV.Reskit.Org: You deployed and managed a print server on this system in Chapter 7.
  • HV1.Reskit.Org: This is one of two Hyper‐V hosts you used in Chapter 8, “Managing Hyper‐V.”

You can see a diagram of these hosts in Figure 10.1.

image

Figure 10.1: Systems used in this chapter

Reporting on AD Users and Computers

All organizations need to secure user and computer accounts. If a user account is compromised, an attacker could use its credentials to enter and damage the organization. Likewise, AD computer accounts need to be secured. By default, a domain‐joined computer changes its machine account password every 30 days; a computer that has been off the network for a long time can end up with a stale machine account password, leaving the computer (and any user attempting to use it) unable to log on to the domain.

In this section's example, you create a report of potential security issues related to user accounts. Specifically, you report the following:

  • Each user's basic information, including a count of bad password attempts
  • Members of key high‐security AD groups (Enterprise Admins, for example)
  • Computers that have not been used in a long time and may be lost or stolen

To create the AD user and computer report, you use PowerShell's powerful string‐handling capability. You begin by defining an empty report body. Then you build the report, section by section, and add the output to the report body. Finally, you output the report to the console, save it to a file, or send the report in email.

Before You Start

You run the code in this section on DC1.Reskit.Org, the main domain controller in the Reskit.Org forest. You have used this host throughout this book, and it should have a number of user and computer accounts created. However, in the various code snippets you have run, many of the AD objects were created but not used; as a result, values such as bad password count or last logon date might not be as well‐populated as in a real‐life AD domain. You may want to generate some activity on some of the user accounts. For example, log in to one of the hosts in the Reskit.Org VM farm using one of the user accounts.

Defining a Function to Retrieve User Accounts

As a first step to creating an AD user and computer report, you define a function, Get‐ReskitUser, to retrieve all user accounts in the AD. To simplify the report, this function creates custom PowerShell objects for each AD user account. Each object contains only the properties needed for the report.

# 1. Define a function Get-ReskitUser
#    The function returns objects related to users in reskit.org
Function Get-ReskitUser {
    # Get PDC Emulator DC
    $PrimaryDC = Get-ADDomainController -Discover -Service PrimaryDC
    # Get Users
    $P       = 'DisplayName', 'Office', 'LastLogonDate', 'BadPwdCount'
    $ADUsers = Get-ADUser -Filter * -Properties $P -Server $PrimaryDC
    # Iterate through them and create a $UserInfo hash table:
    Foreach ($ADUser in $ADUsers) {
        # Create a userinfo HT
        $UserInfo = [Ordered] @{}
        $UserInfo.SamAccountName = $ADUser.SamAccountName
        $UserInfo.DisplayName    = $ADUser.DisplayName
        $UserInfo.Office         = $ADUser.Office
        $UserInfo.Enabled        = $ADUser.Enabled
        $UserInfo.LastLogonDate  = $ADUser.LastLogonDate
        $UserInfo.BadPWDCount    = $ADUser.BadPwdCount
        New-Object -TypeName PSObject -Property $UserInfo
    }
} # end of function

This function is an example of one you might define for your environment. You could store this function in a custom PowerShell module for use in reporting. You can extend this simple function to obtain other information that might be relevant when you generate reports. You might be able to get information from other systems and applications that you can add to the output of this function.

Getting Reskit Users

Having defined the Get‐ReskitUser function, you use it to retrieve summary information about AD users.

# 2. Get the users
$RKUsers = Get-ReskitUser

Depending on the size of your AD, you might want to create reports grouped by organizational unit (OU). For example, you might create an IT Team Users report that just reported on the users in the IT Team OU. In that case, you could extend the function to take an OU name as a parameter and have the function return users in that OU only.
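One way to add such a parameter is sketched below. This is an illustrative variant, not the book's function; the function name and the example OU distinguished name are assumptions.

```powershell
# Sketch: an OU-aware variant of Get-ReskitUser (function name and OU are illustrative)
Function Get-ReskitUserInOU {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory)]
        [string] $SearchBase   # e.g. 'OU=IT,DC=Reskit,DC=Org' (hypothetical OU)
    )
    $P = 'DisplayName', 'Office', 'LastLogonDate', 'BadPwdCount'
    # -SearchBase limits the query to the specified OU (and, by default, its children)
    Get-ADUser -Filter * -SearchBase $SearchBase -Properties $P |
      Select-Object -Property SamAccountName, DisplayName, Office,
                    Enabled, LastLogonDate, BadPwdCount
}
# Usage (example OU): Get-ReskitUserInOU -SearchBase 'OU=IT,DC=Reskit,DC=Org'
```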

Building the Report Header

You begin to build the report by creating the report header.

# 3. Build the report header
$RKReport = '' # Define initial report variable
$RKReport += "*** Reskit.Org AD Report`n"
$RKReport += "*** Generated [$(Get-Date)]`n"
$RKReport += "*******************************`n`n"

These commands create a variable named $RKReport that you use to hold the report. You add to that variable as you build the report.

Reporting on Disabled Users

An AD account that is disabled is one a user cannot use to log in or to access resources. There are a variety of reasons you might disable an AD user account: a user may have left the company or be on long‐term leave, or an account could have been disabled accidentally. You can filter out the disabled accounts with Where‐Object and add that information to the report.

# 4. Report on Disabled users
$RKReport += "*** Disabled Users`n"
$RKReport += $RKUsers |
    Where-Object {$_.Enabled -ne $true} |
        Format-Table -Property SamAccountName, DisplayName |
            Out-String

Reporting on Unused Accounts

In most cases, a disabled account is probably not much of a security risk, although it does consume some resources. User accounts that have not been used recently represent more of a risk. You can use the LastLogonDate property of the AD user object to determine each user's last logon date.

# 5. Report users who have not recently logged on
$OneWeekAgo = (Get-Date).AddDays(-7)
$RKReport += "`n*** Users Not logged in since $OneWeekAgo`n"
$RKReport += $RKUsers |
    Where-Object {$_.Enabled -and $_.LastLogonDate -le $OneWeekAgo} |
        Sort-Object -Property LastLogonDate |
            Format-Table -Property SamAccountName,LastLogonDate |
                Out-String

One issue with this section of code is that Active Directory does not replicate the lastLogon attribute between domain controllers (and the replicated lastLogonTimestamp attribute, from which LastLogonDate is derived, is updated only periodically). Thus, the last logon dates reported by DC1 may not be accurate for users authenticated by other DCs. If you have a large number of DCs, this part of your report would be improved by calculating the last logon date across all of them. You could extend the Get‐ReskitUser function to contact each DC in the domain to retrieve an accurate last logon date.
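A hedged sketch of that approach: query every DC for the non‐replicated lastLogon attribute and keep the most recent value seen for each user. The variable names here are illustrative.

```powershell
# Sketch: compute each user's most recent logon across all DCs
$DCs = (Get-ADDomainController -Filter *).HostName
$LastLogon = @{}
Foreach ($DC in $DCs) {
    Get-ADUser -Filter * -Properties lastLogon -Server $DC |
      ForEach-Object {
        # lastLogon is a file-time Int64; keep the highest (most recent) value seen
        if ($_.lastLogon -gt $LastLogon[$_.SamAccountName]) {
            $LastLogon[$_.SamAccountName] = $_.lastLogon
        }
    }
}
# Convert a stored file-time value to a date, for example:
# [DateTime]::FromFileTime($LastLogon['BillyBob'])
```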

Reporting on Invalid Password Attempts

When you log on to Windows, you need to provide credentials; if those fail, Windows does not log you on. An AD user account carries a count of bad password attempts, which you can use to report on users with a high number of failed attempts.

# 6. Users with high invalid password attempts
#
$RKReport += "`n*** High Number of Bad Password Attempts`n"
$RKReport += $RKUsers | Where-Object BadPWDCount -ge 5 |
  Format-Table -Property SamAccountName, BadPWDCount |
    Out-String

If an account has a high number of bad password attempts, it could indicate an attacker attempting to guess a user's password. Of course, there may be other causes, which you can address with user training.

Determining Privileged Users

Several AD groups have high privileges. In particular, adding a user to the Enterprise Admins group gives them significant power throughout your domain. So it is useful to ensure that these high‐privilege groups contain only those users who need the permissions. Three specific AD groups you should monitor are Enterprise Admins, Domain Admins, and Schema Admins. In the report, you create a section containing details of group membership for these groups as follows:

# 7. Query the Enterprise Admins/Domain Admins/Schema Admins
#    groups for members and add to the $PUsers array
# Get Enterprise Admins group members
$RKReport += "`n*** Privileged User Report`n"
$PUsers = @()
$Members =
  Get-ADGroupMember -Identity 'Enterprise Admins' -Recursive |
    Sort-Object -Property Name
$PUsers += foreach ($Member in $Members) {
  Get-ADUser -Identity $Member.SID -Properties * |
    Select-Object -Property Name,
                  @{Name='Group';Expression={'Enterprise Admins'}},
                  WhenCreated,LastLogonDate
}
# Get Domain Admins group members
$Members =
  Get-ADGroupMember -Identity 'Domain Admins' -Recursive |
    Sort-Object -Property Name
$PUsers += Foreach ($Member in $Members) {
  Get-ADUser -Identity $Member.SID -Properties * |
    Select-Object -Property Name,
                  @{Name='Group';Expression={'Domain Admins'}},
                  WhenCreated, LastLogondate
}
# Get Schema Admins members
$Members =
  Get-ADGroupMember -Identity 'Schema Admins' -Recursive |
    Sort-Object -Property Name
$PUsers += Foreach ($Member in $Members) {
  Get-ADUser -Identity $Member.SID -Properties * |
    Select-Object -Property Name,
                  @{Name='Group';Expression={'Schema Admins'}}, 
                  WhenCreated, LastLogonDate
}

Depending on your organization, you may have other high‐privilege groups you can report on. You can use the code here as a template.
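One way to reduce the repetition in the previous snippet is to loop over the group names, as this sketch shows; it gathers the same properties as the code above.

```powershell
# Sketch: one loop over all high-privilege groups of interest
$Groups = 'Enterprise Admins', 'Domain Admins', 'Schema Admins'
$PUsers = Foreach ($Group in $Groups) {
  $Members = Get-ADGroupMember -Identity $Group -Recursive |
    Sort-Object -Property Name
  Foreach ($Member in $Members) {
    # Retrieve only the properties the report needs
    Get-ADUser -Identity $Member.SID -Properties WhenCreated, LastLogonDate |
      Select-Object -Property Name,
                    @{Name='Group'; Expression={$Group}},
                    WhenCreated, LastLogonDate
  }
}
```

To monitor additional groups, you only need to add their names to the `$Groups` array.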

Adding Privileged Users to the Report

Once you have identified the set of highly privileged accounts, you add them to the report.

# 8. Add the special users to the report
$RKReport += $PUsers | Out-String

This completes the report.

You built this report using the simple technique of compiling it as a set of concatenated strings. For simple reports in your environment, you can adapt the approaches shown here. Depending on how much reporting you need to carry out, you can also refactor the code snippets in this section into functions, one for each report section, and reuse those functions in other reports.
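For example, you might wrap a report section in a small function that returns its text. This is a sketch; the function name is illustrative, and it assumes the objects produced by the Get‐ReskitUser function defined earlier.

```powershell
# Sketch: a reusable report-section function (name is illustrative)
Function Get-DisabledUserSection {
    Param ($Users)   # objects returned by Get-ReskitUser
    $Section  = "*** Disabled Users`n"
    $Section += $Users |
      Where-Object {$_.Enabled -ne $true} |
        Format-Table -Property SamAccountName, DisplayName |
          Out-String
    $Section
}
# Usage: $RKReport += Get-DisabledUserSection -Users $RKUsers
```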

Displaying the Report

You can view the report like this:

# 9. Display the report
$RKReport

You can see the report in Figure 10.2.

image

Figure 10.2: AD user and computer report

The details you see in the report you generate may differ from this figure. Depending on what you have done—for example, logging on as different users (successfully and unsuccessfully)—you may see different output.

Managing Filesystem Reporting

File Server Resource Manager is a Windows feature that provides tools to help you manage a file server. You examined the installation and use of FSRM in Chapter 5.

FSRM includes the ability to generate a variety of reports related to files stored on a file server. It can create incident reports (as a result of a quota threshold, for example), scheduled reports (run at some specific time), or on‐demand reports (interactive reports).

In Windows Server 2019, FSRM supports 10 report types.

  • Duplicate Files: Identifies files that appear to be duplicates based on size and last modification time
  • Files by File Group: Lists files belonging to specified FSRM file groups, such as backup files or image files
  • Files by Owner: Lists files by owner, where you can specify all or selected owners
  • Files by Property: Lists files based on the value of specified FSRM classification properties
  • Large Files: Lists files over a specified size, such as 10MB
  • Least Recently Accessed: Lists files that have not been accessed in some specified time, such as 90 days
  • Most Recently Accessed: Lists files accessed in a recent specified time period, such as the past week
  • Quota Usage: Lists any FSRM quotas whose usage exceeds a specified value
  • File Screen Audit Files: Lists any file screening audit events that occurred during a specified time period
  • Folders by Property: Lists folders based on the value of specified FSRM classification properties

FSRM provides reports in several different output formats. FSRM can produce report output in DHTML (saved with the .html extension), HTML (.htm), text (.txt), and XML (.xml). The DHTML, HTML, and text files have predefined formats, which you cannot change. The XML output type returns the same information shown in the other reports, but as XML. As an alternative to the predefined report layouts, you can use the XML to build reports better suited for your specific needs.

The documentation for FSRM reporting is sparse. There are some high‐level overview web pages and cmdlet documentation, although those pages are light on detail; they lack examples and contain no end‐to‐end advice and guidance. Additionally, there is not much up‐to‐date information on the Internet; much of what exists is old (although still useful). That said, FSRM reporting is straightforward and easy to get working.

Before You Start

This section uses SRV1, on which you installed and used FSRM in Chapter 5's “Managing Filestore Quotas” and “Managing File Screening” sections.

Creating a Storage Report

To create a new interactive storage report, you use the New‐FSRMStorageReport command.

# 1. Create a new Storage report for large files on C:\ on SRV1
$REPORT1HT = @{
  Name             = 'Large Files on SRV1'
  NameSpace        = 'C:\'
  ReportType       = 'Large'
  ReportFormat     = ('DHTML','XML')
  LargeFileMinimum = 10MB
  Interactive      = $true
  MailTo           = 'DoctorDNS@Gmail.Com'
}
New-FsrmStorageReport @REPORT1HT

This command creates a new Large Files report and produces the output you see in Figure 10.3.

This report lists all files on the C:\ volume larger than 10MB. The command generates both a DHTML file and an XML output file and mails a copy to the specified email address.

New‐FSRMStorageReport is a complex command, with parameters covering the options for every report type. Typically, you need only a few of them for any given report.

When you create an interactive storage report, FSRM runs the report (and generates requested output).

image

Figure 10.3: Creating a new FSRM report

Viewing FSRM Reports

You can view the active FSRM reports by using the following commands:

# 2. View FSRM Reports
Get-FsrmStorageReport * |
 Format-Table -Property Name, ReportType, ReportFormat, Status

You can see the output from these commands in Figure 10.4.

image

Figure 10.4: Viewing the FSRM report

FSRM does not include display XML for its object types, so by default PowerShell displays objects such as storage report objects with all properties shown in a list. You may find it easier to display only the properties you actually need.

Once you start the report, it can take some time to finish, especially on larger file servers. You can use Get‐FSRMStorageReport, specifying the report you want, and watch the job move from queued, to running, and then to ready (that is, finished for now and ready to be run again).
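A simple way to wait for the report to finish is to poll its Status property. This sketch assumes the status values progress as just described and uses the report name created earlier.

```powershell
# Sketch: poll until the interactive report reaches the 'Ready' state
$RName = 'Large Files on SRV1'
Do {
    Start-Sleep -Seconds 10
    $Report = Get-FsrmStorageReport -Name $RName
    "Report status: $($Report.Status)"
} Until ($Report.Status -eq 'Ready')
```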

Viewing FSRM Report Output Files

Once FSRM has completed creating the report output files, you can view them. FSRM stores the generated report output for interactive reports in the C:\StorageReports\Interactive folder.

# 3. Viewing Storage Report Output
$Path = 'C:\StorageReports\Interactive'
Get-ChildItem -Path $Path

You can see the output files in Figure 10.5.

image

Figure 10.5: Viewing the FSRM report output files

In this figure, you see the two output files: the DHTML file stored with the .html extension and the XML output. The DHTML report contains a few graphics, and FSRM puts these into a subfolder.

Viewing the Large Files Report

Now that FSRM has completed the Large Files Report, you can view the report in your browser with this snippet:

# 4. View the DHTML report
$Rep = Get-ChildItem -Path $Path\*.html
Invoke-Item -Path $Rep

Figure 10.6 shows the report.

image

Figure 10.6: Viewing the FSRM report output files

Using FSRM XML Output

FSRM provides the information in the reports in the form of an XML document. You can use PowerShell to find and load the XML and then pull key information from the XML, as follows:

# 5. Extract key information from the XML
$XF   = Get-ChildItem -Path $Path\*.xml
$XML  = [XML] (Get-Content -Path $XF)
$Files = $XML.StorageReport.ReportData.Item
$Files | Where-Object Path -NotMatch '^Windows|^Program|^Users' |
  Format-Table -Property Name, Path,
               @{ Name ='Size MB'
                  Alignment = 'right'
                  Expression = {(([int]$_.size)/1mb).ToString('N2')}},
               DaysSinceLastAccessed -AutoSize

You can view my output from these commands in Figure 10.7; yours may differ as discussed earlier.

image

Figure 10.7: Using FSRM XML output

These commands retrieve the details of files larger than 10MB from the XML file and then display them nicely. While the default output of properties performed by the format cmdlets is usually good enough, you occasionally need to override default formatting. In this case, you output the size of each large file as a right‐aligned value, overriding the default format to produce a report that is easier to use.

Creating a Scheduled FSRM Report Task

FSRM also supports scheduled reports—reports that run at specified times. For example, you could create a monthly report showing each file owner and what files they own. Creating a scheduled report is a two‐step process. First, you need to create an FSRM report task that runs at the appropriate time, using the New‐FsrmScheduledTask command.

# 6. Create a monthly FSRM Task
$Date = Get-Date '04:20'
$NTHT = @{
  Time    = $Date
  Monthly = 1
}
$Task = New-FsrmScheduledTask @NTHT

This creates a scheduled task that runs monthly, in this example at 4:20 a.m. on the first day of every month.

Creating the Scheduled Report

The second and final step in creating a new scheduled report is to call New‐FSRMStorageReport, passing it the details of the report that FSRM is to generate, along with the schedule created in the previous step.

# 7. Create a new FSRM monthly report
$ReportName = 'Monthly-Files By Owner'
$REPORT2HT = @{
  Name             = $ReportName
  Namespace        = 'C:\'
  Schedule         = $Task
  ReportType       = 'FilesByOwner'
  MailTo           = 'DoctorDNS@Gmail.Com'
}
New-FsrmStorageReport @REPORT2HT | Out-Null

These commands create a new FSRM storage report, which you can view using the Get‐FSRMStorageReport cmdlet.

Viewing the Report Scheduled Task

Creating the FSRM scheduled report also created a Windows schedule task, and you can view the task using Get‐ScheduledTask.

# 8. Get details of the scheduled task
Get-ScheduledTask |
  Where-Object TaskName -match $ReportName |
    Format-Table -AutoSize

You can see the details of the scheduled task in Figure 10.8.

image

Figure 10.8: Viewing a scheduled task

The scheduled task runs PowerShell at the appointed hour, and PowerShell in turn runs the command that generates the report.

Running the Report Interactively

When you create a new FSRM scheduled report, you may need to wait a while to ensure that the output is what you want and need. It's usually a good idea to run the report immediately and view the output. You can do this by starting, and then viewing, the scheduled task, as shown here:

# 9. Run the task interactively
Get-ScheduledTask |
  Where-Object TaskName -match $ReportName |
    Start-ScheduledTask
Get-ScheduledTask -TaskName '*Monthly*'

This snippet starts and then views the scheduled task associated with the scheduled report, as you can see in the output in Figure 10.9.

image

Figure 10.9: Running the report interactively

Although you are running the report immediately, FSRM sends output details to the C:\StorageReports\Scheduled folder.

Viewing the Report

Once the scheduled task has completed, it can take FSRM a little time to make the output available. Once FSRM has stored the output, you can view the HTML file using Invoke‐Item.

# 10. View the report
$Path = 'C:\StorageReports\Scheduled'
$Rep = Get-ChildItem -Path $path\*.html
Invoke-Item -Path $Rep

You can see the report in Figure 10.10.

image

Figure 10.10: Viewing the report

Removing the Reports and Scheduled Task

To remove the FSRM reports and the FSRM schedule reporting task, you can do the following:

# 11. Remove the objects
#  Remove the scheduled task
Get-ScheduledTask |
  Where-Object TaskName -match $ReportName |
    Unregister-ScheduledTask -Confirm:$False
Remove-FsrmStorageReport $ReportName -Confirm:$False
Get-Childitem C:\StorageReports\Interactive,
              C:\StorageReports\Scheduled |
  Remove-Item -Force -Recurse

These commands first remove (unregister) the Windows scheduled task, then remove the FSRM storage report, and finally remove any FSRM report output files in the two storage report output folders.

Collecting Performance Information Using PLA

There are several ways you can report on the performance of your Windows systems. You can use the Get‐Counter command to get individual performance counters. You can also retrieve performance information using WMI (Get‐CimInstance); Windows Server 2019 provides around 400 WMI performance classes you can retrieve and use in your reports. A final way is to use the PLA subsystem built into Windows.

Using Get‐Counter or WMI to retrieve counter information is slow, and those tools do not scale well. While the cmdlets are great for retrieving a few performance counters to give you an up‐to‐the‐minute look at some aspect of a Windows host, these mechanisms are not well suited to long‐term performance data collection. PLA is an excellent method for continuous performance reporting.

PLA enables you to create a data collector set, which defines the specific performance counters whose details you want to retrieve. You can then set a schedule for starting the data collection (say 6:00 a.m. tomorrow) and for how many days to collect performance information. Finally, you can start the collector set.

Once the collector set is running, Windows retrieves the counter value based on the sample interval you specified and stores the information in a file. You have several options as to the output type, such as binary logs, CSV, and so on.

You can analyze performance data using a number of different tools and performance file types. Setting up a data counter set to log to a binary log file allows you to use perfmon.exe to view the counter data. You can create a counter set that outputs to a CSV file, which enables you to use other tools to analyze and report on the performance information.

There are no PowerShell cmdlets for setting up and using PLA to collect performance data. Instead, you use COM and script the related COM objects.

In this section, you set up and start two PLA data collector sets. The first you set up to deliver the information in a binary log file, and the second you use to deliver the data using a CSV file. Setting up the two collectors is similar—you just specify a different log file output and a different collector set name.

Before You Start

You run the PowerShell code for this section on SRV1.

Creating a Data Collector

You create a data collector by using the New‐Object command and then populate key attributes of the collector, as shown here:

# 1. Create and populate a new collector
$Name = 'SRV1 Collector Set'
$SRV1CS1 = New-Object -COM Pla.DataCollectorSet
$SRV1CS1.DisplayName                = $Name
$SRV1CS1.Duration                   = 12*3600  
$SRV1CS1.SubdirectoryFormat         = 1
$SRV1CS1.SubdirectoryFormatPattern  = 'yyyy\-MM'
$JPHT = @{
  Path      = "$Env:SystemDrive"
  ChildPath = "\PerfLogs\Admin\$Name"
}
$SRV1CS1.RootPath = Join-Path @JPHT
$SRV1Collector1 = $SRV1CS1.DataCollectors.CreateDataCollector(0)
$SRV1Collector1.FileName              = "${Name}_"
$SRV1Collector1.FileNameFormat        = 1
$SRV1Collector1.FileNameFormatPattern = "\-MM\-dd"
$SRV1Collector1.SampleInterval        = 15
$SRV1Collector1.LogFileFormat         = 3 # 3 = binary log (BLG)
$SRV1Collector1.LogAppend             = $True

These commands create a new PLA data collector set that collects data for a total of 12 hours and stores data in binary log (BLG) format.

Defining Counters

Now that you have created the data collector set, you define the specific performance counters you want to capture, as follows:

# 2. Define counters of interest
$Counters1 = @(
    '\Memory\Pages/sec',
    '\Memory\Available MBytes',
    '\Processor(_Total)\% Processor Time',
    '\PhysicalDisk(_Total)\% Disk Time',
    '\PhysicalDisk(_Total)\Disk Transfers/sec',
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length'
)

Adding the Performance Counters to the Collector Set

You can update the data collector set with the specific counters you want to capture.

# 3. Add the counters to the collector
$SRV1Collector1.PerformanceCounters = $Counters1

Creating a Schedule

You can run a data collector set—that is, capture performance information—based on a schedule you set up and add to the set.

# 4. Create a schedule — start tomorrow morning at 06:00
$StartDate = (Get-Date).Date.AddDays(1).AddHours(6)
$Schedule = $SRV1CS1.Schedules.CreateSchedule()
$Schedule.Days = 7
$Schedule.StartDate = $StartDate
$Schedule.StartTime = $StartDate

Creating and Starting the Data Collector Set

You can add the schedule to the data collector set and then start it, as follows:

# 5. Create, add and start the collector set
try
{
    $SRV1CS1.Schedules.Add($Schedule)
    $SRV1CS1.DataCollectors.Add($SRV1Collector1)
    $SRV1CS1.Commit("$Name", $null, 0x0003) | Out-Null
    $SRV1CS1.Start($false)
}
catch
{
    Write-Host "Exception Caught: " $_.Exception -ForegroundColor Red
    return
}

These commands add the schedule to the collector set and then add this collector set to the system (and commit the change). You then start the collector set.

Once you start the data collector set, you need to wait a few days to see the data being logged.

Creating a Second Data Collector Set

You can create a second data collector set that logs data to a CSV file rather than a BLG file. CSV files allow you to parse and report on performance data, as you see later in this chapter.

This second data collector set is similar to the first one, except that the output type is CSV.

# 6. Create a second collector that collects to a CSV file
$Name = 'SRV1 Collector Set2 (CSV)'
$SRV1CS2 = New-Object -COM Pla.DataCollectorSet
$SRV1CS2.DisplayName                = $Name
$SRV1CS2.Duration                   = 12*3600  
$SRV1CS2.SubdirectoryFormat         = 1
$SRV1CS2.SubdirectoryFormatPattern  = 'yyyy\-MM'
$JPHT = @{
  Path      = "$Env:SystemDrive"
  ChildPath = "\PerfLogs\Admin\$Name"
}
$SRV1CS2.RootPath = Join-Path @JPHT
$SRV1Collector2 = $SRV1CS2.DataCollectors.CreateDataCollector(0)
$SRV1Collector2.FileName              = "${Name}_"
$SRV1Collector2.FileNameFormat        = 1
$SRV1Collector2.FileNameFormatPattern = "\-MM\-dd"
$SRV1Collector2.SampleInterval        = 15
$SRV1Collector2.LogFileFormat         = 0 # 0 = comma-separated (CSV)
$SRV1Collector2.LogAppend             = $True
# Define counters of interest
$Counters2 = @(
    '\Memory\Pages/sec',
    '\Memory\Available MBytes',
    '\Processor(_Total)\% Processor Time',
    '\PhysicalDisk(_Total)\% Disk Time',
    '\PhysicalDisk(_Total)\Disk Transfers/sec',
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length'
)
#  Add the counters to the collector
$SRV1Collector2.PerformanceCounters = $Counters2
# Create a schedule — start tomorrow morning at 06:00
$StartDate = (Get-Date).Date.AddDays(1).AddHours(6)
$Schedule2 = $SRV1CS2.Schedules.CreateSchedule()
$Schedule2.Days = 7
$Schedule2.StartDate = $StartDate
$Schedule2.StartTime = $StartDate
# Create, add and start the collector set
try
{
    $SRV1CS2.Schedules.Add($Schedule2)
    $SRV1CS2.DataCollectors.Add($SRV1Collector2)
    $SRV1CS2.Commit("$Name", $null, 0x0003) | Out-Null
    $SRV1CS2.Start($false)
}
catch
{
    Write-Host "Exception Caught: " $_.Exception -ForegroundColor Red
    return
}

These commands build and start a second data collector set.

Viewing the Collector Sets

An easy way to view the data collector sets is to use the Windows Performance Monitor (perfmon.exe). You can run perfmon.exe either from PowerShell or by clicking the Windows Start button and typing perfmon.

When you open perfmon.exe, you can expand the Data Collector Sets node in the left pane and then expand the User Defined node to see the two data collector sets, as shown in Figure 10.11.

image

Figure 10.11: Viewing data collector sets with perfmon.exe

As you can see in the figure, both collector sets are running. Each collector set is logging values of the requested performance counters to the folders you specified when creating the data collector sets.

Reporting on PLA Performance Data

As noted in “Collecting Performance Information Using PLA,” Windows can write performance counter data to files in different formats, and you use different techniques to consume each format. In this section you use the data collected earlier to report on performance data from SRV1.

Before You Start

You use SRV1 for this section. You need to have created and run the data collector sets on this host, as in “Collecting Performance Information Using PLA.”

Importing the Performance Counters

In “Collecting Performance Information Using PLA” you set up a data collector that writes performance counter information to a CSV file. The files are stored in C:\PerfLogs\Admin. You can discover the specific files with this code:

# 1. Import the CSV file of counters
$Folder = 'C:\PerfLogs\Admin'
$File = Get-ChildItem -Path $Folder\*.csv -Recurse

This finds the CSV files of performance counters. Depending on your monitoring, you may have more than one CSV file of performance measurements; in that case, you need to be more specific about which file to import.
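If more than one CSV file exists, one simple approach is to take the most recently written file, as this sketch shows:

```powershell
# Sketch: pick the most recently written counter file
$File = Get-ChildItem -Path $Folder\*.csv -Recurse |
  Sort-Object -Property LastWriteTime |
    Select-Object -Last 1
```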

Importing Performance Counter Data

You can retrieve the performance counter information by using the Import‐CSV command.

# 2. Import the performance counters.
$Counters = Import-Csv $File.FullName
"$($Counters.Count) measurements in $($File.FullName)"

You can see the output from this command in Figure 10.12.

image

Figure 10.12: Counting available performance counters

Fixing the Data Collection Problem

A long‐standing bug in PLA data collection is that the first counter measurement is incomplete and therefore incorrect. To work around this, you simply overwrite the first measurement with the second, as shown here:

# 3. Fix issue with 1st row in the counters
$Counters[0] = $Counters[1]

Obtaining CPU Statistics

You can pull out basic CPU statistics using this syntax:

# 4. Obtain basic CPU stats
$CN = '\\SRV1\Processor(_Total)\% Processor Time'
$HT = @{
 Name = 'CPU'
 Expression = {[System.Double] $_.$CN}
}
$Stats = $Counters |
  Select-Object -Property $HT |
    Measure-Object -Property CPU -Average -Minimum -Maximum

These statements examine the collection of performance measurements to find the minimum, maximum, and average CPU usage.

Determining the 95th Percentile

In reviewing the basic performance statistics, you might see some high CPU measurements over the measurement period. To put such measurements into context, it's useful to calculate a 95th percentile value, as follows:

# 5. Add 95th percent value of CPU
$CN = '\\SRV1\Processor(_Total)\% Processor Time'
$Row = [int]($Counters.Count * .95 )
$CPU = ($Counters.$CN | Sort-Object)
$CPU95 = [double] $CPU[$Row]
$AMHT = @{
  InputObject = $Stats
  Name        = 'CPU95'
  MemberType  = 'NoteProperty'
  Value       = $CPU95
}
Add-Member @AMHT

These commands sort the CPU performance measurements and then find the row equating to 95%. That is the measurement number that is at the 95th percentile, the value that 95% of the CPU measurements are below (and 5% above). While you may have a single high CPU usage of, say, 99%, if 95% of all measurements are below, say, 10%, then the server probably has adequate CPU power.
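You can see the index arithmetic at work on a small, self-contained sample (the values here are fabricated purely for illustration):

```powershell
# Fabricated sample: the values 1-100, shuffled
$Sample = 1..100 | Sort-Object { Get-Random }
$Row    = [int]($Sample.Count * .95)    # index 95
$Sorted = $Sample | Sort-Object
$P95    = [double] $Sorted[$Row]
$P95    # 96: 95 of the 100 values lie below it
```

With real counter data, `$Sample` is replaced by the measurement column, as in the snippet above, but the arithmetic is identical.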

Combining CPU Measurements

You next combine the various CPU measurements into a single variable.

# 6. Combine the results into a single variable
$Stats.CPU95   = $Stats.CPU95.ToString('n2')
$Stats.Average = $Stats.Average.ToString('n2')
$Stats.Maximum = $Stats.Maximum.ToString('n2')
$Stats.Minimum = $Stats.Minimum.ToString('n2')

These statements format the minimum, maximum, average, and 95th percentile CPU values as two-decimal strings, leaving all the results combined in the single $Stats variable.
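The 'n2' format specifier produces a number with two decimal places, adding thousand separators where needed; the exact separator characters come from the current culture. A quick illustration (pinning the culture here to keep the output predictable):

```powershell
# 'n2' = numeric format, two decimal places, with group separators
$Inv = [System.Globalization.CultureInfo]::InvariantCulture
(1.54321).ToString('n2', $Inv)    # -> 1.54
(1234.5).ToString('n2', $Inv)     # -> 1,234.50
```

Without an explicit CultureInfo argument, as in the step above, ToString uses the session's current culture.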

Displaying CPU Statistics

You can display the results of the calculations by piping the results to Format‐Table.

# 7. Display statistics
$Stats | Format-Table

You can see the output from this command in Figure 10.13.

image

Figure 10.13: Viewing CPU Information

In the output, you can see that there were 2,603 measurements used in this calculation. Also, CPU usage averages 1.54%, with its 95th percentile at 5.14%. For most uses, these measurements show a fairly low CPU usage for this server over the measurement time period. The 95th percentile value, 5.14%, indicates that CPU usage was low almost all the time during the measurement period. For any general Windows server, this suggests that CPU is not a performance bottleneck.

The commands in this section pull together the CPU status for one day for one system. If you are managing multiple servers, you could implement performance counters on each one (as shown in “Collecting Performance Information Using PLA”). You could add counters to the counter set, for example, to record network traffic for your hosts.

Creating a Performance Monitoring Graph

The performance summary you saw in “Reporting on PLA Performance Data” can give you a high‐level view of performance information. You can also use the performance details captured by PLA and create graphs to show performance over time. There are two ways to create a graph of performance data. You can use perfmon.exe to view the performance information captured (using BLG format), as you saw in the previous section, or you can use .NET to create more customized graphs. This section looks at creating a customized graph of CPU usage over time for SRV1.

Before You Start

For this section, you use SRV1 and make use of the performance measurement data you collected in “Collecting Performance Information Using PLA.” You use the data visualization features of .NET Core. These classes also exist in the full .NET Framework, which means you could use the code here with Windows PowerShell.

Loading the Forms Assembly

You create a graph using .NET; however, by default, PowerShell does not load the assembly containing the necessary objects. You do that with this code:

# 1. Load the Forms assembly
Add-Type -AssemblyName System.Windows.Forms.DataVisualization

Importing Performance Data

You use the same technique you used in “Reporting on PLA Performance Data” to import the data and fix row 0.

# 2. Import the CSV data from earlier, and fix row 0
$CSVFile     = Get-ChildItem -Path C:\PerfLogs\Admin\*.csv -Recurse
$Counters    = Import-Csv $CSVFile
$Counters[0] = $Counters[1] # fix row 0 issues

Creating a Chart Object

Next you create a chart object.

# 3. Create a chart object
$TYPE     = 'System.Windows.Forms.DataVisualization.Charting.Chart'
$CPUChart = New-Object -TypeName $TYPE

Defining Chart Dimensions

You define a width and height for the chart with this snippet:

# 4. Define the chart dimensions
$CPUChart.Width  = 1000
$CPUChart.Height = 600
$CPUChart.Titles.Add("SRV1 CPU Utilisation") | Out-Null

You can adjust the dimensions (and the chart title) as you need.

Defining the Chart Area

You can also create an area in the chart where .NET will place the performance graph.

# 5. Create and define the chart area
$TYPE2 = 'System.Windows.Forms.DataVisualization.Charting.ChartArea'
$ChartArea = New-Object -TypeName $TYPE2
$ChartArea.Name        = "SRV1 CPU Usage"
$ChartArea.AxisY.Title = "% CPU Usage"
$CPUChart.ChartAreas.Add($ChartArea)

These statements first create a new chart area, provide a name and y‐axis titles, and then add the chart area to the chart.

Identifying the Date/Time Column

You use the following statement to work out which column in the performance counter information holds the date/time for each measurement:

# 6. Identify the date/time column
$Name = ($Counters[0] | Get-Member |
          Where-Object MemberType -EQ "NoteProperty")[0].Name

The first note property in each counter measurement holds the date and time of the counter measurement.
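You can try the same Get-Member technique on a hand-made object that mimics one imported counter row (the column names below are stand-ins for the real PDH CSV headers):

```powershell
# A stand-in for one row of imported counter data; Import-Csv surfaces
# each CSV column as a NoteProperty on the row object
$Row = [PSCustomObject] @{
  '(PDH-CSV 4.0) (GMT Standard Time)(0)'      = '03/14/2022 09:00:00.000'
  '\\SRV1\Processor(_Total)\% Processor Time' = '1.25'
}
$Name = ($Row | Get-Member |
          Where-Object MemberType -EQ 'NoteProperty')[0].Name
$Name   # the timestamp column sorts first in Get-Member's output
```

Get-Member lists note properties sorted by name, and the PDH timestamp column name sorts ahead of the counter path names, which is why indexing the first entry yields the date/time column.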

Adding Performance Data to the Chart

You can use the following statements to add the performance counter information to the chart:

# 7. Add the data points to the chart.
$CPUChart.Series.Add("CPUPerc")  | Out-Null
$CPUChart.Series["CPUPerc"].ChartType = "Line"
$CPUCounter = '\\SRV1\Processor(_Total)\% Processor Time'
$Counters |
  ForEach-Object {
   $CPUChart.Series["CPUPerc"].Points.AddXY($_.$Name,$_.$CPUCounter) |
        Out-Null
  }

Saving a Chart Image

With the previous statements, .NET has built the chart. You can now save the chart as a graphic file in a folder (after making sure the folder exists).

# 8. Ensure folder exists, then save the chart image as
#    a png file in the folder
$NIHT = @{
  Path        = 'C:\Perflogs\Reports'
  ItemType    = 'Directory'
  ErrorAction = 'SilentlyContinue'
}
New-Item @NIHT
$CPUChart.SaveImage("C:\PerfLogs\Reports\SRV1CPU.Png", 'PNG')

Viewing the Chart Image

The final step in this section is to view the generated chart.

# 9. View the chart image
& C:\PerfLogs\Reports\Srv1CPU.Png

You can see the output from these steps in Figure 10.14.

The information in the chart you see may differ from the figure because your SRV1 may be performing differently in your environment.

This section shows how you can use .NET's data visualization capabilities to build a performance graph. In production you would probably want to add to this graph to add in, for example, memory utilization, I/O, and network traffic.

image

Figure 10.14: Viewing the CPU usage chart

Creating a System Diagnostics Report

In “Collecting Performance Information Using PLA,” you saw how you can use PLA to create customized data collection. Windows, since Windows Vista and Server 2008, has also contained a number of system‐defined data collectors. One of those is the System Diagnostics Report.

Before You Start

You run the commands in this section on SRV1. You can run these commands on any server (or all servers).

Starting the Built‐in Data Collector

You use PLA to start the built‐in Systems Diagnostic Report.

# 1. Start the built-in data collector on the local system
$PerfReportName = "System\System Diagnostics"
$DataSet = New-Object -ComObject Pla.DataCollectorSet
$DataSet.Query($PerfReportName,$null)
$DataSet.Start($true)

Waiting for Data Collector to Finish

The data collection process takes some time to complete the System Diagnostics Report. You build in a wait period, as follows:

# 2. Wait for the data collector to finish
Start-Sleep -Seconds $Dataset.Duration

In Windows Server 2019, by default the System Diagnostics Report has a built‐in duration of 600 seconds. This is probably generous, as the collection takes only 60 seconds.

Saving the Report as HTML

At the end of the wait period, you save the report as HTML.

# 3. Get the report and store it as HTML
$Dataset.Query($PerfReportName,$null)
$PerfReport = $Dataset.LatestOutputLocation + "\Report.html"

Viewing the System Diagnostics Report

You view the report with this command:

# 4. View the report
& $PerfReport

You can see the output—that is, the System Diagnostics Report—in Figure 10.15.

This report is large and contains a lot of details about the server, some of which may be important if you need to perform troubleshooting on it. You could package up the commands in this section and run them via a scheduled task on each production server. Each time you run the diagnostics report, you could save it, say on a file server, for review if and when you need it. PLA and the Windows Task Scheduler make it pretty straightforward to collect and store your diagnostics reports.

The Windows Task Scheduler enables you to run PowerShell scripts at predetermined times. For more information about this tool, see docs.microsoft.com/windows/win32/taskschd/about-the-task-scheduler.

image

Figure 10.15: Viewing the System Diagnostics Report

Reporting on Printer Usage

In many organizations, printers are a shared resource for which management wants to charge back usage to specific departments. You might also want to know which users are using the printers heavily. You can configure Windows Server–based print servers to report details of each print job to an event log for later analysis and/or chargeback.

Before You Start

You run this set of commands on the print server, PSRV, which you set up and used in Chapter 7.

To report on printer usage, you need to have generated some print jobs on which to report. You can use the built‐in Windows PDF printer for this purpose. If you are in an environment with real print devices and Windows printers, you can use one of those printers to generate more representative output for your organization.

Turning on Print Job Logging

By default, the Windows print server does not log details of print jobs to the Windows event logs. But you can use wevtutil.exe to turn on logging of print job details as follows:

# 1. Run WevtUtil to turn on printer monitoring.
wevtutil.exe sl "Microsoft-Windows-PrintService/Operational" /enabled:true

PowerShell 7 does not (currently) include commands to enable or disable event logs.

Defining a Get‐PrinterUsage Function

Event logging can create a lot of data to wade through, much of it not particularly useful in most cases. An approach to reporting is to create a function to extract the details that are of use and return them as objects. You can then use the objects to create reports of printer usage.

 # 2. Define the Get-PrinterUsage function
 Function Get-PrinterUsage {
 # 2.1 Get events from the print server event log
 $LogName = 'Microsoft-Windows-PrintService/Operational'
 $Dps = Get-WinEvent -LogName $LogName |
          Where-Object ID -eq 307
 Foreach ($Dp in $Dps) {
 # 2.2 Create a hash table with an event log record
   $Document          = [ordered] @{}
 # 2.3 Populate the hash table with properties from the
 # Event log entry
   $Document.DateTime = $Dp.TimeCreated
   $Document.Id       = $Dp.Properties[0].value
   $Document.Type     = $Dp.Properties[1].value
   $Document.User     = $Dp.Properties[2].value
   $Document.Computer = $Dp.Properties[3].value
   $Document.Printer  = $Dp.Properties[4].value
   $Document.Port     = $Dp.Properties[5].value
   $Document.Bytes    = $Dp.Properties[6].value
   $Document.Pages    = $Dp.Properties[7].value
 # 2.4 Create an object for this printer usage entry
   $UEntry = New-Object -TypeName PSObject -Property $Document
 # 2.5 And give it a more relevant type name
   $UEntry.pstypenames.clear()
   $UEntry.pstypenames.add("Wiley.PrintUsage")
 # 2.6 Output the entry
   $UEntry
 } # End of foreach
} # End of function

This function begins by getting all the printer log entries relating to completed print jobs (with the ID 307). It then pulls out key details from the event log entries and creates an object containing the details chosen.

At the end of the function, you adjust the object type name by changing it to "Wiley.PrintUsage". This allows you to create XML that you can add to your system to format the output nicely. For more details on creating format.ps1xml files, see docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_format.ps1xml?view=powershell-7.
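The type-name swap works on any PSObject. A minimal sketch (the type name used here is invented):

```powershell
# Create a simple object, then replace its type name
$Entry = New-Object -TypeName PSObject -Property @{ User = 'JerryG'; Pages = 3 }
$Entry.pstypenames.Clear()
$Entry.pstypenames.Add('Demo.PrintUsage')   # invented type name
$Entry.pstypenames[0]                       # -> Demo.PrintUsage
```

Once the object carries the new type name, PowerShell's formatting system looks for a matching view in any loaded format.ps1xml files when it displays the object.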

Creating Print Output

To view printer event log entries, you need to produce some actual print output. To demonstrate print reporting, you can use the Microsoft Print to PDF printer that is built into Windows.

# 3. Create three print jobs
$PrinterName = "Microsoft Print to PDF"
'aaaa' | Out-Printer -Name $PrinterName
'bbbb' | Out-Printer -Name $PrinterName
'cccc' | Out-Printer -Name $PrinterName

Each time you send text to Out‐Printer, Windows opens up a dialog box allowing you to save the PDF file to a named location. Save each of the output files separately. By default, this printer sends PDF output to your Documents folder in your user profile, which is adequate for the purposes of this demonstration.

Viewing PDF Output Files

You can use Get‐ChildItem to view the PDF files created by the previous step.

# 4. View PDF output
Get-ChildItem $Env:USERPROFILE\Documents\*.pdf

You can see the output from this command in Figure 10.16.

image

Figure 10.16: Viewing PDF output files

Viewing Printer Usage

With those three print files created using the Microsoft Print to PDF printer, you can run the Get‐PrinterUsage function to output details of the print jobs on PSRV.

# 5. Get printer usage
Get-PrinterUsage |
  Sort-Object -Property  DateTime |
    Format-Table

You can see the output from this command in Figure 10.17.

image

Figure 10.17: Viewing printer usage

The output shows the three print jobs that you created earlier in this section. For production printers in use in your organization, you might expect significantly more event log records.

In Figure 10.17, you see the output generated at the PowerShell console. As an alternative, you could package up these steps into a script you can run regularly via the Windows Task Scheduler to produce regular printer usage reports. You can update the steps to add any information that you might find useful in your environment.

Creating a Hyper‐V Status Report

In Chapter 8, you set up and managed Hyper‐V. In this section, you create a Hyper‐V status report to show basic information about your Hyper‐V servers and the VMs running on those servers.

To build the report, you create a few hash tables that contain the key information you add to the report. Then you format those hash tables and add the formatted information to the overall report. The report itself contains two sets of information: details about the Hyper‐V host and details of each VM. Depending on your needs, you can extend either of these to provide the details you need.
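The hash-table-to-report pattern just described can be sketched in miniature (the host details below are placeholders, not real WMI output):

```powershell
# Placeholder host details standing in for the real WMI queries
$ReportHT = [ordered] @{
  HostName = 'HV1'
  Model    = 'Virtual Machine'
}
# Convert the hash table to an object, then append its formatted
# representation to the report string
$ReportObj = New-Object -TypeName PSObject -Property $ReportHT
$Report    = "Hyper-V Report for: HV1`n"
$Report   += $ReportObj | Out-String
$Report
```

The steps that follow build the real report the same way, just with many more properties in the hash table.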

Before You Start

You run this section on the HV1 (or HV2, wherever HVDirect is) server, which you created in Chapter 8. Run this after you have created the HVDirect VM on HV1. If you moved the HVDirect VM to HV2 as described later in that chapter, ensure that HVDirect is moved back to HV1.

Note that the HV1 host runs only one VM (HVDirect), so the output may not be representative of a busy Hyper‐V server.

Creating a Basic Report Object Hash Table

You begin by creating an ordered PowerShell hash table. You use this hash table to hold details of the Hyper‐V server.

# 1. Create a basic report object hash table
$ReportHT = [Ordered] @{}

Adding Host Details to the Report

Next you obtain basic VM Host details from WMI and add them to the report hash table.

# 2. Get the host details and add them to the report hash table
$HostDetails = Get-CimInstance -ClassName Win32_ComputerSystem
$ReportHT.HostName = $HostDetails.Name
$ReportHT.Maker = $HostDetails.Manufacturer
$ReportHT.Model = $HostDetails.Model

These statements add the host name, make, and model of your Hyper‐V server to the report.

Adding PowerShell and OS Version

You obtain the PowerShell version from the $PSVersiontable built‐in object and retrieve details of the OS from WMI. You then add them to the report hash table.

# 3. Add the PowerShell and OS version information
# Add PowerShell Version
$ReportHT.PSVersion = $PSVersionTable.PSVersion.ToString()
# Add OS information
$OS = Get-CimInstance -Class Win32_OperatingSystem
$ReportHT.OSEdition    = $OS.Caption
$ReportHT.OSArch       = $OS.OSArchitecture
$ReportHT.OSLang       = $OS.OSLanguage
$ReportHT.LastBootTime = $OS.LastBootUpTime
$Now = Get-Date
$UTD = [float] ("{0:n3}" -f (($Now - $OS.LastBootUpTime).TotalDays))
$ReportHT.UpTimeDays = $UTD

This section adds the PowerShell version you are using to run this script along with details about the OS.

Depending on which version of Windows Server you used to create the HVDirect VM, you may observe a different OS version in the final report output.

Adding Processor Count

You retrieve a count of the CPUs and add it to the report.

# 4. Add a count of processors in the host
$PHT = @{
    ClassName  = 'MSvm_Processor'
    Namespace  = 'root/virtualization/v2'
}
$Proc = Get-CimInstance @PHT
$ReportHT.CPUCount = ($Proc |
    Where-Object elementname -match 'Logical Processor').count

If your host has hyper‐threading enabled, WMI views each hyperthreaded core separately. Thus, for a host with two physical processors, each of which has 6 cores, WMI would return 12 CPUs (without hyper‐threading) or 24 (with hyper‐threading enabled). To learn more about the allocation of CPU cores to VMs, see docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/manage-hyper-v-scheduler-types.

Adding Current CPU Usage

You can use the Get‐Counter command to retrieve the current CPU usage and add that information to the report hash table.

# 5. Add the current host CPU usage
$Cname = '\processor(_total)\% processor time'
$CPU = Get-Counter -Counter $Cname
$ReportHT.HostCPUUsage = $CPU.CounterSamples.CookedValue

Adding Total Hyper‐V Host Physical Memory

Next, you retrieve the total memory you have given the HV1 host and add it to the report.

# 6. Add the total host physical memory
$Memory = Get-CimInstance -Class Win32_ComputerSystem
$HostMemory = [float] ( "{0:n2}" -f ($Memory.TotalPhysicalMemory/1GB))
$ReportHT.HostMemoryGB = $HostMemory

Adding Memory Assigned to VMs

You can determine how much memory has been assigned to all the VMs on the server.

# 7. Add the memory allocated to VMs
$Sum = 0
Get-VM | Foreach-Object {$Sum += $_.MemoryAssigned}
$Sum = [float] ( "{0:N2}" -f ($Sum/1gb) )
$ReportHT.AllocatedMemoryGB = $Sum

Creating the Host Report Object

You have created a hash table containing details of your Hyper‐V server. You next create a new report object.

# 8. Create the host report object
$Reportobj = New-Object -TypeName PSObject -Property $ReportHT

Creating the Report Header

Next, you create the header for the Hyper‐V report and add it to the report object.

# 9. Create report header
$Report =  "Hyper-V Report for: $(hostname)`n"
$Report += "At: [$(Get-Date)]"

Adding the Report Object to the Report

You add the report object, containing the details of your Hyper‐V host, to the report.

# 10. Add report object to report
$Report += $Reportobj | Out-String

Creating an Array for the VM Details

In the steps so far, you have created a report that contains details about the Hyper‐V host, to which you add details about the VMs.

# 11. Create VM details array
#     VM related objects
$VMs = Get-VM -Name *
$VMHT = @()

Getting VM Details

You populate the VM details array with information from each VM. For each VM on the host, you use details returned from Get‐VM and add each VM's details to the array.

# 12. Get VM details
Foreach ($VM in $VMs) {
  # Create VM Report hash table
  $VMReport = [ordered] @{}
  # Add VM's Name
  $VMReport.VMName = $VM.VMName
  # Add Status
  $VMReport.Status = $VM.Status
  # Add Uptime
  $VMReport.Uptime = $VM.Uptime
  # Add VM CPU
  $VMReport.VMCPU = $VM.CPUUsage
  # Replication Mode/Status
  $VMReport.ReplMode = $VM.ReplicationMode
  $VMReport.ReplState = $VM.ReplicationState
  # Create object from Hash table, add to array
  $VMR = New-Object -TypeName PSObject -Property $VMReport
  $VMHT += $VMR
}

Completing the Report

You complete the report by adding the details of each VM, contained in the $VMHT array, to the report.

# 13. Finish creating the report
$Report += $VMHT | Format-Table | Out-String

Viewing the Report

You now have the report, held in the $Report variable, which you can view as follows:

# 14. Display the report
$Report

You can view the report output in Figure 10.18.

image

Figure 10.18: Viewing the VM report

In this case, you are viewing the report in the PowerShell Console or in VS Code. If you want to run this report regularly, you could use the Windows Task Scheduler to run it as needed and modify the script to send the report via email.

This report is somewhat small in that there is only one VM. In production you typically have more than one VM running on a given host. If you run this report against a busier Hyper‐V host, you can see more information, as shown in Figure 10.19.

Reviewing Event Logs

With the introduction of Windows NT 3.1 (on which both Windows 10 and Windows Server are based), Microsoft introduced the Windows event logs. Today, those key event logs, Application, System, and Security, contain a large number of event entries. Each entry alerts you to some fact that various developers thought you should know about.

image

Figure 10.19: Viewing the VM report on another Hyper‐V host

These logs were extended with Windows Vista to include the Application and Services logs (event logs for individual applications and services). On a Windows Server 2019 host, such as DC1, there are more than 400 separate logs. Of those, only around 100 hold any records, yet between them they contain more than 750,000 log entries. Each entry is yet another fact that a developer felt you should know about.

The vast majority of event log entries are not interesting to most IT professionals, at least most of the time. However, these additional logs can be very useful when you are troubleshooting issues (such as debugging WMI filters) or gathering management information (such as printer usage).

You can retrieve the Windows event log entries by using the Get‐WinEvent command. You can use that command to retrieve a list of event logs (using the ‐ListLog parameter) or entries from a given log. You can also filter events using XPath queries, structured XML queries, and hash table queries. You can find some examples of XML and hash table queries at docs.microsoft.com/powershell/module/microsoft.powershell.diagnostics/get-winevent?view=powershell-7.

You can use the Get‐WinEvent command to retrieve logon events from the system's Security event log. You can then report on the users who logged on to a given system, including date and time. In Windows, there are a number of different kinds of logon events such as logging on to a physical host, logging on to a VM or remote host using the Hyper‐V vmconnect.exe program, or using the Remote Desktop client. Services and drivers can also generate logon events.

Before You Start

You run this section on DC1, a domain controller in the Reskit.Org domain.

Counting Event Logs

You can use the Get‐WinEvent cmdlet to get a listing of all the event logs on a given system.

# 1. Count logs and logs with records
$EventLogs  = Get-WinEvent -ListLog *
$Logs       = $EventLogs.Count
$ActiveLogs = ($Eventlogs | Where-Object RecordCount -gt 0).count
"On $(hostname) there are $Logs logs available"
"$ActiveLogs have records"

You can see in Figure 10.20 that on DC1, there are a total of 409 separate event logs, of which 115 have records.

image

Figure 10.20: Viewing a count of event logs

The number of event logs on a given system can vary as you add more features to the system.

Getting the Total Number of Event Records

You can also look at each event log and calculate the total number of event log entries across all the logs.

# 2. Get total event records available
$EntryCount = ($EventLogs | Measure-Object -Property RecordCount -Sum).Sum
"Total Event logs entries: [{0:N0}]" -f $EntryCount

You can see in Figure 10.21 that on my system there are a total of 783,122 log events across all logs. The number of entries you see when you run this code is likely to be different depending on how long your system has been up, whether event logs have been cleared, and so on.

image

Figure 10.21: Viewing the total number of event logs

Getting Event Counts in Key Logs

You can also see the number of event log entries in the System, Application, and Security logs.

# 3. Get count of events in System, Application and Security logs
$Syslog = Get-WinEvent -ListLog System
$Applog = Get-WinEvent -ListLog Application
$SecLog = Get-WinEvent -ListLog Security
"System Event log entries:      [{0,10:N0}]" -f $Syslog.RecordCount
"Application Event log entries: [{0,10:N0}]" -f $Applog.RecordCount
"Security Event log entries:    [{0,10:N0}]" -f $Seclog.RecordCount

You can see the output from these commands in Figure 10.22.

image

Figure 10.22: Viewing the total numbers of System, Application, and Security event logs

This snippet uses PowerShell's ‐f (format) operator and .NET's composite formatting features. You can find more details about composite formatting at docs.microsoft.com/dotnet/standard/base-types/composite-formatting. This technique allows you to create good‐looking output. In this case, you format the number of log entries into a number with thousand separators and right‐justify it within a 10‐character space. That way, the numbers line up vertically within the output, which makes reading the information much easier. Using composite formatting is a common practice in reporting scripts.
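For example, the format item {0,10:N0} means: insert argument 0, right-justified in a 10-character field, formatted as a number with no decimal places:

```powershell
# {0,10:N0}: argument 0, width 10 (right-justified), 'N0' number format
$Line = "Security Event log entries: [{0,10:N0}]" -f 783122
$Line   # the count appears right-justified within a 10-character field
```

A negative width (for example, {0,-10:N0}) would left-justify the value instead.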

Getting All Windows Security Log Events

You can use Get‐WinEvent to get all the log entries in the Security log and display a count of how many records you found with this syntax:

# 4. Get all Windows Security Log events
$SecEvents = Get-WinEvent -LogName Security
"Found $($SecEvents.count) security events"

You can see the output from these commands in Figure 10.23.

image

Figure 10.23: Viewing the Security log events

As you saw earlier, a busy server can hold a very large number of events in the Security log, so retrieving them all and performing any detailed processing is slow. On this system, retrieval can take a while, possibly half an hour, so be patient. Because retrieving the security events takes so long, in most cases event log processing is best done as a background task run by the Windows Task Scheduler.

Getting Logon Events

You can pick out individual logon events.

# 5. Get Logon Events
$Logons = $SecEvents | Where-Object ID -eq 4624   # logon event
"Found $($Logons.count) logon events"

You can view the different types of logons recorded on DC1 in Figure 10.24.

image

Figure 10.24: Getting logon events

Creating a Logon Type Summary

In Windows, there are several different logon types, as described in detail at docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2003/cc787567(v=ws.10). A logon type of 2 indicates a local console logon (that is, logging on to a physical host), while a logon type of 10 indicates a logon over RDP. Other logon types include service logons (type 5), batch or scheduled task logons (type 4), and console unlocks (type 7). You can review the logon events and summarize the different logon types with this code:

# 6. Create summary array of logon events
$MSGS = @()
Foreach ($Logon in $Logons) {
    $XMLMSG = [xml] $Logon.ToXml()
    $t = '#text'
    $HostName   = $XMLMSG.Event.EventData.data.$t[1]
    $HostDomain = $XMLMSG.Event.EventData.data.$t[2]
    $Account    = $XMLMSG.Event.EventData.data.$t[5]
    $AcctDomain = $XMLMSG.Event.EventData.data.$t[6]
    $LogonType  = $XMLMSG.Event.EventData.data.$t[8]
    $MSG = New-Object -Type PSCustomObject -Property @{
       Account   = "$AcctDomain\$Account"
       Host      = "$HostDomain\$Hostname"
       LogonType = $LogonType
       Time      = $Logon.TimeCreated
    }
    $MSGS += $MSG
}

Each event log entry contains event details in XML stored as a text attribute of the event log entry. You can use PowerShell's built‐in XML support to get the account that logged on, the host, and at what time. The code creates an array of objects that summarize the details.

The approach of parsing the event entry's XML to pull out relevant details of the event is one you can use to report from any log. Finding the specific details of what properties are contained at what position in the XML may require you to use the Event Viewer ( eventvwr.exe) to work out where to find the information you need within the entry or the entry's XML.
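You can see the '#text' technique on a hand-made fragment shaped like an event's XML (a simplified stand-in; the real security event schema has more elements and a namespace):

```powershell
# Simplified stand-in for a security event's EventData XML
$Xml = [xml] @'
<Event>
  <EventData>
    <data Name="TargetUserName">JerryG</data>
    <data Name="LogonType">10</data>
  </EventData>
</Event>
'@
$t = '#text'
$Xml.Event.EventData.data.$t[0]   # -> JerryG
$Xml.Event.EventData.data.$t[1]   # -> 10
```

Because each data element has attributes, PowerShell exposes its inner text through the '#text' property, which is why the code stores that name in $t and uses it with dot notation.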

There are several different types of audit log entries that Windows writes as appropriate, such as a logon that failed because the account is disabled or expired. You may want to extend the set of logon event IDs you search for to catch other possibly suspicious activity.

Displaying Logon Events by Logon Type

You can view the logon summary as follows:

# 7. Display results
$MSGS |
  Group-Object -Property LogonType |
    Format-Table Name, Count

You can view the different types of logons in Figure 10.25.

image

Figure 10.25: Getting logon event types

As you can see in the output, there are a large number of logons across most logon types. Since DC1 is a virtual machine, you would not expect any interactive logons but would expect logon type 10 events, logging on via RDP.
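The raw type numbers in this summary are not especially readable. You could translate them with a small lookup table (a sketch; the descriptions paraphrase the Microsoft documentation referenced earlier):

```powershell
# Map common logon type codes to friendly names
$LogonTypeName = @{
  '2'  = 'Interactive (local console)'
  '3'  = 'Network'
  '4'  = 'Batch (scheduled task)'
  '5'  = 'Service'
  '7'  = 'Unlock'
  '10' = 'RemoteInteractive (RDP)'
}
$LogonTypeName['10']   # -> RemoteInteractive (RDP)
```

You could use such a table inside the summary loop, or with a calculated property in Group-Object output, to label each group with its friendly name.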

Examining RDP Logons

To drill down further, you can view the individual RDP logons as follows:

# 8. Examine RDP logons
$MSGS | Where-Object LogonType -eq '10'

You can see the type 10 logon events recorded on DC1 in Figure 10.26.

image

Figure 10.26: Getting RDP logons

Summary

In this chapter you have used a variety of tools, orchestrated by PowerShell 7, that create reports to help you manage your organizations. You saw how you can retrieve information from AD about logon failures and view users someone may have added to a high privilege security group. You used the FSRM reporting features to generate reports on the use of a file server. You can get preformatted reports or use the XML returned from FSRM to create your own report. PLA enables you to collect performance information, and you can use that information to create basic reports. You can also take that information and use .NET's visualization tools to create performance graphs. You also used PLA to run the System Diagnostics Report. You saw how to enable the Windows printing subsystem to log details of printer usage. Once you enable the logging, you can then retrieve usage details. You saw how you retrieve details of VMs and create Hyper‐V status reports. Finally, you looked at one way to comb through the event log and create reports on the contents.