Chapter 3

Manage security operations

The main goal of security operations is to maintain and restore the security assurances of your systems as adversaries attack them. The National Institute of Standards and Technology (NIST) describes these tasks in its Cybersecurity Framework as the Detect, Respond, and Recover functions. To execute those functions in a cloud environment, you not only need the correct approach; you also need to understand how the native tools work to provide the data you need to limit the time and access an attacker can gain to valuable systems and data.

Azure has native capabilities that you can leverage to continuously monitor the security of your environment so that you can quickly identify potential threats to your workloads.

Skills in this chapter:

Skill 3.1: Configure security services

Security operations start by ensuring that you have visibility into and access to the underlying logs of the different services that you want to monitor. Azure Monitor can collect and store data from Azure applications, operating systems, Azure resources, Azure subscriptions, the Azure tenant, and custom sources. This section of the chapter covers the skills necessary to configure security services, which are based on Azure Monitor, according to the Exam AZ-500 outline.

Configure Azure Monitor

One common question when it comes to the usage of Azure Monitor is, “How do I enable it?” Azure Monitor is automatically enabled when you create a new Azure subscription. At that point, activity log and platform metrics are automatically collected. The other common question is, “Can Azure Monitor also monitor resources that are on-premises?” Although the name Azure Monitor implies that the resources are in Azure, it also collects data from virtual machines and applications located in other clouds and on-premises.

For this reason, before making any configuration changes in Azure Monitor, it is important to understand some foundational concepts of this platform. The section that follows covers some key principles.

Reviewing Azure Monitor concepts

The diagram shown in Figure 3-1 helps you better understand the breadth of Azure Monitor and the different areas that it touches.

This diagram shows the different components of the Azure Monitor solution and how the data is ingested from multiple locations.

Figure 3-1 Architecture diagram of the Azure Monitor solution

On the left side of the diagram shown in Figure 3-1, you have the different layers that represent the components that generate logs, which can be ingested by Azure Monitor. From the application and operating system perspective, the machine can be physically located on-premises, in Azure, or in another cloud provider. Aside from these data sources, you can also ingest data from different Azure resources, subscriptions, and the Azure tenant itself. This data is ingested into the Log Analytics workspace, which is part of the Azure Monitor solution. Once the data is there, you can query it using Kusto Query Language (KQL), which uses schema entities organized in a hierarchy similar to SQL’s databases, tables, and columns.

The last three layers that appear in the left side of the diagram shown in Figure 3-1 represent the three major layers in Azure where you can obtain logging information. The definition of each layer is shown here:

  • Azure Resources Here, you will be able to obtain resource logs, which record operations that were executed at the data plane level of Azure. An example would be retrieving a secret from Azure Key Vault. These logs are also referred to as diagnostic logs.

  • Azure Subscription Here, you will be able to obtain activity logs, which record operations that were executed at the management plane level. You should review these logs when you need to answer the what (what operation was performed), the who (who performed it), and the when (when it was performed). For example, if a VM was deleted, you should go to the Azure Activity Log to find the what, who, and when of the delete VM operation.

  • Azure Tenant Here, you will be able to obtain the Azure Active Directory logs. This layer gives you the history of sign-in activity and an audit trail of changes made in Azure Active Directory.

It is very important to understand these layers when studying for the AZ-500 exam because you may encounter scenarios where you need to select the right place to look for specific information. For example, the Contoso administrator wants to identify the user who stopped a virtual machine two weeks ago; where should the administrator search for this information? If you answered Azure Activity Log, you are correct. As mentioned before, in the Activity Log you will find management plane operations and the identification of the what, the who, and the when of an operation.
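
The what, who, and when triage can be sketched in a few lines of code. The records below are simplified, made-up stand-ins for Activity Log entries; the field names (operation, caller, timestamp) are illustrative and are not the real Activity Log schema.

```python
# Sketch: answering the what/who/when questions over simplified
# activity-log-style records. All field names and values are made up.
from datetime import datetime

activity_log = [
    {"operation": "Microsoft.Compute/virtualMachines/delete",
     "caller": "admin@contoso.com",
     "timestamp": datetime(2020, 5, 4, 14, 32)},
    {"operation": "Microsoft.Compute/virtualMachines/start",
     "caller": "ops@contoso.com",
     "timestamp": datetime(2020, 5, 4, 15, 10)},
]

def who_did(operation_suffix, log):
    """Return (who, when) for every entry whose operation ends with the suffix."""
    return [(entry["caller"], entry["timestamp"])
            for entry in log
            if entry["operation"].endswith(operation_suffix)]

# The "who stopped/deleted the VM" investigation: who did it, and when?
print(who_did("virtualMachines/delete", activity_log))
```

In practice you would run this kind of filter directly in the portal or with a KQL query against the activity log data; the sketch only mirrors the reasoning.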

Metrics are another type of information that can be ingested. Metrics are numerical values that describe some aspect of a system at a particular point in time. Telemetry such as events and traces, as well as performance data, is stored as logs so that it can all be combined for analysis. This type of information is useful in scenarios where you need to collect security-related performance counters from multiple VMs and create alerts based on certain thresholds.

Because Azure Monitor starts collecting data from a resource upon that resource’s creation, it is important to know where to look when you need information about it. Many resources have a summary of relevant performance data, usually located on the Overview page of the resource. For example, in the Overview option of an Azure storage account, you will see insights regarding the average latency, egress data, and requests, as shown in Figure 3-2.

This is a screenshot of the Overview page in an Azure storage account showing a summary of the performance counters that are automatically collected by Azure Monitor.

Figure 3-2 Summary of storage account performance insights

If you need to query logs that have operations that were executed in the management plane, you should use the Azure Activity Log. To access the Activity Log, follow these steps:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type activity and under Services, click Activity Log. The Activity Log page appears, as shown in Figure 3-3.

    This is a screenshot of the Azure Activity log with the options to query operations that happened in the management plane.

    Figure 3-3 Activity log initial page

  3. Here, you can use the Timespan filter to adjust the time range for your query. For this example, the filter was changed to the last hour; after applying the change, the result appears, as shown in Figure 3-4.

    This is a screenshot of the Azure Activity log with the result of the query for the last one hour of activities.

    Figure 3-4 Activity log results after filtering

  4. The result shows a summary of each operation, including the status, time stamp, and who initiated the event. If you want more detailed information about an operation, you can expand the operation name field and click it. There, you will find the details of the operation on the JSON tab.

As mentioned in the previous section of this chapter, the other type of data that you may want to use is metrics. If you are monitoring a virtual machine and you need metrics beyond the ones that appear on the Overview page, you can go to the Metrics page and customize the metrics that you want to monitor, as shown in the example in Figure 3-5.

This is a screenshot of the Metrics page for an Azure VM, showing a graph that represents the OS disk read bytes per second.

Figure 3-5 Visualizing VM metrics

Create and customize alerts

Another important feature in Azure Monitor is the capability to create alerts for different types of events. You can use the following types of data, which are retained for the past 30 days by default, to generate alerts:

  • Metric values

  • Log search queries

  • Activity log events

  • Health of the underlying Azure platform

  • Tests for website availability

In Figure 3-5, you can see the New Alert Rule option right above the chart. This option takes you from this dashboard directly to the alert creation experience using the metric that is currently shown on screen, which, in this case, monitors OS Disk Read Bytes/Sec, as shown in Figure 3-6.

This is a screenshot of the create alert rule page activated from the metrics page, which gives the advantage of pre-populating the scope and condition fields.

Figure 3-6 Creating an alert rule

The Create Alert Rule page has some important parameters that must be filled in. When you activate this page from the Metrics page where you already configured the metrics that you want to monitor, it prepopulates the Scope (the target resource that you want to monitor) and the Condition (the rule logic that will trigger the alert). While the scope already points to the resource that you want to monitor, the condition might need some adjustments according to your needs. To customize the condition, just click the condition name, and the Configure Signal Logic blade appears, as shown in Figure 3-7.

This is a screenshot of the current alert logic with the pre-defined configurations.

Figure 3-7 Customizing the alert logic

The first part of this blade shows the performance counter name that you are using for this rule and a sample chart with data from the last 6 hours. The second part of this blade is where you configure the threshold. In the Alert Logic section, you can set the toggle to Static (you provide a specific value as the threshold) or Dynamic (which uses machine learning to continuously learn the behavior pattern). In this case, the Contoso administrator wants to receive an alert if the average OS Disk Read Bytes/Sec counter is higher than 3 MB, which means Static is the best option to use. The operator remains Greater Than, the Aggregation Type remains Average, and you just need to enter the value (in this case, 3) in the Threshold Value field. The Condition Preview section explains the logic so you can confirm that this is what you want to do. The Evaluated Based On section is where you configure the Aggregation Granularity (Period) option, which defines the interval over which the datapoints are grouped, and the Frequency Of Evaluation, which defines how often this alert rule runs. The evaluation frequency should be equal to or more frequent than the aggregation granularity. Once you finish, click the Done button.
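
The static-threshold logic described above can be approximated with a short sketch. The grouping and averaging below are a simplified model of Aggregation Granularity and Aggregation Type, assuming average aggregation and a greater-than operator; all sample values are invented.

```python
# Sketch: static-threshold alert logic. Datapoints are grouped into
# windows of `granularity` seconds and averaged, mirroring Aggregation
# Granularity (Period) with Aggregation Type = Average and the
# Greater Than operator. Sample values are invented.

def fires(datapoints, granularity, threshold):
    """datapoints: list of (seconds_offset, value) samples.
    Returns True if any window's average exceeds the threshold."""
    windows = {}
    for offset, value in datapoints:
        windows.setdefault(offset // granularity, []).append(value)
    return any(sum(vals) / len(vals) > threshold for vals in windows.values())

MB = 1024 * 1024
samples = [(0, 1 * MB), (60, 2 * MB), (300, 4 * MB), (360, 5 * MB)]

# The second five-minute window averages 4.5 MB, exceeding the 3 MB threshold.
print(fires(samples, granularity=300, threshold=3 * MB))
```

The real platform evaluates this logic on the schedule set by the evaluation frequency; the sketch only shows the aggregation-then-compare step.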

Next, configure the Action Group section, which allows you to configure the type of notification that you want to receive. To configure this option, click Select Action Group, and in the Select An Action Group To Attach To This Alert Rule blade, click the Create Action Group option; the Add Action Group blade appears, as shown in Figure 3-8.

This is a screenshot with the options to configure the action group.

Figure 3-8 Action group configuration

On this blade, start by typing a name for this action group; this can be a long name that helps you identify what the group does. In the Short Name field, add a short name, which appears in emails or SMS messages sent by this alert. Select the subscription and resource group where this action group will reside; under Action Name, type a name for the first action. Notice that there are many fields for actions; that’s because you can have actions such as sending an email, sending an SMS message, or running a runbook, among others. In this case, the Contoso administrator wants to send an email to a distribution list and an SMS message to the on-call phone. For the action type, select Email/SMS Message/Push/Voice, and the Email/SMS Message/Push/Voice blade appears. On this blade, type the email address and the SMS number. After that, click OK and then OK again.
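
The resulting action group could be represented by a payload along the lines of the sketch below. The keys loosely follow the ARM resource shape but are simplified, and every name, address, and number is illustrative.

```python
# Sketch: the action group described above as a simple payload. Keys
# loosely follow the ARM resource shape but are simplified; every name,
# address, and number is illustrative.
action_group = {
    "name": "Contoso-Ops-Notifications",
    "shortName": "ContosoOps",  # appears in SMS and email notifications
    "emailReceivers": [
        {"name": "OpsDL", "emailAddress": "ops-dl@contoso.com"},
    ],
    "smsReceivers": [
        {"name": "OnCall", "countryCode": "1", "phoneNumber": "5551230000"},
    ],
}

# The short name is limited to 12 characters in the portal.
assert len(action_group["shortName"]) <= 12
print(action_group["shortName"])
```

Keeping the short name brief matters because, as shown later in Figure 3-11, it is what recipients see in the SMS message.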

To finish the alert creation, you just need to add an Alert Rule Name and a brief Description and then choose the Severity of the alert from the drop-down menu. The severity should represent the level of criticality that you want to assign for this rule. In this case, the Contoso administrator understands that when this threshold is reached, an important (not critical) alert should be raised, which, in this case, could be represented by Sev 2, as shown in Figure 3-9.

This is a screenshot of the Alert Rule Details section on the Create Alert Rule page, where you can configure the Alert Rule Name, Description, and Severity. The Enable Alert Rule Upon Create check box is selected.

Figure 3-9 Configuring the alert rule details

Ideally, you should enable this rule upon creation, which is why the Enable Alert Rule Upon Creation check box is selected by default. To commit all the changes, click the Create Alert Rule button.

Once you finish creating the rule, you should receive an email advising you that you were added to the action group. A sample of this email is shown in Figure 3-10.

This is a screenshot of the email notification generated by Azure Monitor with the details about the action group.

Figure 3-10 Email notification generated by Azure Monitor

You should also receive the SMS message. Notice that the short name that you used appears in the message, as shown in Figure 3-11.

This is a screenshot of the SMS notification generated by Azure Monitor with the use of the short name that was configured in the action group.

Figure 3-11 SMS notification generated by Azure Monitor

Now that you have created an alert based on a metric, the question is, “What if I need to change the alert rule?” If you want to be able both to see and change alerts, you can use the Alerts dashboard. Follow the steps below to access this dashboard.

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type alert, and under Services, click Alerts.

  3. Click the Manage Alert Rules button, and the Rules page appears, as shown in Figure 3-12.

    This is a screenshot of the Rules page where you can create new alert rules or change existing rules.

Figure 3-12 Alert rules management page

  4. The alert rule that you created appears in the list. To edit the rule, you just need to click it. If you need to create a new alert rule, click the New Alert Rule button. Both steps will lead you to the Create Alert Rule page, which was previously shown in Figure 3-6.

Once an alert is fired, the state of the alert is set to New, which means the condition was detected but hasn’t been reviewed. Keep in mind that the Alert State is different from and independent of the Monitor Condition. While the Alert State is set by the user, the Monitor Condition is set automatically by the system. When an alert fires, the alert’s Monitor Condition is set to Fired. When the underlying condition that caused the alert to fire clears (for example, if your condition was to alert when CPU utilization reaches 80 percent, and utilization then drops to 50 percent), the Monitor Condition is set to Resolved. You can see this information in the email (assuming you configured the rule to send one), as shown in Figure 3-13.
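
The distinction can be summarized with a small sketch, using a hypothetical Alert class; only the state and condition values mirror the ones described above.

```python
# Sketch: Alert State is user-managed; Monitor Condition is system-managed.
# The Alert class here is hypothetical, used only to show the two fields
# moving independently.
class Alert:
    def __init__(self):
        self.state = "New"                # user-set: New / Acknowledged / Closed
        self.monitor_condition = "Fired"  # system-set: Fired / Resolved

    def acknowledge(self):
        self.state = "Acknowledged"

    def metric_recovered(self):
        # e.g., CPU utilization dropped from 80 percent back to 50 percent
        self.monitor_condition = "Resolved"

alert = Alert()
alert.metric_recovered()
# The system resolved the condition, but nobody has reviewed the alert yet.
print(alert.state, alert.monitor_condition)
```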

This is a screenshot of the sample email that is received when an alert is resolved.

Figure 3-13 Email notification stating that an alert was resolved

Configure diagnostic logging and log retention

In Azure, each resource requires its own diagnostic settings. In these settings, you define the categories of logs and metric data that should be collected, as well as the destinations to which they are sent, which can include a Log Analytics workspace, Event Hubs, and Azure Storage.

It is important to mention that each resource can have up to five diagnostic settings, and each setting can send to one destination of each type. This means that if a scenario requires sending logs to two different storage accounts, for example, you will need two diagnostic settings. Follow these steps to configure the diagnostic settings:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type monitor and under Services, click Monitor. The Monitor | Overview page appears.

  3. In the left navigation pane, under Settings, click Diagnostics Settings; the Monitor | Diagnostic settings page appears, as shown in Figure 3-14.

    This is a screenshot of the Diagnostic Settings page in Azure Monitor, where all resources that can have diagnostic settings are shown.

    Figure 3-14 Diagnostics settings page in Azure Monitor

  4. As you can see, all resources that can have diagnostic settings appear in this list. For this example, click the Front Door resource that was created in the previous chapter.

  5. Click the Add Diagnostic Setting option; the Diagnostics Settings page appears, as shown in Figure 3-15.

    This is a screenshot of the Diagnostic Settings for a Front Door resource. At the right under Destination Details, are three options: Send To Log Analytics, Archive To A Storage Account, and Stream to An Event Hub.

    Figure 3-15 Diagnostic settings for a Front Door resource

  6. In the Diagnostic Setting Name field, type a descriptive name for this setting.

  7. For this specific resource, you can select two categories of data: logs (such as the Front Door access log and WAF log) and metrics, choosing only the ones that you need for your scenario. You also select the destination, which can be a Log Analytics workspace, a storage account, or an Event Hub.

  8. In this case, the Contoso Administrator needs to be able to easily query Front Door access logs and WAF logs using a comprehensive query language. To meet this requirement, you need to select Log Analytics, which utilizes Kusto Query Language (KQL) to perform queries.

  9. When you select the Send To Log Analytics option, you will see the option to select the subscription and the Log Analytics workspace that you want to use (assuming you have one). Make a selection and click the Save button.

  10. After saving, the Save button is no longer available, which indicates that the changes have been committed.

While the previous sample configuration describes the steps to configure Log Analytics workspace as the diagnostic settings destination, the overall settings can vary according to the destination. For example, if you select storage account, the options shown in Figure 3-16 will appear.

This is a screenshot of the storage account settings that can be customized during the Diagnostic Settings configuration of a Front Door resource.

Figure 3-16 Storage account Diagnostic Settings

Notice that when configuring a storage account as your destination, you can customize the retention policy for each log. In a scenario where the requirement is to store the Front Door access logs for 50 days and the WAF logs for 40 days, the best destination for this setting is the storage account because it allows this type of granular configuration.
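
The 50-day/40-day scenario could be expressed as a setting like the sketch below. The category names follow the Front Door log categories, but the overall structure is simplified and the storage account ID is a placeholder.

```python
# Sketch: per-category retention when archiving Front Door logs to a
# storage account. Category names follow the Front Door log categories;
# the structure is simplified and the resource ID is a placeholder.
diagnostic_setting = {
    "name": "frontdoor-to-storage",
    "storageAccountId": "<storage-account-resource-id>",  # placeholder
    "logs": [
        {"category": "FrontdoorAccessLog",
         "enabled": True, "retentionPolicy": {"enabled": True, "days": 50}},
        {"category": "FrontdoorWebApplicationFirewallLog",
         "enabled": True, "retentionPolicy": {"enabled": True, "days": 40}},
    ],
}

retention = {log["category"]: log["retentionPolicy"]["days"]
             for log in diagnostic_setting["logs"]}
print(retention)
```

This per-category retention is exactly the granularity that the Log Analytics and Event Hub destinations do not offer in the same way, which is why storage is the best fit for the scenario.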

Consider selecting Event Hub as the destination when you need to stream the data to another platform. For example, you might do this if you need to send the Front Door access logs (or those of any other Azure resource) to a third-party security information and event management (SIEM) solution, such as Splunk. In this case, Event Hub is the best option because it allows the logs to be easily streamed to the SIEM solution.

Monitoring security logs by using Azure Monitor

Because each Azure resource can have different sets of logs and configurations, you need to ensure that you are collecting all logs that affect your security monitoring. For Platform as a Service (PaaS) services such as Azure Key Vault, you just need to configure the diagnostic settings to send data to the target location (Log Analytics workspace, storage account, or Event Hub) where the logs will be stored. For Infrastructure as a Service (IaaS) VMs, more steps are needed because you want to ensure that you are collecting the relevant security logs from the operating system itself.

Data plane logs are the ones that give you the most information about security-related events in IaaS VMs. Assuming that you already have a Log Analytics workspace that will store this data, you need to take two actions to configure Azure Monitor to ingest security logs from VMs: first, enable the Log Analytics VM extension; then, collect security events from the operating system. Once the data is collected, you can visualize it in the Log Analytics workspace and perform queries using KQL. Follow these steps to configure this data collection:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type log analytics, and under Services, click Log Analytics Workspaces.

  3. On the Log Analytics Workspaces page, click the workspace in which you want to store the security logs.

  4. In the left navigation pane of the workspace page, under Workspace Data Sources, click Virtual Machines.

  5. Click the virtual machine that you want to connect to this workspace. Notice that the Log Analytics Connection status appears as Not Connected, as shown for the AZ500VM3 in Figure 3-17.

    This is a screenshot of the virtual machines page in the log analytics workspace properties and the current status of those VMs.

    Figure 3-17 Virtual Machines that are available in the workspace

  6. On the VM’s page, click the Connect button, as shown in Figure 3-18.

    This is a screenshot of the virtual machine’s connection details with the option to Connect to the workspace.

    Figure 3-18 Connecting a VM to a workspace

  7. At this point, the Log Analytics agent will be installed and configured on this machine. This process takes a few minutes, during which time the status shows as Connecting. You can close this page, and the process will continue in the background.

  8. After the agent is installed, the status will change to This Workspace.

  9. In the left navigation pane of the main workspace page, under Settings, click Advanced Settings.

  10. On the Advanced Settings page, click Data > Windows Event Logs, as shown in Figure 3-19.

    This is a screenshot of the Window Event Logs selection on the Advanced Settings page.

    Figure 3-19 Configuring the data source for ingestion

  11. In the Collect Events From The Following Event Logs field, type System and select System from the drop-down menu. Click the plus sign (+) to add this log. Leave the default options selected. If you have specific security events that you want to collect, type security and select the appropriate events.

  12. Click the Save button.

  13. Click OK in the pop-up window and close this page.

Azure Monitor also has solutions that can enhance data collection for different scenarios, which can be extremely helpful for security monitoring. You can also leverage an Azure Resource Manager (ARM) template to deploy the agent at scale; when doing so, you will need two parameters: the workspace ID and the workspace key.
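
A sketch of those two parameters in context is shown below. The publisher and type values follow the Windows Log Analytics agent extension pattern, but treat the exact property names as an assumption; the ID and key values are placeholders.

```python
# Sketch: the workspace ID / workspace key pair used when deploying the
# Log Analytics agent at scale. Property names follow the common pattern
# for the Windows agent extension, but treat them as an assumption;
# the ID and key values are placeholders.
import json

workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder
workspace_key = "<primary-key-from-the-workspace>"     # placeholder, keep secret

extension_properties = {
    "publisher": "Microsoft.EnterpriseCloud.Monitoring",
    "type": "MicrosoftMonitoringAgent",
    "settings": {"workspaceId": workspace_id},            # public setting
    "protectedSettings": {"workspaceKey": workspace_key}, # encrypted at deploy time
}

print(json.dumps(extension_properties["settings"]))
```

Note the split: the workspace ID travels in the public settings, while the workspace key belongs in protected settings so it is not exposed in the template output.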

Security and Audit solution

Monitoring solutions leverage services in Azure to provide additional insight into the operation of an application or service. These solutions collect log data and provide queries and views to analyze collected data. Solutions require a Log Analytics workspace to store data collected by the solution and to host its log searches and views.

If you add the Security And Audit solution to your workspace, you will automatically be able to collect Windows security events that are configured according to audit policy best practices. This allows you to search for specific events in your workspace. Follow these steps to add the Security And Audit solution to your workspace:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type solutions, and under Services, click Solutions.

  3. On the Solutions page, click the Add button.

  4. In the Search The Marketplace field, type security and audit and press Enter.

  5. In the search results, click the Security And Audit tile.

  6. Click the Create button.

  7. Select the workspace to which you want to add this solution and click the Create button.

After the solution is added, you can see it in the Log Analytics workspace by choosing Solutions under General in the left navigation pane. See Figure 3-20.

This is a screenshot of the Solutions option in the workspace’s properties showing that the Security And Audit solution (it appears as Security) was added.

Figure 3-20 Log Analytics workspace where the new solution is shown

Searching security events in Log Analytics workspace

Now that the security events are stored in the workspace, you can start searching for events that might indicate suspicious activity. To access the logs in the workspace, go to the workspace’s main page, and in the left navigation pane, click the Logs option under General; the New Query page appears, as shown in Figure 3-21.

This is a screenshot of the New Query page with the option for you to type your KQL query.

Figure 3-21 New Query page

The scenarios that follow provide more examples of how these queries can be useful when investigating security events related to authentication:

  • Contoso’s security admin is investigating potential lateral movement in Contoso’s network and knows that one way to perform lateral movement is through account enumeration. The admin would like to know all computers that were targeted by this enumeration. The query used to accomplish this task is shown here:

    SecurityEvent | where EventID == 4799

    The EventID 4799 is triggered every time a security-enabled local group membership is enumerated. The query result will list all computers that have this event.

  • Fabrikam’s security admin is investigating the use of unauthorized software in the environment and wants to know when this software was launched. The admin is not sure of the exact command line for the software but knows that it starts with cm. The query used to accomplish this task is shown here:

    SecurityEvent | where EventID == 4688 and CommandLine startswith "cm"

    The EventID 4688 is triggered every time a new process is created, and the CommandLine clause evaluates the event’s CommandLine field to verify whether it starts with the string cm.

  • Contoso’s security admin received a request to report all successful anonymous login attempts coming from the network. The query used to accomplish this task is

    SecurityEvent | where EventID == 4624 and Account contains "anonymous logon" and LogonType == 3

    The EventID 4624 is triggered every time a successful logon occurs; the Account clause filters for anonymous logons only, and LogonType == 3 restricts the results to network logon attempts.
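
To make the logic of these three queries easy to verify, the sketch below emulates them locally over a handful of made-up SecurityEvent-style rows; only the field names mirror the real table.

```python
# Sketch: the three KQL queries above, emulated locally over made-up
# SecurityEvent-style rows. Only the field names mirror the real table.
rows = [
    {"EventID": 4799, "Computer": "SRV01", "Account": "CONTOSO\\admin",
     "CommandLine": "", "LogonType": 0},
    {"EventID": 4688, "Computer": "SRV02", "Account": "CONTOSO\\user1",
     "CommandLine": "cmstp.exe /s profile.inf", "LogonType": 0},
    {"EventID": 4624, "Computer": "SRV03", "Account": "ANONYMOUS LOGON",
     "CommandLine": "", "LogonType": 3},
]

# EventID == 4799: computers where local group membership was enumerated
enumerated = [r["Computer"] for r in rows if r["EventID"] == 4799]

# EventID == 4688 with a command line starting with "cm"
launched = [r for r in rows
            if r["EventID"] == 4688
            and r["CommandLine"].lower().startswith("cm")]

# EventID == 4624, anonymous logon, network logon type (3)
anon_net = [r for r in rows
            if r["EventID"] == 4624
            and "anonymous logon" in r["Account"].lower()
            and r["LogonType"] == 3]

print(enumerated, len(launched), len(anon_net))
```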

Skill 3.2: Monitor security by using Azure Security Center

In large organizations where it’s necessary to have a centralized standard across multiple subscriptions, it is common to use Azure Management Groups to aggregate all subscriptions that share a common set of policies. Security Center gives you a centralized view across multiple subscriptions, providing better visibility into your cloud security posture. This section of the chapter covers the skills necessary to configure security policies in Security Center according to the Exam AZ-500 outline.

Evaluate vulnerability scans from Azure Security Center

Vulnerability assessment is a key component of any security posture management strategy. The Security Center Standard tier provides a built-in vulnerability assessment capability for your Azure VMs based on an industry-leading vulnerability management solution, Qualys. This integration has no additional cost as long as Security Center is using the Standard tier pricing model. If you are using the Free tier, you will still receive a recommendation to install a vulnerability assessment solution on your machines, but this recommendation (which does not suggest the built-in vulnerability assessment) requires you to have a license for your own vulnerability assessment solution, which can be Qualys or Rapid7.

Assuming you have the Standard tier enabled, Security Center will identify VMs that don’t have a vulnerability assessment solution installed and will trigger a security recommendation suggesting that the built-in Qualys extension be installed. This recommendation is similar to the example shown in Figure 3-22.

This is a screenshot of the recommendation to enable the built-in vulnerability assessment solution on virtual machines powered by Qualys.

Figure 3-22 Recommendation to install the built-in Qualys extension

To install this vulnerability assessment solution, you need write permissions on the VM to which you are deploying the extension. Assuming that you have the right level of privilege, you can select the VM from the list shown on the Unhealthy Resources tab and click the Remediate button. This recommendation has the Quick Fix capability, which means that you can trigger the extension installation directly from this dashboard. Like any extension in Azure, the Qualys extension runs on top of the Azure Virtual Machine agent, which means it runs as Local Host on Windows systems and as Root on Linux systems.

The VMs that already have the agent installed will be listed under the Healthy Resources tab. When Security Center cannot deploy the vulnerability scanner extension to a VM, it will list that VM on the Not Applicable Resources tab. VMs might appear on this tab if they are part of a subscription that is using the Free pricing tier or if the VM image is missing the ImageReference class (which is the case for custom images and VMs restored from backup). Another reason for a VM to be listed on this tab is that the VM is not running one of the supported operating systems:

  • Microsoft Windows (all versions)

  • Red Hat Enterprise Linux (versions 5.4+, 6, 7.0 through 7.7, 8)

  • Red Hat CentOS (versions 5.4+, 6, 7.0 through 7.7)

  • Red Hat Fedora (versions 22 through 25)

  • SUSE Linux Enterprise Server (versions 11, 12, 15)

  • SUSE OpenSUSE (versions 12, 13)

  • SUSE Leap (version 42.1)

  • Oracle Enterprise Linux (versions 5.11, 6, 7.0 through 7.5)

  • Debian (versions 7.x through 9.x)

  • Ubuntu (versions 12.04 LTS, 14.04 LTS, 15.x, 16.04 LTS, 18.04 LTS)

If you are deploying this built-in vulnerability assessment on a server that has restricted access to the Internet, it is important to know that during the setup process, a connectivity check is performed to ensure that the VM can communicate with Qualys’s cloud service at the following two IP addresses: 64.39.104.113 and 154.59.121.74.
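
A quick eligibility check against the supported-OS list could be sketched as follows; the version sets are abbreviated and illustrative, so consult the current documentation for the authoritative list.

```python
# Sketch: checking a VM's OS against the supported-OS list above.
# The version sets below are abbreviated and illustrative only.
SUPPORTED = {
    "windows": None,  # all versions supported
    "ubuntu": {"12.04", "14.04", "16.04", "18.04"},
    "debian": {"7", "8", "9"},
}

def scanner_supported(os_name, version):
    versions = SUPPORTED.get(os_name.lower(), set())
    if versions is None:  # e.g., Windows: all versions supported
        return True
    return version in versions

print(scanner_supported("Windows", "2019"), scanner_supported("Ubuntu", "18.04"))
```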

Once the extension is installed on the target VM, the agent will perform the vulnerability assessment of the VM through a scan process. The scan result is surfaced in another security recommendation, called Remediate Vulnerabilities Found On Your Virtual Machines (Powered By Qualys). A sample of this recommendation is shown in Figure 3-23.

This is a screenshot of the remediate vulnerabilities found on your virtual machines (powered by Qualys) security recommendation.

Figure 3-23 List of vulnerabilities found during the scan

On this page, you can see the list of findings in the Security Checks section. If you click a specific security check, Security Center shows another blade with the details of that vulnerability, including the Impact; the Common Vulnerabilities and Exposures (CVE) IDs (located under the General Information section); the Description of the type of threat; the Remediation steps; Additional References for the security check; and the list of Affected Resources. See Figure 3-24.

This is a screenshot of the vulnerability number 100387, which is related to Internet Explorer. This blade contains information about this vulnerability and how to remediate it.

Figure 3-24 Vulnerability details blade

The deployment of these recommended remediations is done out-of-band; in other words, you will deploy them outside Security Center. For example, if a security check requires you to install a security update on your target computer, you will need to deploy that security update using another product, such as Update Management. Some other remediations will be more about security best practices. For example, the security check 105098 (Users Without Password Expiration) recommends that you create a password policy that has an expiration date. This is usually deployed using Group Policy in Active Directory.

Vulnerability scanning for SQL

Another category of vulnerability scanning that is natively available in Security Center is the vulnerability assessment for SQL Servers. This capability is part of the integration of Security Center with the SQL Advanced Data Security (ADS) feature. You can enable this feature in the Security Center Pricing Tier settings, which will enable ADS for all databases in the subscription, or you can enable it only on the databases for which you want this capability.

When you enable ADS, threat protection becomes available for SQL. Threat protection for Azure SQL Database detects anomalous activities that indicate unusual and potentially harmful attempts to access or exploit databases. For example, this feature may generate an alert indicating a possible vulnerability to SQL injection attacks. Usually, there are two possible reasons for a faulty statement: a defect in the application code constructed the faulty SQL statement, or the application code or stored procedures didn’t sanitize user input.
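
The sanitization issue is easiest to see in code. The following Python sketch (using the standard sqlite3 module purely for illustration; any SQL database behaves similarly) contrasts a vulnerable string-concatenated statement with a parameterized one, which is the typical remediation for this class of alert:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

malicious_input = "alice' OR '1'='1"

# VULNERABLE: user input concatenated into the statement.
# The injected OR clause makes the predicate always true.
unsafe_sql = f"SELECT name FROM users WHERE name = '{malicious_input}'"
print([row[0] for row in conn.execute(unsafe_sql)])  # returns every row

# SAFE: parameterized query; the driver treats the entire input
# as a single string literal, so the injection has no effect.
safe_sql = "SELECT name FROM users WHERE name = ?"
print(list(conn.execute(safe_sql, (malicious_input,))))  # returns []
```

The same principle applies to stored procedures: build statements from parameters, never from concatenated user input.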

When Security Center identifies that there are databases that don’t have this feature enabled, it will trigger a security recommendation, as shown in Figure 3-25.

This is a screenshot of a recommendation to enable ADS that is triggered by Security Center.

Figure 3-25 Security recommendation to enable ADS

After this feature is enabled, Security Center will also indicate that you need to enable the vulnerability assessment for your SQL servers (see Figure 3-26).

This is a screenshot of the recommendation to enable vulnerability assessment in SQL that is triggered by Security Center.

Figure 3-26 Security recommendation to enable vulnerability assessment in SQL

Configure Just-In-Time VM access by using Azure Security Center

When the scenario requires that you reduce the attack surface of IaaS VMs, you should ensure that you are leveraging a Security Center Standard tier capability called Just-In-Time (JIT) VM access. The intent of this capability is to ensure that management ports are not exposed to the Internet all the time. Because the majority of the attacks against IaaS VMs will try to utilize techniques such as RDP or SSH brute force, VMs that have those management ports open will be more susceptible to being compromised.

When you enable JIT VM access, Security Center hardens inbound traffic to your Azure VMs by creating a Network Security Group (NSG) rule that locks down inbound traffic on the selected ports. When access is requested, Security Center configures the NSGs and Azure Firewall to allow inbound traffic to the selected ports from the requested source IP addresses or ranges, for the amount of time that was specified. After the time has expired, Security Center restores the NSGs to their previous states.
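
Behind the portal experience, a JIT policy is just a resource describing which ports can be opened, from where, and for how long. The following Python sketch builds a request body shaped like the Microsoft.Security jitNetworkAccessPolicies API; treat it as illustrative (field names can vary across API versions, and the VM resource ID is a hypothetical placeholder):

```python
import json

def jit_policy_body(vm_id: str, ports: list) -> dict:
    """Build an illustrative JIT network access policy request body."""
    return {
        "kind": "Basic",
        "properties": {
            "virtualMachines": [
                {
                    "id": vm_id,
                    "ports": [
                        {
                            "number": port,
                            "protocol": "*",
                            # Which sources may request access; "*" = any.
                            "allowedSourceAddressPrefix": "*",
                            # ISO 8601 duration: a granted request keeps
                            # the port open for at most 3 hours.
                            "maxRequestAccessDuration": "PT3H",
                        }
                        for port in ports
                    ],
                }
            ]
        },
    }

# Hypothetical VM resource ID, RDP (3389) only:
body = jit_policy_body(
    "/subscriptions/<subscription-id>/resourceGroups/rg1/providers/"
    "Microsoft.Compute/virtualMachines/vm1",
    [3389],
)
print(json.dumps(body, indent=2))
```

Notice how the structure mirrors the portal options described later in this section: port, protocol, allowed source addresses, and maximum request time.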

To configure or edit a JIT VM access policy for a VM, you will need write access for the scope of the subscription or resource group for the following objects:

  • Microsoft.Security/locations/jitNetworkAccessPolicies/write

  • Microsoft.Compute/virtualMachines/write

The user who is requesting access to a VM configured with JIT will need read access on the scope of the subscription or resource group for the following objects:

  • Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action

  • Microsoft.Security/locations/jitNetworkAccessPolicies/*/read

  • Microsoft.Compute/virtualMachines/read

  • Microsoft.Network/networkInterfaces/*/read

If you want to allow a user to have read access to the JIT policy, you can assign the Security Reader role to the user. If you need a deeper level of customization, you can assign read access for the following objects:

  • Microsoft.Security/locations/jitNetworkAccessPolicies/read

  • Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action

  • Microsoft.Security/policies/read

  • Microsoft.Compute/virtualMachines/read

  • Microsoft.Network/*/read
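
One way to grant exactly this deeper level of customization is a custom RBAC role. The sketch below assembles a role definition in the JSON shape accepted by `az role definition create`; the Actions list is copied from the permission list above, while the role name and assignable scope are hypothetical placeholders:

```python
import json

# Custom role granting read access to JIT policies plus the ability to
# initiate a JIT access request. Name and scope are placeholders.
jit_reader_role = {
    "Name": "JIT Policy Reader (custom)",
    "Description": "Read access to JIT VM access policies and the "
                   "ability to request access.",
    "Actions": [
        "Microsoft.Security/locations/jitNetworkAccessPolicies/read",
        "Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action",
        "Microsoft.Security/policies/read",
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Network/*/read",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}

# The resulting JSON can be saved to a file and passed to the CLI, e.g.:
#   az role definition create --role-definition role.json
print(json.dumps(jit_reader_role, indent=2))
```

If the built-in Security Reader role already fits, prefer it; custom roles add maintenance overhead and should be reserved for cases where the built-in roles are too broad or too narrow.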

Follow these steps to configure JIT VM access in Security Center:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type security, and under Services, click Security Center.

  3. In the left navigation pane, in the Advanced Cloud Defense section, click Just In Time VM Access. The Security Center | Just In Time VM Access page appears, as shown in Figure 3-27.

    This is a screenshot of the JIT VM access main page, showing three tabs: Configured, Not Configured, and Unsupported.

    Figure 3-27 JIT VM Access main page

  4. In the example shown in Figure 3-27, there are no VMs configured (on the Configured tab). If you click the Not Configured tab, you should see all the VMs that can support this solution but that have not yet been configured. On the Unsupported tab, you will see all VMs that can’t use this feature, which include VMs that are missing an NSG, classic VMs, or VMs that are in a subscription that is using the free tier.

  5. Click the Not Configured tab, select the VM on which you want to enable JIT, and click the Enable JIT On 1 VM button. The JIT VM Access Configuration page appears, as shown in Figure 3-28.

    This is a screenshot of the JIT VM Access Configuration page, which shows the ports that are available for configuring JIT.

    Figure 3-28 Ports available to configure JIT

  6. You can select one of the default ports according to the protocol for which you want to allow access: 22 (SSH), 3389 (RDP), and 5985/5986 (WinRM). You can also click the Add button if you want to customize the port on which you want to allow inbound traffic. For this example, click 3389 and the Add Port Configuration blade appears, as shown in Figure 3-29.

    This is a screenshot of the Add Port Configuration blade, with Configure The Port, Protocol, and Allowed Source IPs options. Also, you can set the Max Request Time, which determines how long the port should be open.

    Figure 3-29 Port configuration

  7. On this blade, you can customize the Port as well as the Protocol type, the Allowed Source IP ranges that are allowed to access (which could be the request IP or a block of IP addresses), and the time range (Max Request Time) for which this rule will stay enabled. After finishing those configurations, click the OK button.

  8. If you are not using the other ports, you can select each of the unused ports, click the ellipsis at the end of each port, and select Delete.

  9. Click the Save button to commit the changes.

If you want to see the changes that JIT VM access made to your VM, open the VM’s properties and click Networking. The example shown in Figure 3-30 shows a new rule (the first rule in the list) that was created by JIT to deny access to those ports. Because this rule is managed by JIT, do not make any manual changes to it.

This is a screenshot of the Inbound Port Rules for the VM, with the addition of the Deny rule created by JIT VM Access.

Figure 3-30 Inbound port rules with the addition of the JIT VM access rule

Now that JIT is configured, let’s see how to request access to a VM with JIT enabled. Use the following steps to perform this action:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type security, and under Services, click Security Center.

  3. In the left navigation pane, in the Advanced Cloud Defense section, click Just In Time VM Access.

  4. On the Security Center | Just In Time VM Access page, under the Configured tab, select the VM for which you enabled JIT, and click the Request Access button, as shown in Figure 3-31.

    This is a screenshot of the Configured tab, which shows the VM selected and the Request Access option.

    Figure 3-31 Requesting access to a VM using JIT

  5. On the Request Access page, you have the option to select the Port that you want to access, the Allowed Source IP, and the Time Range (Hours), as shown in Figure 3-32.

    This is a screenshot of the Request Access page with the different options to customize the access to the VM.

    Figure 3-32 Customizing the access

  6. Select RDP, leave the other options with the default selection, and click the Open Ports button.

Now you can initiate an RDP session to this VM. When you do that, go back to Security Center and notice that the status of the VM changed to show that the connection is currently active and who initiated this session. See Figure 3-33.

This is a screenshot of the Configured tab, which shows the status update of the configured VM. This status shows that the connection is active and shows the last user who initiated the request.

Figure 3-33 VM status showing details about the connection

In a scenario where a VM has JIT enabled and is located in a subnet with a user-defined route that points to an Azure Firewall as a default gateway, you might experience problems accessing the VM using JIT. This issue happens because of asymmetric routing: the request comes in through the virtual machine’s public IP address, where JIT has opened access, but the response (return path) goes through the Azure Firewall, which drops the packet because there is no established connection. In scenarios like this, you need to move the resource to a subnet that doesn’t have a user-defined route.

Configure centralized policy management by using Azure Security Center

Security Center recommendations are derived from Azure Policy. By default, Security Center has an initiative called ASC Default, which is assigned to the subscription once you activate Security Center for the first time. The activation process happens in the background and is triggered when you visit the Security Center dashboard for the first time.

To recap some important concepts regarding Azure Policy and how these policies are correlated with Security Center, review the diagram shown in Figure 3-34.

This is a diagram that shows the correlation between Security Center default initiative and Azure Policy.

Figure 3-34 Correlation between Security Center initiative and Azure Policy

The ASC Default initiative has multiple policy definitions that can be accessed individually using Azure Policy. Policy definitions are used to compare the properties of Azure resources with business rules, which in this case are implemented in JSON. Each policy definition in Azure Policy has a single effect, also called policy effect. That effect determines what happens when the policy rule is evaluated to match. Security Center uses the following effects: Audit, AuditIfNotExists, and Disabled. This means that Security Center is not used for policy enforcement, but it is used for security monitoring and compliance visualization. Policy enforcement is covered in “Skill 3.4 Configure security policies,” later in this chapter.
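
The rule-plus-effect structure is easier to see in a concrete shape. The following Python sketch builds the policyRule portion of an illustrative definition; the resource and extension types shown are assumptions for the example, not an actual Security Center definition:

```python
import json

# Illustrative policyRule: audit VMs that are missing a given extension.
# Real Security Center definitions carry far more detail.
policy_rule = {
    "if": {
        # The business rule: this policy applies to virtual machines...
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines",
    },
    "then": {
        # ...and the effect decides what happens on a match. Security
        # Center uses Audit, AuditIfNotExists, or Disabled -- never Deny.
        "effect": "auditIfNotExists",
        "details": {
            "type": "Microsoft.Compute/virtualMachines/extensions",
            "existenceCondition": {
                "field": "Microsoft.Compute/virtualMachines/extensions/type",
                "equals": "IaaSAntimalware",  # assumed extension type
            },
        },
    },
}

# Turning a recommendation off amounts to switching the effect:
policy_rule["then"]["effect"] = "disabled"
print(json.dumps(policy_rule, indent=2))
```

Because the effects are audit-only, a "match" produces a recommendation and a compliance record rather than blocking the deployment.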

Security Center policies can be customized. For example, if your organization is using a third-party multi-factor authentication (MFA) solution instead of Azure MFA, you can disable the MFA recommendation in Security Center because that recommendation is based on a check to determine whether you are using Azure MFA. While it is recommended to keep the default settings for these policies, there will be scenarios in which you have to customize them and change the effect from AuditIfNotExists to Disabled.

Implementing centralized policy management

Let’s start by reviewing a fictitious scenario: Fabrikam has a single Azure tenant with multiple business units across the company. Each business unit has its own subscription and follows the policy standards that were established for its branch, which is based on Fabrikam’s geolocation across the globe. In this scenario, Fabrikam wants centralized policy management for all its business units according to the standards followed by each unit’s branch and country.

To accommodate the requirements of this scenario, you should create one management group to represent the branch office in each country and move the subscriptions of each business unit in that branch under this management group. Once you have this structure, you can assign the Security Center policy at the management group level. The result would look similar to the diagram shown in Figure 3-35.

This diagram shows an example of centralized management with management groups and subscriptions.

Figure 3-35 Centralized management structure

To make changes to the Security Center initiative, you need Security Admin role privileges. You can also make changes if you are the subscription owner. Both Contributor and Reader have access to all Azure Policy operations that require read access. Contributors can trigger resource remediation, but they can’t create definitions or assignments. In the multiple-business-unit scenario described previously, if you want to restrict users in each business unit to only viewing the policies (a read-only operation), you can assign them the Security Reader role.

Because security recommendations in Security Center are derived from Azure Policy, you might have a situation in which you need to customize the policy so that the default effect is Disabled. Consider a scenario in which Fabrikam is using an endpoint protection solution that is not supported by Security Center. Fabrikam keeps receiving the Install Endpoint Protection Solution On Virtual Machines security recommendation. Fabrikam understands that this recommendation is a false positive for its environment because an endpoint protection solution is installed. However, because that solution is not supported by Security Center, the recommendation keeps triggering. In this scenario, Fabrikam can change the default effect to Disabled. Use the following steps to configure this change in Security Center:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type security, and under Services, click Security Center.

  3. In Security Center’s main dashboard, under the Policy & Compliance section, click Security Policy.

  4. Click the subscription for which you want to change the policy.

  5. On the Security Policy page, click the View Effective Policy button, as shown in Figure 3-36.

    This is a screenshot of the Security Policy page, which shows the option to visualize the effective policy.

    Figure 3-36 Visualizing the security policy in Security Center

  6. On the next Security Policy page that appears, you will see all policies that are currently in use. This page is mostly for read-only purposes; if you need to make changes to the policy, click the [Preview]: Enable Monitoring In Azure Security Center link for the policy, as shown in Figure 3-37.

    This is a screenshot of the Security Policy page, which shows the [Preview]: Enable Monitoring In Azure Security Center link to access the policy.

    Figure 3-37 Accessing the default policy

  7. On the next page, click the Parameters tab, as shown in Figure 3-38.

    This is a screenshot of the parameters available on the Enable Monitoring In Azure Security Center page.

    Figure 3-38 Parameters of the default policy

  8. On this tab is a list of parameters for this initiative, which represents the Security Center recommendations. The goal, in this case, is to disable the missing endpoint protection recommendation. Click the Monitor Missing Endpoint Protection in Azure Security Center drop-down menu and select Disabled.

  9. Click the Review + Save button and then click the Save button to commit the changes.

Make sure to document the changes you made to the default Security Center initiative, especially regarding policies that have been disabled. Document the rationale for disabling each policy and who disabled it.

Configure compliance policies and evaluate for compliance by using Azure Security Center

While Security Center recommendations will cover security best practices for different workloads in Azure, there are some organizations that also need to be compliant with different industry standards. Security Center’s Standard tier helps simplify the process for meeting regulatory compliance requirements by using the Regulatory Compliance dashboard.

The Regulatory Compliance dashboard view can help focus your efforts on the gaps in compliance with a standard or regulation that is relevant for your organization. By default, Security Center provides support for the following regulatory standards: Azure CIS, PCI DSS 3.2, ISO 27001, and SOC TSP. Use the following steps to access the Regulatory Compliance dashboard:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type security, and under Services, click Security Center.

  3. In the Security Center main dashboard, under the Policy & Compliance section, click Regulatory Compliance; the Regulatory Compliance dashboard appears, as shown in Figure 3-39.

This is a screenshot of the Regulatory Compliance dashboard showing the four out-of-the-box industry standards (Azure CIS, PCI DSS 3.2, ISO 27001, and SOC TSP), each in its own tab.

Figure 3-39 Regulatory Compliance dashboard

The top part of the dashboard shows a brief summary of the assessment’s status and the individual compliance status of each regulatory standard. The second part of the dashboard contains four tabs. The default tab selection is Azure CIS 1.1; the others are PCI DSS 3.2, ISO 27001, and SOC TSP. You can navigate through the items to see which ones need attention (shown in red) and which ones successfully passed the assessment (shown in green). Also, notice that some controls appear as unavailable; these controls don’t currently have Security Center assessments associated with them, so Security Center cannot evaluate them.

To improve your compliance status, you need to follow the same approach that you used for the security recommendations; in other words, you need to remediate the assessment to comply with the requirements. Assessments are updated approximately every 12 hours, so if you remediate an assessment, you will only see the effect on your compliance data after the next assessment run.
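
Conceptually, the per-standard figure on the dashboard reflects how many applicable assessments passed. The following Python sketch is an illustration of that idea only (the control names and statuses are made up, and this is not Security Center's exact scoring formula):

```python
def compliance_percentage(assessments: dict) -> float:
    """Illustrative: share of applicable assessments that passed.

    'unavailable' controls (those with no Security Center assessment
    behind them) are excluded, mirroring how the dashboard skips them.
    """
    applicable = {k: v for k, v in assessments.items() if v != "unavailable"}
    if not applicable:
        return 0.0
    passed = sum(1 for v in applicable.values() if v == "passed")
    return 100.0 * passed / len(applicable)

# Hypothetical snapshot of one standard's controls:
sample = {
    "1.1 Restrict RDP access": "passed",
    "1.2 Restrict SSH access": "failed",
    "1.3 Enable MFA": "passed",
    "1.4 (no assessment available)": "unavailable",
}
print(compliance_percentage(sample))  # roughly 66.7
```

Remediating the failed control and waiting for the next assessment run is what moves this number up.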

In some scenarios, the organization will need to comply with different industry standards. Microsoft is constantly reviewing new standards and making them available in the Azure platform, which means that in addition to the industry standards that come out of the box in Security Center, you can add NIST SP 800-53 R4, SWIFT CSP CSCF-v2020, UK Official and UK NHS, Canada Federal PBMM, and Azure CIS 1.1.0 (new), an updated version of the Azure CIS standard.

To add a new compliance standard, you need to be the subscription owner or policy contributor. Assuming you have the right privilege, you can just click the Manage Compliance Policies button in the Regulatory Compliance dashboard, and then on the Security policy page, click the subscription to which you want to add the standard. In the resulting page, click the Add More Standards button, as shown in Figure 3-40.

This is a screenshot of the Industry & Regulatory Standards page that allows you to add new industry standards to the Regulatory Compliance dashboard.

Figure 3-40 Adding more standards to the Regulatory Compliance dashboard

After you click the Add More Standards button, you will have the option to click the Add button for each new industry standard available on the list, as shown in Figure 3-41.

This is a screenshot of the Add Regulatory Compliance Standards page with the list of available industry standards that can be added to the dashboard.

Figure 3-41 Available regulatory compliance standards

Once you add the new standard, a new tab will be created in the main Regulatory Compliance dashboard. There are some scenarios in which you might need to send a summary report of your regulatory compliance status to someone. If you need to do this, you can use the Download Report button on the main Regulatory Compliance dashboard.

Skill 3.3: Monitor security by using Azure Sentinel

Azure Sentinel is a Microsoft Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solution. You can use this solution to ingest data from different data sources, create custom alerts, monitor incidents, and respond to alerts. This section of the chapter covers the skills necessary to monitor security by using Azure Sentinel according to the Exam AZ-500 outline.

Introduction to Azure Sentinel’s architecture

To better understand Azure Sentinel’s architecture, you first need to understand the different components of the solution. The major Azure Sentinel components are diagrammed in Figure 3-42.

This diagram shows the major Azure Sentinel components. The diagram is divided into two parts: one diagrams Threat Management, and the other diagrams Configuration. Each part has its own components, such as the Playbook, which leverages external resources.

Figure 3-42 Major components of Azure Sentinel

The components shown in Figure 3-42 are presented in more detail in the following list:

  • Dashboards Built-in dashboards provide data visualization for your connected data sources, which enables you to deep dive into the events generated in those services.

  • Cases An aggregation of all the relevant evidence for a specific investigation. It can contain one or multiple alerts, which are based on the analytics that you define.

  • Hunting A powerful tool for investigators and security analysts who need to proactively look for security threats. The search capability is powered by Kusto Query Language (KQL).

  • Notebooks By integrating with Jupyter notebooks, Azure Sentinel extends the scope of what you can do with the data that was collected. It combines full programmability with a collection of libraries for machine learning, visualization, and data analysis.

  • Data Connectors Built-in connectors are available to facilitate data ingestion from Microsoft and partner solutions.

  • Playbook A collection of procedures that can be automatically executed upon an alert that is triggered by Azure Sentinel. Playbooks leverage Azure Logic Apps, which help you automate and orchestrate tasks and workflows.

  • Analytics Enables you to create custom alerts using Kusto Query Language (KQL).

  • Community The Azure Sentinel Community page is located on GitHub (https://aka.ms/ASICommunity), and it contains detections based on different types of data sources that you can leverage to create alerts and respond to threats in your environment. It also contains sample hunting queries, playbooks, and other artifacts.

  • Workspace Essentially, a Log Analytics workspace is a container that includes data and configuration information. Azure Sentinel uses this container to store the data that you collect from the different data sources.

The sections that follow assume that you already have a workspace configured to use with Azure Sentinel.

Configure Data Sources to Azure Sentinel

The first step in configuring a SIEM solution such as Azure Sentinel is ensuring that the data relevant to your requirements is ingested. For example, if you need to collect data related to conditional access policies and legacy authentication-related details using sign-in logs, you need to configure the Azure Active Directory (Azure AD) connector. Azure Sentinel comes with a variety of connectors that enable you to start ingesting data from those data sources with just a couple of clicks. Keep in mind that you need to have those services enabled to start ingesting data using these connectors. Use Table 3-1 to identify some use-case scenarios and to determine which connector is available for each scenario:

Table 3-1 Azure Sentinel connectors and use-case scenarios

Scenario: You need to gain insights about app usage; conditional access policies; legacy authentication-related details; and activities like user, group, role, and app management.
Connector: Azure AD

Scenario: You need to get details of operations such as file downloads, access requests sent, changes to group events, mailbox settings changes, and details of the user who performed the actions.
Connector: Office 365

Scenario: You need to gain visibility into your cloud apps; get sophisticated analytics to identify and combat cyberthreats; and control how your data travels.
Connector: Microsoft Cloud App Security

Scenario: You need to gain insights into subscription-level events that occur in Azure, including events from Azure Resource Manager operational data; service health events; write operations taken on the resources in your subscription; and the status of activities performed in Azure.
Connector: Azure Activity

Scenario: You need to gain visibility about users at risk, risk events, and vulnerabilities.
Connector: Azure AD Identity Protection

Scenario: You need to gain insights into your security state across hybrid cloud workloads; reduce your exposure to attacks; and respond to detected threats quickly.
Connector: Azure Security Center

The connectors shown in this table are considered service-to-service integrations. There are also connectors to external solutions that use APIs, and others that perform real-time log streaming using the Syslog protocol via an agent. Following are some examples of external (non-Microsoft) connectors that use agents:

  • Check Point

  • Cisco ASA

  • DLP solutions

  • DNS machines (agent installed directly on the DNS machine)

  • ExtraHop Reveal(x)

  • F5

  • Forcepoint products

  • Fortinet

  • Linux servers

  • Palo Alto Networks

  • One Identity Safeguard

  • Other CEF appliances

  • Other Syslog appliances

  • Trend Micro Deep Security

  • Zscaler
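
Many of the agent-based appliance connectors above stream their events in the Common Event Format (CEF) over Syslog. As a rough illustration of what one of those log lines looks like, the following Python sketch formats an event as a CEF line (the vendor, product, and field values are made-up examples):

```python
def to_cef(vendor, product, version, signature_id, name, severity, extensions):
    """Format an event as a CEF version 0 line.

    Header fields are pipe-delimited, so literal backslashes and pipes
    in the header must be escaped. Extensions are space-separated
    key=value pairs appended after the header.
    """
    def esc_header(value):
        return str(value).replace("\\", "\\\\").replace("|", "\\|")

    header = "|".join(
        esc_header(field) for field in
        (vendor, product, version, signature_id, name, severity)
    )
    ext = " ".join(f"{k}={v}" for k, v in extensions.items())
    return f"CEF:0|{header}|{ext}"

# Hypothetical firewall "Accept" event:
line = to_cef("Check Point", "VPN-1", "R80", "100", "Accept", "3",
              {"src": "10.0.0.4", "dst": "10.0.0.5", "spt": "443"})
print(line)
# CEF:0|Check Point|VPN-1|R80|100|Accept|3|src=10.0.0.4 dst=10.0.0.5 spt=443
```

The Log Analytics agent parses lines like this one and lands them in the CommonSecurityLog table, which is what the CEF connectors query.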

To configure data connectors, you will need the right level of privilege. The necessary roles for each connector are determined per connector type. For example, to configure the Azure AD connector, you will need the following permissions:

  • Workspace Read and write permissions are required.

  • Diagnostic Settings Read and write permissions to Azure AD diagnostic settings are required.

  • Tenant Permissions The Global Administrator or Security Administrator role on the workspace’s tenant is required.

While this connector has a fairly long list of permission requirements, some others are simpler. For example, to configure the Azure Activity connector, you just need read and write permissions on the workspace. The requirements for each connector are available on the connector’s page in Azure Sentinel.

For this initial scenario, let’s say that Fabrikam wants to ensure that all events from Azure Resource Manager operational data; service health events; write operations taken on Fabrikam’s subscription resources; and the status of activities performed in Azure are ingested in Azure Sentinel. To accomplish that, you need to configure the Azure Activity connector. Follow these steps:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type sentinel, and under Services, click Azure Sentinel.

  3. On the Azure Sentinel workspaces page, click the workspace that you want to use with Azure Sentinel; the Azure Sentinel | Overview page appears (see Figure 3-43).

    This is a screenshot of Azure Sentinel Overview page with Events And Alerts Over Time, Recent Incidents, Potential Malicious Events, and Data Source Anomalies graphs.

    Figure 3-43 Azure Sentinel Overview page

  4. In the left navigation pane, under Configuration, click Data Connectors.

  5. On the Data Connectors page, click Azure Activity.

  6. On the Azure Activity blade, click the Open Connector Page button, as shown in Figure 3-44.

    This is a screenshot of the Azure Activity blade with the connector’s status and option to customize the configuration.

    Figure 3-44 Azure Activity blade

  7. On the Azure Activity page, click the Configure Azure Activity Logs link, as shown in Figure 3-45.

    This is a screenshot of the Azure Activity page showing the list of prerequisites for the configuration and Configure Azure Activity Logs option.

    Figure 3-45 Azure Activity data connector configuration

  8. On the Azure Activity Log blade, click the subscription that you want to connect, and in the Subscription blade that appears, click the Connect button, as shown in Figure 3-46.

    This is a screenshot of the Subscription blade with the Connect button available to connect Azure Activity logs from this subscription to Azure Sentinel.

    Figure 3-46 Subscription blade

  9. Once it finishes connecting, click the Refresh button to update the status; you will see that the Disconnect button is now available.

  10. Close the Subscription blade, close the Azure Activity Log blade, and close the Azure Activity connector page.

  11. When you return to the Azure Sentinel | Data Connectors page, make sure to click the Refresh button to update the Azure Activity data connector status.

The core steps to configure Azure Sentinel data connectors are very similar, though depending on the connector, you might need to execute more steps. This is true mainly for external connectors and services in different cloud providers. For example, if you need to connect to Amazon AWS to stream all AWS CloudTrail events, you will need to perform some steps in the AWS account.

Create and customize alerts

After the different data sources are connected to Azure Sentinel, you can create custom alerts, which are called Analytics. There are two types of analytics that can be created: a scheduled query rule and a Microsoft incident creation rule. A scheduled query rule allows you to fully customize the parameters of the alert, including the rule logic and the alert threshold. A Microsoft incident creation rule allows you to automatically create an incident in Azure Sentinel for an alert that was generated by a connected service. This type of rule is available for alerts generated by Azure Security Center, Azure Security Center for IoT, Microsoft Defender Advanced Threat Protection, Azure AD Identity Protection, Microsoft Cloud App Security, and Azure Advanced Threat Protection.

When considering which one you need to utilize, make sure to understand the prerequisites for the scenario because those requirements will determine the type of rule that you need to create. For example, if the requirement is to customize the alert with parameters that will determine the query scheduling and alert threshold, then the best option is the scheduled query rule. For this scenario, Fabrikam wants to create a medium severity alert every time a VM is deleted and an incident should be created for further investigation. Follow these steps to create a scheduled query rule:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type sentinel, and under Services, click Azure Sentinel.

  3. On the Azure Sentinel workspaces page, click the workspace that you want to use with Azure Sentinel; the Azure Sentinel | Overview page appears.

  4. In the left navigation pane, under Configuration, click Analytics.

  5. Click the Create button and select the Scheduled Query Rule option. The Analytic Rule Wizard – Create New Rule page appears, as shown in Figure 3-47.

    This is a screenshot of the Create New Rule page with the General tab selected. On this tab, you can define the analytic Name, specify the Tactics, and establish the Severity.

    Figure 3-47 Create New Rule page

  6. In the Name field, type a name for this analytic.

  7. Optionally, you can write a full description for this analytic and select the tactic. The Tactics drop-down menu contains a list of the different phases available in the cyber kill chain. You should select the appropriate phase for the type of alert that you want to create; for this example, select Impact.

  8. The Severity drop-down menu contains a list of all available levels of criticality for the alert. For this example, leave it set to Medium.

  9. Because you want to activate the rule after creating it, leave the Status set to Enabled.

  10. Click the Next: Set Rule Logic button; the Set Rule Logic tab appears, as shown in Figure 3-48.

    This is a screenshot of the Analytic rule wizard page with the Set Rule Logic tab selected. On this tab, you can define the rule query, the map entities, and the query scheduling.

    Figure 3-48 Configuring the rule logic

  11. In the Rule query field, you need to type the KQL query. Because Fabrikam wants to receive an alert when VMs are deleted, type the following sample query:

    AzureActivity
    | where OperationNameValue contains "Microsoft.Compute/virtualMachines/delete"
  12. In some scenarios, you might need to customize the Map Entities options to enable Azure Sentinel to recognize the entities that are part of the alerts for further analysis. For this scenario, you can leave the default setting.

  13. Under Query scheduling, the first option is to customize the frequency with which you want to run this query. Because this scenario does not have a specifically defined frequency, leave it set to run every 5 hours.

  14. Next, under the Lookup Data From The Last option, you can customize the time window over which the query runs. By default, the query runs against the last 5 hours of collected data. Because this scenario does not specify a time window, leave the default setting.

  15. Under Alert Threshold, you have the Generate Alert When Number Of Query Results drop-down menu. Because this scenario calls for an alert to be generated every time a VM is deleted, you should leave this set to the default setting, Is Greater Than 0.

  16. Under Suppression, you could choose to stop the query after the alert is generated. In this scenario, leave the default selection, which is Off.

  17. Click the Next: Incident settings (Preview) button; the Incident Settings tab appears, as shown in Figure 3-49.

    This is a screenshot of the Analytic Rule Wizard – Create New Rule page with the Incident Settings tab selected. From this tab, you can define whether Azure Sentinel will create an incident for this analytic.

    Figure 3-49 Configuring incident settings

  18. Leave the Create Incidents From Alerts Triggered By This Analytics Rule option selected (which is the default setting) because the scenario requires an incident to be created.

  19. Under Alert Grouping, you can configure how the alerts that are triggered by this analytics rule are grouped into incidents. For this scenario, leave the default selection, which is Disabled.

  20. Click the Next: Automated Response button; the Automated Response Tab appears, as shown in Figure 3-50.

    This is a screenshot of the Analytic Rule Wizard page with the Automated Response tab selected. From this tab, you can select a Logic App that contains the automation response for this alert.

    Figure 3-50 Configuring an Automated Response

  21. The Automation Response tab contains a list of all Azure Logic Apps available. In a new deployment, it is common to see an empty tab because there will be no Logic Apps available. You will learn more about automated responses in the next section of this chapter.

  22. Click the Next: Review button, review the options, and click the Create button.

  23. After the rule is created, you will be taken back to the Azure Sentinel | Analytics page; the rule appears in the Active Rules list. If you click it, you will see the parameters of the rule, as shown in Figure 3-51.

This is a screenshot of a custom alert that was created with all the parameters that were specified during the creation.

Figure 3-51 Custom alert after creation
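If you need the alert to carry more context, you can extend the sample query from step 11. The following KQL is only a sketch; the extra columns (Caller, CallerIpAddress, _ResourceId) exist in the AzureActivity table in recent workspace schemas, but column names vary between schema versions, so verify them against your workspace before using the query in a production rule:

    AzureActivity
    | where OperationNameValue contains "Microsoft.Compute/virtualMachines/delete"
    | where ActivityStatusValue == "Success"
    | project TimeGenerated, Caller, CallerIpAddress, _ResourceId

Projecting the caller and resource ID means the triggered alert already identifies who deleted which VM, which reduces the amount of pivoting you need to do during triage.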

While this rule was created specifically for a particular scenario, you can utilize existing templates, which are located on the Rule Templates tab in the main Azure Sentinel | Analytics page. You can create a scheduled rule type based on different known types of attacks. For example, if you have a scenario in which you need to detect distributed password cracking attempts in Azure AD, you can just create a rule based on the available template, as shown in Figure 3-52.

This is a screenshot of the Analytic Rule Wizard – Create New Rule From Template page, showing the General tab with all the options prepopulated for the template that was selected.

Figure 3-52 Creating an alert based on a template

There are other scenarios in which you might need to simply create an incident in Azure Sentinel based on an alert triggered by a connected service. For example, you might want to create an incident every time an alert is triggered from Azure Security Center. The initial steps are the same. The difference is that in step 5 of the earlier instructions, you would select the Microsoft Incident Creation rule. When this option is selected, you will see the Analytic Rule Wizard – Create New Rule page, as shown in Figure 3-53.

This is a screenshot of the Analytic Rule Wizard – Create New Rule page, showing the option to create an analytic based on an alert created by a connected service.

Figure 3-53 Creating an alert based on a connected service

In the Microsoft Security Service drop-down menu, you can select the connected service that you want to use as the data source. For example, if you select Azure Security Center from the list and you do not customize the included or excluded alerts, Azure Sentinel will create an incident for all alerts triggered by Azure Security Center.

Configure a Playbook for a security event by using Azure Sentinel

Security Playbooks enable you to create a collection of procedures that can be executed from Azure Sentinel when a certain security alert is triggered. Azure Logic Apps is the automation mechanism behind security Playbooks. Before you start configuring a Playbook, decide what you want to automate, and make sure you can answer at least the following questions:

  • For which alert should I automate a response?

  • What steps should be automated if the conditions for this alert are true?

  • What steps should be automated if the conditions for this alert are false?

You can use the Logic App Contributor role or the Logic App Operator role to assign explicit permission for using Playbooks. To create a Playbook, you will need Azure Sentinel Contributor and Logic App Contributor privileges. For this scenario, Contoso wants to send an email to a distribution list that alerts recipients if a VM has been deleted. Follow these steps to create a Playbook that will be used for this automation:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type sentinel, and under Services, click Azure Sentinel.

  3. On the Azure Sentinel workspaces page, click the workspace that you want to use with Azure Sentinel; the Azure Sentinel | Overview page appears.

  4. On the left navigation pane, under Configuration, click Playbooks.

  5. Click the Add Playbook button; the Logic App page appears, as shown in Figure 3-54.

    This is a screenshot of the Logic App page with the options available to create a new automation.

    Figure 3-54 Configuring a Logic App

  6. Select the subscription and resource group where the Logic App will be located.

  7. In the Logic App Name field, type a name for this automation.

  8. In the Location field, select the Azure location where this Logic App will reside.

  9. Optionally, you can push the Logic App runtime events to a Log Analytics workspace. For this scenario, leave the default selection and click the Review + Create button.

  10. On the Review + Create tab, click the Create button.

  11. Click the Go To Resource button to open the Logic Apps Designer page.

  12. Under Templates, click the Blank Logic App tile.

  13. In the Search Connectors And Triggers field, type Azure Sentinel, and select When A Response To An Azure Sentinel Alert Is Triggered.

  14. Click the New Step button; a list of actions appears, as shown in Figure 3-55.

    This is a screenshot of the Logic App page with the options available to create a new automation.

    Figure 3-55 Choosing the initial action to be executed

  15. In the Search Connectors And Actions field, type email and select Office 365.

  16. Select Send An Email (v2); the options shown in Figure 3-56 will appear.

    This is a screenshot of the Logic App trigger for Azure Sentinel with the option to send an email using the Outlook connector.

    Figure 3-56 Entering the email options

  17. Enter the To and Subject parameters for this email.

  18. Click the Body field and click Add Dynamic Content; a floating menu containing the options to add dynamic content appears, as shown in Figure 3-57.

    This is a screenshot of the floating menu with the dynamic content options available to be added in the body of the email.

    Figure 3-57 Dynamic Content options

  19. You can select any dynamic content that you want to add to the body of the email. This helps to enrich the email content by adding alert-related information. For example, you could enter Alert Severity: and add the Severity field from the dynamic content next to the text.

  20. Once you finish adding the dynamic content, click the Save button.

  21. Close the Logic Apps Designer.

  22. Open Azure Sentinel again and click Analytics.

  23. Click the analytic rule that you created, and on the right side, click the Edit button.

  24. Click the Automated Response tab and notice that the Logic App that you created appears in the list, as shown in Figure 3-58.

    This is a screenshot of the Analytic Rule Wizard – Edit Existing Rule page, with the Automated Response tab selected.

    Figure 3-58 Playbook selection for an existing rule

  25. Select the Playbook that you created and click the Next: Review > button.

  26. Click the Save button.
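Behind the designer, the Playbook is stored as a Logic App workflow definition. The following heavily trimmed JSON is a sketch of what steps 13 through 19 produce; the connection parameter names, the recipient address, and the exact action inputs are assumptions for illustration, so treat this as a shape to recognize rather than something to deploy as-is:

    {
      "definition": {
        "triggers": {
          "When_a_response_to_an_Azure_Sentinel_alert_is_triggered": {
            "type": "ApiConnectionWebhook",
            "inputs": {
              "host": { "connection": { "name": "@parameters('$connections')['azuresentinel']['connectionId']" } }
            }
          }
        },
        "actions": {
          "Send_an_email_(V2)": {
            "type": "ApiConnection",
            "inputs": {
              "method": "post",
              "body": {
                "To": "secops-dl@contoso.com",
                "Subject": "VM deleted - Azure Sentinel alert",
                "Body": "Alert Severity: @{triggerBody()?['Severity']}"
              },
              "host": { "connection": { "name": "@parameters('$connections')['office365']['connectionId']" } }
            }
          }
        }
      }
    }

The dynamic content you picked in step 19 appears in the Body field as an expression such as @{triggerBody()?['Severity']}, which is how the Logic App pulls alert properties from the Sentinel trigger at run time.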

Evaluate results from Azure Sentinel

Besides the main overview dashboard in Azure Sentinel, which brings charts and a summary of the events and alerts, you can also perform direct queries in the Log Analytics workspace or visualize the collected data using Workbooks. If you need to quickly visualize security events, just click the SecurityEvent option in the Events And Alerts Over Time tile; the Log Analytics workspace appears with the query result, as shown in Figure 3-59.

This is a screenshot of the Log Analytics workspace with a query result.

Figure 3-59 Security Events

When accessing the information directly from the Log Analytics workspace, you can leverage KQL to explore the data further. Querying the data freely in the Log Analytics workspace is more common in reactive investigation scenarios.
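For example, to review recent failed logons during an investigation, you could run a query such as the following directly in the Log Analytics workspace. Event ID 4625 is the Windows failed-logon event; the 24-hour window is an arbitrary choice for this sketch, so adjust it to your scenario:

    SecurityEvent
    | where TimeGenerated > ago(24h)
    | where EventID == 4625
    | summarize FailedLogons = count() by Account, Computer
    | order by FailedLogons desc

Sorting by count surfaces the accounts and computers with the most failures first, which is usually where a password-guessing investigation starts.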

For more proactive scenarios, one option is to use Azure Workbooks. Azure Sentinel Workbooks provide interactive reports that can be used to visualize your security and compliance data. Workbooks combine text, queries, and parameters to make it easy for developers to create mature visualizations, advanced filtering, drill-down capabilities, advanced dashboard navigations, and more. To leverage a specific Workbook template, you must have at least Workbook Reader or Workbook Contributor permissions on the resource group of the Azure Sentinel workspace.

Using a Workbook is a great choice for monitoring scenarios where you need data visualization through a dashboard with specific analytics for each data source. Another use case scenario is when you want to build your custom dashboard with data coming from multiple data sources.

For example, if you need to evaluate Azure Activity Log data that is being ingested in Azure Sentinel using the Azure Activity connector, you can use the Azure Activity Workbook. In the main Azure Sentinel dashboard, under Threat Management, click Workbooks. Next, click the Azure Activity option and click the View Template button at the right; the Azure Activity Workbook appears, as shown in Figure 3-60.

This is a screenshot of the Azure Activity Workbook with the top 10 active resource groups and their activities over time.

Figure 3-60 Azure Activity Workbook
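Workbooks are built on the same KQL queries that you can run yourself. A query similar to the one behind the top 10 active resource groups visualization in Figure 3-60 might look like the following; this is a sketch, and the actual workbook template query may differ:

    AzureActivity
    | summarize Operations = count() by ResourceGroup
    | top 10 by Operations desc

Knowing the underlying query is useful when you want to clone a template and adapt its visualizations to your own data sources.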

Leveraging the correct option to evaluate results in Azure Sentinel can help you save time identifying the relevant information.

Incidents

Another way to evaluate results in Azure Sentinel is by looking at incidents. When an incident is created based on an alert that was triggered, you can review this incident in the dashboard, and you can remediate the incident using a Playbook that you previously created. Also, you can investigate the incident.

To access the incidents dashboard, click Incidents under the Threat Management section on the main Azure Sentinel page. Figure 3-61 shows an example of an incident that was created based on the alert that you created earlier in this chapter.

This is a screenshot of the Incidents dashboard in Azure Sentinel showing the list of incidents. At the right is a summary of the incident based on the selection.

Figure 3-61 Visualizing an incident in Azure Sentinel

When an incident is selected, you will see a summary of the incident details in the right pane. As you triage the incident, you can change the incident’s severity, change the incident status (for example, changing it to Active if it is an ongoing investigation), and assign the incident to an owner. (By default, the owner is shown as Unassigned.) To see more details about the incident, click the View Full Details button. Figure 3-62 shows an example of a full incident.

This is a screenshot of the full Incident page with more details about the incident, potential artifacts, and capability to investigate.

Figure 3-62 A full incident

Depending on the artifacts that are available about the incident, you will also have access to the Investigation dashboard. Notice in Figure 3-62, the Investigate button is disabled because there is nothing else to investigate on this incident. (Deleting an alert is a single action.)

Threat hunting

Threat hunting is the process of iteratively searching through a variety of data with the objective of identifying threats in the systems. Threat hunting involves creating hypotheses about the attackers’ behavior and then researching those hypotheses and the techniques that were used to determine the artifacts that were left behind.

In a scenario in which a Contoso administrator wants to proactively review the data that was collected by Azure Sentinel to identify indications of an attack, the threat hunting capability is the recommended way to accomplish this task. Proactive threat hunting can help to identify sophisticated threat behaviors used by threat actors even when they are still in the early stages of the attack. To access the threat Hunting dashboard, click Hunting in the Threat Management section on the main Azure Sentinel page. Figure 3-63 shows an example of this dashboard.

This is a screenshot of the Hunting dashboard in Azure Sentinel with the predefined queries that will help you to proactively search for threats.

Figure 3-63 Hunting capability in Azure Sentinel

To start hunting, you just need to select the predefined query, which was created for a specific scenario, and click the Run Query button in the right-hand pane. This pane shows a summary of the results. Click the View Results button to see the full details of the query.
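Each predefined hunting query is plain KQL, and you can write your own to test a hypothesis. For example, a simple query to surface processes launched from locations that attackers commonly stage from might look like the following; this is a sketch, and the paths chosen are illustrative assumptions, so validate the column names against the SecurityEvent schema in your workspace:

    SecurityEvent
    | where EventID == 4688
    | where NewProcessName has_any ("\\Temp\\", "\\AppData\\")
    | summarize ProcessCount = count() by Account, NewProcessName
    | order by ProcessCount asc

Event ID 4688 is the Windows process-creation event; sorting ascending puts the rarest account-and-process combinations first, which is where hunting hypotheses are most often confirmed.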

Skill 3.4: Configure security policies

While security monitoring is critical for any organization that wants to continue improving their security posture, governance is foundational for any organization that wants to establish standards of deployment and ensure that security is applied in the beginning of the deployment pipeline. This section of the chapter covers the skills necessary to configure security settings using Azure Policy and Azure Blueprint according to the Exam AZ-500 outline.

Configure security settings by using Azure Policy

The first step to achieving governance in Azure is to ensure that you are leveraging Azure Policy for policy enforcement. You can also enforce data residency and sovereignty using Azure Policy. For example, if you need to enforce that all new resources are created in a specific region, you will use Azure Policy to do so. As mentioned earlier in this chapter, from the centralized management perspective, it’s always recommended that you assign a policy to a management group and move the subscriptions that you want to inherit that policy to that management group.

There are many built-in roles that grant permissions to Azure Policy resources. You can use the Resource Policy Contributor role, which includes most Azure Policy operations. The Owner role has full rights to perform all actions, and both the Contributor and Reader roles have access to all Azure Policy read operations. You can use the Contributor role to trigger resource remediation, but you can’t use it to create definitions or assignments.

When you are enforcing policies, you need to ensure that your policy initiative is using the right type of effect. If the scenario’s requirement is to prevent certain workloads from being provisioned when certain attributes are not set, your policy effect should be Deny. The Deny effect prevents a resource request that doesn’t match the standards defined through a policy definition and fails the request.
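As an illustration of the Deny effect, the following policy rule sketch denies any resource created outside a list of allowed locations. The parameter name is an assumption for this example; the built-in Allowed Locations policy uses a similar structure:

    {
      "policyRule": {
        "if": {
          "not": {
            "field": "location",
            "in": "[parameters('allowedLocations')]"
          }
        },
        "then": {
          "effect": "deny"
        }
      }
    }

Because Deny evaluates at request time, the non-compliant deployment fails immediately instead of being flagged after the fact.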

If your scenario’s requirement is to change parameters that were not set during provisioning, then your policy effect should be DeployIfNotExists. For example, if a Contoso administrator wants to deploy Azure Network Watcher when a virtual network is created, the administrator should enforce the DeployIfNotExists effect for that policy. DeployIfNotExists runs about 15 minutes after a resource provider has handled a create or update resource request and has returned a success status code. When you configure a policy with this type of effect, you also create a remediation task, and the goal of this remediation task is to configure the resource with the parameters that you want.
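A DeployIfNotExists policy rule pairs a condition with an existence check and an ARM template deployment. The following heavily trimmed skeleton shows only the shape of such a rule for the Network Watcher example; a real definition also needs roleDefinitionIds, an existenceCondition, and a complete template in the deployment section, so treat this as structure, not a deployable policy:

    {
      "policyRule": {
        "if": {
          "field": "type",
          "equals": "Microsoft.Network/virtualNetworks"
        },
        "then": {
          "effect": "deployIfNotExists",
          "details": {
            "type": "Microsoft.Network/networkWatchers",
            "deployment": {
              "properties": {
                "mode": "incremental",
                "template": { "placeholder": "ARM template that deploys Network Watcher" }
              }
            }
          }
        }
      }
    }

The details block is what makes this effect different from Deny: the request succeeds, and the policy then checks for the related resource and deploys it if it is missing.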

Another common scenario is to update tags on a resource during creation or update. For example, a Contoso administrator needs to update the cost center tag on all resources at creation time. For this scenario, you need to use the Modify effect. Just like the DeployIfNotExists effect, you also need to configure a remediation task to run the desired change. Keep in mind that when you are creating the remediation task for either effect, you will need to check the Create A Managed Identity option. You can use the identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without any credentials in your code.
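For the tag scenario, a Modify effect adds or replaces the tag as part of the request. The following sketch assumes a tag named CostCenter and a sample value; the role definition ID shown is the built-in Contributor role, which the remediation task's managed identity is granted:

    {
      "policyRule": {
        "if": {
          "field": "tags['CostCenter']",
          "exists": "false"
        },
        "then": {
          "effect": "modify",
          "details": {
            "roleDefinitionIds": [
              "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
            ],
            "operations": [
              {
                "operation": "addOrReplace",
                "field": "tags['CostCenter']",
                "value": "CC-1234"
              }
            ]
          }
        }
      }
    }

The operations array is specific to Modify; each entry describes one change (here, adding or replacing a single tag) that Azure Policy applies to the resource payload.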

Follow the steps below to configure policy enforcement using Azure Policy:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type policy, and under Services, click Policy.

  3. On the Policy page, click Assignments under Authoring in the left pane. Figure 3-64 shows an example of the Assignments page.

    This is a screenshot of the Policy | Assignment page with the current assignments and initiatives.

    Figure 3-64 Policy assignments page

  4. Notice that on this page, you can assign an initiative or a policy. For this example, click the Assign Policy button. The Assign Policy page appears (see Figure 3-65).

    This is a screenshot of the Assign Policy page showing the different parameters to which you can assign the policy.

    Figure 3-65 Selecting the policy to assign

  5. On the Basics tab, you have the option to select the Scope in which this policy should be assigned. If your scenario requires centralized management, you can change it here to assign to a management group. If the scenario requires that you assign only to the subscription level, then leave the default selection.

  6. In the Exclusion field, you can optionally select resources that you want to exclude from this policy. For example, if you have certain resource groups that should be exempted from this policy, add those resource groups in this list.

  7. In the Policy Definition field, click the ellipsis to open the policies that are available.

  8. On the Available Definitions blade, a list of all policy definitions is shown. For this example, type SQL in the Search field.

  9. Select the Deploy SQL DB Transparent Data Encryption policy and click the Select button.

  10. Notice that both the Policy Definition and Assignment Name fields have been populated with the name of the policy.

  11. Click the Parameters tab and notice that for this policy, there are no parameters or effects.

  12. Click the Remediation tab to configure the additional options. Figure 3-66 shows the available options.

    This is a screenshot of the Assign Policy page showing the available remediation options.

    Figure 3-66 Configuring remediation tasks

  13. Click the Create A Remediation Task check box.

  14. The Policy To Remediate drop-down menu will automatically select the policy that needs to be used for remediation.

  15. Notice that the Create A Managed Identity check box is automatically selected, and the Managed Identity Location is also selected.

  16. The Permission section also automatically shows that the identity that is used will be given the SQL DB Contributor permission.

  17. Click the Review + Create button.

  18. Click the Create button.

Now that the policy and the remediation task are created, the policy enforcement is fully in place. You can monitor the compliance of this policy by using the Overview dashboard in Azure Policy and then clicking the policy to see more details about the assignment. Figure 3-67 shows the Assignment Details dashboard.

This is a screenshot of the Assignment Details report showing the compliance status.

Figure 3-67 Assignment Details dashboard

In Figure 3-67, notice that the Effect Type is DeployIfNotExists, even though you didn’t have to manually set this effect. That’s because this policy is already preconfigured with this effect only, and if you open the JSON code for this policy, you will see that this effect is hard coded there.

Configure security settings by using Azure Blueprint

Azure Blueprints enable you to define a repeatable set of Azure resources that implement and adhere to an organization’s standards, patterns, and requirements. It is very important for you to understand when to use a blueprint instead of a policy. Blueprints are used to orchestrate the deployment of various resource templates and other artifacts, such as role assignments, policy assignments, Azure Resource Manager templates, and resource groups.

The main difference between a blueprint and a policy is that a blueprint is a package for composing focus-specific sets of standards, patterns, and requirements related to the implementation of Azure cloud services, security, and design. Another characteristic of the blueprint is that you can reuse them to maintain consistency and compliance. A policy can be included in this package as an artifact for the blueprint. Both can be utilized in scenarios where you have multiple subscriptions and want to maintain governance. From the lifecycle perspective, a blueprint has these major stages:

  • Blueprint creation This initial step is where you create the blueprint from scratch (blank) or by using a sample.

  • Draft After creating a new blueprint, the blueprint status changes to draft, which means that it was created, but has not been published yet.

  • Published After finalizing the draft you can publish the first version of the blueprint.

  • Assignment After a blueprint is published, you can assign it to your subscription.

  • Revisions You can change the blueprint versions, which allows you to keep your blueprint up to date.

  • Deletion If you no longer need a blueprint, you can delete the assignment and then delete the blueprint.

You can create a new blueprint based on your scenario’s requirements, or you can create one based on the existing samples available. Follow these steps to create a new blueprint and publish it:

  1. Navigate to the Azure portal at https://portal.azure.com.

  2. In the search bar, type blueprint, and under Services, click Blueprints.

  3. On the Blueprints | Getting Started page, click the Create button in the Create A Blueprint section. The Create Blueprint page appears, as shown in Figure 3-68.

    This is a screenshot of the Create Blueprint page showing the option to create a blank blueprint or a blueprint based on an existing sample.

    Figure 3-68 Create Blueprint

  4. Click the Start With Blank Blueprint option; the screen shown in Figure 3-69 appears.

    This is a screenshot of the Create Blueprint page after selecting create a blank blueprint.

    Figure 3-69 Creating a new blank blueprint

  5. In the Blueprint Name field, type the name of the blueprint.

  6. Click the ellipsis in the Definition Location option and select the subscription that you want to use for this blueprint.

  7. Click the Next: Artifacts button.

  8. On the Artifacts tab, click the + Add Artifact button.

  9. On the Add Artifact blade, select Policy Assignment from the Artifact Type drop-down menu.

  10. On the Policy Definitions tab, select Deploy Log Analytics Agent For Windows VMs and click the Add button.

  11. Click Add Artifact again and select Role Assignment.

  12. In the Role drop-down menu, select Contributor and click the Add button.

  13. Click the Save: Draft button.

  14. On the main Blueprint dashboard, click the Apply button in the Apply To A Scope section.

  15. From the Scope option, click the ellipsis and select the target subscription. You will see the Blueprint Definitions page with the blueprint that you created, which is currently in draft mode, as shown in Figure 3-70.

    This is a screenshot of the Blueprint Definitions page with the blueprint that you created, which is currently in Draft mode.

    Figure 3-70 Existing blueprint in draft mode

  16. Click the blueprint you created, and from the page that opens, click the Publish Blueprint button.

  17. In the Version field, type a version control for this blueprint. Optionally, you can type a note about the changes in this version in the Change Notes field.

  18. Click the Publish button.

Now that the blueprint is created and published, you can assign it to the subscription. To do that, click the Assign Blueprint button in the properties of the blueprint. Figure 3-71 shows an example of this page.

This is a screenshot of the blueprint assignment with the different options available, including the parameters that were established during the blueprint creation.

Figure 3-71 Assign Blueprint

Among those options available in this page, the settings under the Lock Assignment section are very important because the selection will depend on the scenario’s requirement. The available locks are

  • Don’t Lock This means that resources are not locked by this blueprint. Users, groups, and service principals with permissions can modify and delete deployed resources.

  • Do Not Delete Although this type of lock is not supported by all resources, this lock allows resources to be modified but not deleted, even by subscription owners. Keep in mind that it might take up to 30 minutes for this blueprint lock to be enforced.

  • Read Only As the name implies, the resources can’t be modified in any way, nor can they be deleted, not even by the subscription owner. This type of lock is not supported by all resources.

Resource locks deployed by Azure Blueprints are applied only to resources deployed by the blueprint assignment. This means that existing resources, such as those in existing resource groups, are not affected because they don’t have locks added to them. You can remove the locking state by either changing the blueprint assignment’s locking mode to Don’t Lock or by deleting the blueprint assignment.

The Artifact Parameters setting provides an option to type the parameters that were established during the blueprint creation. When you finish filling in all the parameters, you can click the Assign button. When you are finished making assignments, you can see the assignment under Assigned Blueprints in the left navigation pane, as shown in Figure 3-72.

This is a screenshot of the Assigned Blueprints option with the list of blueprints, the version, and the provisioning state.

Figure 3-72 Assigned Blueprints

Thought experiment answers

This section contains the solution to the thought experiment.

  1. Azure Security Center and Azure Activity Log.

  2. Just-in-Time VM Access.

  3. First, you need to enable Advanced Data Security (ADS) on your SQL databases.

Chapter summary

  • Azure resources logs register operations that were executed at the data plane level, while activity logs at the subscription level register operations that were executed in the management plane.

  • You can customize alerts in Azure Monitor for different data types, including metrics, log search queries, and activity logs events.

  • Monitoring solutions leverage services in Azure to provide additional insight into the operation of an application or service.

  • Azure Security Center Standard tier provides built-in vulnerability assessment using native integration with Qualys.

  • To enable vulnerability assessment for SQL, you first need to enable the SQL Advanced Data Security (ADS) feature.

  • To implement centralized policy management in Azure Security Center, you should assign the ASC Default initiative to the Management Group level.

  • The regulatory compliance dashboard in Azure Security Center can be customized to add other standards that are not available out of the box.

  • To ingest data from different data sources into Azure Sentinel, you can use service-to-service connectors or external connectors.

  • Azure Blueprints enable you to define a repeatable set of Azure resources that implement and adhere to an organization’s standards, patterns, and requirements.