CHAPTER 6

Databases

In this chapter, you will learn how to

•   Create and manage databases

•   Use advanced database features

•   Use the autonomous database services

•   Migrate databases to the Cloud

Oracle Cloud Infrastructure (OCI) offers a wide selection of database services. You could provision compute instances to set up a database on OCI the same way traditional on-premises database administrators (DBAs) have always done it, by fulfilling the prerequisites such as deploying a certified operating system, security hardening and patching the OS, downloading and staging the Grid Infrastructure (GI) and database install media and any patches, preparing the operating system kernel parameters, setting up GI and then the database binaries, and finally creating a database. Or you could use Database Cloud Services (DBCS) to choose the compute shape, storage, and GI and DB versions, and let OCI’s cloud automation complete the tedious heavy lifting behind the scenes while you grab a coffee, and then come back later and connect to your brand-new database.

Figure 6-1 contextualizes the OCI database services that are discussed in this chapter.



Figure 6-1   OCI database services in context

DBCS is a PaaS offering that provides you with a fully functional and deployed Oracle database platform on a virtual machine (VM), bare metal (BM), or Exadata server. Exadata is Oracle’s flagship engineered systems platform explicitly designed for hosting clustered highly available and high-performance Oracle databases. DBCS significantly simplifies database instance management, including taking backups, performing restores, and applying patches.

After getting you started with DBCS, this chapter covers several advanced topics, including licensing, disaster recovery, encryption, and clustering options, before tackling the often daunting task of migrating your non-OCI databases to the Cloud. Migration is not complicated, and many options are available.

This chapter aims to demystify the available cloud database options and recommend appropriate use-cases for the different database services. It closes with a detailed examination of the leading-edge Autonomous Data Warehouse (ADW) and Autonomous Transaction Processing (ATP) services. ADW and ATP are truly next-generation cloud database services leveraging machine learning to automate previously manual tuning tasks performed by DBAs to optimize performance.


NOTE    Oracle sometimes updates the abbreviations it uses in its documentation. For example, OCI was previously known as BMCS (Bare Metal Cloud Services), so be warned that the abbreviations referenced in this book are subject to future change. Autonomous Data Warehouse is inconsistently abbreviated as ADW as well as ADWC (Autonomous Data Warehouse Cloud). In this book, the abbreviation ADW will be used to refer to the Autonomous Data Warehouse Cloud service.

You do not have to be an experienced Oracle database professional to get through this chapter, but if you are, buckle up, because the database services offered by OCI are a radical departure from the complexity typically associated with setting up traditional Oracle database environments.

Database Cloud Service

DBCS is available on VM, BM, and Exadata servers. Each hosting environment serves different use cases.

Before this chapter explores the differences in the hosting environments, here is a quick refresher on parlance for folks less familiar with Oracle databases:

•   The Oracle server consists of a database instance and a set of database files. The database instance is a set of memory structures and background processes. The Oracle server is usually just referred to as the Oracle database.

•   When you license the database, you have access to some core features. Many additional options and packs may be enabled on Enterprise Edition (discussed later), and these are usually licensed additionally. For example, to use partitioned tables in your database, you must enable the Partitioning option. Other options include Multitenant, Advanced Compression, Advanced Security, Advanced Analytics, Database Vault, and Real Application Clusters (RAC).

•   Real Application Clusters (RAC) is an option that allows a set of database files to be mounted by multiple database instances concurrently. RAC relies on shared storage accessible by each instance participating in the cluster. The RAC option provides high availability for a database because an instance on a node may fail, but the database remains accessible through the instances running on the remaining nodes in the cluster. RAC also provides horizontal scaling. RAC is included with all Exadata Cloud services and is optional on VM-based DBCS. But you cannot build your own RAC deployment using OCI IaaS.

•   Oracle Enterprise Manager (OEM) is a powerful monitoring and management suite widely used to manage Oracle environments.

•   Many core OEM features are available at no cost, but there are additional management packs that may be enabled that require additional licenses.

•   The Oracle database software has traditionally been available in either a Standard Edition (SE2, hereafter SE) or an Enterprise Edition (EE). EE has many features and options that are not available on SE, notably the physical replication feature known as Data Guard. OCI offers two additional editions: EE High Performance (EE-HP) and EE Extreme Performance (EE-EP).

•   EE-HP bundles additional database options on top of EE as well as several OEM management packs. The additional database options include Multitenant, Partitioning, Advanced Compression, Advanced Security, Label Security, Database Vault, OLAP, Advanced Analytics, and Spatial and Graph, while the additional OEM packs include Database Lifecycle Management Pack and Cloud Management Pack for Oracle Database.

•   EE-EP adds the In-Memory Database, Active Data Guard, and RAC database options to EE-HP.

•   The ORACLE HOME is a file system location where the Oracle database software is installed. The executable programs and all supporting software residing in the ORACLE HOME are collectively and colloquially referred to as Oracle binaries.

•   ASM, or Automatic Storage Management, is a volume manager used to manage disk storage that runs as a specialized Oracle instance. Storage LUNs are carved into ASM disks, which make up ASM disk groups. Disk groups are not visible externally from the OS. Database files, including data files, redo logfiles, and control files, usually reside in ASM disk groups. DBCS creates two disk groups: DATA and RECO, and optionally a SPARSE disk group on Exadata. The DATA disk group is typically used for datafiles, redo logfiles, and control files, while the RECO disk group typically stores recovery-related files such as archive logfiles, flashback logs, and sometimes RMAN backups. The optional SPARSE disk group on Exadata is used for snapshot databases, which are essentially thin-provisioned database clones. ASM may be used to provide shared storage for RAC instances.

•   GI, or Grid Infrastructure, is specialized Oracle software that supports databases using ASM for storage. It provides the cluster services used by RAC databases as well as the Oracle Restart feature, which improves database availability by automatically restarting various Oracle components.

•   ASM Cluster File System (ACFS) is a general-purpose file system mounted on ASM storage that is accessible by standard OS tools. Database files may also be stored on ACFS.

•   ASM redundancy refers to the number of copies of data maintained by ASM across the available ASM disks. Three redundancy levels are supported: EXTERNAL, NORMAL, and HIGH, mapping to 0, 1, and 2 additional copies of data maintained respectively. DBCS only supports NORMAL and HIGH ASM redundancy levels.

•   RMAN, or Recovery Manager, is an Oracle database utility used for performing backup, restore, and recovery operations.


EXAM TIP    Transparent Data Encryption (TDE) is a feature of the Advanced Security option. However, on DBCS, all database editions include TDE.

Bare metal shapes are single-node servers and support only single-instance databases, whereas you can run a Real Application Clusters (RAC) database on a two-node VM environment or on an Exadata environment. All single-node database systems such as DBCS on bare metal or DBCS on a single VM node support the following database editions:

•   Standard Edition (SE)

•   Enterprise Edition (EE)

•   Enterprise Edition—High Performance (EE-HP)

•   Enterprise Edition—Extreme Performance (EE-EP)


NOTE    RAC database systems on DBCS require Enterprise Edition—Extreme Performance. While Standard Edition is the cheapest database edition, there are restrictions in terms of the shapes it supports—no more than eight OCPUs on bare metal shapes and a maximum of VM.Standard.x.8 (where x is the hardware generation number) for VMs.

The database versions supported by DBCS will change over time. As of this writing, the supported versions range from Oracle database 11g Release 2 to 19c. As new versions become available and older versions are deprecated, the range of supported versions will change.

DBCS on Bare Metal

A bare metal database system consists of a single bare metal server preinstalled with Oracle Linux (6.8 as of this writing), with locally attached NVMe storage. Earlier bare metal servers, now in limited availability, are Oracle X5-2 servers providing the BM.DenseIO1.36 shape, while the current bare metal servers available at the time of this writing are Oracle X7-2 servers providing the BM.DenseIO2.52 shape. As hardware refreshes occur, the available bare metal hardware will be updated. Table 6-1 summarizes some key features of these bare metal shapes.


Table 6-1 DBCS Bare Metal Shapes

Note the raw storage listed in Table 6-1 for each of the DBCS bare metal shapes. The raw storage forms the basis for two ASM disk groups, named DATA and RECO respectively. Database files typically reside in the DATA disk group, while recovery-related files typically reside in the RECO disk group. Table 6-2 shows that the usable storage is less than the raw storage because of ASM redundancy. DBCS supports NORMAL (two-way mirroring) and HIGH (three-way mirroring) ASM redundancy settings. There is a significant trade-off in usable storage when choosing the HIGH redundancy level, but the benefit is that three copies of each ASM extent are maintained, so your database is strongly protected against disk failures and can tolerate the simultaneous loss of two disks, whereas NORMAL redundancy allows the database to transparently tolerate the loss of only one disk with no impact to availability.
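
To make the trade-off concrete, here is a minimal arithmetic sketch, assuming a hypothetical shape with 51.2TB of raw storage (real usable space is lower still because ASM reserves free space so it can re-mirror data after a disk failure):

# Approximate usable storage per ASM redundancy level (illustrative figures only)
raw_tb=51.2
echo "NORMAL (2-way mirroring): $(echo "scale=1; $raw_tb / 2" | bc) TB usable"
echo "HIGH (3-way mirroring):   $(echo "scale=1; $raw_tb / 3" | bc) TB usable"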


Table 6-2 Impact of ASM Redundancy Level on Usable Disk Group Storage

The NVMe disks present in a bare metal DBCS instance are locally attached. These are not part of a SAN and have no external mirroring or striping configured. Therefore, you have no choice but to use at least the NORMAL level of ASM redundancy.

When you install Oracle database software, the binaries are located in a directory structure known as an Oracle home. The software in an Oracle home may be used to create a database. You may have many Oracle home directories each with an independent set of binary files. Traditionally, different versions of the database software are installed into different Oracle homes, while one or more databases are created and run using the software from the same Oracle home.

However, DBCS on bare metal deploys an Oracle home on a node and allows you to create only a single database per Oracle home. You may have multiple Oracle homes, but each one may drive only a single database. At first glance, this may appear restrictive, but constraining the Oracle home to run a single database exposes only a single database at a time to service restrictions caused by software failure or patching activities.

The console, CLI, and SDKs may be used to perform the following actions on DBCS databases:

•   Create a new database with no user data

•   Create a new database from a pre-existing backup

•   Delete a database

•   Patch a database

•   Create a database backup

•   Restore database from a backup

•   Set up a Data Guard standby database

Enabling these common DBA tasks through reliable Cloud automation is an extremely compelling factor behind the success of DBCS.
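
The same lifecycle operations are scriptable. The following CLI sketch launches a single-node DB system; the OCIDs, names, and password are placeholders, additional flags (storage, license model, and so on) are available, and flag spellings should be confirmed against your CLI version with oci db system launch --help:

oci db system launch \
  --compartment-id ocid1.compartment.oc1..example \
  --availability-domain "Uocm:PHX-AD-1" \
  --shape BM.DenseIO2.52 \
  --cpu-core-count 2 \
  --database-edition ENTERPRISE_EDITION \
  --db-name prod \
  --db-version 18.3.0.0 \
  --admin-password 'SomeStrongPassword#1' \
  --hostname prodhost \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub \
  --subnet-id ocid1.subnet.oc1.phx.example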

Let us walk through some of the options available when creating a DBCS bare metal database. Using the OCI console, navigate to the Bare Metal, VM, and Exadata option in the Database section of the menu and choose Launch DB System. Figure 6-2 shows a guided template for setting up your new DB system on bare metal.



Figure 6-2   Create a bare metal DB system.

A display name for the DB system is required as well as the Availability Domain (AD). Database systems are AD-specific resources as they require a physical server in an AD on which to run. The bare metal shape type exposes two options: BM.DenseIO2.52 and BM.DenseIO1.36 (as of this writing). The total node count is grayed out and defaults to 1 for bare metal systems, so RAC databases are not an option. One of the four DB software editions must be chosen. This determines the edition for all databases that may be created on this system. Adjust the CPU core count in multiples of 2 up to 52 (in this case) depending on your requirements and budget. The wizard further prompts you to specify two storage attributes, as shown in Figure 6-3.



Figure 6-3   Storage and network specification for a bare metal DB system

You may specify either 40% or 80% of the available storage to assign to the DATA disk group for database files and user data. The remaining storage is assigned to the RECO disk group for recovery-related files including online and archive redo logs and RMAN backups. One of the ASM disk group redundancy levels, High or Normal, is also specified. These two settings have a dramatic impact on available storage. The fault domain is important if you plan to set up another bare metal DBCS instance as a Data Guard standby in the same AD. The primary and standby systems should reside in different fault domains in the same AD to improve resilience to rack failures.

DBCS will deploy a specialized compute node in a VCN you specify that is attached to a client subnet. An IP address from this subnet pool is allocated to the DB system. A hostname prefix, up to 16 characters and unique within the specified subnet, may be chosen if the subnet uses the OCI VCN resolver for DNS resolution. The last section under consideration while provisioning a DB system on bare metal pertains to the actual database. Figure 6-4 shows the specification of a DB named prod.



Figure 6-4   Database specification for a bare metal DB system

You may choose a currently supported database version. If the DB software version supports multitenant databases, you may optionally provide a pluggable DB (PDB) name. The strong DB admin password you provide is used for the SYS and SYSTEM users as well as the TDE wallet and PDB Admin. The database workload options influence the starter database’s initialization parameters to favor either OLTP or DSS workloads. Finally, you may specify the character set and national character set for your database. Choosing Launch DB System kicks off an orchestration process that serves you up a fully operational deployed database system on a bare metal server a few hours later.

DBCS on Exadata (Exadata Cloud Service)

Exadata is a uniquely Oracle offering that is not available on any other public cloud platform. DBCS places Exadata within reach of many more customers than ever before. The Exadata platform is stable and mature and is the Oracle flagship engineered systems product, built with redundancy and high-performance components at its core. An Exadata system consists of compute nodes, storage cells, and networking infrastructure (all three core IaaS components), preconfigured and tested to host database workloads. Exadata system software unlocks several database software optimizations that are not available on non-Exadata systems, including SmartScan and Storage Indexes. Other features, such as Hybrid Columnar Compression, while not unique to Exadata, are only available on OCI through the Exadata Cloud Service.

Exadata systems are well suited to clustered (RAC) databases. Each system has at least two compute nodes and three storage cells. One clustered database instance runs per compute node, making the system tolerant of a compute node failure. Each compute node and storage cell in the system is connected using InfiniBand network infrastructure, providing high-performance internode cluster communication and high IO bandwidth to the storage cells. Each compute node is an x86 computer with dual-socket multicore CPUs, memory, and some local storage. Each storage cell is a full-blown computer, unlike traditional storage arrays, with multicore CPUs, memory, and two types of storage: PCIe flash cards to support the flash cache, and serial attached SCSI (SAS) disks available in either high capacity (HC) or high performance (HP) configurations.

Three Exadata configurations are available on DBCS:

•   Quarter rack consisting of 2 compute nodes, 3 storage servers

•   Half rack consisting of 4 compute nodes, 6 storage servers

•   Full rack consisting of 8 compute nodes, 12 storage servers

When an Exadata system is deployed on premises, you choose to deploy either a bare metal or a virtualized software image, and you have full control of and access to all hardware infrastructure.

Exadata on DBCS (commonly referred to as ExaCS) is deployed with a virtualized image. When you create an Exadata system, you choose the shape of your Exadata VM (Quarter, Half, or Full rack) and specify the number of CPU cores to enable. You may dynamically scale the CPU cores allocated to your Exadata VM up to the limit allowed by the Exadata shape you chose. If you wish to move to a larger shape, a service request (SR) must be opened with Oracle to assist with this request. You have root access to the VM, but you have no access to the networking and storage infrastructure.

Depending on the OCI region, you may have access to generation 1 (X6) or generation 2 (X7) Exadata systems. Third-generation Exadata systems, based on X8 hardware, are generally available and will be used to refresh the Exadata servers in OCI ADs.

When deploying an Exadata system on DBCS, you have similar options to bare metal deployments but there are a few nuances. Figure 6-5 shows the shape chosen as Exadata.Full2.368.



Figure 6-5   Create an Exadata DB system.

This system uses a full rack, second-generation (X7) Exadata system with eight compute nodes. You choose the number of CPU cores to enable (up to 368 with this shape) and provide a RAC cluster name. When this database is created, there will be eight RAC instances mounting and opening the database stored across the twelve storage servers. Notice that you cannot change the Oracle database software edition. On Exadata, the only option is Enterprise Edition—Extreme Performance (EE-EP).

Two other main configuration differences between deploying an Exadata DB system and a bare metal DB system relate to storage and network configuration. The storage allocation to the ASM disk groups DATA, RECO, and SPARSE is a function of whether you choose to store backups on Exadata storage and whether you want a SPARSE disk group configured. Table 6-3 summarizes the impact these choices have on Exadata disk group storage allocation as a percentage of total available storage, which depends on the shape you choose.


Table 6-3 Exadata Disk Group Storage Allocation Based on Backup and SPARSE Disk Group Configuration

In Figure 6-6, the disk group storage allocations are indicated as a function of choosing both to store backups on the Exadata storage and to configure a SPARSE disk group. Notice that the ASM disk group redundancy is set to high (three-way mirroring), and this cannot be changed on Exadata. Remember—this system is engineered for redundancy.



Figure 6-6   Storage and network specification for an Exadata DB system

The network settings for an Exadata system on DBCS provide for two subnet specifications. The client subnet refers to the network to be used for client connections such as from your applications like SQL Developer. The backup subnet specifies the network dedicated for traffic related to IO intensive backup and restore operations.


EXAM TIP    Usable storage for database files on Exadata is impacted by whether you choose to keep backups on the storage. Configuring SPARSE disk groups also reduces the storage available for database files.

DBCS on VM

DBCS systems on VMs offer very flexible options, with many underlying compute shapes and the ability to scale the storage allocated to your VM dynamically. Both single-instance and two-node RAC databases are supported, on 1-node and 2-node VM DB systems respectively. OCI is the first PaaS platform to support Oracle RAC databases on VM and Exadata systems.

The database software edition chosen when creating a DBCS system on a VM cannot be changed, just like with bare metal DB systems. However, DBCS on a VM is restricted to a single database Oracle Home that can host only one database, unlike a bare metal DB system that allows multiple database Oracle Homes. The multitenant database option, available with the Enterprise Edition High Performance and Extreme Performance (EE-HP and EE-EP) editions, supports a single container database (CDB), which may host multiple pluggable databases (PDBs), thus allowing multiple databases (PDBs) to run using DBCS on VM.
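
As a quick illustration, once connected to the CDB on a VM DB system, you can list its PDBs with a query like the following (the PDB names you see will vary):

sqlplus -s / as sysdba <<'EOF'
-- List the pluggable databases hosted by this container database
select name, open_mode from v$pdbs;
EOF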


CAUTION    If you implement RAC on two VM nodes to provide a highly available clustered system, bear in mind that the underlying compute nodes are still VMs and may potentially share some physical infrastructure. To ensure that the same network, storage, or physical server failure does not affect both nodes, place each node in a different fault domain.

Network Requirements for DBCS

DBCS on VM and bare metal systems require several network resources. You should be familiar with the networking concepts described in Chapter 3. This discussion does not apply to DBCS on Exadata, which has a very specific network configuration.

DBCS provisions a compute instance. Compute instances require a subnet in a Virtual Cloud Network (VCN). IP addresses are allocated to the compute node from the subnet. Therefore, a DB system requires a VCN with at least one subnet, which is either AD-specific or regional. The subnet may be private or public. The route table and security list used by the subnet also requires some configuration to support secure database traffic routing. An Internet gateway may be attached to the VCN to support access over the Internet. A Dynamic Routing Gateway (DRG) may be attached to the VCN to support access over FastConnect or IPSec VPN. A service gateway to access object storage is also commonly attached to VCNs. Object storage, described in Chapter 5, is frequently used in DB systems for storing database backups, patches, migration-related data files, and other software repositories.

This combination of network resources supports many different topologies, but two common architectures are described next.

Public Subnet with Internet Gateway

Database systems that must be accessible over the Internet require the following resources:

•   Public subnet   Oracle recommends a regional subnet but an AD-specific subnet works as well.

•   Internet gateway   Used for routing traffic between the subnet and the Internet.

•   Service gateway   Used for routing traffic between the object storage service and the subnet.

•   Route table   Two rules are required: one for egress traffic to 0.0.0.0/0 targeting the Internet gateway, and a second for the object storage service in your region targeting the service gateway.

•   Security list   Two security lists are recommended, but you could customize one. Oracle recommends keeping the default security list in your VCN with the basic rules required by the DB system, such as ingress SSH and general egress from the DB system, plus a custom security list catering to SQL*NET traffic if DB clients are allowed to connect, along with a rule to allow Oracle Notification Services (ONS) and Fast Application Notification (FAN) event traffic. ONS and FAN are Oracle database services. For monitoring your database, ports for Oracle Enterprise Manager Database Control or EM Express may also need to be opened.


CAUTION    It is unusual to place a database on a publicly accessible server, so be wary of unintentionally exposing your databases.

Private Subnet with Dynamic Routing Gateway

Database systems that must not be accessible over the Internet require the following resources:

•   Private subnet   Allocates private IP addresses to the DB system nodes.

•   Dynamic routing gateway   This is required if you are connecting to another network, such as your on-premises network, to allow other clients or systems to connect to the database.

•   Service gateway   To reach object storage.

•   NAT gateway   Required to access public endpoints not supported by the service gateway.

•   Route table   Several rules may be required: one for egress traffic to 0.0.0.0/0 targeting the NAT gateway, another for the object storage service in your region targeting the service gateway, and another for the on-premises network CIDR targeting the DRG.

•   Security list   As for the previous option, two security lists are recommended but you could customize one. Rules may include ingress SSH on port 22 and SQL*NET on port 1521 and general egress from the DB system. Oracle recommends a rule to allow ONS and FAN Event traffic on port 6200.

Exercise 6-1: Configure a Public Subnet with Internet Gateway for Your DB System

In this exercise, you create a new VCN with a public subnet; you then create a service gateway and configure your default route table and security list. You may use an existing VCN and adapt the exercise accordingly.

1.   Sign in to the OCI console and choose your compartment.

2.   Navigate to Networking | Virtual Cloud Networks and choose Create Virtual Cloud Network. Specify a name, like demoVCN, and a compartment, and choose to create a VCN only. Specify a CIDR block, like 10.0.0.0/16, check Use DNS Hostnames In This VCN, and choose Create Virtual Cloud Network.


3.   Navigate to Networking | Virtual Cloud Networks | demoVCN and choose Create Subnet. Specify a name, subnet type (regional or AD-specific), and a CIDR block that is part of the VCN CIDR chosen earlier, like 10.0.0.0/24. Choose the VCN’s default route table, security list, and DHCP options. Ensure that the subnet is public, check Use DNS Hostnames In This Subnet, and choose Create Subnet.


4.   Navigate to Networking | Virtual Cloud Networks | demoVCN | Service Gateways and choose Create Service Gateway. Specify a name, like SG1, and a compartment, and choose the region-specific object storage service. Then choose Create Service Gateway.


5.   Navigate to Networking | Virtual Cloud Networks | demoVCN | Internet Gateways, and choose Create Internet Gateway. Specify a name, like Internet Gateway demoVCN, and a compartment, and choose Create Internet Gateway.


6.   You now have a VCN, a subnet and service, and Internet gateways. It’s time to configure the routing. Navigate to Networking | Virtual Cloud Networks | demoVCN | Route Tables and choose Default Route Table for demoVCN. Select Add Route Rule. Choose Internet Gateway as the target type, 0.0.0.0/0 as the destination CIDR block, and your compartment; then choose the target Internet gateway created previously and select Add Route Rules. Add another route rule, selecting Service Gateway as the target type, your OCI region-specific object storage as the destination service, and your compartment, and select the service gateway and choose Add Route Rules.


7.   Next you add an ingress rule for incoming SQL*NET traffic to the security list. Navigate to Networking | Virtual Cloud Networks | demoVCN and choose Default Security List for demoVCN. Select Add Ingress Rules and specify a stateful rule that allows incoming traffic to TCP port 1521. You may restrict the source CIDR range to specific IP addresses or expose this to the public Internet, which is not typically done in production environments. Select Add Ingress Rules.


8.   The last networking prerequisite to be met is to add an egress rule allowing instances in the VCN to access the object storage service. Navigate to Networking | Virtual Cloud Networks | demoVCN | Default Security List for demoVCN | Egress Rules. Select Add Egress Rules, choose Service as the Destination Type, and specify a stateful rule that allows outgoing traffic to your regional object storage service on secure TCP port 443. Select Add Egress Rules.


You are all set to create a new DB system on a VM in this VCN.

IP Address Space and DNS Requirements for Bare Metal and VM DB Systems

DB systems must conform to several network guidelines.

•   Always specify a VCN domain DNS label, a subnet domain DNS label, and a hostname prefix when creating a DB system. When combined, these values constitute the node’s fully qualified domain name (FQDN).

•   On clustered RAC systems, the node number is appended to the hostname prefix automatically to create unique, numbered FQDNs for each RAC node.

•   FQDN resolution is provided by the default Internet and VCN Resolver along with resolution of important resources including SCAN names for databases and the Oracle YUM repository endpoints.

•   When creating a RAC system, Oracle Clusterware on each node communicates over a private interconnect using the CIDR range 192.168.16.16/28. Ensure that your subnet does not overlap with this CIDR block as this will cause problems with the interconnect.

•   Ensure that the IP address space of VCNs in multiple regions does not overlap, to simplify communication between DB systems in different VCNs.

•   Single-node bare metal or VM database systems require at least four IP addresses in a subnet, so the minimum subnet size is /30. Three IP addresses are reserved for each subnet by OCI networking services and one IP address is required for the DB node.

•   A two-node RAC VM cluster requires ten IP addresses. Each node requires two IP addresses. The SCAN listener requires three IP addresses and three IP addresses are reserved for each subnet by OCI networking services. For ten IP addresses, the minimum subnet size is /28, which can allocate sixteen IP addresses.
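
The sizing arithmetic above is easy to verify with a quick shell sketch:

# A /28 subnet provides 2^(32-28) = 16 addresses; OCI reserves 3 per subnet.
prefix=28
total=$(( 2 ** (32 - prefix) ))
echo "/$prefix provides $total addresses, $(( total - 3 )) of them assignable"

Sixteen addresses comfortably cover the ten required by a two-node RAC VM cluster, whereas a /29 provides only eight.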

Exercise 6-2: Create a DB System on a VM

In this exercise, you create a DBCS system on a VM using the OCI console. Note that database licensing is discussed later in this chapter. However, here are some basics to get you through this exercise. If your organization has unused licenses, you may choose the BYOL option. If not, be prepared to be charged for the license. Bearing this in mind, the cheapest options are recommended in this exercise.

1.   Sign in to the OCI console and choose your compartment.

2.   Navigate to Bare Metal, VM, and Exadata and choose Launch DB System.

3.   Provide a display name, like DEMODB, and choose an AD where resources such as the VM compute instance underlying the database will be created.

4.   Explore the relationships between shapes and editions. Notice that if you choose bare metal or Exadata as the shape type, your shape options change accordingly. If you choose a shape that supports RAC, such as VM.Standard2.2, you may increase the node count to 2, causing the database software edition to automatically change to EE-EP. If you set the software edition to SE, your choice of available VM shapes is reduced to those with eight OCPUs or fewer.

5.   Choose a VM shape type and an appropriate shape, like VM.Standard2.1. Notice that the total node count is 1. You may not create a RAC database system using this VM shape.


6.   Choose a software edition, such as Standard Edition, and your license type: either with the license included (OCI cost is higher) or bring your own license (BYOL). When the instance is not in use, you may stop it and you will be charged only for the total storage allocated, so keep this in mind when choosing the initial storage allocation. For an initial available storage request of 256GB, you will be charged for total storage (grayed-out) of 712GB. DBCS on VM storage is block storage, and once the node is created you may investigate exactly how the total storage is allocated. The primary components are two 256GB ASM disks for the DATA and RECO disk groups and about 200GB for the /u01 filesystem. Specify an SSH public key as you would for any compute instance.


7.   Expand the Advanced Options and optionally choose a fault domain. In a RAC database system, it is important to specify multiple fault domains. Specify the network information for this DB system. In this example, a public subnet is chosen, but databases would typically reside in a private subnet. Specify a hostname prefix. This will be resolvable by the default Internet and VCN Resolver.


8.   Specify a database name and choose a version, for example demodb and 18.3.0.0. By default, only the base releases are listed. Checking the Display All Available Versions checkbox expands the list to also include versions that have been patched or updated.

9.   Optionally, specify a PDB name. By default, unless the version chosen is 11.2.0.4, the database will be created as a CDB. Specify the DB admin password that is set for the SYS and SYSTEM users as well as the TDE wallet and PDB Admin.


10.   Don’t check the Enable Automatic Backup checkbox.

11.   The database workload section may be either OLTP or DSS. If Standard Edition is chosen, then this defaults to OLTP. This choice influences whether the database configuration is biased toward transactional workload with lots of random data access activity (OLTP) or toward a decision support system (DSS) workload that typically has large scanning activity.

12.   The Advanced options enable you to specify the database character set and national character set. Choose Launch DB System to begin the provisioning process. This takes a while and timing varies.


13.   The provisioning usually completes in under an hour. DBCS creates a compute instance, known as a database node, based on the shape you choose and lays out an Oracle Linux operating system, with packages and configuration required for a database system. Block storage is allocated as per your storage specification and attached to this instance. This block storage is not visible through the Block Storage | Block Volumes console interface or through the API. Grid Infrastructure is deployed. A database is created as per your choices.

14.   Navigate to Bare Metal, VM, and Exadata, and choose DEMODB. The resources deployed include one node, one database, and any patches that are tested and available for you to apply. The DB system information is summarized, and the demodb database is listed here.


EXAM TIP    You cannot run a clustered database (RAC) using DBCS on bare metal. RAC databases are supported on two-node VM systems and on Exadata.

Exercise 6-3: Connect to the Database System with SSH, SQL*Plus, and asmcmd

In this exercise, you connect to the database node and connect to the database using the SQL*Plus utility.

1.   Sign in to the OCI console and choose your compartment.

2.   Navigate to Bare Metal, VM, and Exadata | DEMODB | Nodes and take note of the public IP address of the host.


3.   Using the private key paired with the public key you provided when creating the DB system, initiate an SSH connection to this server using an SSH client like PuTTY.

4.   Once connected as user opc, confirm the hostname and the operating system version, examine the filesystem layout, check for any Oracle PMON processes, and get familiar with your new database server.


5.   Notice that there are PMON background processes for the ASM instance +ASM1, the ASM proxy instance +APX1, and the demodb database instance. This is the database instance we wish to connect to, so switch the user from opc to oracle, confirm that the ORACLE_SID variable is set correctly, and connect using SQL*Plus.


6.   Obtain the TNS connection entry that is preconfigured in the database home directory. The default location is $ORACLE_HOME/network/admin/tnsnames.ora. A representative entry is shown after this exercise.


7.   Notice in Step 4 that the ASM instance runs as the Linux user named grid. Switch the user back to opc; then switch user to grid, and figure out the directory location of the grid ORACLE home.


8.   Set the required environment variables and run the asmcmd list disk groups command. Notice that 256GB is allocated to both the DATA and RECO ASM disk groups. Your output may differ if you chose a larger storage allocation when you created the DB system.

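For reference, a typical entry in $ORACLE_HOME/network/admin/tnsnames.ora on a DBCS node has the following shape. The hostname, domain, and service name here are illustrative placeholders; substitute the values from your own file:

demodb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.demosubnet.demovcn.oraclevcn.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = demodb_phx1ab.demosubnet.demovcn.oraclevcn.com)
    )
  )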

Exercise 6-4: Connect to the Database System with SQL Developer

In this exercise, you connect to the database instance using the SQL Developer tool from your PC across the Internet to this database. SQL Developer is freely downloadable from Oracle and this exercise assumes you have already installed SQL Developer or a similar Oracle client.

1.   In SQL Developer, choose File | New | Database Connection, and select OK.

2.   Provide a connection name, like demodb. Supply a database username. The default administrator accounts SYS and SYSTEM may be used. If using SYS, change the role from default to SYSDBA.

3.   If you add a TNS entry to your client’s TNS resolution mechanism, usually a local tnsnames.ora file, you can change the connection type from Basic to TNS and select the network alias from a drop-down list. With a Basic connection type, you are required to provide a hostname, port number, and SID or service name.

4.   You will need to resolve the TNS connection string obtained in Step 6 of the previous exercise in order to successfully connect to your database. If your client PC can resolve the FQDN hostname in the tnsnames.ora file, then this may be used; otherwise, use the public IP address for the database node used for your SSH connection as the hostname. The default port is 1521 and the service name is obtained from the tnsnames.ora file.

5.   Choose Test and look for the Success status on the bottom left of the window.


6.   Save the connection by selecting Save, and launch a connection by choosing Connect. You are now connected to your DBCS database.

dbcli

DBCS nodes on VM and bare metal are preinstalled with the database CLI or dbcli. This is an OCI-specific utility not available with on-premises installations. The dbcli must be run as the root user and is located in the /opt/oracle/dcs/bin directory. The dbcli operates on all Oracle software homes, and reports, lists, configures, and executes many operations, including the following:

•   Creating backups and recovering databases

•   Rotating and managing TDE keys

•   Managing database storage

•   Performing upgrades

•   Configuring Swift object storage

•   Creating, executing, and scheduling jobs

You can run the dbcli describe-system command on the demodb system created in an earlier exercise to display a summary of the DB system configuration and version information.


Logs from dbcli are maintained in the /opt/oracle/dcs/log directory. The dbcli is occasionally updated with new features. It is good practice to periodically update the dbcli using the cliadm update-dbcli command, and to check the progress of the update with the dbcli describe-job --jobid command.


Use dbcli -h for a full list of available commands.
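
A few read-only commands give a feel for the utility; run them as root on a DB node (identifiers and output will vary by system):

/opt/oracle/dcs/bin/dbcli describe-system    # summarize the DB system and its components
/opt/oracle/dcs/bin/dbcli list-databases     # list databases along with their dbcli IDs
/opt/oracle/dcs/bin/dbcli list-jobs          # review recent automation jobs and statuses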

DBCS Backups

DBCS backups may be unmanaged or managed. Unmanaged backups are standalone backups taken independently using either RMAN or dbcli. DBCS provides several managed options for backing up databases. Exadata DB backups are taken in a different manner than on VM and bare metal DB systems and are discussed later. DBCS backups are encrypted using the key used for Transparent Data Encryption (TDE) in the database. It is important to back up the TDE wallet separately because a backup cannot be restored unless you have the correct TDE key. Although DBCS restore and recovery are not discussed explicitly, these are converse operations, and once your backups are configured, the restore and recovery approach must be consistent with the backup approach. For example, if you use dbcli to create a backup, it is best to use dbcli to restore and recover the database or TDE wallet.


EXAM TIP    High-level knowledge of DBCS backup options is measured in the exam. The DBCS backups discussion goes into much greater detail, but it is my belief that understanding, using, and testing backup and recovery are critical for adoption of DBCS.

Unmanaged Database Backup Using dbcli

When a standalone backup is taken with dbcli, you may store the resulting backup in the Fast Recovery Area (FRA) on disk or in an object storage bucket. The interface to the object storage bucket is an OpenStack Swift object storage bucket. This allows you to store backups in any Swift object store, and allows on-premises and other Cloud infrastructure to store data in the OCI object store using an open, uniform interface.

The database must be in archivelog mode. When backing up to disk with dbcli, the backups are stored by default in the Fast Recovery Area (FRA) that resides in the RECO disk group, but you may configure your disk backups to point to other locations. The following are the high-level steps for taking backups to disk using dbcli:

1.   Create a backup configuration resource with a disk backup destination.

2.   Associate the backup configuration with the database to be backed up.

3.   Create the backup.
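
A minimal sketch of these three steps follows. The names are placeholders, the IDs must be copied from your own listing output, and flag spellings vary between dbcli releases, so verify each command with its -h help:

dbcli create-backupconfig -n diskbkup -d DISK -w 1        # disk destination, 1-day recovery window
dbcli list-backupconfigs                                  # note the ID of the new configuration
dbcli update-database -in demodb -bi <backupconfig_id>    # associate the config with the database
dbcli create-backup -i <database_id> -bt Regular-L0 -c Database    # take a level 0 backup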

These are the high-level steps required to back up a DBCS database to object storage using dbcli:

1.   Create an object storage bucket to store backups.

2.   Generate an Auth token using the console or API and retain the password.

3.   Create an OCI user with privileges limited to the bucket.

4.   Create an object store swift resource using the OCI username, the Auth token password, and the Swift endpoint (one per region, of the format https://swiftobjectstorage.<REGION NAME>.oraclecloud.com/v1; for example, https://swiftobjectstorage.ca-toronto-1.oraclecloud.com/v1).

5.   Create a backup configuration resource with an object storage backup destination using the object store swift name.

6.   Associate the backup configuration with the database to be backed up.

7.   Create the backup.
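
Sketching steps 4 through 7 with dbcli (the tenancy, user, bucket, region, and IDs are placeholders; the Auth token is supplied as the password, and flags should be verified with -h):

dbcli create-objectstoreswift -n swiftobj -t mytenancy -u backup_user \
  -e https://swiftobjectstorage.ca-toronto-1.oraclecloud.com/v1
dbcli create-backupconfig -n ossbkup -d OBJECTSTORE -o <objectstoreswift_id> \
  -c db-backups -w 1
dbcli update-database -in demodb -bi <backupconfig_id>
dbcli create-backup -i <database_id> -bt Regular-L0 -c Database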


CAUTION    When you use dbcli or RMAN to back up a database, a backup configuration is created and associated with the database. When you use the console or the API to back up the database, a different backup configuration is created and associated with the database. This may cause previously configured unmanaged backups to fail.

Exercise 6-5: Make a Disk-Based Database Backup Using dbcli

In this exercise, you configure and take a disk backup of the demodb database created in an earlier exercise.

1.   Obtain details about the database to be backed up using dbcli.


2.   Create a backup configuration using the dbcli create-backupconfig command, providing a backup name, destination type of disk, and a recovery window or backup retention period of 1 day. Using the list-backupconfigs command lists the available backup configurations.


3.   The backup configuration must be associated with a database using the dbcli update-database command, specifying either the database name or database ID and the backup configuration ID.


4.   Associating the backup configuration with the database updates metadata in the database configuration so it is advisable to check on the progress of this association job before taking the backup using the dbcli describe-job command. This handy command can be used for any task created with a job ID.


5.   Once the database update completes successfully, you are ready to run the backup. If you take future backups to disk, you are not required to create new backup configurations or to associate the configuration with your database. If you create a new configuration, say, to back up to object storage or to use the OCI console or API, that new configuration must be associated with the database. Only one configuration may be associated with a database at a time. To initiate a level 0 backup, you may use the dbcli create-backup command, providing the backup type, component to be backed up, database ID, and, optionally, a descriptive backup tag.


6.   Describe the backup job using its job ID to determine the progress of the backup. You may also query the backup status using the get-status-backup command and schedule the backup using the schedule-backup command.


CAUTION    The dbcli utility exposes and relies on many identifiers, including job IDs and database IDs. These should not be confused with Oracle Cloud IDs (OCIDs). Also be mindful that at the time of this writing, backups taken with the dbcli utility do not appear with the listings of backups taken using the web console.

Exercise 6-6: Back Up Your TDE Wallet to Object Storage Using dbcli

In this exercise, you create an object storage bucket for DB backup files. To connect dbcli on your DBCS node to the object storage, and to also allow non-DBCS instances to place files in an object storage bucket, an objectstoreswift resource is created and used to set up a backup configuration. The backup configuration is associated with a DB instance, and then any type of backup (DB or TDE wallet) may be taken and stored in your object storage bucket.

1.   In the OCI console, navigate to Object Storage | Object Storage and choose Create Bucket. Provide a name like db-backups, choose a standard storage tier, choose your preferred encryption setting, and click Create Bucket.

2.   In order for dbcli on the DBCS compute node to access object storage, it must first authenticate with OCI. An authentication token is created in OCI and credentials provided to dbcli. You may use any OCI account for this exercise, but in your organization, you may want to set up IAM users and limited privileged groups with permissions only on backup-related object storage buckets and use these accounts for backup configurations. To create an authentication token using the console, navigate to Identity | Users and choose a user with privileges on the db-backups bucket.

3.   Choose Auth Tokens from the list of resources and select Generate Token. Provide a description, such as “backups,” and click Generate Token. The token is a string of characters that behaves like a password. Copy the string to a safe location. Once the dialog displaying the token is closed, it will not be shown again.


4.   An objectstoreswift resource must be created by providing the dbcli create-objectstoreswift command with a name, your OCI tenancy, username and password (authentication token), and a Swift endpoint URL. The dbcli list-objectstoreswifts command may be used to list your objectstoreswift resources.


5.   Create a backup configuration using the dbcli create-backupconfig command, providing a backup name, the destination type of object store, and a recovery window or backup retention period of 1 day. Describe the job to view the tasks required to create the object storage backup configuration.


6.   The list-backupconfigs command lists both of the available backup configurations.


7.   The new backup configuration must be associated with a database using the dbcli update-database command, specifying either the database name or database ID and the backup configuration ID.


8.   Once the database update completes successfully, you are ready to run the backup. If you wish to take future backups to disk, you need only update the database to use the disk-based configuration. To create a backup of the TDE wallet, use the dbcli create-backup command, providing the backup type, component to be backed up, database ID, and, optionally, a descriptive backup tag.


9.   You may encounter a DCS-10045 validation error, which is usually accompanied by a more descriptive message. Several iterations of testing have yielded inconsistent results; all testing resulted in successful backups, and most completed without any validation errors. It may be that a delay in finalizing the backup configuration results in a validation error when the backup is initiated.

10.   Using the console, navigate to Object Storage | Object Storage | db-backups and confirm that your backup has been written to the correct bucket.


Unmanaged Database Backup Using RMAN

Using Oracle Recovery Manager (RMAN) for taking database backups is probably familiar to most DBAs. RMAN is widely used and trusted for backup and recovery of Oracle databases. The dbcli backups used in the previous sections make use of RMAN under the hood as well.

The database must be in archivelog mode. When backing up to disk with RMAN, just like unmanaged backups with dbcli, by default backups are stored in the Fast Recovery Area (FRA) in the RECO disk group but may be stored in other locations. The following are the high-level steps for taking backups to disk using RMAN:

1.   Set RMAN configuration parameters.

2.   Create the backup with RMAN.

These are the high-level steps required to back up a DBCS database to object storage using RMAN:

1.   Create an object storage bucket to store backups.

2.   Generate an Auth token using the console or API, and retain the password.

3.   Create an OCI user with privileges limited to the bucket.

4.   Install the backup module opc_install.jar file. This step is similar to creating an object store swift resource as done previously for use with dbcli. In fact, these steps do the same thing, and the previous configuration done for dbcli backups to object storage is picked up automatically.

5.   When installing the backup module for the first time, you must provide an OCI username, the Auth token password, and the Swift endpoint (one per region, of the format https://swiftobjectstorage.<REGION NAME>.oraclecloud.com/v1; for example, https://swiftobjectstorage.ca-toronto-1.oraclecloud.com/v1), as well as the object storage container, an encryption wallet directory, the location of the SBT library, and an initialization parameter file used by RMAN.

6.   Configure RMAN.

7.   Create the backup.
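
As a sketch of step 5, installing the backup module looks broadly like the following; the username, token, bucket, directories, and region are placeholders, and parameter names should be confirmed against the backup module documentation:

java -jar opc_install.jar \
  -host https://swiftobjectstorage.ca-toronto-1.oraclecloud.com/v1 \
  -opcId 'backup_user' -opcPass '<auth_token>' \
  -container rman-backups \
  -walletDir /u01/app/oracle/admin/rman/wallets \
  -libDir /u01/app/oracle/admin/rman/lib \
  -configFile /u01/app/oracle/admin/rman/config/opcdemodb.ora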

When a new DBCS system has been deployed, you may SSH to a DB node, sudo to oracle, and review the default RMAN configuration using the RMAN SHOW ALL command.


You may create an RMAN backup to disk as you would have done with your non-DBCS databases, using the default disk channel to write backups to the FRA or to other file systems.
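
For example, a simple full backup to the FRA over the default disk channel, run as the oracle user with the instance environment set, might look like this:

rman target / <<'EOF'
# Back up all datafiles plus archived redo, using the default disk channel
BACKUP DATABASE PLUS ARCHIVELOG;
EOF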


The next exercise describes the steps to create backups to object storage buckets using RMAN. If you completed the previous exercise, you may create a backup to the same object storage bucket as before with no further configuration changes. Note the RMAN channel used is SBT_TAPE_1, indicating the use of the backup module. Listing the backups shows the media location prefixed with swiftobjectstorage.


Exercise 6-7: Back Up Your Database to Object Storage Using RMAN

In this exercise, you create an object storage bucket for DB backup files, and an authentication token for a relevant OCI user with appropriate permissions to access the object storage bucket.

1.   In the OCI console, navigate to Object Storage | Object Storage, and choose Create Bucket. Provide a name like rman-backups, choose a standard storage tier and your preferred encryption setting, and select Create Bucket.

2.   To create an authentication token using the console, navigate to Identity | Users and choose a user with privileges on the rman-backups bucket.

3.   Choose Auth Tokens from the list of resources and select Generate Token. Provide a description, such as “backups,” and click Generate Token. The token is a string of characters that behaves like a password. Copy the string to a safe location. Once the dialog displaying the token is closed, it will not be shown again.

4.   Install the backup module. SSH to the DB node, sudo to the oracle user, and navigate to the directory that contains the backup module opc_install.jar file. To back up on-premises databases to object storage, this module must be downloaded separately. The backup module requires several parameters, including an OCI username and password (token), the object storage bucket name (container), pre-created directories for the Oracle wallet, the backup module library, the RMAN config file, and the Swift endpoint URL for your region. In this exercise, the /u01/app/oracle/admin/rman/wallets and the /u01/app/oracle/admin/rman/config directories were pre-created to store the Oracle wallet, RMAN library, and configuration files.


5.   Connect to RMAN after setting the environment variables for your DB instance, and set several RMAN configuration defaults. Setting the default device type to channel SBT_TAPE associated with the backup module library allows future RMAN backups for this database to be taken to object storage by default. Backups stored in the Cloud must be encrypted, so database backup encryption must be enabled and an encryption password must be set (it is set to “password” in this example for illustrative purposes, but a strong password is recommended).


6.   Create a backup of the system datafile. Notice that the control file and spfile are automatically backed up to object storage as well each time an RMAN backup completes.


7.   Using the console, navigate to Object Storage | Object Storage | rman-backups and confirm that your backup has been written to the correct bucket.


EXAM TIP    DBCS backups may be stored in both local and object storage. Local backups use FRA space and are fast and optimized, but durability is low because the backup is not available if the DB system is down. Storing backups in object storage is recommended for high durability, availability, and performance. An ideal solution may be a hybrid approach where backups are kept locally for fast point-in-time recovery and a backup copy also resides in object storage.

Managed Database Backup Using the Console

The OCI console provides a powerful interface for creating full backups on demand and for enabling automatic backups. Both automatic incremental backups and standalone backups created through the console are known as managed backups, and they may be listed, deleted, and used to restore, recover, and even create new database systems with a few clicks.

When you create a DBCS system, you may enable automatic backups, which have the following convenient characteristics.

•   Daily incremental backups are taken to object storage within a backup window.

•   The backup window is between midnight and 6:00 AM, in the time zone of the DB system’s region or UTC for older DBCS systems created before November 21, 2018.

•   A level 0 backup is initially created, followed by daily incremental level 1 backups until the weekend, when the cycle repeats, starting with a fresh level 0 backup.

•   Automatic backups are retained in object storage for 30 days, unless the database is terminated, which results in the automatic incremental backups being removed as well.

You may create on-demand backups at any time, which are retained until they are explicitly deleted.

Exercise 6-8: Create a Standalone Managed Backup Using the Console

In this exercise, you create a full standalone backup using the console.

1.   Sign in to the OCI console and choose your compartment.

2.   Navigate to Bare Metal, VM, and Exadata and choose one of your databases. In the databases section for that database, hover on the ellipses and choose Create Backup.


3.   Provide a descriptive name for the backup and choose Create Backup. In this example, a full standalone backup is being taken prior to applying a patch. This is a typical DBA activity. The backup runtime depends on the size of the database.


4.   Navigate to Bare Metal, VM, and Exadata | Standalone Backups and view the list of explicit full backups. If you hover on the ellipses adjacent to the standalone backup, you may either delete the backup or create a new database system using this backup. If your current DBCS system is a VM, then the new DB must reside on a new system. If your current DBCS system is bare metal, you may choose to create another database on the bare metal system based on this backup.

Images
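The same standalone backup can also be requested from the OCI CLI; a hedged sketch, assuming a configured CLI and using a placeholder database OCID (flag names current as of this writing):

    oci db backup create \
      --database-id ocid1.database.oc1..<unique_id> \
      --display-name pre_patch_backup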

Exercise 6-9: Enable Automatic Incremental Backups Using the Console

In this exercise, you enable automatic backups for an existing database using the console.

1.   Sign in to the OCI console and choose your compartment.

2.   Navigate to Bare Metal, VM, and Exadata | DB Systems | DB System Details | Database, and choose Enable Automatic Backup.

Images

3.   You should receive a prompt when enabling automatic backups reminding you that a new backup configuration will be created and that any unmanaged backups using other backup configurations may no longer be reliable. Choose OK to confirm and enable automatic backups.

Images

4.   Navigate to Bare Metal, VM, and Exadata | DB Systems | DB System Details | Database | Backups to see the new automatic backups being created.
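Automatic backups can likewise be toggled from the OCI CLI; a hedged sketch (the --auto-backup-enabled flag name is current as of this writing):

    oci db database update \
      --database-id ocid1.database.oc1..<unique_id> \
      --auto-backup-enabled true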

Database recovery using managed backups is performed by navigating to the database to be repaired through the console—Bare Metal, VM, and Exadata | DB Systems | DB System Details | Database—and choosing Restore. Figure 6-7 shows three options available when restoring a database through the console.

Images


Figure 6-7   Restore database using managed backups.

You may restore to the latest backup, minimizing data loss, or perform point-in-time recovery (PITR) to a previous timestamp or to a particular system change number (SCN). Explaining system change numbers is beyond the scope of this discussion. Simply stated, an SCN functions like an internal clock marking the passage of time in a database. There are many ways of correlating SCNs with available backups, including creating an RMAN backup report.

Creating an RMAN backup report with dbcli is straightforward. Connect to a database node, switch user to root, and run the dbcli create-rmanbackupreport command to create the report and then the dbcli describe-rmanbackupreport command to identify the log file location. Viewing the file shows various RMAN commands used to report on the backups.

Images
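A hedged sketch of the dbcli session just described (the -i flag is an assumption based on the usual dbcli describe-* convention; exact flags vary by tooling version):

    sudo su -
    dbcli create-rmanbackupreport
    dbcli describe-rmanbackupreport -i <rman_backup_report_id>
    cat <log_file_location_reported_by_describe>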

Exadata Database Backups

DBCS backups on Exadata are conceptually identical to DBCS backups on VMs and bare metal. Both local FRA and object storage are also available for backups. Backup configurations and Swift object storage setup are required as well. So, what is different? Essentially, a different set of tools and some additional infrastructure setup are required to configure DBCS backup on Exadata. Remember that an Exadata system is associated with several networks. There is a private InfiniBand network as well as a management network between all compute nodes and storage cells. There is a public client-facing subnet and a private backup subnet used for backup-related traffic. The infrastructure prerequisites include the following:

•   A service gateway is required for the VCN where the Exadata VMs reside for access to object storage. Apart from storing backups, object storage access is used for patch and tooling updates.

•   A static route must be added to each Exadata compute node to direct traffic over the backup network to object storage to reduce network traffic on the client data network.

•   An egress security list rule may be required to allow traffic to the object storage service from the backup subnet.

•   An object storage bucket is used to store backups.

•   An OCI user and an associated auth token. I recommend that the user belong to a group with limited privileges to access backup-related object storage buckets.

Once the infrastructure prerequisites are in place, a backup configuration file is created that specifies the backup destination, schedule, and retention period. The backup configuration file is created by the root user connected to the first compute node in the Exadata DB system. The node may be identified by using the olsnodes -n command while connected as the grid user. The backup configuration file, named exadb1_backup.conf for example, should be placed in the directory /var/opt/oracle/ocde/assistants/bkup/ and is used to schedule backups to local storage as well as to a pre-created object storage bucket. The bkup command, also located in directory /var/opt/oracle/ocde/assistants/bkup/, is used to associate the configuration file with a database and create a crontab entry to schedule the backup:

Images
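A hedged sketch of the association step, with an illustrative configuration file (the bkup_oss_* keys and the flag syntax shown here are assumptions based on typical tooling and vary by version):

    # exadb1_backup.conf (illustrative)
    bkup_oss=yes
    bkup_oss_url=https://swiftobjectstorage.<region>.oraclecloud.com/v1/<namespace>/<bucket>
    bkup_oss_user=<oci_user>
    bkup_oss_passwd=<auth_token>

    # as root on the first compute node (identify it with: olsnodes -n)
    cd /var/opt/oracle/ocde/assistants/bkup
    ./bkup -cfg exadb1_backup.conf -dbname <database_name>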

The default backup configuration adheres to Oracle DBCS backup best practices. A weekly full backup (level 0) is accompanied by daily incremental (level 1) backups. All backups to cloud storage are encrypted. Backups are retained based on the destination:

•   Local FRA storage only Backups are retained for seven days.

•   Object storage only Backups are retained for 30 days.

•   Both local and object storage The most recent backups are retained locally for seven days and on object storage for 30 days.

DBCS on Exadata provides the bkup_api tool for managing backups. Located in the /var/opt/oracle/bkup_api/ directory, the bkup_api tool provides access to the backup API and includes options to query the status of an Exadata scheduled backup, list available backups, delete local backups, initiate an on-demand backup, and create long-term backups. Once the backup configuration file is associated with a database, that database may be backed up accordingly.
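A hedged sketch of common bkup_api invocations (the subcommand names are assumptions based on the options described above and vary by tooling version):

    cd /var/opt/oracle/bkup_api
    ./bkup_api bkup_status --dbname=<database_name>   # status of the scheduled backup
    ./bkup_api list --dbname=<database_name>          # list available backups
    ./bkup_api bkup_start --dbname=<database_name>    # initiate an on-demand backup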

DBCS Patching

Relevant patches for your DBCS system are automatically discovered and listed in the OCI console. If a new patch is released, previous patches are still available through the console. You are not forced to apply the latest available patch, but you cannot downgrade to a previous patch. You may pre-check a patch before applying it to identify any potential downstream issues.

The two categories of patching are DB system patches and database patches, corresponding to Grid Infrastructure and Database patches respectively. Each must be patched independently, and each has a pre-check option. The sequence of patch application is important because there may be dependencies between them. For example, a specific database patch may only be successfully applied if a dependent DB system patch has already been applied.

Navigate to Bare Metal, VM, and Exadata | DB Systems | DB System Details | Database | Patches to view a list of database patches applicable to your database version. Hovering over the ellipses menu adjacent to a listed patch reveals the patch Pre-check and Apply options. Figure 6-8 shows a DB system patch that has been successfully prechecked and applied. During a suitable maintenance window, a backup is typically taken before applying a patch.

Images


Figure 6-8   Pre-check completed and applying a patch to a DBCS database

When patching single-node database systems, downtime may be incurred while the relevant patch steps execute. RAC databases on VM and Exadata are usually available during patching because most patches are rolling patches, applied to one node at a time while the remaining nodes stay available.

Advanced Database Features

Cost, security, and availability are key factors to consider when choosing a database system. Although many advanced database options and features are available, this section concentrates on the following topics related to these key factors. Database licensing, data encryption, and two high-availability options, RAC and Data Guard, are discussed within the context of DBCS.

Database Licensing

Oracle database software license costs are non-trivial; the database software license is often significantly more expensive than the infrastructure required to host a database system. There are currently two metrics used for licensing database software: named user plus (NUP) and processor-based. A named user is essentially an individual or device that connects to the database. There are many legacy licensing metrics, which are not discussed here.

Processor-Based License Metric for Oracle Database

Understanding processors in the Oracle license landscape can be confusing, but the following facts help distinguish the Oracle processor license metric from related terms such as CPUs, sockets, cores, threads, OCPUs, and database editions.

•   Modern servers have one or more CPU sockets on their motherboards.

•   Modern CPUs have multiple physical cores.

•   A physical core may have hyperthreading enabled. On Intel Xeon CPUs, it is common for hyperthreading to be enabled, providing two threads per core.

•   Oracle license rules specify a table of core processor licensing factors depending on the underlying hardware architecture. For x86-based systems (all OCI servers), the core processor licensing factor, often called the core multiplier, is 0.5. For several non-Oracle hardware vendors, the core multiplier is 1.

•   The number of processor licenses required is the product of the number of sockets, the number of cores per socket, the number of threads per core, and the core processor licensing factor. A server with two sockets occupied by four-core Intel Xeon CPUs with hyperthreading enabled requires eight (2 sockets × 4 cores × 2 threads × 0.5 core factor) processor licenses.

•   An OCPU is an Oracle Compute Processing Unit, equivalent to one hyperthreaded core of an x86 CPU. OCI guarantees that both threads are dedicated to your compute shape and not shared with other VMs that may reside on the same server.

•   Oracle Standard Edition (SE2) may only be licensed on servers with a maximum of two sockets. Two-node RAC databases on SE2 are allowed on-premises but each node must have at most one socket. Additionally, SE2 RAC database instances may use a maximum of sixteen threads per instance.

•   A vCPU, or virtual CPU, is a measure used by several other cloud providers, including AWS and Azure. On AWS and Azure VMs, vCPUs refer to threads that are not guaranteed to be dedicated to your VM. This implies that, in the best-case scenario where no threads are shared at that moment, AWS and Azure VMs with two vCPUs are equivalent to one OCPU on an OCI VM.

•   If hyperthreading is enabled on AWS (RDS and EC2) and Azure, for non–Standard Edition software, two vCPUs (two threads) require one processor license. If hyperthreading is not enabled, then one vCPU requires one processor license.

Compute infrastructure varies from standard VM shapes with one OCPU to bare metal and Exadata servers with tens to hundreds of OCPUs. OCPUs are important because they are directly linked to the metrics used for licensing Oracle database software. Table 6-4 shows a few OCI shapes as well as several non-OCI servers and the processor licenses required. Note that the core factors for processors on non-OCI Cloud infrastructure are subject to change. The processor licenses required value is calculated using this formula:

Processor Licenses Required = Sockets × Cores per socket × Threads per core × Core factor

Images

Table 6-4 Processor Licenses Required by Various Cloud Shapes and On-Premises Servers

Named User Plus (NUP) License Metrics for Oracle Database

If you have a relatively low number of named database users, it may be more cost-effective to license the database software using the NUP metric. NUP licenses are subject to certain minimums per processor and are also dependent on the database edition being licensed. Here are several NUP-related facts to consider.

•   An EE license requires the greater of the total number of users, or a minimum of 25 NUPs per processor.

•   An SE2 license requires the greater of the total number of users, or a minimum of 10 NUPs per processor.

For example, if you want to license EE on the BM.Standard.2.52 OCI shape, as shown in Table 6-4, 52 processor licenses are required. If you have 1,000 named users using the databases on this server and this number is not going to change much in the future, you may benefit from licensing using the NUP metric. As there are 52 licensable processors and you need to license a minimum of 25 named users per processor, you will need 25 × 52 = 1,300 NUP licenses. Because 1,300 is greater than 1,000, you will need to purchase 1,300 NUP licenses. It is then up to you to determine if 1,300 NUP licenses is more affordable than 52 processor licenses.

DBCS Licensing

DBCS systems on OCI comprise network, storage, and compute infrastructure as well as Oracle database software. Oracle offers two models for licensing a database system on DBCS and also offers Pay As You Go and Monthly Flex pricing options.

•   Bring your own license (BYOL)   This model includes the infrastructure cost but not the database software license costs. This model is suitable for migrating existing on-premises systems to OCI and allows the pre-existing license investment to be reused. Only like-for-like licenses may be used. If you have four Enterprise Edition (EE) processor-based licenses on-premises, you may use these for EE databases on DBCS.

•   License included   This model includes both the infrastructure cost and the database software license costs. Using the online pricing calculator, you can easily estimate the cost of DBCS with included licenses.

As discussed earlier in this chapter, OCI offers several additional database software editions, including EE-HP and EE-EP, that do not exist as on-premises offerings. EE licenses cannot be used for EE-HP and EE-EP database editions.

For database systems on VMs, you are charged for each OCPU hour that the system uses, rounded up to the nearest hour. If your DBCS on VM system is stopped, you do not pay for OCPU, just for storage and any other infrastructure being used.

For database systems on bare metal, you are charged hourly for the hosted environment regardless of whether it is running or not. When scaling up by adding additional OCPUs (in multiples of two, limited to a maximum of eight OCPUs for DB SE), you are charged for each OCPU using a per-hour metric, rounded up to the nearest hour.

Pricing for DBCS on Exadata works in the same way as bare metal except there is an additional upfront cost, which varies depending on the shape of the Exadata system chosen.

Data Encryption

Security is baked into the OCI platform and DBCS is no exception. Transparent data encryption (TDE) on premises is an additionally licensed component of the Advanced Security Option available for Oracle DB Enterprise Edition. All DBCS database software editions, including Standard Edition, are available with TDE.

TDE offers encryption at several layers of the database stack. Within the database, user tablespaces as well as temporary and UNDO tablespaces, tables, and columns may be encrypted. All data stored in encrypted tablespaces is automatically encrypted. Backups and Data Pump exports may be encrypted before being transmitted over networks. Standby databases built with Data Guard work seamlessly with encrypted datafiles. Basically, TDE works transparently with other complementary database options and features.

When you use TDE, there is a small performance impact, as additional CPU cycles are required to encrypt and decrypt data. Hardware-based encryption and decryption acceleration is available on Exadata and many newer CPUs, including the Intel Xeon CPUs widely used in the OCI infrastructure, reducing the performance impact.

TDE relies on master encryption keys that are stored by default in an Oracle PKCS12 file-based keystore known as an Oracle wallet. You may optionally store your TDE master encryption keys in the OCI Key Management service, in Oracle Key Vault, or in an external key management system (KMS).

When a DBCS system is created, a database administrator password is required. This password initially serves as the password for the TDE wallet, which, of course, should be changed periodically. All user tablespaces are encrypted by default using TDE. The TDE master encryption keys should be rotated periodically. You may use the dbcli update-tdekey command to rotate keys for your databases.

Images
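A hedged sketch of a key rotation (the -dbname and -p flags are assumptions; check dbcli update-tdekey -h for the exact syntax in your tooling version):

    sudo su -
    dbcli update-tdekey -dbname <database_name> -p    # prompts for the TDE wallet password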

CAUTION    The same TDE encryption key used to encrypt an RMAN backup or Data Pump export must be used for decryption. Before rotating keys, ensure that you back up your TDE wallet, tag the backup appropriately, and understand the implications for previously encrypted backups and exports. Resist the temptation to keep the TDE wallet backup with your database backup so you have it on hand if you need to restore in an emergency, as this defeats the point of using TDE. Keep the lock separate from the key.

High Availability

While Oracle databases are renowned for their reliability, performance, and scalability, they run on physical infrastructure that may occasionally fail. Infrastructure failure scenarios may be grouped into three broad categories: node failure, rack failure, and data center failure.

A node failure occurs when infrastructure internal to a server or compute node malfunctions, leading to a node being rebooted or shut down in an unplanned manner. Examples of root causes of node failures include faulty boot devices, bad memory modules, and power supply interruption. Rack failure occurs when an outage affects all servers in a specific rack or cage. This is uncommon but may occur when there is loss of power to a particular rack or when overheating in the cage prompts an emergency unplanned shutdown of all its servers. A data center comprises many server racks; a data center outage is usually associated with a catastrophic disaster, like a flood or earthquake, that has damaged or severed power or network access to the entire facility.

Fortunately, OCI has been architected with high availability (HA) as one of its core design objectives, and many mitigating strategies have been implemented to support HA during a failure event. All OCI servers have many redundant components that tolerate failure of a power supply, network interface, and disk drives. OCI VMs may be created in separated infrastructure known as fault domains within a data center or availability domain to tolerate rack failure. There are also many regions with multiple ADs and strong high-speed networks between ADs and regions. These mitigating strategies lend themselves to two powerful software-based HA solutions, RAC and Data Guard.

RAC

To understand RAC, first a quick refresher for non-Oracle DBAs about the Oracle Server. An Oracle Server is composed of a database instance and a set of database files. A database instance is a collection of memory structures and operating system background processes. When an Oracle Server is shut down, the database files are closed and the instance is stopped. The database files persist: they occupy disk space, and they contain the tables and rows of data that will be accessed the next time the database files are opened. When the instance stops, its memory structures are deallocated and all its background processes are stopped. The compute power (mainly CPU and memory resources) used by the instance is released and may be used by other programs. When the instance starts on a node, memory structures are created, background processes are started, and the instance attempts to mount the database files and open the database.

Real Application Clusters (RAC) allows multiple instances, each running on its own compute node, to simultaneously mount and open the same set of database files that are stored on shared storage. RAC depends on Oracle Grid Infrastructure (GI) software to coordinate the communication between the RAC nodes and the shared storage.

OCI makes RAC available on DBCS through a two-node VM cluster and on Exadata. RAC databases can tolerate the complete loss of one compute node in the cluster with no interruption to the database availability. RAC also supports HA during patching and node maintenance exercises. When setting up RAC on a two-node VM, ensure that each RAC node resides in a separate fault domain.

When GI is configured, a set of listeners, known as Single Client Access Name (SCAN) listeners, is configured to provide highly available database client network traffic management. SCAN listeners route incoming connections to available local listeners based on algorithms that consider the load profile of specific instances to ensure even load distribution. In the event of a RAC node failure, the SCAN listeners direct traffic to the operational node.

RAC on OCI is available only with EE-EP edition. To create a RAC system on DBCS using the console, navigate to Bare Metal, VM, and Exadata, and choose Launch DB System. Figure 6-9 shows the familiar DB System Information dialog screen.

Images


Figure 6-9   Create a two-node RAC database on VM.

Provide a display name, choose an AD, and choose a VM or Exadata shape. The VM shape must have at least two OCPUs. Choose two nodes to enable RAC, and the Oracle database software edition will default to EE-EP. Choose the storage to be allocated to your database; both RAC nodes will access the same shared storage. Provide an optional cluster name. As shown in Figure 6-10, ensure that at least two fault domains are chosen.

Images


Figure 6-10   Choose two fault domains for resilience of RAC node VMs to node failure.

The remaining options have been discussed previously, so supply the required information and choose Launch DB System. After a short while, incredibly, a fully functional, highly available RAC database system is provisioned and ready for business.
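If you prefer scripting, the same two-node RAC launch can be driven from the OCI CLI; a hedged sketch with illustrative values (flag names current as of this writing; verify with oci db system launch --help):

    oci db system launch \
      --compartment-id ocid1.compartment.oc1..<unique_id> \
      --availability-domain <AD_name> \
      --subnet-id ocid1.subnet.oc1..<unique_id> \
      --shape VM.Standard2.2 \
      --node-count 2 \
      --database-edition ENTERPRISE_EDITION_EXTREME_PERFORMANCE \
      --db-name RACDB \
      --hostname rachost \
      --admin-password '<strong_password>' \
      --ssh-authorized-keys-file ~/.ssh/id_rsa.pub \
      --initial-data-storage-size-in-gb 256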

Exadata has been engineered with RAC and HA in mind, as discussed earlier. Notably, to facilitate high-speed cluster internode communications, a redundant high-speed InfiniBand network fabric is implemented.

Data Guard

While RAC mitigates against node failure and ensures HA, Data Guard mitigates against node, shared storage, and even AD failure in multi-AD regions. Data Guard is a mature replication technology available with all Oracle Enterprise Edition versions. All Data Guard configurations consist of a primary database and at least one standby database. Each system is a fully operational Oracle server with nodes, instances, and an independent set of database files. The primary and standby systems are almost always on separate infrastructure to provide business continuity in case there is a failure of the primary system. Two modes of Data Guard replication may be configured:

•   Physical standby   This is the most pervasive form of Data Guard. Primary and standby databases are block-for-block physical copies of one another. Damaged database blocks on the primary may be repaired using blocks from the standby. RMAN database backups taken on the standby may be used to restore and recover the primary database as they are interchangeable. Changes in the primary are captured through the redo log mechanism and shipped to the standby. Physical standby databases are typically opened in mount mode while changes (in the form of redo log entries) are applied through a process known as managed recovery. Basically, the standby database is in a constant state of being recovered using the latest changes being made to the primary (“perpetual recovery”). Physical standbys support many advanced features including Active Data Guard (where the standby is open in read-only mode) and snapshot standby databases.

•   Logical standby or SQL Apply   The less popular, seriously understated little sibling of physical Data Guard. Primary and logical standby databases start off being physically identical. Instead of block replication by applying the redo stream directly onto the standby database, the redo is mined, and SQL transactions are extracted and applied using SQL Apply, a process that runs SQL changes in the correct sequence to maintain data integrity between the primary and standby. Logical replication replicates only a subset of the database that includes user-specified schemas and objects. The logical standby can be opened read-write. Indexes that do not exist on the primary can be created on the logical standby, and writes are tolerated as long as primary data is not impacted.

Each database in a Data Guard architecture is designated a role. One database occupies the primary role while one or more databases occupy the standby role. Logical standby databases are not discussed further; the rest of this section pertains to physical standby databases, which are the most pervasive and are offered out of the box with all the automation prebuilt on DBCS.

Data Guard replication may be configured in one of three modes, which determines how the redo stream of changes from the primary database is shipped and applied on the standby:

•   Maximum Performance   Optimized for performance. Replication is asynchronous and the primary database does not wait for confirmation that the captured redo stream has been successfully shipped or applied on the standby. Replication interruptions or issues do not impact the primary database.

•   Maximum Availability   Optimized for availability and protection. Under normal operation, changes are not committed on the primary database until all changes are written to the redo logs and confirmation that the captured redo stream has been successfully shipped or applied on the standby has been received. Issues with the replication cause the system to operate as if in maximum performance mode and the availability of the primary database is not impacted.

•   Maximum Protection   Optimized for data protection and integrity of the standby. No data loss is possible. Replication is synchronous and the primary database waits for confirmation that the captured redo stream has been successfully shipped or applied on the standby. Replication interruptions or issues cause the primary database to shut down.

A role transition occurs when one of the standby databases assumes the primary role. The following are two types of role transitions:

•   Switchover   A planned role transition, often performed for OS patching or hardware maintenance on the primary. No data loss is incurred during a switchover because the primary and standby databases are completely synchronized when the roles are transitioned. There is a small window (a few minutes at most, usually less than a minute) of unavailability during a switchover. The “old” standby database is now the primary (open read-write) while the “old” primary acts in a standby role. Managed recovery is paused while maintenance is performed without interruption to the business. This provides HA in a planned manner.

•   Failover   This is an unplanned role transition and, as the name suggests, a failure has occurred. One of the standby databases is designated as the new primary. Point-in-time recovery (PITR) occurs. Depending on the Data Guard configuration discussed earlier, minimal (max performance or max availability) or no data loss (max protection) may occur during failover. Max protection mode ensures that any last in-flight transactions are only committed on the primary if they are committed on the standby, so no data loss occurs.

Ideally, the primary and standby database systems reside in separate availability domains in a multi-AD region, but at the very least each system should reside in a separate fault domain.

In a two-system Data Guard configuration, one is designated the primary role while the other takes on the standby role. Applications and users connect to the primary database and, as changes are made on the primary database, they are captured and shipped over the network to the standby database, where they are applied. This is a classical physical standby configuration.

DBCS automates the configuration of Data Guard. If you choose any EE database, you can easily enable Data Guard by choosing Enable Data Guard from the Databases options menu, as shown in Figure 6-11. This prompts you for a set of responses for the creation of the standby database system that are almost identical to those provided when the primary database was created.

Images


Figure 6-11   Enable Data Guard for a DBCS system.

Several important considerations apply to Data Guard systems. Setting up a regional subnet in multi-AD regions simplifies networking configuration and allows primary and standby database systems to be placed in different ADs. Data Guard also provides a monitoring agent known as an observer that ideally runs on different infrastructure from both the primary and standby DB systems. The observer can be configured to determine when a failover is required and to automatically perform the failover. This provides a huge HA advantage as no human involvement is required.

Enabling Data Guard with DBCS masks the configuration complexity required in a manual Data Guard setup. Here, OCI takes care of creating the new system, installing the same binaries, duplicating the database and configuring the broker. Note that both the primary and standby database systems must be on the same VCN. For more advanced Data Guard scenarios such as having a third standby database or spanning multiple VCNs, Data Guard must be set up manually. In this case, the DBA has to do all of the setup work for provisioning the infrastructure and database and for configuring Data Guard.
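Enabling Data Guard is also scriptable through the data-guard-association CLI; a hedged sketch with illustrative values (flag names as of this writing; verify with oci db data-guard-association create with-new-db-system --help):

    oci db data-guard-association create with-new-db-system \
      --database-id ocid1.database.oc1..<unique_id> \
      --database-admin-password '<primary_admin_password>' \
      --protection-mode MAXIMUM_PERFORMANCE \
      --transport-type ASYNC \
      --display-name standby-dbsys \
      --hostname standbyhost \
      --availability-domain <AD_name> \
      --subnet-id ocid1.subnet.oc1..<unique_id>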

Data Guard may be enhanced by licensing the additional Active Data Guard (ADG) option. ADG allows standby databases to be opened in read-only mode, supporting the offloading of expensive queries and read-intensive activities such as exports and backups to the standby. ADG in 19c and later supports limited read-write operations on the standby database, which opens up some exciting HA possibilities.

Images

NOTE    Consider the us-ashburn-1 region, which has at least three ADs. A best-practice HA architecture may include a primary RAC database system in AD1 with each RAC node in a separate fault domain, and a standby RAC database system in AD2, again with both standby nodes in separate fault domains. Active Data Guard has been implemented for backup and query offloading. A Data Guard observer has been configured in AD3 to orchestrate a fast-start failover (FSFO) if issues with the primary RAC database system are detected.

Autonomous Databases

Oracle autonomous database (ADB) systems offer a hosted and managed option with an underlying Exadata service and the ability to dynamically scale up and scale down both the CPUs and storage allocated to your VM. This single feature unlocks a great number of possibilities, chief among them the game-changing idea of sizing your environment for average workload, scaling up during peak periods, and scaling back down once the workload normalizes. ADB is a pluggable database and is available on a shared or dedicated Exadata infrastructure.

Relying on decades of internal automation, ADB uses advanced machine learning algorithms to balance performance and availability with cost, automating many tasks including indexing, tuning, upgrading, patching, and backing up the database. HA is achieved through the use of a RAC database (when scaling to more than 16 OCPUs), triple-mirrored disk groups, redundant compute and network infrastructure, and nightly backups.

The following are two autonomous database variants:

•   Autonomous Transaction Processing   The ATP workload type targets OLTP databases, and configuration parameters are biased toward high-volume random data access typical of OLTP systems. ATP databases are also suitable for mixed workloads, including some batch processing, reporting, IoT, and machine learning, as well as transaction processing.

•   Autonomous Data Warehouse   The ADW workload type targets analytic systems, including data warehouses and marts, data lakes, and large machine learning databases, with configuration parameters biased toward high-volume ordered data scanning operations.

Autonomous Transaction Processing (ATP) and Autonomous Data Warehouse (ADW) each support a different workload type, but they share the underlying infrastructure and tooling. Essentially, they differ in database initialization parameters and automation options. ADW stores data in a columnar format while ATP uses a traditional row store. The following sections discuss the autonomous database service, which encompasses the ATP and ADW cloud services.

Create an Autonomous Database

While the CLI and API may be used for creating ADBs, the console is used to exemplify the underlying principle of autonomous databases: It should be easy for anyone to provision a secure, powerful database system that is largely self-managing.

When you create an ADB, you choose the initial CPU core count and storage allocation. At the time of this writing, the minimums are one OCPU and 1TB of storage, charged on an hourly basis. There is also a free-tier serverless ADB with one OCPU and 20GB of storage available to each tenancy. Be mindful of the licensing model you intend to use. The more CPU cores you choose, the more processor licenses you will use. If you choose the BYOL model, allocate your licenses accordingly. If you subscribe to new database software licenses at database create time, be prepared to be charged accordingly. Regardless of the licensing model you choose, you are implicitly charged for the DBCS infrastructure and storage allocated and used by your ADB.

You cannot choose the Oracle software version used in your ADB. As of this writing, ATP and ADW databases are provisioned with Oracle database version 18c with the 19c release planned for general availability in due course. While you can choose your ADB region and compartment, other options are limited. For example, you do not have the option to specify the AD or VCN in which to provision these databases and all ADBs are exposed to the Internet.

Exercise 6-10: Create an ATP Database Using the Console

In this exercise, you create an ATP database using the OCI console. If you have not provisioned the free tier ADB in your tenancy, this may be a good time to try it.

1.   Sign in to the OCI console and choose your compartment.

2.   Navigate to Autonomous Transaction Processing and choose Create Autonomous Database. Choose an ATP workload type, a compartment, a display name like ATP Demo, and a database name. Select an initial CPU core count and an initial amount of storage to allocate.

Images

3.   Provide a password for the administrator account. This is probably the toughest part as the password must conform to strict password rules. Select a license type, optionally specify tags, and choose Create Autonomous Database.

Images

4.   Navigate to Autonomous Transaction Processing | ATP Demo and review your newly provisioned database after a few minutes.

Images
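Creating the same ATP database is also a one-liner from the OCI CLI; a hedged sketch with illustrative values (flag names current as of this writing):

    oci db autonomous-database create \
      --compartment-id ocid1.compartment.oc1..<unique_id> \
      --db-name ATPDEMO \
      --display-name "ATP Demo" \
      --db-workload OLTP \
      --cpu-core-count 1 \
      --data-storage-size-in-tbs 1 \
      --license-model BRING_YOUR_OWN_LICENSE \
      --admin-password '<strong_password>'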

Connecting to an Autonomous Database

Autonomous databases are hosted on OCI infrastructure that is not accessible to you. You have access to the database through a SQL*Net, OCI, or JDBC thin connection over the default listener port 1521. You cannot SSH to the compute node, and no operating system access is permitted. Oracle Cloud Operations maintains the infrastructure. There are two connectivity options: through the public Internet, or through FastConnect with public or private peering (or both), which provides private connections from on-premises networks. Databases in your VCN in private subnets can use service gateways to access ADB instances in the same region without routing data through the public Internet. ADBs maintain access control lists (ACLs) that may be used to “whitelist” IP addresses, preventing access from IP addresses not on the list.

Your ADB is highly secure, encrypting all data. When connecting to your ADB, certificate-based authentication and Secure Sockets Layer (SSL) are utilized, allowing only authenticated connections to access the database. When the ADB is created, an encrypted key is stored in a wallet on the server. This key is also required on each client that needs to connect to the ADB. Clients using Oracle Call Interface, including ODBC and JDBC OCI as well as “thin” JDBC, use credential wallets to connect to the ADB.

A privileged user called ADMIN is created by default. When creating an ADB in the previous exercise, you provided a password for the ADMIN user. Once you connect to the ADB as the ADMIN user, you may create other users. Once the ADB has been provisioned, you may download the wallet credentials file, either through the console or using the API.

Autonomous databases make use of a database resource manager, a feature embedded in the database, which prioritizes access to database resources by defining a resource manager plan that allocates resources to resource consumer groups. Up to five consumer groups are pre-created for each ADB. Five database connection services are created that map to these consumer groups using the format <ADB NAME>_<CONSUMER GROUP>. Two OLTP-specific services, TPURGENT and TP, are available for ATP only. Users must choose the appropriate database service based on their performance and concurrency requirements. The administrator manages available resources for users by assigning connection options to each user. The five consumer groups and, consequently, associated application connection services may be described as follows:

•   TPURGENT   Highest priority for time-critical transaction processing operations, supporting manual parallelism on ATP databases.

•   TP   Typical for OLTP, non-parallel operations on ATP databases.

•   HIGH   High priority for batch and reporting parallel operations that are subject to queuing.

•   MEDIUM   Typical for batch and reporting parallel operations that are subject to queuing.

•   LOW   Lowest priority for non-parallel batch and reporting operations.

ADB resources are broadly allocated to these services to control parallelism and concurrency as well as priority using resource manager plan shares that define the proportion of the CPU and IO resources allocated to each consumer group, as shown in Table 6-5.

Images

Table 6-5 ADB Services Priority, Parallelism, and Concurrency Allocation

You may update the resource manager plan shares using the service console.
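Command-line clients use the same wallet mechanism that SQL Developer uses in the next exercise; a hedged sketch for SQL*Plus, assuming an ADB named ATPDEMO and a downloaded wallet zip (paths and service name are illustrative):

    unzip wallet_ATPDEMO.zip -d /home/oracle/adb_wallet
    export TNS_ADMIN=/home/oracle/adb_wallet    # directory with tnsnames.ora and sqlnet.ora
    # edit sqlnet.ora so WALLET_LOCATION points at this directory, then:
    sqlplus admin@atpdemo_tp

Connecting to atpdemo_tpurgent, atpdemo_high, or another service instead simply maps the session to a different consumer group.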

Exercise 6-11: Connect to an ATP Database Using the SQL Developer

In this exercise, you connect to the ATP database created in the previous exercise using SQL Developer.

1.   Sign in to the OCI console and navigate to Autonomous Transaction Processing | ATP, and choose DB Connection. Notice the five Connection Strings that may be used for your connections.

Images

2.   Choose Download, provide a password, and save the zip file that contains several files including the client wallet in a secure location.

Images

3.   In SQL Developer, create a new database connection. Provide a connection name and the ADMIN username and password you provided when creating the ATP database. Choose Cloud Wallet as the connection type, specifying the zip file (still compressed) as the configuration file. Notice the five service options you can select.

Images

4.   Test and Save the connection before connecting to your ATP database.

Back Up and Recover an Autonomous Database

Autonomous databases are backed up automatically at night as encrypted backups using the database backup cloud service. Backups are retained for 60 days before being removed. You may create manual backups that are stored in an object storage bucket in your tenancy. The following high-level steps are required to create manual ADB backups:

1.   Create a standard tier object storage bucket named backup_<ADB name>.

2.   Create an auth token for your OCI tenancy.

3.   Connect to the database using an Oracle DB client as a privileged user, and create a credential in the database to connect to your object storage account using the DBMS_CLOUD.CREATE_CREDENTIAL procedure with your OCI user name and auth token. Set the default credential property to the newly created credential.

4.   Set the default bucket property to your object storage tenancy URL using the format: https://swiftobjectstorage.<region>.oraclecloud.com/v1/<object_storage_namespace>.

Once these steps are done, manual backups may be taken through the console or through the API.
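The credential and property steps translate to SQL along these lines; a hedged sketch run as the ADMIN user (the credential name and URL values are illustrative):

    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DEF_CRED',
        username        => '<oci_user_name>',
        password        => '<auth_token>');
    END;
    /
    ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.DEF_CRED';
    ALTER DATABASE PROPERTY SET default_backup_bucket =
      'https://swiftobjectstorage.<region>.oraclecloud.com/v1/<object_storage_namespace>';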

ADB restore and recovery may be initiated using the API or through the console by navigating to Autonomous Database | <ADB> | Actions | Restore. You may specify an exact timestamp for point-in-time recovery (PITR) to restore the database to a previous point in time (if there are sufficient backups), or you may search your backups and choose a specific backupset to restore and recover.

ADB clones may be initiated using the API or through the console by navigating to Autonomous Database | <ADB> | Actions | Create Clone. You can make a full clone or a metadata clone that creates a new database with all source database schema metadata but with no source data. This is very useful for test and development scenarios.

Operating an Autonomous Database

ADB operations have many unique features. Several operational topics related to CPU and storage scaling, tuning, patching, and monitoring are discussed in this section.

ADB CPU and Storage Scaling

Sizing database servers for peak load is no longer required with autonomous databases. With the APIs or the console, it is a trivial matter to scale up both your CPU count and storage allocation. The ADB in Figure 6-12 may be scaled from 1 CPU to 128 CPUs and from 1TB to 128TB of storage. This is an extreme example to demonstrate the sheer power of this feature.

Images


Figure 6-12   CPU core and storage scaling with ADB

You have an ace up your sleeve with this feature for the inevitable 2:00 AM pager alert indicating the system is overloaded while processing a business-critical workload. Preauthorizing scale-up parameters with your management can help you brute force your way out of a crisis. Perhaps a better way to pitch this is to consider the case of many organizations that run overprovisioned servers in anticipation of busy periods, such as Black Friday for retailers or student registration week for universities. These databases may now run with just the CPU cores (and licenses) they require, burst during busy times, and scale down during quiet times.
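Scaling is likewise scriptable; a hedged sketch using the OCI CLI (flag names current as of this writing):

    oci db autonomous-database update \
      --autonomous-database-id ocid1.autonomousdatabase.oc1..<unique_id> \
      --cpu-core-count 8 \
      --data-storage-size-in-tbs 4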

ADB Tuning Considerations

Many traditional database tuning tasks have been automated in ADB. The following list is presented as a subset of topics traditional DBAs may need to be aware of as they move to ADB.

•   Optimizer statistics are gathered automatically during direct-path loading operations. However, manual gathering of statistics is still possible, if required.

•   Optimizer hints and parallel hints in SQL statements are disabled by default when you migrate to ADB, but these may be explicitly re-enabled if required.

•   Schema tuning is highly automated with regular enhancements and improvements being released. Tables are automatically partitioned, compressed, configured for in-memory, and tablespaces are automatically created as required. However, manual schema tuning is still possible but should only be used with careful consideration.

•   Automatic indexing simplifies tuning significantly and is based on an expert system that automatically generates and validates candidate indexes before implementing them.

•   All tuning is auditable, so you can keep track of tuning changes being automatically implemented in the database.

ADB Patching and Security

Oracle Cloud Operations supports and maintains the Exadata systems that underlie autonomous databases in each region. The Oracle Cloud Operations support team is responsible for patching, which occurs as a quarterly full-stack rolling patch of the equipment firmware, operating systems, VMs, clusterware, and database using a pretested gold image. As of this writing, you are informed about upcoming patches but you cannot influence the patching schedule.

Strict isolation ensures that your data is accessed only by authorized users. Unified Audit is enabled, which tracks login failures, user modifications, and database structural changes. All data is encrypted at rest and with SSL over the wire. In exceptional circumstances, you may grant Cloud Operations access to your system using a “Break Glass” function.

ADB Administration and Monitoring

The ADB Service Console provides a growing set of features geared specifically for administering and monitoring your autonomous database. You may monitor CPU and storage utilization as well as current SQL statement activity and metrics, as shown in Figure 6-13.

Images


Figure 6-13   ADB service console for monitoring and administration

The service console supports many features, including the following:

•   Download the client credentials (wallet).

•   Set resource management rules for CPU and IO shares.

•   Set runaway criteria for each consumer group to limit query runtimes and query IO.

•   Download the Oracle instant client, which includes a free set of tools for developing and migrating data to Oracle ADB.

•   Manage Oracle machine learning (ML) users.

Images

EXAM TIP    It is important to understand the differences between the ADB OCI Console and the ADB Service Console. Specifically, you may be asked about what activities are done from the ADB OCI Console versus the ADB Service Console.

Database Migration

Migrating your on-premises databases to the Cloud is a daunting prospect for many organizations. As cloud platforms mature and concerns about data security on cloud infrastructure morph into an understanding and appreciation of the security best practices implicitly implemented by the major cloud vendors, more organizations are embracing the shift to hosting their databases in the Cloud. The benefits are numerous, ranging from agile, scalable compute shapes to push-button automation for backups, patching, building RAC systems, and setting up Data Guard on DBCS, to name a few.

Database migration fundamentally involves transporting data from a source system to a target system. Online data transport requires network connectivity between the source and target systems. In some cases, offline data transport between source and target systems may be preferable. In the sections that follow, I outline online and offline data transport options before discussing various migration approaches.

Connectivity

Source databases being migrated to DBCS are often Oracle databases that reside on-premises. Oracle databases hosted by other cloud providers, such as RDS on AWS, and non-Oracle databases may be migrated to DBCS. If you wish to transfer the source database over the network, it is typical to extend your corporate network to include your VCN. There are three network-based options available for migrating your database to DBCS.

•   FastConnect   Offers private network extension with predictable latency and is usually the preferred option for migrating databases as well as for supporting client traffic that originates from your source network.

•   IPSec VPN   Offers a slower-than-FastConnect, encrypted, secure tunnel over public networks and extends your source network to your VCN.

•   Internet Gateway   The third and least preferable approach is to connect your source network with your VCN through an Internet gateway, passing network traffic over the less secure, unpredictable public Internet.

For very large databases or when migrating to DBCS from a source environment with poor network infrastructure, OCI offers a Data Transfer Service.

Data Transfer Service

Consider a 500TB database on-premises that you wish to migrate to DBCS. You may set up a VCN with a DRG and a 10 Gbps FastConnect connection (fastest available as of this writing) to your on-premises network. Regardless of the migration strategy selected, moving 500TB of data over the 10 Gbps infrastructure takes almost five days. This duration gets worse as the database size increases or the network throughput decreases.

Oracle offers an offline, secure data transfer service to upload data to your designated object storage bucket at no additional cost. The approximate, rounded-up time required to transport databases of various sizes, using various presumed sustained network throughput rates, is compared with the data transfer service throughput in Table 6-6. FastConnect (10 Gbps) is preferred in most cases when it is available.

Images

Table 6-6 Comparison of Wired Data Transport with Offline Data Transfer Service

Using the data transfer service to transport data to OCI object storage entails several steps:

1.   A transfer job is created through the OCI console by navigating to Object Storage | Data Transfer | Create Transfer Job. Provide a descriptive job name and select the object storage upload bucket that will ultimately contain the transferred data.

2.   Data Transfer Appliances are Oracle-owned equipment capable of storing up to 150TB of data but are not available in all regions. Oracle ships the appliance to you. You transfer your data onto the appliance and send it back to Oracle.

3.   The default Data Transfer Disk option entails purchasing your own hard drives, which are returned to you once the data has been uploaded. The Disk option is available in all regions. Select Create Transfer Job.

4.   When using the disk option, determine the storage required and attach a hard disk drive (HDD) to a host with access to the data to be transferred.

5.   Download and use the Data Transfer Utility (DTU) to create an encrypted transfer disk.

6.   Copy the data to be transferred to the mount point created by the DTU.

7.   Generate a Manifest file to create an inventory of all files to be transferred. A Dry Run report may be generated to preview the transfer and ensure there are no duplicate files or naming collision issues.

8.   Use the DTU to lock the transfer disk. This step unmounts the disk and deletes the encryption passphrase from the host.

9.   If additional HDDs are required, repeat the previous steps after attaching the HDD to the host.

10.   The OCI console or DTU may be used to create a transfer package which consists of one or more transfer disks created by DTU. Figure 6-14 shows a Data Transfer Job in the OCI console.

Images


Figure 6-14   Create a transfer job to use offline data transfer service.

11.   Initiate the shipment of the package. Once the shipment is received by Oracle, the data is uploaded to your designated object storage bucket, before the HDDs are returned to you.

Approaches to Migration

The conversation now turns to determining which migration method is best suited for moving your source databases to target databases hosted on Oracle Cloud. While you may provision OCI infrastructure to install and host a database using traditional on-premises methods, you do not benefit from the tooling and automation offered for databases on DBCS. The converse of this is that you are not restricted by the limitations of DBCS. This section focuses on migrating to DBCS, ExaCS, ATP, and ADW. The following are the primary factors that inform the migration approach:

•   Source database version   Only 11.2.0.4 and later versions of Oracle databases are supported on DBCS. Generally, the later the version of the source database, the wider the choice of migration options.

•   Special one-off patches   If your database requires any specialty patches (“one-off” patches), this may prevent the use of DBCS.

•   Source database platform   DBCS infrastructure resides on x86 servers supporting a byte ordering system known as little-endian. Platforms such as Solaris on Sparc and AIX on IBM Power use a big-endian byte order. When migrating big-endian platforms to DBCS, a byte order conversion is required, either on the source or target environment.

•   Source database size   It may take an unacceptably long time to migrate extremely large databases over networks, especially when network conditions are poor and FastConnect options are limited. OCI provides a free data transfer service that may solve this problem. Smaller databases lend themselves to network-based migration techniques.

•   Database options   You may have EE licenses on premises with the RAC option. If you plan to implement RAC on OCI, an EE-EP license is required. If you end up using an EE-EP license for your DBCS database, you may implicitly acquire additional options that should be factored into your migration strategy. EE-EP includes many powerful software options that may be used to reduce storage and improve performance and availability.

•   Source database character sets   Not all database character sets are available on DBCS. Incompatible character sets may limit your migration options.

•   Source database block size   All DBCS databases use a standard 8K default block size. Migrating databases with non-8K default block sizes may limit your migration options.

•   Multitenancy   The option to use a single container database (CDB), with multiple pluggable databases (PDB), has been available since database version 12c and is a separately licensable option on-premises if your CDB has more than one tenant (PDB). When migrating to DBCS on VM, you may want to consider using the multitenant option (available with EE-HP and EE-EP) to host multiple databases (PDBs), as DBCS on VM only supports a single database (which could be a CDB) in a single Oracle Home.

Images

CAUTION    Oracle DBA knowledge is required when performing migrations. The migration methods discussion assumes the reader is familiar with Data Pump, RMAN, PDBs, SQL Developer, and SQL*Loader.

Database migrations usually occur within a framework of competing objectives:

•   Cost   Often the single largest determinant of the migration strategy. While using change data capture and logical replication solutions may provide zero downtime migrations, these come with a hefty price tag that may be difficult to justify for migrating a low-impact database where long migration windows are possible.

•   Complexity   Simple is usually better. To avoid costs and sometimes increase migration speeds, custom scripts and tweaks may be introduced, increasing non-standardization, introducing more room for error, and requiring more testing.

•   Speed   Fastest is usually expensive. See the earlier connectivity discussion. High-speed online data transport solutions may be used to reduce migration times but are usually more costly.

•   Reliability   To migrate many databases in what is known as “lift and shift” migrations, it is common practice to cluster source database systems with similar characteristics and build reusable templates or recipes. Reliability is increased when tasks are repeatable with predictable outcomes. Introduce some automation and you may improve reliability. Go too far and you tip the scales and complexity begins to damage reliability.

•   Availability   When migrating non-production databases, maintenance windows tend to be longer since more downtime is tolerated than for production databases. Migration strategies that reduce or completely eliminate downtime increase availability of systems being migrated but are usually costly and complex.

There are many Oracle and third-party tools and solutions that may be used to migrate Oracle databases to DBCS. It is prudent to balance the often-competing objectives listed earlier to find an appropriate migration strategy for your databases. The tools and techniques discussed in the material that follows are a subset of the Oracle toolset and are well suited for migrating Oracle databases across a wide spectrum of differences. Some techniques and tools are version agnostic, while others have strict version and platform requirements.

Database migration to support cloud adoption is an area of constant innovation. Oracle is aggressively updating its tooling and automation to drive adoption of DBCS and OCI. Instead of focusing only on what is currently supported and automated, this section takes a broader view and discusses migration concepts. Implementation steps using each approach are varied and regularly updated. A presentation of the primary techniques is provided within the context that the detailed implementation steps are evolving.

Oracle automated migration solutions may be grouped into the following categories: Zero Downtime Migration (ZDM), Maximum Availability Architecture (MAA), Data Transfer Service (discussed earlier), SQL Developer, RMAN, Data Pump, Unplug/Plug, and Remote Cloning.

•   Zero Downtime Migration (ZDM) makes use of replication technologies such as Oracle GoldenGate (logical replication) or Active Data Guard (physical replication) to minimize downtime, achieving near-zero actual downtime during migration.

•   Maximum Availability Architecture (MAA) is composed of several high-availability tools and approaches, including a simple Data Guard migration as well as an advanced migration approach that performs an upgrade and a TDE conversion from both on-premises and AWS EC2 sources. The MAA solution also includes cross-platform DB migration using the Zero Data Loss Recovery Appliance (ZDLRA), which supports minimal downtime while migrating across platform endianness. GoldenGate may be used to migrate to ADW and ATP with zero downtime.

•   Oracle SQL Developer is a freely downloadable, feature-rich, integrated development environment that supports SQL and PL/SQL development, database management, and a powerful wizard-driven migration platform geared to move Oracle databases and an increasing number of third-party databases to Oracle Cloud databases.

•   Data Pump supports the export and import of data out of and into Oracle databases. Except for network mode Data Pump, exported data is unloaded from your Oracle tables into dump files. Dump files may be imported into any Oracle target database. The destination database may even be the source database if you wish to restore a table to a previous point in time or create a new table or schema based on the contents of an export dump file. Data Pump is commonly used as an additional logical backup mechanism, and individual objects are easily restored in case of any user errors. Data Pump evolved from the legacy Oracle Export and Import utilities, which are still available; it is scalable and resilient, supports many data types, and performs parallel loading and unloading. You may export your on-premises database schemas as of a specific SCN to create a consistent export backup, copy the dump files into object storage, and import them into any existing DBCS, ATP, or ADW database. For DBCS databases, the export dump files could additionally be placed on the DB node file system or even into the FRA and ASM storage.

•   RMAN full backups of source databases may be restored on target cloud databases. Iterative incremental backups of the primary, with restores and recovery on the target, may be used to considerably reduce migration downtime. RMAN forms the basis for duplicating databases from active databases or backups, as well as for cross-platform transport of tablespace backup sets, PDBs, and transportable tablespaces when used with Data Pump to handle metadata exports and imports.

•   Oracle Cloud multitenant target databases (12c and later) support two migration procedures known as Unplug/Plug and remote cloning. Each procedure may be applied to either a PDB or a non-CDB source.

•   Unplug/Plug for PDBs involves unplugging a source PDB from its parent container and plugging it into a cloud-based CDB. A non-CDB source database may also be plugged into a cloud-based CDB as a new PDB.

•   Remote cloning refers to migrating a PDB or non-CDB to a target CDB over the network.

Images

NOTE    Exadata Cloud at Customer (ExaCC) is mentioned in this chapter as a cloud migration target for the Oracle-recommended solutions, but this hybrid cloud on-premises solution is beyond the scope of this book. In a nutshell, ExaCC is based on an Exadata engineered system with a control plane running software that abstracts the lower-level machine infrastructure and limits user interaction with the machine to APIs and a cloud UI. The infrastructure is also supported by Oracle Cloud operations support.

These migration solutions are appropriate under different and sometimes overlapping circumstances. Table 6-7 outlines which migration solutions are appropriate for moving a given source database to a target database system.

Images

Table 6-7 Migration Solutions and Applicable Source and Target Versions and Systems

For example, if you have an 11.2.0.4 production database on AIX on-premises and you wish to migrate it to an Exadata cloud service database, you could use several of these solutions. Data Guard is not an option because OCI does not offer AIX and a physical standby requires matching platform endianness. ZDM is possible with GoldenGate, which does require additional software licenses. SQL Developer is an option, but for production databases, especially very large databases, there may be faster options that will reduce downtime. Data Pump is an option but may take a long time, which leads to long migration windows. RMAN is probably a good solution if GoldenGate is unavailable, as it supports cross-platform conversion and may be used to minimize the migration downtime window.

You may alternatively use any number of tools and techniques to migrate a subset of your database, or the entire database, to an Oracle Cloud system. The following topics describe several tools and techniques at a conceptual level. These approaches are not exhaustive and represent a subset of available migration options.

SQL*Loader

SQL*Loader is an Oracle utility used to load data from externally formatted files into a database. Data from the source system is extracted into flat files. A SQL*Loader control file is configured to specify the source flat files and the mappings between flat file data and pre-existing schema tables and columns. SQL*Loader, also known as a bulk loader, uses the rules and mappings specified in the control file to read flat files and write the data into database objects. Rows that do not conform to the rules are written to discard files, while rows that raise errors during loading are written to bad files.
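To make this concrete, here is a minimal sketch of a control file and a SQL*Loader invocation. The file names, table, columns, and connect string are hypothetical, and the options shown are only a small subset of the control file specification language.

-- employees.ctl: hypothetical control file mapping a comma-delimited
-- flat file to a pre-created EMPLOYEES table
LOAD DATA
INFILE 'employees.dat'
BADFILE 'employees.bad'
DISCARDFILE 'employees.dsc'
APPEND INTO TABLE employees
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(employee_id, first_name, last_name, hire_date DATE 'YYYY-MM-DD')

The load is invoked from the command line; direct=true requests a direct-path load for faster inserts:

sqlldr userid=hr@clouddb control=employees.ctl log=employees.log direct=true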

SQL*Loader prerequisites include the following:

•   Data in an external text file to be loaded must be formatted consistently.

•   Target tables must be in place before the data loading begins.

Pros include the following:

•   Data source can be any system where text may be extracted.

•   No constraints on source and target Oracle software versions.

•   Supports direct-path loading for faster row inserts.

•   Sophisticated control file specification language supports a wide variety of input file layouts.

Cons include the following:

•   Bulk loading with SQL*Loader effectively rebuilds all tables.

•   Migration durations may be lengthy.

•   Relational constraints and indexes must be enforced and built post loading.

•   Usually unsuitable if source system continues to process data changes during the migration.

Export and Import

Although the Oracle Export and Import utilities have been available since the early versions of Oracle, these legacy tools are still widely used more than 30 years after their introduction. Each database installation, regardless of the version, contains the exp and imp binary executable programs. Export is used to unload data, which includes tables, indexes, stored procedures, grants, and most database objects, into a proprietary formatted dump file that is readable by the Import utility, which loads all or subsets of the dump file into an Oracle database.

Like SQL*Loader, these legacy utilities have their place, allowing the migration of older database systems to Oracle Cloud. For example, instead of upgrading an old Oracle 7.3.4 database running on outdated, outmoded, and possibly failing equipment through multiple versions to end up with an 11.2.0.4 database as a source for migration, it may be much simpler to export the required schemas and objects, transfer the dump file to your cloud database, and import the required objects into a shiny new database.
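As a hedged illustration, the following commands export a single schema from an aging source database and import it into a cloud target. The connect strings, schema, and file names are hypothetical, and the same parameters may also be supplied through a parameter file (parfile) or a script.

exp system/password@legacydb owner=hr file=hr.dmp log=hr_exp.log
imp system/password@clouddb fromuser=hr touser=hr file=hr.dmp log=hr_imp.log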

Pros include the following:

•   Most database objects can be exported, so tables, constraints, indexes, grants, stored procedures, database links, and their dependent objects can be easily migrated.

•   All Oracle databases have these utilities, ensuring wide compatibility and unlimited version migration potential.

•   You may export the entire database (full), one or more schemas (owner), or a specific table and optionally choose the dump file to be compressed or even span multiple files of a certain volume size.

•   Utilities may be driven through a parameter file, run interactively, or scripted.

•   Dump files are platform-independent.

Cons include the following:

•   These utilities have been deprecated since version 11g, so no new feature enhancements are being added. They remain backward compatible, however, so the 18c Import utility can read a dump file exported from an Oracle 7 database using that server’s version of the Export utility.

•   Not all data types may be exported—for example, tables with long and raw (pre-blob) columns cannot be exported using exp.

•   Migration durations may be lengthy because the utilities were not designed to efficiently process very large volumes, so performance does not scale well.

•   Usually unsuitable if the source system continues to process data changes during the migration.

Data Guard

Data Guard in DBCS is discussed earlier in this chapter. This discussion describes a scenario using Data Guard physical standby databases to migrate an on-premises database to DBCS. Remember that the primary and standby databases must be on the same version of the database software with the same platform endianness.

•   Configure your VCN to connect to the network where the source database is located.

•   Create a backup of the source database and copy or ship it to an OCI object storage bucket.

•   Create a new DBCS system with the same database name as the source and an appropriate shape, and then drop the database that is automatically created by DBCA, thereby preserving the Oracle Home.

•   Register the backup that was copied to object storage and restore the backup to instantiate the standby database.

•   Configure the network parameters and test that logs are being shipped between the primary and standby databases.

•   Perform a switchover (a role transition) to activate the cloud database in the primary role, as shown in the sketch following this list.
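Assuming the configuration has been registered with the Data Guard broker, the final role transition can be as simple as the following DGMGRL sketch; the connect string and database names are hypothetical.

$ dgmgrl sys@onprem_primary
DGMGRL> show configuration;
DGMGRL> validate database cloud_stby;
DGMGRL> switchover to cloud_stby;

The validate database command confirms that the standby is ready for the role transition before the switchover is attempted.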

RMAN

RMAN offers many options for migration, including the following:

•   Cross-platform Transportable PDB   A 12c or later PDB on an endian and character set compatible source system may be easily migrated to a PDB on an existing DBCS deployment using this method. You unplug the source PDB to generate an XML file with the datafile metadata and use the RMAN command backup for transport pluggable database. Both the XML file and the backup set must be securely copied to the DBCS compute node, where the RMAN command restore all foreign datafiles extracts the datafiles. The SQL command create pluggable database then creates a new PDB on DBCS, ready to be opened.

•   Cross-platform Transportable Tablespace Backup Sets   Tablespaces from a 12c or later database on an endian and character set compatible source system may be easily migrated to an existing DBCS deployment using this method (see the sketch following this list). The user tablespaces on the source database must be placed in read-only mode before an RMAN tablespace backup is taken with the TO PLATFORM or FOR TRANSPORT clause, along with the DATAPUMP clause, to create a backup set and export dump file that must be securely copied to the DBCS compute node. The target DBCS system must be prepared by creating the schemas that own objects in the exported tablespaces before using RMAN to restore the backup set with the foreign file specification clause.

•   Transportable Tablespace with Data Pump   Create a transportable tablespace set using the RMAN command transport tablespace <tablespaces> tablespace destination '/dest' auxiliary destination '/auxdest'. The resulting set must be securely copied to the DBCS compute node. The target environment may require further preparation by creating the schemas that own objects in the transported tablespaces before using the Data Pump import command with the TRANSPORT_DATAFILES option to import the source data.
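As mentioned, here is a sketch of the cross-platform transportable tablespace backup set approach; the platform name, tablespace, and paths are hypothetical, and the exact clauses vary by version. On the source, with the USERS tablespace in read-only mode:

RMAN> backup to platform 'Linux x86 64-bit'
        format '/stage/users_xplat.bck'
        datapump format '/stage/users_dmp.bck'
        tablespace users;

On the target DBCS compute node, after securely copying both files and creating the owning schemas:

RMAN> restore foreign tablespace users
        format '+DATA'
        from backupset '/stage/users_xplat.bck'
        dump file from backupset '/stage/users_dmp.bck';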

Data Pump

The Data Pump utilities, introduced in version 10g, are a modernized replacement for the venerable Export (exp) and Import (imp) utilities. The Data Pump Export (expdp) and Import (impdp) utilities are very powerful, supporting exports and imports of entire databases, schemas, and objects, in addition to providing the backbone driver for external tables, network mode data transfer, transportable tablespaces, and transportable databases. Dump files are platform-independent, support encryption and compression, and may contain data, metadata, or both. The utilities are architected to be resilient, supporting interruption and resumption of tasks, very large data volumes, and sophisticated logging features. Both imports and exports may be run in parallel to speed up migration times.

Data Pump offers several migration options, including the following:

•   Full export and import   Source databases may be easily exported and imported into target databases. Dump files are platform-independent so an 11.2.0.3 database on AIX could be exported with Data Pump and imported into a target database on Linux using version 18c.

•   Schema and object export and import   Schemas and even specific objects may be exported and optionally remapped into different schemas during import, as shown in the sketch following this list.

•   Network mode export and import   This is a particularly powerful mechanism that uses database links to export data from one database and import into another without creating dump files.

•   Transportable tablespaces   Entire tablespaces are exported in one go. The mechanics involve setting the source tablespaces to read-only mode, exporting some metadata with Data Pump, securely copying the datafiles to the compatible (little endian) target database, setting the source tablespaces back to read-write mode, and importing the metadata dump file into the target database. This approach is much faster than exporting all the objects from the source tablespaces and then importing them, because you are physically copying the underlying datafiles to a new database.

•   Full Transportable   This technique works very much like transportable tablespaces but instead of transporting a subset of the tablespaces, all user tablespaces are exported from the source database, datafiles are securely copied, and the tablespaces are imported into the target. Full Transportable was designed with 12c target databases, including PDBs, in mind and is a fast, powerful method for migrating endian-compatible databases.
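The following hedged sketch exports the HR schema consistently as of the current timestamp and imports it into a cloud database while remapping the schema; the connect strings, directory object, and names are hypothetical.

expdp system@srcdb schemas=hr directory=DATA_PUMP_DIR dumpfile=hr_%U.dmp parallel=4 flashback_time=systimestamp logfile=hr_exp.log

impdp system@clouddb directory=DATA_PUMP_DIR dumpfile=hr_%U.dmp parallel=4 remap_schema=hr:hr_cloud logfile=hr_imp.log

In network mode, a database link replaces the dump files entirely:

impdp system@clouddb network_link=srcdb_link schemas=hr logfile=hr_net_imp.log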

Multitenant Migration Approaches

The multitenant option allows several novel approaches to migrate 12c or higher PDBs and non-CDBs to 12c or higher PDB targets in the Cloud. These migration techniques leverage the architectural segregation of administration and user tablespaces in multitenant databases. Source and target platforms must be endian compatible (little endian) and have compatible character sets.

•   Unplugging and plugging a PDB   Close and unplug the source PDB, generating an XML metadata file. Securely copy the datafiles to be plugged into the target CDB, as well as the XML file, to a compute node of the target system. Create a new PDB in the target CDB by referencing the source PDB files (see the sketch at the end of this list).

•   Unplugging and plugging a non-CDB   Set the source non-CDB to read-only mode and generate an XML metadata file using the DBMS_PDB.DESCRIBE procedure. Securely copy the datafiles to be plugged into the target CDB, as well as the XML file to a compute node of the target system. Create a new PDB in the target CDB by referencing the source PDB files. Execute the noncdb_to_pdb.sql script to clean up the SYSTEM tablespace of the new PDB.

•   Cloning a remote PDB or non-CDB   This method conceptually works almost identically to the previous two techniques, but instead of securely copying datafiles to the target, data is transferred using a database link between the source and target databases. Once the prerequisites are in place, the following SQL statement is all that is required to create a clone PDB pdbclone from the remote source PDB or non-CDB dbsrc:

create pluggable database pdbclone from dbsrc@dblink2dbsrc;
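For comparison with the remote clone shown above, here is a sketch of the unplug/plug flow for a PDB; the names and paths are hypothetical. On the source CDB:

alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/stage/pdb1.xml';

On the target CDB, after securely copying the XML file and datafiles to the compute node:

create pluggable database pdb1 using '/stage/pdb1.xml'
  source_file_name_convert = ('/u01/oradata/cdb1/pdb1/', '+DATA/')
  nocopy tempfile reuse;
alter pluggable database pdb1 open;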

SQL Developer

SQL Developer is in a state of constant evolution, with many recent features geared toward Oracle Cloud adoption. This is evident from the ease of connecting to Oracle ATP, ADW, DBCS, and other third-party databases, both in the Cloud and on-premises. Having a single tool with simultaneous connectivity to multiple databases is very handy for data migration. SQL Developer has a multitude of features and wizards geared toward migration, including tools that enable you to create Data Pump exports and even automate script generation by capturing source objects from several non-Oracle databases to reduce the migration effort. SQL Developer also has specific cloud migration wizards that include the following:

•   AWS Redshift to ADW Cloud   The AWS Redshift to ADW Cloud database migration tool connects to both your Redshift and ADW databases and simplifies the migration process.

•   SQL Developer Carts   The familiar online shopping cart concept is available in SQL Developer, providing a framework to pick and choose objects from your databases and add these to a cart. SQL Developer connections must be established to all source databases. The cart is an XML file with references to the source objects, and you may have many carts. When you are ready to migrate the contents of a cart, you choose whether DDL or data or both are to be exported for each object in the cart, and specify export options, an export data format, and an output file or file set. The output files contain either database object metadata or actual row data in scripts that may be run on the target database to complete the migration.

A popular data export format, as shown in Figure 6-15, generates INSERT statements for the objects in the cart chosen for data export. These options also generate the DDL required for the cart objects. The output SQL script may be run against a target database, where tables are created, rows are inserted, constraints are enforced, and referential integrity is maintained with minimal DBA effort. Clearly, however, this option is only practical for very small databases.

Images


Figure 6-15   Exporting SQL Developer cart objects for migration

Another interesting SQL Developer cart export format is loader, which generates a set of SQL*Loader-compatible control files and data files. These may be securely copied to a target database compute node, where SQL*Loader may be invoked to load the exported data.
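To make the INSERT export format concrete, a hypothetical excerpt of a generated cart script might look like the following; actual output varies with the chosen export options.

-- DDL generated for a cart object
CREATE TABLE regions (
  region_id   NUMBER PRIMARY KEY,
  region_name VARCHAR2(25)
);
-- row data generated as INSERT statements
INSERT INTO regions (region_id, region_name) VALUES (1, 'Europe');
INSERT INTO regions (region_id, region_name) VALUES (2, 'Americas');
COMMIT;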

Chapter Review

This chapter discussed the various form factors for running Oracle databases on OCI. Creating and managing databases using DBCS exposed you to the automation Oracle has developed to simplify administration on VM, bare metal, and Exadata cloud services. Typically time-consuming tasks such as setting up RAC and Data Guard or configuring backups are available as API commands or a few clicks in the console.

Oracle database processor-based and NUP metrics were introduced along with the license cost implications when using DBCS compared to other platforms. An in-depth discussion of the RAC and Data Guard high availability features was also provided.

Autonomous databases are a game-changing offering. The exploration of ADBs showed that ATP and ADW are simply differently configured ADBs. While ADBs carry some limitations and restrictions, there are very compelling reasons for adopting them, especially the CPU core and storage scaling as well as the AI-driven automation.

Migrating databases to the Cloud is a complex topic and a significant chunk of this chapter is dedicated to this topic. Connectivity options as well as the data transfer service were discussed before a general discussion of approaches to migration was undertaken. Source database properties are a key determinant of the migration strategy. Source and target software versions, platform endianness, and database service-level objectives further inform the approach taken when planning a migration.

The chapter concluded with a discussion of several cloud migration approaches. Legacy utilities such as SQL*Loader and the original Oracle Export and Import tools were discussed to showcase the options available for moving off legacy Oracle databases. Modern approaches to database migration using Data Guard, RMAN, Data Pump, unplugging and plugging PDBs and non-CDBs, and remote PDB cloning were also covered. An exploration of several SQL Developer tools rounded off the migration conversation. Oracle is constantly innovating to drive Cloud adoption, so stay updated with the latest automated migration solutions.

With your network, compute instances, storage options, and databases in place, you are ready to automate the deployment of the infrastructure components available in OCI, which is the subject of Chapter 7.

Questions

1.   List the Oracle Cloud database solutions that support CPU core and storage scaling.

A.   DBCS–VM

B.   ExaCS

C.   ADB

D.   DBCS–Bare metal

2.   List the Oracle Cloud database solutions that support storage scaling.

A.   DBCS–VM

B.   ExaCS

C.   ADB

D.   DBCS–Bare metal

3.   List the Oracle Cloud database solutions that include Transparent Data Encryption (TDE).

A.   DBCS–VM

B.   ExaCS

C.   ADB

D.   DBCS–Bare metal

4.   When configuring a DB using Exadata Cloud Service, you may elect to configure a SPARSE disk group. Choose any statements that are true about configuring a SPARSE disk group.

A.   SPARSE disk groups are mandatory.

B.   SPARSE disk groups reduce available storage for database files.

C.   SPARSE disk groups enable snapshot standby databases.

D.   SPARSE disk groups may be used for Exadata snapshot databases.

5.   When configuring a DB using Exadata Cloud Service, you may elect to store backups in the FRA on disk. Which statement is true regarding usable storage?

A.   Space allocated for backups in the FRA has no impact on storage available for database files.

B.   ExaCS backups are stored in an ACFS volume.

C.   Space allocated for backups in the FRA reduces the space available for the DATA disk group.

D.   ExaCS backups are stored in object storage.

6.   Which of the following statements are true?

A.   Oracle database licenses are free in OCI.

B.   On-premises Oracle database licenses may be used on OCI.

C.   Oracle database licenses are not required on Oracle Cloud.

D.   DBCS and ADB systems may be procured with a license-included option.

7.   Which is not a valid Data Guard configuration mode?

A.   Maximum Performance

B.   Maximum Security

C.   Maximum Protection

D.   Maximum Availability

8.   Which of the following statements are true?

A.   A two-node RAC database consists of two copies of the database files.

B.   A two-node RAC database consists of one shared copy of the database files.

C.   A two-node RAC Data Guarded database consists of two copies of the database files.

D.   A two-node RAC Data Guarded database consists of four copies of the database files.

9.   Which of the following statements is false?

A.   ATP and ADW are types of autonomous databases.

B.   ADBs use DB Resource Manager for defining differentiated services.

C.   TPURGENT and TP are services available only for ADW.

D.   TPURGENT and TP are services available only for ATP.

10.   Which benefits are associated with autonomous databases?

A.   Automatic Migration

B.   Automatic Indexing

C.   Automatic Tuning

D.   Automatic Backups

Answers

1.   C. CPU and storage may be scaled up on autonomous databases.

2.   A, C. Storage may be scaled up and down on autonomous databases but only scaled up on DBCS on VM.

3.   A, B, C, D. Transparent Data Encryption (TDE) is a feature of the Advanced Security option. However, on OCI, all database editions include TDE.

4.   B, D. While configuring SPARSE disk groups does reduce the storage available for database files, they are required for hosting Exadata snapshot databases.

5.   C. Usable storage for database files on Exadata is impacted by whether you choose to keep backups on the storage.

6.   B, D. The practice of using on-premises Oracle database licenses in the Cloud is known as “Bring Your Own License” or BYOL. License-included and BYOL are the two methods for licensing Oracle databases in the Cloud.

7.   B. Data Guard may be configured in Maximum Performance, Availability, and Protection modes.

8.   B, C. RAC databases share one copy of the database files. Data Guarded databases have two sets, one for the primary and another for the standby.

9.   C. TPURGENT and TP are services available only for ATP.

10.   B, C, D. Automatic Indexing, Tuning, and Backups are benefits associated with autonomous databases.