Abstract:
Techniques are provided for using a sparse file to create a hot archive of a pluggable database of a container database. In an embodiment and while a source pluggable database is in service, a source database server creates a clone of the source pluggable database. Also while the source pluggable database is in service, the source database server creates an archive of the source pluggable database that is based on the clone. Eventually, a need arises to consume the archive. A target database server (which may also be the source database server) creates a target pluggable database based on the archive.
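Below is a minimal Python sketch of that workflow, included only to make the sequence concrete: a plain directory copy stands in for the sparse (copy-on-write) clone, and every name (clone_pluggable_db, archive_from_clone, create_pdb_from_archive) is a hypothetical illustration rather than a real database API.

    import os
    import shutil
    import tempfile

    def clone_pluggable_db(source_pdb_dir: str) -> str:
        """Take a point-in-time clone of a live pluggable database.

        A real system would back the clone with a sparse file so unchanged
        blocks are shared with the source; a directory copy keeps this
        sketch self-contained.
        """
        clone_dir = tempfile.mkdtemp(prefix="pdb_clone_")
        shutil.copytree(source_pdb_dir, clone_dir, dirs_exist_ok=True)
        return clone_dir

    def archive_from_clone(clone_dir: str, archive_base: str) -> str:
        """Build a self-contained archive from the clone while the source
        pluggable database stays in service."""
        return shutil.make_archive(archive_base, "gztar", clone_dir)

    def create_pdb_from_archive(archive_file: str, target_pdb_dir: str) -> None:
        """A target database server (possibly the source server itself)
        materializes a new pluggable database from the archive."""
        os.makedirs(target_pdb_dir, exist_ok=True)
        shutil.unpack_archive(archive_file, target_pdb_dir)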
Abstract:
Embodiments provide a migration instruction that effectuates the migration of a pluggable database from a source database server instance to a destination database server instance. Upon receiving the migration instruction, the migrating pluggable database is opened at the destination instance. Connections are terminated at the source instance at a rate that is determined based on statistics maintained for one or more of: the migrating pluggable database, the source instance, the destination instance, a container database, etc. Furthermore, once the migration instruction is received, a certain amount of time is allowed to pass before the source instance flushes the dirty buffers for the migrating pluggable database from the buffer cache of the source instance. This delay in flushing dirty buffers from the buffer cache allows the source instance to provide data blocks of the migrating pluggable database directly from the cache to the destination database server instance.
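The following Python sketch only illustrates the order of operations described above; the drain rate, flush delay, and every method on the source and destination objects (open_pdb, connection_batches, terminate, serve_cached_blocks, flush_dirty_buffers) are hypothetical stand-ins rather than a real DBMS interface.

    import time

    def migrate_pdb(pdb, source, destination, stats):
        # Open the migrating pluggable database at the destination first,
        # so new connections can be served there while the source drains.
        destination.open_pdb(pdb)

        # Terminate connections at the source at a rate derived from the
        # statistics kept for the PDB, the instances, and the container DB.
        drain_rate = min(stats["pdb_connection_rate"],
                         stats["destination_capacity"])
        for batch in source.connection_batches(pdb, per_second=drain_rate):
            source.terminate(batch)
            time.sleep(1.0)

        # Hold off flushing dirty buffers for a while so the source can
        # ship hot data blocks to the destination directly from its cache.
        deadline = time.time() + stats["flush_delay_seconds"]
        while time.time() < deadline:
            source.serve_cached_blocks(pdb, to=destination)
            time.sleep(0.1)
        source.flush_dirty_buffers(pdb)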
Abstract:
Techniques for common users and roles, and commonly-granted privileges and roles are described. In one approach, the DBMS of a container database allows for the creation of common roles and common users that are shared across the container database. Thus, when a common role or a common user is established, the common role or common user is propagated to each database of the container database. In another approach, the DBMS of a container database allows privileges and roles to be granted commonly or locally. When a privilege or role is granted commonly, the privilege applies in each of the databases of a container database. When a privilege or role is granted locally, the privilege applies only in the database to which the grantor of the privilege or role established a connection.
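A small, runnable Python sketch of that propagation rule follows; the ContainerDatabase class and its dictionary bookkeeping are invented for illustration and do not correspond to any real DBMS interface.

    class ContainerDatabase:
        def __init__(self, pdb_names):
            # Each member database keeps its own users and grants.
            self.databases = {name: {"users": set(), "grants": []}
                              for name in pdb_names}

        def create_common_user(self, user):
            # A common user is propagated to every database in the container.
            for db in self.databases.values():
                db["users"].add(user)

        def grant(self, privilege, grantee, commonly=False, local_db=None):
            if commonly:
                # A common grant applies in each database of the container.
                targets = self.databases.values()
            else:
                # A local grant applies only in the database to which the
                # grantor established a connection.
                targets = [self.databases[local_db]]
            for db in targets:
                db["grants"].append((privilege, grantee))

    cdb = ContainerDatabase(["sales", "hr"])
    cdb.create_common_user("c##admin")
    cdb.grant("SELECT ANY TABLE", "c##admin", commonly=True)
    cdb.grant("CREATE TABLE", "local_dev", local_db="hr")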
Abstract:
Provided herein are data cloud administration techniques that achieve autonomy by using a rules engine that reacts to a database system event by autonomously submitting an asynchronous job to reconfigure a database. In an embodiment, a rules engine receives an event from a DBMS. Based on the event, the rules engine executes a rule to generate a request that indicates configuration details for a database. The rules engine sends the request to a request broker. The request broker dispatches an asynchronous job based on the request. The asynchronous job configures the database based on the configuration details. Thus, databases in a cloud, data grid, or data center may be administered autonomously (without human intervention) based on dynamic conditions, both foreseen and unforeseen.
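The Python sketch below traces that event-to-job control flow; the RulesEngine and RequestBroker classes, the rule table, and configure_database are all hypothetical names chosen for illustration.

    import queue
    import threading

    def configure_database(database, config):
        # Stand-in for the asynchronous job that reconfigures the database.
        print(f"reconfiguring {database} with {config}")

    class RequestBroker:
        def __init__(self):
            self.jobs = queue.Queue()
            threading.Thread(target=self._worker, daemon=True).start()

        def dispatch(self, request):
            # Dispatch an asynchronous job based on the request.
            self.jobs.put(request)

        def _worker(self):
            while True:
                request = self.jobs.get()
                configure_database(request["database"], request["config"])
                self.jobs.task_done()

    class RulesEngine:
        def __init__(self, broker):
            self.broker = broker
            # Each rule turns a database system event into configuration details.
            self.rules = {"high_load": lambda event: {
                "database": event["database"], "config": {"cpu_count": 8}}}

        def on_event(self, event):
            rule = self.rules.get(event["kind"])
            if rule is not None:
                self.broker.dispatch(rule(event))

    broker = RequestBroker()
    RulesEngine(broker).on_event({"kind": "high_load", "database": "sales_pdb"})
    broker.jobs.join()  # wait for the asynchronous job in this toy example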
Abstract:
Provided herein are workload management techniques that asynchronously configure pluggable databases within a compute cloud. In an embodiment, the compute cloud receives an administrative request that indicates configuration details for a pluggable database. The compute cloud generates a configuration descriptor that specifies an asynchronous job based on the configuration details of the request. The compute cloud accesses hosting metadata to detect at least one of: a) a current container database that already hosts the pluggable database, b) a target container database that will host the pluggable database, or c) a particular computer that hosts at least one of: the current container database, or the target container database. The compute cloud executes the asynchronous job to configure the pluggable database based on at least one of: the hosting metadata, or the configuration descriptor. Thread pools, lanes, and queues may facilitate load balancing to avoid priority inversion, starvation, and denial of service.
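As a rough illustration of that path, the Python sketch below turns an administrative request into a job descriptor, consults a toy hosting-metadata table, and runs the job on a thread pool; the descriptor layout, the HOSTING_METADATA structure, and the use of a single pool in place of lanes and queues are all assumptions made for this example.

    from concurrent.futures import ThreadPoolExecutor

    HOSTING_METADATA = {
        "sales_pdb": {"current_cdb": "cdb1", "host": "node-3"},
    }

    def build_descriptor(request):
        # Turn an administrative request into an asynchronous job descriptor.
        return {"pdb": request["pdb"], "action": request["action"],
                "details": request.get("details", {})}

    def run_job(descriptor):
        hosting = HOSTING_METADATA.get(descriptor["pdb"], {})
        # Configure the PDB based on the hosting metadata and the descriptor.
        print(f"{descriptor['action']} {descriptor['pdb']} on "
              f"{hosting.get('host')} in {hosting.get('current_cdb')}")

    # A thread pool stands in for the lanes and queues that balance load.
    pool = ThreadPoolExecutor(max_workers=4)
    pool.submit(run_job, build_descriptor(
        {"pdb": "sales_pdb", "action": "resize", "details": {"storage_gb": 50}}))
    pool.shutdown(wait=True)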
Abstract:
Embodiments create a clone of a PDB while the PDB accepts write operations. While the PDB remains in read-write mode, the DBMS copies the data of the PDB and sends the data to a destination location. The DBMS performs data recovery on the PDB clone based on redo entries that record changes made to the source PDB while the DBMS copied the source PDB files. This data recovery applies to the PDB clone all changes that occurred to the source PDB during the copy operation. The redo information, on which the data recovery is based, is foreign to the PDB clone since the redo entries were recorded for a different PDB. In order to apply foreign redo information to perform recovery on the PDB clone, a DBMS managing the PDB clone maintains mapping information that maps PDB source reference information to corresponding information for the PDB clone.
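The Python sketch below shows only the translation step at the heart of applying foreign redo; the redo entries, the block store, and the source-to-clone file mapping are toy structures invented for this illustration.

    def apply_foreign_redo(clone_blocks, redo_entries, mapping):
        """Replay redo recorded for the source PDB against the clone.

        Each redo entry names a source file and block; the mapping translates
        those source identifiers into the corresponding clone identifiers so
        the change lands in the right place in the clone.
        """
        for entry in redo_entries:
            clone_file = mapping["files"][entry["file_id"]]
            clone_blocks[(clone_file, entry["block"])] = entry["new_value"]
        return clone_blocks

    clone_blocks = {("clone_01.dbf", 7): "old"}
    redo = [{"file_id": "src_01.dbf", "block": 7, "new_value": "new"}]
    mapping = {"files": {"src_01.dbf": "clone_01.dbf"}}
    apply_foreign_redo(clone_blocks, redo, mapping)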
Abstract:
A container database stores redo records and logical timestamps for multiple pluggable databases. When it is detected that a first read-write instance of the pluggable database is opened and no other read-write instances of the pluggable database are open, offline range data associated with the pluggable database is updated. When it is detected that a second read-write instance of the pluggable database is closed, and the second read-write instance is the last open read-write instance, the offline range data associated with the pluggable database is updated. The pluggable database is restored to a logical timestamp associated with a restore request based on the offline range data.
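A minimal Python sketch of that bookkeeping follows; the integer timestamps, the tracker class, and the can_restore_to check are illustrative assumptions rather than an actual recovery implementation.

    class OfflineRangeTracker:
        def __init__(self):
            self.open_rw_instances = 0
            self.offline_ranges = []   # closed (start, end) timestamp pairs
            self.offline_since = None

        def on_open_read_write(self, timestamp):
            # The first read-write instance opening ends any offline range.
            if self.open_rw_instances == 0 and self.offline_since is not None:
                self.offline_ranges.append((self.offline_since, timestamp))
                self.offline_since = None
            self.open_rw_instances += 1

        def on_close_read_write(self, timestamp):
            # The last read-write instance closing starts a new offline range.
            self.open_rw_instances -= 1
            if self.open_rw_instances == 0:
                self.offline_since = timestamp

        def can_restore_to(self, timestamp):
            # A restore target inside an offline range needs no redo replay
            # for that interval, since no read-write changes occurred there.
            return any(start <= timestamp <= end
                       for start, end in self.offline_ranges)

    tracker = OfflineRangeTracker()
    tracker.on_open_read_write(100)
    tracker.on_close_read_write(250)
    tracker.on_open_read_write(400)   # offline range (250, 400) is recorded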
Abstract:
Embodiments incrementally refresh a clone of a source PDB while the source PDB accepts write operations. Specifically, refreshing the PDB clone incorporates changes made to the source PDB since a refresh reference timestamp, which marks the time at which the PDB clone was created or, if the PDB clone has been previously refreshed, the time at which the PDB clone was last refreshed. A PDB clone is incrementally refreshed by incorporating, into the PDB clone data, those source data blocks that have changed since the refresh reference timestamp. Recovery is performed on the PDB clone once the blocks are copied, in order to apply any changes made to the source PDB while the blocks were being copied; this recovery makes the PDB clone files consistent. The recovery is based on redo entries recorded for the source PDB during the time it took to copy the blocks to the PDB clone.
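The Python sketch below shows the two-phase structure just described (copy changed blocks, then recover with redo recorded during the copy); the block-change map, the SCN-style timestamps, and every method on the source and clone objects are hypothetical helpers used only for illustration.

    def incremental_refresh(source, clone, refresh_reference_scn):
        copy_start_scn = source.current_scn()

        # Phase 1: copy only the source blocks that changed since the
        # reference timestamp of the previous refresh (or clone creation).
        for block_id, block_scn in source.block_change_map().items():
            if block_scn > refresh_reference_scn:
                clone.write_block(block_id, source.read_block(block_id))

        # Phase 2: recover the clone with redo recorded for the source
        # while the blocks were being copied, making the clone files
        # consistent.
        copy_end_scn = source.current_scn()
        clone.apply_redo(source.redo_between(copy_start_scn, copy_end_scn))

        # The new reference timestamp for the next refresh.
        return copy_end_scn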