Abstract:
The present disclosure relates to adaptively overlapping redo writes. A log writer, while operating in a thin mode, may assign a first log writer group of a plurality of log writer groups to write one or more first redo log records to an online redo log in response to determining that a pipelining parameter is satisfied. The thin mode may be associated with one or more target sizes that are less than one or more target sizes associated with a thick mode. The log writer may determine to operate in the thick mode based at least in part on at least a portion of the plurality of log writer groups being unavailable to write one or more second redo log records to the online redo log. The log writer, while operating in the thick mode, may assign a second log writer group of the plurality of log writer groups to write the one or more second redo log records from a log buffer to the online redo log in response to determining that an amount of redo log records in the log buffer meets one of the one or more target sizes associated with the thick mode. The log writer, while operating in the thick mode, may assign a third log writer group of the plurality of log writer groups to write the one or more second redo log records from the log buffer to the online redo log in response to determining that a highest busy group number meets or exceeds a core threshold.
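A minimal sketch of the mode-switching and group-assignment logic this abstract describes, assuming illustrative names and thresholds (LogWriter, PIPELINE_DEPTH, CORE_THRESHOLD, the target sizes); it is a reading of the abstract, not the patent's implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    THIN = "thin"    # smaller write-size targets, more overlapped writes
    THICK = "thick"  # larger write-size targets, fewer but bigger writes

@dataclass
class LogWriterGroup:
    group_no: int
    busy: bool = False

    def write(self, records):
        self.busy = True
        # ... the group would issue the redo write to the online redo log here ...
        print(f"group {self.group_no} writing {len(records)} redo records")

class LogWriter:
    PIPELINE_DEPTH = 2                                   # assumed pipelining parameter
    CORE_THRESHOLD = 4                                   # assumed core threshold
    TARGET_SIZE = {Mode.THIN: 64 * 1024, Mode.THICK: 1024 * 1024}

    def __init__(self, groups):
        self.groups = groups
        self.mode = Mode.THIN
        self.log_buffer = []                             # pending redo records (bytes)

    def _idle_group(self):
        return next((g for g in self.groups if not g.busy), None)

    def maybe_issue_write(self):
        # Fall back to thick mode when groups are unavailable for new writes.
        if self._idle_group() is None:
            self.mode = Mode.THICK
            return
        in_flight = sum(g.busy for g in self.groups)
        buffered = sum(len(r) for r in self.log_buffer)
        highest_busy = max((g.group_no for g in self.groups if g.busy), default=0)
        if self.mode is Mode.THIN and in_flight < self.PIPELINE_DEPTH:
            self._issue()                                # pipelining parameter satisfied
        elif self.mode is Mode.THICK and buffered >= self.TARGET_SIZE[Mode.THICK]:
            self._issue()                                # thick-mode target size met
        elif self.mode is Mode.THICK and highest_busy >= self.CORE_THRESHOLD:
            self._issue()                                # highest busy group number at threshold

    def _issue(self):
        self._idle_group().write(self.log_buffer)
        self.log_buffer = []

writer = LogWriter([LogWriterGroup(i) for i in range(1, 9)])
writer.log_buffer = [b"x" * 512] * 10
writer.maybe_issue_write()   # thin mode: issues an overlapped small write
```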
Abstract:
A shared-nothing database system is provided in which parallelism and workload balancing are increased by assigning the rows of each table to “slices”, and storing multiple copies (“duplicas”) of each slice across the persistent storage of multiple nodes of the shared-nothing database system. When the data for a table is distributed among the nodes of a shared-nothing system in this manner, requests to read data from a particular row of the table may be handled by any node that stores a duplica of the slice to which the row is assigned. For each slice, a single duplica of the slice is designated as the “primary duplica”. All DML operations (e.g. inserts, deletes, updates, etc.) that target a particular row of the table are performed by the node that has the primary duplica of the slice to which the particular row is assigned. The changes made by the DML operations are then propagated from the primary duplica to the other duplicas (“secondary duplicas”) of the same slice.
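The routing described here can be illustrated with a small sketch; the node list, hash function, and replication factor are assumptions for the example, not the patented layout:

```python
import hashlib
import random

NODES = ["node0", "node1", "node2", "node3"]
NUM_SLICES = 8
REPLICATION = 3   # number of duplicas per slice

def slice_for_row(key: str) -> int:
    """Assign a row to a slice by hashing its key."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return h % NUM_SLICES

def duplicas_for_slice(slice_no: int) -> list[str]:
    """Nodes holding a duplica of the slice; the first is the primary duplica."""
    start = slice_no % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICATION)]

def node_for_read(key: str) -> str:
    # Any duplica can serve a read, so spread reads across all of them.
    return random.choice(duplicas_for_slice(slice_for_row(key)))

def node_for_dml(key: str) -> str:
    # All inserts/updates/deletes go to the primary duplica; changes are then
    # propagated to the secondary duplicas of the same slice.
    return duplicas_for_slice(slice_for_row(key))[0]

if __name__ == "__main__":
    for k in ("order:1001", "order:1002"):
        print(k, "read from", node_for_read(k), "| DML on", node_for_dml(k))
```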
Abstract:
A computer program product, system, and computer implemented method for automatic maintenance of standby databases for non-logged workloads, the process comprising: maintaining a redo stream of redo records sent from a primary database to a standby database, identifying a change made at the primary database for which a redo record was not created, inserting a placeholder redo record into the redo stream corresponding to the change identified at the primary database for which the redo record was not created, sending, to the standby database, a copy of one or more data blocks corresponding to the change that is associated with the placeholder redo record, receiving the placeholder redo record from the redo stream, identifying the copy of the one or more data blocks sent from the primary database corresponding to the placeholder redo record, and applying the copy of the one or more data blocks to update the standby database.
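A hedged sketch of the placeholder-redo flow, with assumed record layouts and in-memory structures standing in for the real redo shipping and block transport:

```python
from dataclasses import dataclass

@dataclass
class RedoRecord:
    scn: int
    payload: bytes | None = None       # an ordinary (logged) change
    placeholder_id: str | None = None  # set when the change was not logged

redo_stream: list[RedoRecord] = []
shipped_blocks: dict[str, list[bytes]] = {}   # placeholder_id -> shipped block copies
standby_blocks: dict[int, bytes] = {}         # standby side: block number -> contents

def primary_nonlogged_change(scn: int, block_no: int, new_contents: bytes) -> None:
    """Primary side: no redo was generated, so emit a placeholder and ship the blocks."""
    pid = f"nolog-{scn}-{block_no}"
    shipped_blocks[pid] = [block_no.to_bytes(4, "big") + new_contents]
    redo_stream.append(RedoRecord(scn=scn, placeholder_id=pid))

def standby_apply() -> None:
    """Standby side: replay the redo stream in order."""
    for rec in sorted(redo_stream, key=lambda r: r.scn):
        if rec.placeholder_id is not None:
            # Match the placeholder to the shipped block copies and apply them.
            for blk in shipped_blocks[rec.placeholder_id]:
                block_no = int.from_bytes(blk[:4], "big")
                standby_blocks[block_no] = blk[4:]
        else:
            pass  # an ordinary redo record would be applied here

primary_nonlogged_change(scn=101, block_no=7, new_contents=b"direct-load data")
standby_apply()
print(standby_blocks)
```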
Abstract:
Techniques related to instance recovery using Bloom filters are disclosed. A multi-node database management system (DBMS) includes a first database server instance and a second database server instance. A recovery set includes a set of data blocks that have been modified by the first database server instance and not persisted. A Bloom filter is generated to indicate whether data blocks are excluded from the recovery set. The Bloom filter is sent to the second database server instance, which determines whether the Bloom filter indicates that a particular data block is excluded from the recovery set. Based on determining that the Bloom filter indicates that the particular data block is excluded from the recovery set, access to the particular data block is granted.
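The property being exploited is that a Bloom filter has no false negatives: if the filter says a block is not in the recovery set, that answer is definitive and access can be granted without waiting for recovery. A small sketch, with an assumed hash scheme and block naming:

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits: int = 1024, num_hashes: int = 3):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = 0

    def _positions(self, item: str):
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item: str) -> bool:
        return all((self.bits >> p) & 1 for p in self._positions(item))

# First instance: build the filter over its recovery set (modified, unpersisted blocks).
recovery_set = {"file1.block10", "file1.block42", "file3.block7"}
bloom = BloomFilter()
for block in recovery_set:
    bloom.add(block)

# Second instance: a "no" from the filter is definitive, so access is granted
# immediately; a "yes" may be a false positive and needs an exact check.
def can_grant_access(block_id: str) -> bool:
    if not bloom.might_contain(block_id):
        return True                      # definitely excluded from the recovery set
    return block_id not in recovery_set  # possible member: fall back to exact check

print(can_grant_access("file9.block1"))   # True: not in the recovery set
print(can_grant_access("file1.block42"))  # False: block still needs recovery
```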
Abstract:
Techniques are provided for using a sparse file to create a hot archive of a pluggable database of a container database. In an embodiment and while a source pluggable database is in service, a source database server creates a clone of the source pluggable database. Also while the source pluggable database is in service, the source database server creates an archive of the source pluggable database that is based on the clone. Eventually, a need arises to consume the archive. A target database server (which may also be the source database server) creates a target pluggable database based on the archive.
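One way to picture the sparse-file approach is as a copy-on-write view over the source data files: only blocks changed after cloning are materialized in the (initially empty) sparse file, and the archive is produced by reading every block through that view while the source stays in service. The block size, file layout, and archive format below are assumptions for the sketch:

```python
import os

BLOCK = 4096

class SparseClone:
    def __init__(self, source_path: str, clone_path: str):
        self.source_path, self.clone_path = source_path, clone_path
        size = os.path.getsize(source_path)
        with open(clone_path, "wb") as f:
            f.truncate(size)             # sparse file: no data blocks allocated yet
        self.materialized: set[int] = set()

    def write_block(self, block_no: int, data: bytes) -> None:
        """Materialize a block in the sparse clone (divergence after cloning)."""
        with open(self.clone_path, "r+b") as f:
            f.seek(block_no * BLOCK)
            f.write(data.ljust(BLOCK, b"\0"))
        self.materialized.add(block_no)

    def read_block(self, block_no: int) -> bytes:
        """Read through the clone view: clone if materialized, else the source."""
        path = self.clone_path if block_no in self.materialized else self.source_path
        with open(path, "rb") as f:
            f.seek(block_no * BLOCK)
            return f.read(BLOCK)

    def archive(self, archive_path: str) -> None:
        """Produce a self-contained archive by copying every block via the clone view."""
        with open(archive_path, "wb") as out:
            nblocks = os.path.getsize(self.source_path) // BLOCK
            for b in range(nblocks):
                out.write(self.read_block(b))

if __name__ == "__main__":
    with open("source.dbf", "wb") as f:
        f.write(b"\x01" * BLOCK * 4)                 # toy 4-block source file
    clone = SparseClone("source.dbf", "clone.dbf")
    clone.write_block(2, b"changed after cloning")   # only this block is materialized
    clone.archive("pdb_archive.dbf")
```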
Abstract:
Embodiments create a clone of a pluggable database (PDB) while the PDB accepts write operations. While the PDB remains in read-write mode, the DBMS copies the data of the PDB and sends the data to a destination location. The DBMS performs data recovery on the PDB clone based on redo entries that record changes made to the source PDB while the DBMS copied the source PDB files. This data recovery applies, to the PDB clone, all changes that occurred to the source PDB during the copy operation. The redo information on which the data recovery is based is foreign to the PDB clone, since the redo entries were recorded for a different PDB. In order to apply foreign redo information to perform recovery on the PDB clone, a DBMS managing the PDB clone maintains mapping information that maps PDB source reference information to corresponding information for the PDB clone.
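A toy illustration of applying foreign redo through mapping information; the record fields and the (source, clone) identifier pairs are assumed for the example:

```python
from dataclasses import dataclass

@dataclass
class RedoEntry:
    pdb_id: int       # PDB the redo was generated for (the source PDB)
    file_no: int      # data file number within that PDB
    block_no: int
    change: bytes

# Mapping information maintained by the DBMS managing the clone:
# (source pdb_id, source file_no) -> (clone pdb_id, clone file_no)
pdb_mapping = {(17, 4): (23, 9), (17, 5): (23, 10)}

clone_blocks: dict[tuple[int, int, int], bytes] = {}

def apply_foreign_redo(entries: list[RedoEntry]) -> None:
    """Replay redo captured while the source PDB files were being copied."""
    for e in entries:
        # Remap source identifiers onto the clone's identifiers before applying.
        clone_pdb, clone_file = pdb_mapping[(e.pdb_id, e.file_no)]
        clone_blocks[(clone_pdb, clone_file, e.block_no)] = e.change

apply_foreign_redo([
    RedoEntry(pdb_id=17, file_no=4, block_no=128, change=b"row update"),
    RedoEntry(pdb_id=17, file_no=5, block_no=3,   change=b"index split"),
])
print(clone_blocks)  # changes land on the clone's (pdb, file) identifiers
```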
Abstract:
A method, apparatus, and system for multi-instance redo apply is provided for standby databases. A multi-instance primary database generates a plurality of redo records, which are received and applied by a physical standby running a multi-instance standby database. Each standby instance runs a set of processes that utilize non-blocking, single-task threads for high parallelism. At each standby instance participating in the multi-instance redo apply, the plurality of redo records are merged into a stream from one or more redo strands in logical time order, distributed to apply slave processes on the standby instances as determined by an intelligent workload distribution function, re-merged after receiving updates from remote instances, and applied in logical time order by the apply slave processes. Redo apply progress is tracked at each instance locally and also globally, allowing a consistent query logical time to be maintained and published to service database read query requests concurrently with the redo apply.
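A simplified sketch of the merge / distribute / track-progress flow, assuming SCNs as the logical time and a hash of the block address as the workload distribution function; the data shapes are illustrative, not the patented scheme:

```python
import heapq
from collections import defaultdict

# Each redo strand is already in SCN (logical time) order; merge them into one stream.
strand_a = [(100, "file1.block8"), (103, "file2.block1")]
strand_b = [(101, "file1.block8"), (102, "file3.block5")]
merged = heapq.merge(strand_a, strand_b)   # global logical-time order

NUM_SLAVES = 4

def slave_for(block_addr: str) -> int:
    # Workload distribution: keep all redo for a block on the same apply slave so it
    # can be applied in logical-time order without cross-slave coordination.
    return hash(block_addr) % NUM_SLAVES

applied_scn = defaultdict(int)   # per-slave apply progress

for scn, block_addr in merged:
    s = slave_for(block_addr)
    # ... apply slave s would apply the change to block_addr here ...
    applied_scn[s] = scn

# Consistent query SCN: every slave with work has applied at least up to this point,
# so queries as of this logical time can run concurrently with the redo apply.
query_scn = min(applied_scn.values()) if applied_scn else 0
print("consistent query SCN:", query_scn)
```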
Abstract:
A pluggable database is transported between a source DBMS and a destination DBMS in a way that minimizes downtime of the pluggable database. While a copy of the pluggable database is being made at the destination DBMS, transactions continue to execute against the pluggable database at the source DBMS and change the pluggable database. Eventually, the transactions terminate or cease executing. Redo records generated for the transactions are applied to the copy of the pluggable database at the destination DBMS. Undo records generated for at least some of the transactions may be stored in a separate undo log and transported to the destination DBMS. The transported pluggable database is synchronized at the destination DBMS in a “pluggable-ready state”, where it may be plugged into the destination container DBMS.
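The relocation sequence can be modeled with a toy in-memory example: blocks are copied while changes keep arriving, and the redo generated during the copy is then applied to the destination copy so it catches up. All structures here are assumptions for illustration, not a DBMS API:

```python
source_pdb = {1: b"v1", 2: b"v1", 3: b"v1"}   # block -> contents
redo_log: list[tuple[int, int, bytes]] = []   # (scn, block, new contents)
scn = 0

def dml(block: int, contents: bytes) -> None:
    """A transaction changes the source PDB and generates redo."""
    global scn
    scn += 1
    source_pdb[block] = contents
    redo_log.append((scn, block, contents))

# Phase 1: copy the files while the PDB stays open to transactions at the source.
copy_start_scn = scn
dest_pdb = {}
for block in list(source_pdb):
    dest_pdb[block] = source_pdb[block]
    dml(block, b"changed during copy")        # concurrent change mid-copy

# Phase 2: transactions have ceased; apply the redo generated during the copy to
# the destination copy so it catches up with the source.
for rec_scn, block, contents in redo_log:
    if rec_scn > copy_start_scn:
        dest_pdb[block] = contents

# Phase 3: the copy is now synchronized and ready to be plugged in at the destination.
assert dest_pdb == source_pdb
print("destination copy synchronized:", dest_pdb)
```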
Abstract:
Each of a plurality of Worker processes is allowed to perform any and all of the following tasks involving logged work items: (1) reading a subset of the work items from a log; (2) sequentially ordering work items for corresponding data objects; (3) applying a sequentially ordered set of work items to a corresponding data object; and (4) transmitting a subset of work items to a Worker process running on another database server in a cluster, if necessary. These tasks can be performed concurrently, at will, and as available, by the Worker processes. An improved checkpointing technique eliminates the need for the Worker processes to reach a synchronization point and stop. Instead, a Coordinator process examines the current state of progress of the Worker processes and computes a past point in the sequence of work items at which all work items before that point have been completely processed, and records this point as the checkpoint.
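A sketch of the checkpoint computation: the Coordinator takes each Worker's set of completed work items (which may have gaps, since Workers proceed independently) and finds the highest position such that everything before it is done. The data shapes are illustrative assumptions:

```python
def compute_checkpoint(total_items: int, completed_by_worker: list[set[int]]) -> int:
    """Return the index N such that all work items < N have been completely processed."""
    done = set().union(*completed_by_worker) if completed_by_worker else set()
    n = 0
    while n < total_items and n in done:
        n += 1
    return n   # items 0 .. n-1 are fully processed; this is the checkpoint

# Workers process items concurrently, at will, and as available, so each one's
# completed set can have gaps relative to the others; no Worker has to stop.
worker_progress = [
    {0, 1, 2, 5, 6},    # worker A
    {3, 8},             # worker B
    {4, 7, 10},         # worker C
]
print(compute_checkpoint(total_items=12, completed_by_worker=worker_progress))
# -> 9: items 0..8 are all done somewhere; item 9 is still outstanding.
```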
Abstract:
In an embodiment, before modifying a persistent online redo log (ORL), a database management system (DBMS) persists redo for a transaction and acknowledges that the transaction is committed. Later, the redo is appended onto the ORL. The DBMS stores first redo for a first transaction into a first persistent redo buffer (PRB) and second redo for a second transaction into a second PRB. Later, both sets of redo are appended onto an ORL. The DBMS stores redo of first transactions in volatile session log buffers (SLBs) of the respective database sessions. That redo is also stored in a volatile shared buffer that is shared by the database sessions. Redo of second transactions is stored in the volatile shared buffer, but not in the SLBs. During re-silvering and recovery, the DBMS retrieves redo from fast persistent storage and then appends the redo onto an ORL in slow persistent storage. After re-silvering, during recovery, the redo from the ORL is applied to a persistent database block.
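A toy model of the commit path and re-silvering described above, assuming simple in-memory stand-ins for the fast persistent buffers, the ORL, and the data blocks; all names and structures are illustrative assumptions:

```python
fast_persistent_buffers = {   # e.g. per-transaction buffers on fast persistent storage
    "txn1": [],
    "txn2": [],
}
online_redo_log: list[tuple[str, bytes]] = []   # ORL on slow persistent storage
data_blocks: dict[int, bytes] = {}              # persistent database blocks

def commit(txn: str, block_no: int, change: bytes) -> None:
    # Persist the redo to fast storage, then acknowledge the commit immediately;
    # the append onto the ORL happens later.
    fast_persistent_buffers[txn].append((block_no, change))
    print(f"{txn} committed (redo persisted, ORL append deferred)")

def resilver() -> None:
    """After a crash: drain surviving redo from the fast buffers onto the ORL."""
    for txn, records in fast_persistent_buffers.items():
        for block_no, change in records:
            online_redo_log.append((txn, block_no.to_bytes(4, "big") + change))
        records.clear()

def recover() -> None:
    """After re-silvering: apply redo from the ORL to the persistent data blocks."""
    for _txn, rec in online_redo_log:
        data_blocks[int.from_bytes(rec[:4], "big")] = rec[4:]

commit("txn1", 10, b"insert row")
commit("txn2", 11, b"update row")
resilver()
recover()
print(data_blocks)
```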