INCREASING OLTP THROUGHPUT BY IMPROVING THE PERFORMANCE OF LOGGING USING PERSISTENT MEMORY STORAGE

    Publication Number: US20240045591A1

    Publication Date: 2024-02-08

    Application Number: US17880446

    Filing Date: 2022-08-03

    CPC classification number: G06F3/061 G06F3/0646 G06F3/0683

    Abstract: In an embodiment, before modifying a persistent online redo log (ORL), a database management system (DBMS) persists redo for a transaction and acknowledges that the transaction is committed. Later, the redo is appended onto the ORL. The DBMS stores first redo for a first transaction into a first PRB and second redo for a second transaction into a second PRB. Later, both sets of redo are appended onto an ORL. The DBMS stores redo of first transactions in volatile SLBs, one per database session. That redo is also stored in a volatile shared buffer that is shared by the database sessions. Redo of second transactions is stored in the volatile shared buffer, but not in the SLBs. During re-silvering and recovery, the DBMS retrieves redo from fast persistent storage and then appends the redo onto an ORL in slow persistent storage. After re-silvering, during recovery, the redo from the ORL is applied to a persistent database block.
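
    As a short illustration, the following Python sketch (hypothetical names such as PersistentRedoBuffer, OnlineRedoLog, commit_transaction, and drain_to_orl; not the patent's API) shows the commit path described above: a transaction is acknowledged as committed once its redo is durable in a fast persistent buffer standing in for a PRB, and the redo is only later appended onto the ORL on slower storage.

        # Sketch only: PRB durability is assumed on append; real persistent-memory
        # writes, concurrency, and failure handling are omitted.
        class PersistentRedoBuffer:
            """Stands in for a PRB on fast persistent storage."""
            def __init__(self):
                self.entries = []
            def persist(self, redo):
                self.entries.append(redo)      # assume the entry is durable here

        class OnlineRedoLog:
            """Stands in for the ORL on slower persistent storage."""
            def __init__(self):
                self.records = []
            def append(self, redo):
                self.records.append(redo)

        def commit_transaction(txn_id, redo, prb):
            prb.persist((txn_id, redo))        # redo is durable in the PRB ...
            return "committed"                 # ... so the commit is acknowledged now

        def drain_to_orl(prb, orl):
            # Later (or during re-silvering): append the buffered redo onto the ORL.
            while prb.entries:
                orl.append(prb.entries.pop(0))

        prb, orl = PersistentRedoBuffer(), OnlineRedoLog()
        print(commit_transaction(1, "update t set x = 1", prb))   # committed
        drain_to_orl(prb, orl)
        print(orl.records)                     # [(1, 'update t set x = 1')]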

    Elimination of log file synchronization delay at transaction commit time

    Publication Number: US11768820B2

    Publication Date: 2023-09-26

    Application Number: US16790961

    Filing Date: 2020-02-14

    Inventor: Yunrui Li

    CPC classification number: G06F16/2358 G06F16/2365 G06F16/2379

    Abstract: A method and apparatus for elimination of log file synchronization delay at transaction commit time is provided. One or more change records corresponding to a database transaction are generated. One or more buffer entries comprising the one or more change records are entered into a persistent change log buffer. A commit operation is performed by generating a commit change record corresponding to the database transaction and entering a commit buffer entry comprising the commit change record into the persistent change log buffer. The commit operation returns without waiting for the commit change record to be recorded in a change record log file.
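
    A rough Python sketch of this idea follows (hypothetical names; the deque simply stands in for a change log buffer that is assumed durable on append): the commit returns as soon as its commit record is in the persistent change log buffer, and a separate writer later drains the buffer into the change record log file.

        from collections import deque

        persistent_change_log_buffer = deque()   # assumed durable once an entry is appended
        change_record_log_file = []              # stands in for the on-disk log file

        def commit(txn_id, changes):
            for change in changes:
                persistent_change_log_buffer.append({"txn": txn_id, "change": change})
            persistent_change_log_buffer.append({"txn": txn_id, "change": "COMMIT"})
            return "committed"    # returns without waiting for the log file write

        def log_writer():
            # Runs asynchronously in practice: drain buffer entries into the log file.
            while persistent_change_log_buffer:
                change_record_log_file.append(persistent_change_log_buffer.popleft())

        print(commit(42, ["insert row A", "update row B"]))   # committed
        log_writer()
        print(len(change_record_log_file))                    # 3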

    Time-based checkpoint target for database media recovery

    Publication Number: US11048599B2

    Publication Date: 2021-06-29

    Application Number: US16215046

    Filing Date: 2018-12-10

    Abstract: A method, apparatus, and system for a time-based checkpoint target is provided for standby databases. Change records received from a primary database are applied for a standby database, creating dirty buffer queues. As the change records are applied, a mapping is maintained, which maps timestamps to logical times of change records that were most recently applied at the timestamp for the standby database. On a periodic dirty buffer queue processing interval, the mapping is used to determine a target logical time that is mapped to a target timestamp that is prior to a present timestamp by at least a checkpoint delay. The dirty buffer queues are then processed up to the target logical time, creating an incremental checkpoint. On a periodic header update interval, file headers reflecting a consistent logical time for the checkpoint are also updated. The intervals and the checkpoint delay are adjustable by user or application.
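
    The timestamp-to-logical-time mapping can be sketched in Python as follows (hypothetical names; the mapping is kept here as two parallel lists sorted by timestamp): the checkpoint target is the logical time most recently applied as of (now - checkpoint_delay), and dirty buffers up to that target form the incremental checkpoint.

        import bisect, time

        timestamps, logical_times = [], []     # parallel lists, sorted by timestamp

        def record_apply_progress(logical_time):
            # Called as change records from the primary are applied on the standby.
            timestamps.append(time.time())
            logical_times.append(logical_time)

        def target_logical_time(checkpoint_delay_secs):
            target_ts = time.time() - checkpoint_delay_secs
            i = bisect.bisect_right(timestamps, target_ts) - 1
            return logical_times[i] if i >= 0 else None

        def process_dirty_queues(dirty_queue, checkpoint_delay_secs):
            # Flush dirty buffers up to the target logical time: an incremental checkpoint.
            target = target_logical_time(checkpoint_delay_secs)
            if target is None:
                return []
            flushed = [b for b in dirty_queue if b["logical_time"] <= target]
            dirty_queue[:] = [b for b in dirty_queue if b["logical_time"] > target]
            return flushed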

    Method and system for automatic maintenance of standby databases for non-logged workloads

    Publication Number: US11010267B2

    Publication Date: 2021-05-18

    Application Number: US15828146

    Filing Date: 2017-11-30

    Abstract: A computer program product, system, and computer implemented method for automatic maintenance of standby databases for non-logged workloads, the process comprising: maintaining a redo stream of redo records sent from a primary database to a standby database, identifying a change made at the primary database for which a redo record was not created, inserting a placeholder redo record into the redo stream corresponding to the change identified at the primary database for which the redo record was not created, sending, to the standby database, a copy of one or more data blocks corresponding to the change that is associated with the placeholder redo record, receiving the placeholder redo record from the redo stream, identifying the copy of the one or more data blocks sent from the primary database corresponding to the placeholder redo record, and applying the copy of one or more data blocks to update the standby database.
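
    The flow can be pictured with a compact Python sketch (hypothetical names; block shipping, ordering, and durability are omitted): the primary inserts a placeholder redo record for an unlogged change and ships copies of the affected data blocks, and the standby applies those block copies when it reaches the placeholder in the redo stream.

        redo_stream = []
        shipped_blocks = {}     # placeholder id -> copies of the changed data blocks
        standby_blocks = {}     # the standby's data blocks, keyed by block id

        def primary_unlogged_change(change_id, block_copies):
            # No ordinary redo exists for this change, so mark its place in the
            # stream and send the affected blocks themselves.
            redo_stream.append({"type": "PLACEHOLDER", "id": change_id})
            shipped_blocks[change_id] = block_copies

        def standby_apply(record):
            if record["type"] == "PLACEHOLDER":
                for block_id, contents in shipped_blocks[record["id"]].items():
                    standby_blocks[block_id] = contents   # apply the shipped block copy

        primary_unlogged_change("c1", {("file1", 7): b"bulk-loaded rows"})
        for record in redo_stream:
            standby_apply(record)
        print(standby_blocks)   # {('file1', 7): b'bulk-loaded rows'}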

    Cloning a pluggable database in read-write mode

    Publication Number: US10922331B2

    Publication Date: 2021-02-16

    Application Number: US15215435

    Filing Date: 2016-07-20

    Abstract: Embodiments create a clone of a pluggable database (PDB) while the PDB accepts write operations. While the PDB remains in read-write mode, the DBMS copies the data of the PDB and sends the data to a destination location. The DBMS performs data recovery on the PDB clone based on redo entries that record changes made to the source PDB while the DBMS copied the source PDB files. This data recovery applies, to the PDB clone, all changes that occurred to the source PDB during the copy operation. The redo information on which the data recovery is based is foreign to the PDB clone, since the redo entries were recorded for a different PDB. In order to apply foreign redo information to perform recovery on the PDB clone, a DBMS managing the PDB clone maintains mapping information that maps PDB source reference information to corresponding information for the PDB clone.
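
    The foreign-redo mapping can be sketched in Python as follows (hypothetical names; source_to_clone_file and apply_foreign_redo are illustrative, and a real redo entry carries far more detail): source file references in each redo entry are translated into the clone's references before the change is applied to the clone.

        source_to_clone_file = {1: 11, 2: 12}   # mapping kept by the DBMS managing the clone

        clone_blocks = {}                       # (clone file number, block number) -> contents

        def apply_foreign_redo(redo_entry):
            # The redo was recorded against the source PDB; remap it to the clone.
            clone_file = source_to_clone_file[redo_entry["file"]]
            clone_blocks[(clone_file, redo_entry["block"])] = redo_entry["new_contents"]

        # Changes captured while the source PDB files were being copied:
        for entry in [{"file": 1, "block": 7, "new_contents": b"row version 2"}]:
            apply_foreign_redo(entry)
        print(clone_blocks)                     # {(11, 7): b'row version 2'}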

    ELIMINATION OF LOG FILE SYNCHRONIZATION DELAY AT TRANSACTION COMMIT TIME

    Publication Number: US20200183910A1

    Publication Date: 2020-06-11

    Application Number: US16790961

    Filing Date: 2020-02-14

    Inventor: Yunrui Li

    Abstract: A method and apparatus for elimination of log file synchronization delay at transaction commit time is provided. One or more change records corresponding to a database transaction are generated. One or more buffer entries comprising the one or more change records are entered into a persistent change log buffer. A commit operation is performed by generating a commit change record corresponding to the database transaction and entering a commit buffer entry comprising the commit change record into the persistent change log buffer. The commit operation returns without waiting for the commit change record to be recorded in a change record log file.

    TIME-BASED CHECKPOINT TARGET FOR DATABASE MEDIA RECOVERY

    Publication Number: US20190108107A1

    Publication Date: 2019-04-11

    Application Number: US16215046

    Filing Date: 2018-12-10

    Abstract: A method, apparatus, and system for a time-based checkpoint target is provided for standby databases. Change records received from a primary database are applied for a standby database, creating dirty buffer queues. As the change records are applied, a mapping is maintained, which maps timestamps to logical times of change records that were most recently applied at the timestamp for the standby database. On a periodic dirty buffer queue processing interval, the mapping is used to determine a target logical time that is mapped to a target timestamp that is prior to a present timestamp by at least a checkpoint delay. The dirty buffer queues are then processed up to the target logical time, creating an incremental checkpoint. On a periodic header update interval, file headers reflecting a consistent logical time for the checkpoint are also updated. The intervals and the checkpoint delay are adjustable by user or application.

    Detecting lost writes
    Granted Patent

    Publication Number: US09892153B2

    Publication Date: 2018-02-13

    Application Number: US14578093

    Filing Date: 2014-12-19

    Abstract: Techniques are described that determine occurrences of lost writes by comparing version identifiers of corresponding replica data blocks and checkpoints of the data files that include the data blocks. A method determines lost writes that may have occurred among a first set of data blocks and a second set of data blocks. Each data block in the first set corresponds to a respective data block in the second set that is a version of the data block in the first set. The data blocks in the first set and the second set are associated with version identifiers. The second set of data blocks is associated with a second checkpoint, for which any version of a data block in the second set associated with a version identifier below the second checkpoint has been acknowledged to a database server as having been written to persistent storage. The method proceeds to determine the lost writes by determining that a data block in the first set and a data block in the second set satisfy criteria, such as the version identifier of the first data block being between the version identifier of the second data block and the second checkpoint.
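
    Under one reading of the comparison criterion above, the check can be sketched in Python (hypothetical name lost_write_suspected; version identifiers and the checkpoint are plain integers here): a write to the second copy looks lost when the first copy's version lies strictly between the second copy's version and the second checkpoint, because every version below that checkpoint should already be on persistent storage.

        def lost_write_suspected(first_version, second_version, second_checkpoint):
            # The first copy is newer than the second copy, yet old enough that the
            # second set's checkpoint says it should have been written already.
            return second_version < first_version < second_checkpoint

        # Example: the second copy acknowledged a checkpoint at version 100, but its
        # block is still at version 40 while the first copy of the block is at 75.
        print(lost_write_suspected(75, 40, 100))    # True -> a write appears lost
        print(lost_write_suspected(110, 40, 100))   # False -> 110 is beyond the checkpoint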
