Abstract:
Techniques for processing changes in a cluster database system are provided. A first instance in the cluster transfers a data block to a second instance in the cluster before the redo record that stores one or more changes that the first instance made to the data block is durably stored. The first instance also transfers, to the second instance, a block change timestamp that indicates when the first instance generated the redo record for the one or more changes. Separately, the first instance sends, to the second instance, a last store timestamp that indicates when the first instance generated the most recent redo record that has been durably stored. The second instance uses the block change timestamp and the last store timestamp when creating redo records for changes (made by the second instance) that depend on the redo record generated by the first instance.
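The abstract does not specify data structures, but a minimal C sketch of the timestamps exchanged between instances might look as follows; all type, field, and function names here are illustrative assumptions, not the patented format.

/* Hypothetical sketch of the messages and the dependent redo record;
 * names are illustrative, not the patent's actual layout. */
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t scn_t;            /* logical timestamp, e.g. a system change number */

typedef struct {
    uint8_t  payload[8192];        /* the dirty data block being shipped */
    scn_t    block_change_ts;      /* when the sender generated the redo for its change */
} block_transfer_t;

typedef struct {
    scn_t    last_store_ts;        /* newest sender redo known to be durably stored */
} store_progress_msg_t;            /* sent separately, not with every block */

typedef struct {
    scn_t    change_ts;            /* timestamp of this (local) change */
    scn_t    depends_on_ts;        /* sender redo this change depends on */
    bool     dependency_durable;   /* true if the dependency is already on disk */
} redo_record_t;

/* Second instance builds a dependent redo record: if the sender's redo for
 * block_change_ts is not yet known to be durable, the dependency must be
 * tracked so this change is not applied or acknowledged out of order. */
static redo_record_t make_dependent_redo(scn_t local_ts,
                                         const block_transfer_t *blk,
                                         scn_t sender_last_store_ts)
{
    redo_record_t r;
    r.change_ts          = local_ts;
    r.depends_on_ts      = blk->block_change_ts;
    r.dependency_durable = blk->block_change_ts <= sender_last_store_ts;
    return r;
}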
Abstract:
A hashing scheme includes a cache-friendly, latchless, non-blocking, dynamically resizable hash index with constant-time lookup operations that is also amenable to fast lookups via remote memory access. Specifically, the hashing scheme provides each of the following features: latchless reads, fine-grained lightweight locks for writers, non-blocking dynamic resizability, cache-friendly access, constant-time lookup operations, and amenability to remote memory access via the RDMA protocol through one-sided read operations, as well as non-RDMA access.
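One common way to combine latchless reads with fine-grained writer locks is a per-bucket version counter (a seqlock-style scheme). The C sketch below illustrates that general pattern under the assumption of a fixed-size bucket that a remote peer could fetch with a single one-sided RDMA read; it is not the patented layout, and all names are invented.

/* Minimal sketch of one seqlock-protected bucket. */
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define SLOTS_PER_BUCKET 4

typedef struct {
    _Atomic uint64_t version;                  /* odd while a writer is active */
    uint64_t         keys[SLOTS_PER_BUCKET];
    uint64_t         vals[SLOTS_PER_BUCKET];
} bucket_t;

/* Latchless read: retry if the bucket changed while we were copying it.
 * NOTE: a production version would read the slots with atomic or volatile
 * accesses; plain memcpy keeps the sketch short. */
static int bucket_lookup(bucket_t *b, uint64_t key, uint64_t *val_out)
{
    for (;;) {
        uint64_t v1 = atomic_load_explicit(&b->version, memory_order_acquire);
        if (v1 & 1) continue;                  /* writer in progress, retry */
        uint64_t keys[SLOTS_PER_BUCKET], vals[SLOTS_PER_BUCKET];
        memcpy(keys, b->keys, sizeof keys);
        memcpy(vals, b->vals, sizeof vals);
        atomic_thread_fence(memory_order_acquire);
        uint64_t v2 = atomic_load_explicit(&b->version, memory_order_acquire);
        if (v1 != v2) continue;                /* torn read, retry */
        for (int i = 0; i < SLOTS_PER_BUCKET; i++)
            if (keys[i] == key) { *val_out = vals[i]; return 1; }
        return 0;
    }
}

/* Writer takes a lightweight per-bucket lock by making the version odd;
 * the caller is assumed to have chosen a free slot. */
static void bucket_insert(bucket_t *b, int slot, uint64_t key, uint64_t val)
{
    atomic_fetch_add_explicit(&b->version, 1, memory_order_acq_rel); /* now odd */
    b->keys[slot] = key;
    b->vals[slot] = val;
    atomic_fetch_add_explicit(&b->version, 1, memory_order_release); /* even again */
}

Because a bucket is a small, self-describing unit, a remote client that fetches it in one read can apply the same version check locally and simply retry on a mismatch.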
Abstract:
A method and apparatus for implementing a buffer cache for a persistent file system in non-volatile memory is provided. A set of data is maintained in one or more extents in non-volatile random-access memory (NVRAM) of a computing device. At least one buffer header is allocated in dynamic random-access memory (DRAM) of the computing device. In response to a read request by a first process executing on the computing device to access one or more first data blocks in a first extent of the one or more extents, the first process is granted direct read access to the first extent in NVRAM. A reference to the first extent in NVRAM is stored in a first buffer header. The first buffer header is associated with the first process. The first process uses the first buffer header to directly access the one or more first data blocks in NVRAM.
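As a rough illustration, assuming the NVRAM extents are mapped into the process address space (for example via a DAX-style mmap), a DRAM buffer header that records a direct reference into NVRAM could be sketched as follows; the structures and names are assumptions for illustration only.

/* Illustrative sketch: the buffer header lives in DRAM and points into NVRAM. */
#include <stdint.h>

#define BLOCK_SIZE 4096

typedef struct {                    /* lives in NVRAM */
    uint8_t blocks[256][BLOCK_SIZE];
} extent_t;

typedef struct {                    /* lives in DRAM */
    extent_t *extent;               /* direct reference into mapped NVRAM */
    uint32_t  first_block;          /* range covered by this header */
    uint32_t  block_count;
    int       owner_pid;            /* process the header is associated with */
} buffer_header_t;

/* Grant a reader direct access: no copy into a DRAM buffer, just record
 * the NVRAM location in a buffer header owned by the requesting process. */
static const uint8_t *read_block_direct(buffer_header_t *hdr,
                                        extent_t *ext,
                                        uint32_t block_no,
                                        int pid)
{
    hdr->extent      = ext;
    hdr->first_block = block_no;
    hdr->block_count = 1;
    hdr->owner_pid   = pid;
    return ext->blocks[block_no];   /* caller reads the block in place */
}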
Abstract:
Techniques related to a server-side extension of client-side caches are provided. A storage server computer receives, from a database server computer, an eviction notification indicating that a data block has been evicted from the database server computer's cache. The storage server computer comprises a memory hierarchy including a volatile cache and a persistent cache. Upon receiving the eviction notification, the storage server computer retrieves the data block from the persistent cache and stores it in the volatile cache. When the storage server computer receives, from the database server computer, a request for the data block, the storage server computer retrieves the data block from the volatile cache. Furthermore, the storage server computer sends the data block to the database server computer, thereby causing the data block to be stored in the database server computer's cache. Still further, the storage server computer evicts the data block from the volatile cache.
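The flow can be illustrated with a toy two-tier cache on the storage server; the arrays below merely simulate the DRAM and flash tiers, and all names are placeholders rather than the product's API.

/* Toy sketch of the storage server's two cache tiers. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define BLOCK_SIZE   512
#define CACHE_SLOTS  64

typedef uint64_t block_id_t;

typedef struct {
    block_id_t id[CACHE_SLOTS];
    uint8_t    data[CACHE_SLOTS][BLOCK_SIZE];
    bool       used[CACHE_SLOTS];
} cache_t;

static cache_t volatile_cache;      /* DRAM tier */
static cache_t persistent_cache;    /* stands in for the flash tier */

static int cache_find(cache_t *c, block_id_t id)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (c->used[i] && c->id[i] == id) return i;
    return -1;
}

static void cache_put(cache_t *c, block_id_t id, const uint8_t *data)
{
    int slot = cache_find(c, id);
    if (slot < 0)
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (!c->used[i]) { slot = i; break; }
    if (slot < 0) return;                       /* no replacement policy here */
    c->used[slot] = true;
    c->id[slot]   = id;
    memcpy(c->data[slot], data, BLOCK_SIZE);
}

/* Database server evicted the block: stage it in the volatile tier so a
 * later miss on the database server can be served from DRAM. */
static void on_eviction_notification(block_id_t id)
{
    int slot = cache_find(&persistent_cache, id);
    if (slot >= 0)
        cache_put(&volatile_cache, id, persistent_cache.data[slot]);
}

/* Database server asks for the block: serve it from the volatile tier and
 * then drop it there, since it now lives in the database server's cache. */
static int on_block_request(block_id_t id, uint8_t *out)
{
    int slot = cache_find(&volatile_cache, id);
    if (slot < 0) return -1;                    /* fall back to the slower path */
    memcpy(out, volatile_cache.data[slot], BLOCK_SIZE);
    volatile_cache.used[slot] = false;          /* evict from the volatile cache */
    return 0;
}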
Abstract:
Techniques herein store database blocks (DBBs) in byte-addressable persistent memory (PMEM) and prevent tearing without deadlocking or waiting. In an embodiment, a computer hosts a DBMS. A reader process of the DBMS obtains, without locking and from metadata in PMEM, a first memory address for directly accessing a current version, which is a particular version, of a DBB in PMEM. Concurrently and without locking: a) the reader process reads the particular version of the DBB in PMEM, and b) a writer process of the DBMS replaces, in the metadata in PMEM, the first memory address with a second memory address for directly accessing a new version of the DBB in PMEM. In an embodiment, a computer performs without locking: a) storing, in PMEM, a DBB, b) copying into volatile memory, or reading, an image of the DBB, and c) detecting whether the image of the DBB is torn.
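A hedged sketch of the lock-free scheme: the PMEM metadata holds the address of the current block version, a writer publishes a new version by atomically swapping that address, and a reader detects a torn image by comparing version stamps at both ends of its copy. The layout and names below are assumptions, not the patent's exact format.

/* Sketch: lock-free publication of a new DBB version plus tear detection. */
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define DBB_PAYLOAD 4096

typedef struct {
    uint64_t version_head;          /* written first */
    uint8_t  payload[DBB_PAYLOAD];
    uint64_t version_tail;          /* written last; equals head when intact */
} dbb_t;

typedef struct {
    _Atomic(dbb_t *) current;       /* metadata in PMEM: address of current version */
} dbb_meta_t;

/* Writer: build the new version elsewhere in PMEM, then install it by
 * swapping the address in the metadata. Readers never block on this. */
static void publish_new_version(dbb_meta_t *meta, dbb_t *new_version)
{
    atomic_store_explicit(&meta->current, new_version, memory_order_release);
}

/* Reader: copy the block image without locking and report whether it is torn. */
static bool read_block(dbb_meta_t *meta, dbb_t *image)
{
    dbb_t *cur = atomic_load_explicit(&meta->current, memory_order_acquire);
    memcpy(image, cur, sizeof *image);
    return image->version_head == image->version_tail;   /* false => torn, retry */
}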
Abstract:
Techniques are provided for enabling a requesting entity to retrieve data that is managed by a database server instance from the volatile memory of a server machine that is executing the database server instance. The techniques allow the requesting entity to retrieve the data from the volatile memory of the host server machine without involving the database server instance in the retrieval operation. Because the retrieval does not involve the database server instance, the retrieval may succeed even when the database server instance has stalled or become unresponsive. In addition, direct retrieval of data using the techniques described herein will often be faster and more efficient than retrieval of the same information through conventional interaction with the database server instance.
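Conceptually, the requesting entity uses previously exchanged metadata (a remote base address and access key) to read the host machine's volatile memory directly. In the sketch below, rdma_read_remote() is a placeholder standing in for a one-sided RDMA read, and the table layout and all names are illustrative assumptions rather than the product's interface.

/* Conceptual sketch of the requester's side of a direct remote lookup. */
#include <stdint.h>
#include <stdbool.h>

#define ROW_SIZE 256

typedef struct {
    uint64_t remote_addr;     /* base address of the table in server volatile memory */
    uint32_t rkey;            /* remote access key obtained during setup */
    uint32_t bucket_count;
} remote_table_t;

/* Placeholder for the transport: copies `len` bytes from the remote region
 * into `dst` without any action by the remote database server instance. */
extern int rdma_read_remote(const remote_table_t *t, uint64_t offset,
                            void *dst, uint32_t len);

static uint64_t hash_key(uint64_t key) { return key * 0x9E3779B97F4A7C15ULL; }

/* Look up a row directly in the server's volatile memory. The database
 * server instance is never involved, so this works even if it has stalled. */
static bool remote_lookup(const remote_table_t *t, uint64_t key,
                          uint8_t row_out[ROW_SIZE])
{
    uint64_t bucket = hash_key(key) % t->bucket_count;
    uint64_t offset = bucket * ROW_SIZE;
    return rdma_read_remote(t, offset, row_out, ROW_SIZE) == 0;
}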