Abstract:
A database containing datafiles is partitioned into a set of tablespaces. Every disk pointer pointing to a data item in a datafile refers to a tablespace-relative file number for the datafile. Data pointed to by a tablespace-relative disk pointer is retrieved by first checking the cache; upon a cache miss, the tablespace-relative file number is translated into an absolute file number according to a latch-free lookup technique.
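A minimal sketch of the lookup flow described above, with hypothetical names throughout; an immutable mapping snapshot stands in for the latch-free translation structure, which the abstract does not detail.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiskPointer:
    tablespace_id: int
    rel_file_no: int      # tablespace-relative file number
    block_no: int

class Database:
    def __init__(self):
        self.cache = {}    # (tablespace, rel file, block) -> data
        # Immutable snapshot of the mapping; replaced wholesale on change,
        # so readers never take a latch (illustrative assumption).
        self.file_map = {(1, 1): 17, (1, 2): 42}
        self.datafiles = {17: {0: "row data A"}, 42: {5: "row data B"}}

    def read(self, ptr: DiskPointer):
        key = (ptr.tablespace_id, ptr.rel_file_no, ptr.block_no)
        if key in self.cache:              # cache hit: no translation needed
            return self.cache[key]
        # Cache miss: translate the tablespace-relative file number
        # to an absolute file number, then read from the datafile.
        abs_file_no = self.file_map[(ptr.tablespace_id, ptr.rel_file_no)]
        data = self.datafiles[abs_file_no][ptr.block_no]
        self.cache[key] = data
        return data

db = Database()
print(db.read(DiskPointer(1, 2, 5)))       # -> "row data B"
```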
Abstract:
A method for preventing multiple pings. An embodiment of the invention detects requests for data blocks that entail pings likely to cause additional pings. The servicing of a request involving a ping likely to cause additional pings is deferred until a service-enabling condition occurs. Another embodiment of the invention detects situations where further updating a data block before pinging it reduces the use of resources on the remote node requesting the data block. The servicing of the request for the data block is deferred until a service-enabling condition occurs.
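A minimal sketch of the deferral idea, with an entirely hypothetical service-enabling condition (a batched-update count or a timeout); the abstract does not specify what condition is used.

```python
import time

class BlockHolder:
    """Holds a data block and defers shipping it ("pinging") to a remote node."""

    def __init__(self, updates_before_release=3, max_defer_seconds=0.5):
        self.pending_updates = 0
        self.deferred_since = None
        self.updates_before_release = updates_before_release
        self.max_defer_seconds = max_defer_seconds

    def update_block(self):
        self.pending_updates += 1

    def request_block(self):
        """Return True when the block may be shipped to the remote node."""
        if self.deferred_since is None:
            self.deferred_since = time.monotonic()
        enough_work_done = self.pending_updates >= self.updates_before_release
        waited_too_long = (time.monotonic() - self.deferred_since
                           >= self.max_defer_seconds)
        if enough_work_done or waited_too_long:
            self.pending_updates = 0
            self.deferred_since = None
            return True          # service-enabling condition occurred
        return False             # keep deferring the ping

holder = BlockHolder()
holder.update_block()
print(holder.request_block())    # False: still deferring
holder.update_block(); holder.update_block()
print(holder.request_block())    # True: enough local updates batched first
```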
Abstract:
A method and apparatus for replacing data in a list of buffers is provided. The list of buffers has a hot end and a cold end. The buffers at the hot end are maintained in a FIFO list and the buffers at the cold end are maintained in an LRU list. If requested data is found in the LRU portion of the buffer list, the buffer containing the requested data is moved to the head of the FIFO list. If the data is found in a buffer in the FIFO portion of the buffer list, no rearrangement is required. If the requested data is not in the buffer list, the data is stored into the buffer at the tail end of the LRU list, and that buffer is then moved to the head of the FIFO list.
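A minimal sketch of this replacement policy. Capacities, the demotion of buffers that fall off the FIFO tail into the LRU portion, and all names are illustrative assumptions, not taken from the source.

```python
from collections import OrderedDict

class HotColdBufferList:
    def __init__(self, hot_capacity=4, cold_capacity=4):
        self.hot = OrderedDict()    # FIFO portion; leftmost = head (hot end)
        self.cold = OrderedDict()   # LRU portion; rightmost = tail (cold end)
        self.hot_capacity = hot_capacity
        self.cold_capacity = cold_capacity

    def _insert_hot_head(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key, last=False)          # head of the FIFO
        if len(self.hot) > self.hot_capacity:
            # Assumed behavior: a buffer leaving the FIFO tail joins the LRU portion.
            demoted_key, demoted_val = self.hot.popitem(last=True)
            self.cold[demoted_key] = demoted_val
            self.cold.move_to_end(demoted_key, last=False)
            if len(self.cold) > self.cold_capacity:
                self.cold.popitem(last=True)           # evict at the cold end

    def get(self, key, load):
        if key in self.hot:
            return self.hot[key]                       # FIFO hit: no movement
        if key in self.cold:
            value = self.cold.pop(key)                 # LRU hit: promote to FIFO head
            self._insert_hot_head(key, value)
            return value
        # Miss: reuse the buffer at the LRU tail, then move it to the FIFO head.
        if len(self.cold) >= self.cold_capacity:
            self.cold.popitem(last=True)
        value = load(key)
        self._insert_hot_head(key, value)
        return value

buffers = HotColdBufferList()
print(buffers.get("block-7", load=lambda k: f"data for {k}"))
```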
Abstract:
Techniques are provided for using an intermediate cache to provide some of the items involved in a scan operation, while other items involved in the scan operation are provided from primary storage. Techniques are also provided for determining whether to service an I/O request for an item with a copy of the item that resides in the intermediate cache based on factors such as a) an identity of the user for whom the I/O request was submitted, b) an identity of a service that submitted the I/O request, c) an indication of a consumer group to which the I/O request maps, d) whether the I/O request is associated with an offloaded filter provided by the database server to the storage system, or e) whether the intermediate cache is overloaded. Techniques are also provided for determining whether to store items in an intermediate cache in response to the items being retrieved, based on logical characteristics associated with the requests that retrieve the items.
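A minimal sketch of the kind of policy decision described, with all names and rules hypothetical: whether a request should be serviced from the intermediate cache is decided from who issued it and from cache load, rather than from recency alone.

```python
from dataclasses import dataclass

@dataclass
class IORequest:
    user: str
    service: str
    consumer_group: str
    has_offloaded_filter: bool

def serve_from_intermediate_cache(req: IORequest, cache_overloaded: bool,
                                  favored_groups=frozenset({"interactive"}),
                                  blocked_services=frozenset({"backup"})) -> bool:
    if cache_overloaded:
        return False                 # fall through to primary storage
    if req.service in blocked_services:
        return False                 # e.g., bulk/backup traffic bypasses the cache
    if req.has_offloaded_filter:
        return True                  # requests carrying an offloaded filter use the cache
    return req.consumer_group in favored_groups

req = IORequest(user="scott", service="oltp", consumer_group="interactive",
                has_offloaded_filter=False)
print(serve_from_intermediate_cache(req, cache_overloaded=False))   # True
```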
Abstract:
Techniques for maintaining a cascading index are provided. In one approach, one or more branch node compression techniques are applied to the main index of a cascading index. In an approach, a Bloom filter is generated and associated with, e.g., a branch node in the main index. The Bloom filter is used to determine, without accessing any leaf blocks, whether a particular key value exists, e.g., in the leaf blocks associated with the branch node. In an approach, a new redo record is generated in response to a merge operation between two levels of the cascading index. The new redo record comprises (a) one or more addresses of blocks that are affected by the merge operation, (b) data that is being “pushed down” to a lower level of the cascading index, and (c) one or more addresses of blocks that are written to disk storage as a result of the merge operation.
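A minimal sketch of the Bloom-filter aspect, with illustrative parameters and a plain set standing in for the leaf blocks under a branch node: a probe consults the branch node's Bloom filter and skips the leaf blocks when the key is definitely absent.

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, key: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key: str) -> bool:
        return all(self.bits >> pos & 1 for pos in self._positions(key))

class BranchNode:
    def __init__(self, leaf_keys):
        self.leaf_keys = set(leaf_keys)        # stands in for the leaf blocks
        self.bloom = BloomFilter()
        for k in leaf_keys:
            self.bloom.add(k)

    def lookup(self, key):
        if not self.bloom.might_contain(key):  # no leaf-block access needed
            return None
        return key if key in self.leaf_keys else None   # possible false positive

node = BranchNode(["k001", "k007", "k042"])
print(node.lookup("k007"))     # found via the branch node's leaves
print(node.lookup("k999"))     # the Bloom filter usually prunes this probe
```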
Abstract:
A system and method for enabling a second database instance to more quickly process a request to execute a database statement that has previously been executed by a first database instance is described. In one embodiment, the method involves sending the database statement from the first database instance to the second database instance, and generating by the second database instance one or more structures needed to prepare the statement for execution, such as a parse tree and an execution plan for the statement. If, at some point in the future, the second database instance receives a request to execute the same statement, these structures can be used for execution, thereby eliminating the need for one or more potentially time-consuming operations, such as generation of a parse tree or execution plan for the statement.
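A minimal sketch of the idea, with placeholder structures: after the first instance runs a statement, it forwards the text to a second instance, which parses and plans it ahead of time so a later execution request finds the structures already built.

```python
class Instance:
    def __init__(self, name):
        self.name = name
        self.prepared = {}        # statement text -> (parse tree, plan)

    def _build_structures(self, sql):
        parse_tree = ("PARSE", sql)            # placeholder parse tree
        plan = ("PLAN", sql)                   # placeholder execution plan
        return parse_tree, plan

    def receive_statement(self, sql):
        """Called when another instance forwards a statement it has executed."""
        if sql not in self.prepared:
            self.prepared[sql] = self._build_structures(sql)

    def execute(self, sql):
        if sql in self.prepared:
            structures = self.prepared[sql]    # no parse/plan work needed
        else:
            structures = self._build_structures(sql)
            self.prepared[sql] = structures
        return f"{self.name} executed with {structures[1]}"

first, second = Instance("inst1"), Instance("inst2")
sql = "SELECT * FROM emp WHERE deptno = :1"
first.execute(sql)
second.receive_statement(sql)      # forwarded by the first instance
print(second.execute(sql))         # uses the pre-built structures
```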
Abstract:
Allocation of memory is optimized across multiple pools of memory, based on minimizing the time it takes to successfully retrieve a given data item from each of the multiple pools. First data is generated that indicates a hit rate per pool size for each of multiple memory pools. In an embodiment, the generating step includes continuously monitoring attempts to access, or retrieve a data item from, each of the memory pools. The first data is converted to second data that accounts for the cost of a miss with respect to each of the memory pools. In an embodiment, the second data accounts for the cost of a miss in terms of time. How much memory to allocate to each of the memory pools is determined based on the second data. In an embodiment, the converting and determining steps are performed automatically, on a periodic basis.
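A minimal sketch of the sizing step with made-up numbers and a greedy allocator that is only one plausible way to act on the second data: per-pool hit rates at candidate sizes are converted to an expected miss-time cost, and memory granules go to whichever pool saves the most time for its next granule.

```python
# hit_rate[pool][size_in_granules] -> fraction of accesses that hit (illustrative)
hit_rate = {
    "buffer_cache": {1: 0.50, 2: 0.70, 3: 0.80, 4: 0.85},
    "shared_pool":  {1: 0.60, 2: 0.65, 3: 0.68, 4: 0.70},
}
accesses_per_sec = {"buffer_cache": 1000, "shared_pool": 400}
miss_penalty_sec = {"buffer_cache": 0.004, "shared_pool": 0.020}

def miss_time(pool, size):
    """Expected seconds per second spent servicing misses for this pool at this size."""
    return accesses_per_sec[pool] * (1 - hit_rate[pool][size]) * miss_penalty_sec[pool]

def allocate(total_granules):
    sizes = {pool: 1 for pool in hit_rate}              # start at the minimum size
    for _ in range(total_granules - len(sizes)):
        # Give the next granule to the pool with the largest miss-time saving.
        best = max(
            (p for p in sizes if sizes[p] + 1 in hit_rate[p]),
            key=lambda p: miss_time(p, sizes[p]) - miss_time(p, sizes[p] + 1),
            default=None,
        )
        if best is None:
            break
        sizes[best] += 1
    return sizes

print(allocate(total_granules=6))
```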
Abstract:
Techniques for compressing branch nodes in an index are provided. The branch nodes may be part of a main index of a multi-level index that also includes one or more journal indexes. A Bloom filter may be generated and associated with, e.g., a branch node in the main index. The Bloom filter is used to determine, without accessing any leaf blocks, whether a particular key value exists, e.g., in the leaf blocks associated with the branch node.
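A minimal sketch of one common branch-node compression idea, offered as an illustration rather than the specific technique claimed here: store only the shortest separator prefix that still distinguishes adjacent child subtrees, instead of the full key.

```python
def shortest_separator(left_max: str, right_min: str) -> str:
    """Shortest prefix of right_min that still sorts strictly above left_max."""
    for length in range(1, len(right_min) + 1):
        candidate = right_min[:length]
        if candidate > left_max:
            return candidate
    return right_min

# Key boundaries between adjacent leaf blocks of a hypothetical branch node.
leaf_boundaries = [("anderson", "andrews"), ("baker", "barnes"), ("carter", "chen")]
compressed = [shortest_separator(lo, hi) for lo, hi in leaf_boundaries]
print(compressed)     # ['andr', 'bar', 'ch'] -- far shorter than the full keys
```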
Abstract:
A partial reverse key index is described, which allows contention to be distributed as resources vie to insert data into an index while still allowing range scans to be performed on the index. To do so, before an index entry for a key value is inserted into an index, the key value is transformed using a transformation operation that affects the order of only a subset of the key value. The index entry is then inserted based on the transformed key value. Because the transformation operation affects the order of the key value, the transformed values associated with two consecutive key values will not necessarily be consecutive. Therefore, the index entries associated with the consecutive key values may be inserted into unrelated portions of the index.
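A minimal sketch, treating "reverse only the low-order bytes of the key" as an illustrative assumption about what such a partial transformation might look like: consecutive key values no longer produce consecutive index keys, so concurrent inserts scatter, while the untouched leading bytes keep enough order for scans at that granularity.

```python
def partial_reverse(key: bytes, reversed_suffix_len: int = 2) -> bytes:
    head = key[:-reversed_suffix_len]
    tail = key[-reversed_suffix_len:]
    return head + tail[::-1]          # only the suffix order is disturbed

def restore(transformed: bytes, reversed_suffix_len: int = 2) -> bytes:
    # Reversing the same suffix again recovers the original key value.
    return partial_reverse(transformed, reversed_suffix_len)

keys = [(1000 + i).to_bytes(4, "big") for i in range(4)]   # consecutive key values
for k in keys:
    t = partial_reverse(k)
    print(k.hex(), "->", t.hex())     # transformed values are not consecutive
```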