Abstract:
A system and methods for simulating the performance (e.g., miss rate) of one or more caches. A cache simulator comprises a segmented list of buffers, with each buffer configured to store a data identifier and an identifier of the buffer's segment. Data references, which may be copied from an operational cache, are applied to the list to conduct the simulation. Initial estimates of each cache's miss rate include the number of references that missed all segments of the list plus the hits in all segments not part of the cache. A correction factor is generated from the ratio of actual misses incurred by the operational cache to the estimated misses for a simulated cache of the same size as the operational cache. Final predictions are generated by multiplying the initial estimates by the correction factor. The size of the operational cache may be dynamically adjusted based on the final predictions.
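A minimal sketch of this kind of simulation, assuming an LRU-ordered segmented list with per-segment hit counters and a single counter for references that miss every segment; the class name, segment sizes, and use of Python's OrderedDict are illustrative choices rather than the patent's implementation.

```python
from collections import OrderedDict

class SegmentedListSimulator:
    """LRU-ordered list of buffers grouped into consecutive segments."""

    def __init__(self, segment_sizes):
        self.segment_sizes = segment_sizes      # number of buffers per segment
        self.capacity = sum(segment_sizes)
        self.lru = OrderedDict()                # data id -> True, most recent last
        self.hits = [0] * len(segment_sizes)    # hit count per segment
        self.misses = 0                         # references that missed every segment

    def _segment_of(self, position):
        # Map an LRU position (0 = most recently used) to its segment index.
        boundary = 0
        for seg, size in enumerate(self.segment_sizes):
            boundary += size
            if position < boundary:
                return seg

    def reference(self, data_id):
        if data_id in self.lru:
            position = list(reversed(self.lru)).index(data_id)
            self.hits[self._segment_of(position)] += 1
            self.lru.move_to_end(data_id)
        else:
            self.misses += 1
            self.lru[data_id] = True
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)    # drop the least recently used id

    def estimated_misses(self, segments_in_cache):
        # Initial estimate for a cache made of the first `segments_in_cache`
        # segments: misses against the whole list plus hits that landed in
        # segments the cache would not contain.
        return self.misses + sum(self.hits[segments_in_cache:])
```

A final prediction for a cache covering the first k segments would then be estimated_misses(k) multiplied by the correction factor, i.e. the operational cache's actual misses divided by estimated_misses evaluated at the segment count matching the operational cache's size.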
Abstract:
A method, apparatus, and system for automatically determining an optimal database subsection is provided. A database subsection is selected to optimize certain benefits when the database subsection is translated, transferred, and cached on an alternative database system, which may utilize a different technology or database engine that provides certain performance benefits compared to the original database system. Algorithms such as multi-path greedy selection and/or dynamic programming may provide optimal or near-optimal results. A host for the alternative database server may be shared with, or otherwise located in close physical proximity to, the database application or client layer to improve latency. Once the database subsection analysis is completed, a report may be generated and presented to the user, and an implementation script may also be created to automatically configure a client host to function as a cache or replacement system according to the various cache size configurations described in the report.
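One way to picture the selection step is a knapsack-style dynamic program over candidate objects, each with an estimated size and benefit; the candidate names, the benefit/size model, and the capacity units below are assumptions for illustration, not the patent's algorithm.

```python
def select_subsection(candidates, capacity):
    """candidates: list of (name, size, benefit); capacity: available cache size."""
    # best[c] = (total_benefit, chosen_names) using at most c units of cache
    best = [(0, [])] * (capacity + 1)
    for name, size, benefit in candidates:
        for c in range(capacity, size - 1, -1):      # classic 0/1 knapsack sweep
            gain = best[c - size][0] + benefit
            if gain > best[c][0]:
                best[c] = (gain, best[c - size][1] + [name])
    return best[capacity]

# Example: choose tables to cache within 10 GB on the alternative system.
print(select_subsection(
    [("orders", 6, 90), ("customers", 3, 35), ("logs", 8, 50)], 10))
# -> (125, ['orders', 'customers'])
```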
Abstract:
In column domain dictionary compression, column values in one or more columns are tokenized by a single dictionary. The domain of the dictionary is the entire set of columns. A dictionary may map a token not only to a tokenized value, but also to a count (“token count”) of the number of occurrences of the token and corresponding tokenized value in the dictionary's domain. Such information may be used to compute queries on the base table.
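A small sketch of the idea, assuming one shared dictionary covering two columns and a COUNT query answered from the token counts alone; the class and field names are invented for the example.

```python
class DomainDictionary:
    """One dictionary whose domain spans several columns."""
    def __init__(self):
        self.value_to_token = {}
        self.token_to_value = []
        self.token_count = []                 # occurrences across the whole domain

    def tokenize(self, value):
        token = self.value_to_token.get(value)
        if token is None:
            token = len(self.token_to_value)
            self.value_to_token[value] = token
            self.token_to_value.append(value)
            self.token_count.append(0)
        self.token_count[token] += 1
        return token

d = DomainDictionary()
city_col = [d.tokenize(v) for v in ["Oslo", "Lima", "Oslo"]]     # column 1
birth_col = [d.tokenize(v) for v in ["Lima", "Oslo"]]            # column 2

# COUNT of 'Oslo' across both columns, computed from the dictionary
# rather than by scanning the base table.
print(d.token_count[d.value_to_token["Oslo"]])                   # -> 3
```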
Abstract:
A replication track is a designated group of transactions that are to be replicated at a destination database in a way that, with respect to any other transaction in the replication track, preserves transactional dependency. Further, transactions in a replication track can be replicated at the destination database without regard to transactional dependency of other transactions in another track. This facilitates concurrent parallel replication of transactions of different tracks. Replicating data in this manner is referred to herein as track replication. An application may request execution of transactions and designate different tracks for transactions.
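A compact illustration of track replication, assuming each captured transaction arrives tagged with its track identifier and that an apply callback performs it at the destination; the function names and threading approach are illustrative, not the patent's mechanism.

```python
import threading
from collections import defaultdict

def replicate(transactions, apply_at_destination):
    """transactions: iterable of (track_id, txn), in source commit order."""
    tracks = defaultdict(list)
    for track_id, txn in transactions:        # keep per-track order intact
        tracks[track_id].append(txn)

    def apply_track(txns):
        for txn in txns:                      # serial within a track preserves
            apply_at_destination(txn)         # dependencies inside the track

    # Different tracks are applied concurrently, without regard to each other.
    threads = [threading.Thread(target=apply_track, args=(txns,))
               for txns in tracks.values()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```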
Abstract:
Allocation of memory is optimized across multiple pools of memory, based on minimizing the time it takes to successfully retrieve a given data item from each of the multiple pools. First data is generated that indicates a hit rate per pool size for each of multiple memory pools. In an embodiment, the generating step includes continuously monitoring attempts to access, or retrieve a data item from, each of the memory pools. The first data is converted to second data that accounts for a cost of a miss with respect to each of the memory pools. In an embodiment, the second data accounts for the cost of a miss in terms of time. How much of the memory to allocate to each of the memory pools is determined, based on the second data. In an embodiment, the steps of converting and determining are automatically performed, on a periodic basis.
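A rough sketch of the sizing step under simple assumptions: the "first data" is a hit-rate-per-size curve for each pool, the "second data" weighs misses by a per-pool time penalty, and the split that minimizes total miss time wins. The curves, access counts, penalties, and the exhaustive search are invented for the illustration.

```python
from itertools import product

def expected_miss_time(hit_rate_curve, size, accesses, miss_penalty):
    # "Second data": time lost to misses if the pool is given `size` granules.
    return accesses * (1.0 - hit_rate_curve[size]) * miss_penalty

def best_allocation(pools, total_granules):
    """pools: name -> (hit_rate_curve, accesses, miss_penalty_seconds)."""
    names = list(pools)
    best_split, best_cost = None, float("inf")
    # Exhaustive split of the granules; a periodic tuner could instead move
    # one granule at a time toward the pool where it saves the most time.
    for split in product(range(total_granules + 1), repeat=len(names)):
        if sum(split) != total_granules:
            continue
        cost = sum(expected_miss_time(curve, size, accesses, penalty)
                   for size, (curve, accesses, penalty)
                   in zip(split, (pools[n] for n in names)))
        if cost < best_cost:
            best_split, best_cost = dict(zip(names, split)), cost
    return best_split, best_cost

# Hit rate at 0..4 granules, access count, and miss penalty per pool.
pools = {
    "buffer_cache":  ([0.0, 0.50, 0.70, 0.80, 0.85], 1000, 0.010),
    "library_cache": ([0.0, 0.60, 0.90, 0.95, 0.96],  200, 0.002),
}
print(best_allocation(pools, 4))   # e.g. ({'buffer_cache': 4, 'library_cache': 0}, ~1.9)
```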
Abstract:
A partial reverse key index is described, which distributes contention when many resources vie to insert data into an index while still allowing range scans to be performed on the index. To do so, before an index entry for a key value is inserted into the index, the key value is transformed using a transformation operation that affects the order of only a subset of the key value. The index entry is then inserted based on the transformed key value. Because the transformation operation affects the order of the key value, the transformed values associated with two consecutive key values will not necessarily be consecutive. Therefore, the index entries associated with the consecutive key values may be inserted into unrelated portions of the index.
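One plausible instantiation of the transformation, offered only as an illustration: reverse the order of the low-order bytes of the key while leaving the high-order bytes untouched, so consecutive keys stop being adjacent in the index while the preserved prefix keeps range access workable. The byte split below is an assumption, not the patent's definition.

```python
def partial_reverse(key: bytes, reversed_suffix_len: int = 2) -> bytes:
    """Reverse only the trailing bytes of the key; the prefix keeps its order."""
    prefix = key[:-reversed_suffix_len]
    suffix = key[-reversed_suffix_len:]
    return prefix + suffix[::-1]

# Two consecutive key values...
k1 = (1000).to_bytes(4, "big")     # 00 00 03 e8
k2 = (1001).to_bytes(4, "big")     # 00 00 03 e9

# ...transform to values that are far apart in sort order, so their index
# entries can be inserted into unrelated portions of the index.
print(partial_reverse(k1).hex())   # 0000e803
print(partial_reverse(k2).hex())   # 0000e903
```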
Abstract:
An intelligent database infrastructure wherein the management of all database components is performed by and within the database itself by integrating management of various components with a central management control. Each individual database component, as well as the central management control, is self-managing. A central management control module integrates and interacts with the various database components. The database is configured to automatically tune to varying workloads and configurations, correct or alert about bad conditions, and advise on ways to improve overall system performance.
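Purely as a thumbnail of the architecture, assuming each self-managing component exposes a check that reports a tuning action, an alert, or advice, and the central module polls them; the interface and names are invented and not specified by the abstract.

```python
class Component:
    def __init__(self, name, check):
        self.name, self.check = name, check    # check() -> dict of findings

class CentralManagementControl:
    def __init__(self):
        self.components, self.alerts, self.advice = [], [], []

    def register(self, component):
        self.components.append(component)

    def manage(self):
        for c in self.components:
            status = c.check()                  # component self-diagnosis
            if status.get("action"):
                status["action"]()              # auto-tune to the current workload
            if status.get("alert"):
                self.alerts.append((c.name, status["alert"]))
            if status.get("advice"):
                self.advice.append((c.name, status["advice"]))

# Example: a memory component advising a larger cache.
cmc = CentralManagementControl()
cmc.register(Component("memory", lambda: {"advice": "grow buffer cache by 10%"}))
cmc.manage()
print(cmc.advice)                               # [('memory', 'grow buffer cache by 10%')]
```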
Abstract:
A method and apparatus for auto-tuning memory is provided. Memory on a computer system comprises at least one shared memory area and at least one private memory area. Addresses in the shared memory area are accessible to multiple processes. Addresses in the private memory area are dedicated to individual processes. Initially, a division in the amount of memory is established between the shared and private memory areas. Subsequently, a new division is determined. Consequently, memory from one memory area is “given” to the other memory area. In one approach, such a transfer is achieved by causing the shared and private memory areas to be physically separate from each other both before and after a change in the division. The division of the amount of memory may be changed to a new division by deallocating memory from one of the memory areas and allocating that memory to the other of the memory areas.
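A simplified sketch of re-dividing memory between the two areas by deallocating granules from one and allocating them to the other; the granule bookkeeping and names are assumptions made for the illustration.

```python
class MemoryDivision:
    def __init__(self, total_granules, shared_granules):
        self.total = total_granules
        self.shared = shared_granules                     # visible to all processes
        self.private = total_granules - shared_granules   # per-process work areas

    def retarget(self, new_shared_granules):
        """Move to a new division: free memory from one area, give it to the other."""
        delta = new_shared_granules - self.shared
        if delta > 0:
            self.private -= delta      # deallocate from the private area...
            self.shared += delta       # ...and allocate it to the shared area
        elif delta < 0:
            self.shared += delta       # shrink the shared area...
            self.private -= delta      # ...and grow the private area
        assert self.shared + self.private == self.total

div = MemoryDivision(total_granules=100, shared_granules=60)
div.retarget(75)                       # new division favors the shared area
print(div.shared, div.private)         # 75 25
```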
Abstract:
Provided herein is a mechanism that allows a given database system to access data blocks from another database system, where data blocks from the given database system and data blocks from the other database system have different sizes. According to an aspect of the present invention, a tablespace in the other database system contains the data blocks. The tablespace is detached from the other database system and integrated into the given database system, which is capable of processing data stored in data blocks of different sizes.
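An illustrative sketch of the receiving side handling more than one block size, assuming each plugged-in tablespace records its own block size and blocks are cached in a pool matching that size; the file layout, names, and per-size pools are assumptions, not the patent's design.

```python
class PluggedTablespace:
    def __init__(self, path, block_size):
        self.path, self.block_size = path, block_size

class MultiBlockSizeDatabase:
    def __init__(self, native_block_size):
        self.native_block_size = native_block_size
        self.tablespaces = {}
        self.buffer_pools = {native_block_size: {}}       # block size -> cache

    def attach(self, name, tablespace):
        # The tablespace was detached on the source system; integrating it here
        # registers it and prepares a pool for its (possibly foreign) block size.
        self.tablespaces[name] = tablespace
        self.buffer_pools.setdefault(tablespace.block_size, {})

    def read_block(self, name, block_no):
        ts = self.tablespaces[name]
        pool = self.buffer_pools[ts.block_size]
        if block_no not in pool:
            with open(ts.path, "rb") as f:
                f.seek(block_no * ts.block_size)           # offsets use the
                pool[block_no] = f.read(ts.block_size)     # tablespace's own size
        return pool[block_no]
```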
Abstract:
A method for simulating different MTTR settings includes determining a simulated MTTR setting and providing a simulated checkpoint queue. The simulated checkpoint queue is associated with the simulated MTTR setting and is an ordered list of one or more elements. Each element represents a buffer, and the ordered list has a head and a tail. The method also includes providing a simulated write counter associated with the simulated MTTR setting. The method further includes, in response to detecting a change to a first buffer, checking if the first buffer is represented in the simulated checkpoint queue. If the first buffer is not represented in the simulated checkpoint queue, an element that represents the first buffer is linked to the tail of the simulated checkpoint queue. An MTTR advisory system includes a memory, one or more processors coupled to the memory, a simulated MTTR setting, a simulated checkpoint queue, and a simulated write counter. The simulated MTTR setting is maintained in the memory. The simulated checkpoint queue is maintained in the memory and associated with the simulated MTTR setting. The simulated write counter is also maintained in the memory, and is associated with the simulated MTTR setting. The simulated write counter provides a count of the number of times an element is removed from the simulated checkpoint queue, where an element is removed from the simulated checkpoint queue in response to a buffer being written out of volatile memory and stored in nonvolatile memory.
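A condensed sketch of one simulated MTTR setting, assuming an ordered dictionary stands in for the checkpoint queue (head = oldest change, tail = newest) and that buffer-change and buffer-write events are delivered by the surrounding system; names and event plumbing are invented for the example.

```python
from collections import OrderedDict

class SimulatedMttrSetting:
    def __init__(self, mttr_seconds):
        self.mttr_seconds = mttr_seconds
        self.checkpoint_queue = OrderedDict()   # one element per dirty buffer
        self.write_counter = 0

    def on_buffer_change(self, buffer_id):
        # If the changed buffer is not yet represented, link a new element
        # at the tail of the simulated checkpoint queue.
        if buffer_id not in self.checkpoint_queue:
            self.checkpoint_queue[buffer_id] = True

    def on_buffer_written(self, buffer_id):
        # The buffer was written out of volatile memory and stored in
        # nonvolatile memory: remove its element and count the write.
        if self.checkpoint_queue.pop(buffer_id, None) is not None:
            self.write_counter += 1

# Several settings can be simulated side by side to advise on an MTTR choice.
settings = [SimulatedMttrSetting(30), SimulatedMttrSetting(120)]
for s in settings:
    s.on_buffer_change("buf-7")
    s.on_buffer_written("buf-7")
print([s.write_counter for s in settings])      # [1, 1]
```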