Abstract:
Methods and apparatus are provided for using machine learning to estimate query resource consumption in a massively parallel processing database (MPPDB). In various embodiments, the machine learning may jointly perform query resource consumption estimation and resource extreme-event detection for a query, utilize an adaptive kernel configured to learn an optimal similarity metric for the data from each system setting, and utilize multi-level stacking configured to leverage the outputs of diverse base classifier models. Advantages and benefits of the disclosed embodiments include faster and more reliable system performance and avoidance of resource issues such as out-of-memory (OOM) occurrences.
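As an illustration of the multi-level stacking idea in this abstract, the following sketch combines diverse base models through a meta-learner to estimate a query's peak memory and to flag potential extreme events. It assumes scikit-learn is available and uses hypothetical, randomly generated plan features; it is not the claimed implementation.

```python
# Minimal sketch of multi-level stacking for query memory estimation,
# assuming scikit-learn; features and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import (StackingRegressor, RandomForestRegressor,
                              GradientBoostingRegressor)
from sklearn.linear_model import Ridge

# Hypothetical plan-level features: estimated rows, join count,
# sort/aggregate operator count, table sizes, etc.
X_train = np.random.rand(500, 6)
y_train = np.random.rand(500) * 4096          # observed peak memory (MB)

# Level-0 base models with diverse inductive biases; a Ridge meta-model
# combines their out-of-fold predictions (level-1 stacking).
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100)),
        ("gbr", GradientBoostingRegressor()),
    ],
    final_estimator=Ridge(),
)
stack.fit(X_train, y_train)

# Joint use: the same prediction drives estimation and extreme-event flagging.
est_mb = stack.predict(np.random.rand(1, 6))[0]
oom_risk = est_mb > 0.9 * 4096                # flag queries near the memory ceiling
```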
Abstract:
Embodiments of the present technology relate to managing database query concurrency. A method of the present technology can include receiving a query, generating a first query plan that can be used to execute the query in system memory without any system memory constraints, and estimating a system memory cost for executing the query in the system memory using the first query plan. The method can also include placing the query in a queue if the available system memory does not satisfy the estimated system memory cost. The method can further include conditionally selecting the query from the queue, conditionally generating a second query plan that can be used to execute the query in the system memory in compliance with a system memory constraint, and conditionally executing the query in the system memory.
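The admission flow described in this abstract can be sketched as follows. The class name, the planner and executor hooks, and the cost figures are hypothetical placeholders for illustration, not the claimed implementation.

```python
from collections import deque

# Hypothetical planner/executor hooks; a real engine would invoke its
# optimizer and execution engine here. These stubs exist only to make
# the sketch runnable.
def plan_unconstrained(query):
    return {"query": query, "limit_mb": None}      # first plan: no memory constraint

def plan_constrained(query, limit_mb):
    return {"query": query, "limit_mb": limit_mb}  # second plan: memory-constrained

def estimate_memory_cost(plan):
    return 256 if plan["limit_mb"] is None else plan["limit_mb"]

def execute(plan):
    pass                                           # stand-in for actual execution

class MemoryGovernor:
    """Admits a query when memory allows, otherwise queues it."""
    def __init__(self, total_memory_mb):
        self.available_mb = total_memory_mb
        self.waiting = deque()

    def submit(self, query):
        plan = plan_unconstrained(query)
        cost = estimate_memory_cost(plan)
        if cost <= self.available_mb:
            self.available_mb -= cost
            execute(plan)
            self.available_mb += cost
        else:
            self.waiting.append(query)             # insufficient memory: queue it

    def drain_queue(self, memory_limit_mb):
        # Conditionally select a queued query, re-plan it under an explicit
        # memory constraint, and execute it if the constrained plan now fits.
        while self.waiting:
            query = self.waiting.popleft()
            plan = plan_constrained(query, memory_limit_mb)
            cost = estimate_memory_cost(plan)
            if cost > self.available_mb:
                self.waiting.appendleft(query)
                break
            self.available_mb -= cost
            execute(plan)
            self.available_mb += cost

# Usage: submit queries against a 512 MB budget, then drain any queued work
# under a 128 MB per-query constraint.
gov = MemoryGovernor(total_memory_mb=512)
for q in ["q1", "q2", "q3"]:
    gov.submit(q)
gov.drain_queue(memory_limit_mb=128)
```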
Abstract:
The present technology relates to managing data caching in processing nodes of a massively parallel processing (MPP) database system. A directory is maintained that lists the data pages in the MPP database system and their storage locations. The processing nodes monitor memory usage by exchanging memory usage information with each other, and each node maintains, based on that information, a list of the other nodes and the amount of memory available on each of them. Data pages are read from the memory of the processing nodes in response to receiving a request to fetch the data pages, and a remote memory manager is queried for the available memory in each of the processing nodes in response to the request. The data pages are distributed to the memory of processing nodes having sufficient space available for storage during data processing.
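A minimal sketch of the page directory and peer memory accounting described above; the class and method names are illustrative assumptions rather than the patented design.

```python
class PageDirectory:
    """Maps each data page to the processing node currently caching it."""
    def __init__(self):
        self.location = {}                       # page_id -> node_id

class ProcessingNode:
    def __init__(self, node_id, capacity_mb, cluster):
        self.node_id = node_id
        self.free_mb = capacity_mb
        self.pages = {}                          # page_id -> page contents
        self.cluster = cluster                   # node_id -> ProcessingNode

    def memory_report(self):
        # Exchanged between nodes so each one can track its peers' free memory.
        return {"node": self.node_id, "free_mb": self.free_mb}

    def fetch(self, page_id, directory):
        # Serve the page locally if cached; otherwise read it from the node
        # the directory lists as the page's storage location.
        if page_id in self.pages:
            return self.pages[page_id]
        owner = directory.location.get(page_id)
        return self.cluster[owner].pages[page_id] if owner is not None else None

    def place(self, page_id, page, size_mb, directory):
        # Acting as the "remote memory manager": consult peers' reported free
        # memory and cache the page on whichever node has sufficient space.
        target = max(self.cluster.values(), key=lambda n: n.free_mb)
        if target.free_mb >= size_mb:
            target.pages[page_id] = page
            target.free_mb -= size_mb
            directory.location[page_id] = target.node_id

# Usage: two nodes share a directory; a page spilled by node 1 lands on node 2,
# which has more free memory, and can later be fetched through the directory.
cluster = {}
directory = PageDirectory()
cluster[1] = ProcessingNode(1, capacity_mb=10, cluster=cluster)
cluster[2] = ProcessingNode(2, capacity_mb=100, cluster=cluster)
cluster[1].place("p1", b"...", size_mb=50, directory=directory)
assert cluster[1].fetch("p1", directory) == b"..."
```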
Abstract:
A system and method for parallelizing hash-based operators in symmetric multiprocessing (SMP) databases is provided. In an embodiment, a method in a device for performing hash-based database operations includes receiving a database query at the device, creating a plurality of execution workers to process the query, and building, by the execution workers, a hash table from a database table that comprises either a plurality of partitions or a plurality of scan units, the hash table being shared by the execution workers. If the database table is partitioned, each execution worker scans a corresponding partition and adds entries to the hash table; if the database table comprises scan units, each execution worker scans an unprocessed scan unit and adds entries to the hash table according to that scan unit. The workers perform the scanning and the adding in parallel.
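The scan-unit variant described above can be sketched roughly as follows, with workers claiming unprocessed scan units and inserting into a shared hash table. The helper names, and the use of Python threads with a single lock, are simplifying assumptions rather than the disclosed mechanism.

```python
# Sketch of execution workers building a shared hash table from scan units.
import threading
from collections import defaultdict

def build_shared_hash_table(scan_units, key_of, num_workers=4):
    hash_table = defaultdict(list)      # join key -> matching rows
    lock = threading.Lock()
    unit_iter = iter(scan_units)

    def worker():
        while True:
            with lock:
                unit = next(unit_iter, None)    # claim the next unprocessed scan unit
            if unit is None:
                return
            local = defaultdict(list)           # build locally, then merge under the lock
            for row in unit:
                local[key_of(row)].append(row)
            with lock:
                for k, rows in local.items():
                    hash_table[k].extend(rows)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return hash_table

# Usage: rows grouped into scan units, hashed on the first column.
units = [[(1, "a"), (2, "b")], [(1, "c"), (3, "d")]]
table = build_shared_hash_table(units, key_of=lambda r: r[0])
```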
Abstract:
Dynamically re-allocating tasks and/or memory quotas amongst work agents in symmetric multiprocessing (SMP) systems can significantly mitigate delays and inefficiencies associated with data skew. For example, unfinished tasks can be reallocated from a busy work agent to an idle work agent upon determining that the idle work agent has finished processing its originally assigned set of tasks. Alternatively, a portion of a memory quota assigned to an idle work agent can be reallocated to a busy work agent for use in processing the remaining tasks. Memory quotas can be re-assigned by releasing the memory quota back into a memory pool once the idle work agent has finished processing its originally assigned tasks, and then reallocating some or all of the memory quota to the busy work agent.
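A toy sketch of the two re-allocation options described above, task stealing and quota re-granting. The WorkAgent class and the half-and-half split policies are illustrative assumptions only.

```python
from collections import deque

class WorkAgent:
    """Illustrative worker with its assigned tasks and memory quota (MB)."""
    def __init__(self, name, tasks, quota_mb):
        self.name = name
        self.tasks = deque(tasks)
        self.quota_mb = quota_mb

def steal_tasks(idle, busy):
    # Option 1: move half of the busy agent's unfinished tasks to the idle agent.
    for _ in range(len(busy.tasks) // 2):
        idle.tasks.append(busy.tasks.pop())

def regrant_quota(idle, busy, memory_pool_mb):
    # Option 2: the idle agent releases its quota back into the pool once its
    # own tasks are done, and some or all of it is granted to the busy agent.
    memory_pool_mb += idle.quota_mb
    idle.quota_mb = 0
    grant = memory_pool_mb // 2
    busy.quota_mb += grant
    return memory_pool_mb - grant

# Usage (option 2): the idle agent's 64 MB quota is released and re-granted
# in part to the busy agent still working through its skewed backlog.
idle = WorkAgent("idle", tasks=[], quota_mb=64)
busy = WorkAgent("busy", tasks=list(range(100)), quota_mb=64)
pool_mb = regrant_quota(idle, busy, memory_pool_mb=0)
```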