Abstract:
System and method for a hybrid distribution mode in a massively parallel processing (MPP) database that prevents storage imbalance caused by data skew. Key values of the database are identified as outliers if records with those keys cause database skew. In hybrid mode, records having the outlier key values are distributed using a random distribution scheme, while other records are distributed using a hash distribution scheme. A threshold skew amount is configurable for the system. Record lookups, insertions, deletions, and updates are processed according to a query plan optimized for the distribution mode of the records referenced in a database query.
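A minimal sketch of this hybrid scheme in Python. The node count, the 10% skew threshold, and the frequency-based outlier test are illustrative assumptions rather than the patent's exact criteria:

```python
import random
from collections import Counter

NUM_NODES = 8           # assumed cluster size
SKEW_THRESHOLD = 0.10   # assumed: a key is an outlier above 10% of all records

def find_outlier_keys(records, key_fn):
    # Count records per key and flag keys whose share exceeds the threshold.
    counts = Counter(key_fn(r) for r in records)
    total = len(records)
    return {k for k, c in counts.items() if c / total > SKEW_THRESHOLD}

def assign_node(record, key_fn, outlier_keys):
    key = key_fn(record)
    if key in outlier_keys:
        return random.randrange(NUM_NODES)   # random distribution for outlier keys
    return hash(key) % NUM_NODES             # hash distribution for all other records
```

A query plan aware of this split would probe a single node for non-outlier keys and fan the lookup out to all nodes for outlier keys, which is one way to read the abstract's mode-specific query plans.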
Abstract:
The disclosure relates to technology for estimating a number of samples satisfying a database query. One or more subsets are randomly drawn from a sample dataset of a collection of all data. The one or more subsets are queried to determine a number of cardinalities as training data. A prediction model based on the training data is then trained using machine learning or statistical methods, and a sample size satisfying the database query of the collection of all data is estimated using the trained prediction model.
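A small sketch of the sampling-and-training idea, assuming a plain least-squares line through the origin stands in for the patent's machine learning or statistical model, and that (subset size, matching-row count) pairs serve as the training data:

```python
import random

def train_and_estimate(sample, predicate, collection_size, num_subsets=20):
    # Draw random subsets of the sample and record (subset size, matching rows)
    # pairs as training data.
    xs, ys = [], []
    for _ in range(num_subsets):
        size = random.randint(1, len(sample))
        subset = random.sample(sample, size)
        xs.append(size)
        ys.append(sum(1 for row in subset if predicate(row)))
    # Fit a least-squares slope through the origin: matches per sampled row.
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    # Extrapolate to the full collection.
    return slope * collection_size
```

Any regression or classification model could replace the one-parameter fit; the point is only that cardinalities measured on random subsets become training data for predicting the count over all data.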
Abstract:
A computer-implemented method and system at a network switch provide for using one or more processors to perform a pre-defined database function on query data contained in data messages received at the network switch, with the performing producing result data, and wherein the pre-defined database function is performed on the query data in a first mode of operation to a state of full completion, generating complete result data and no skipped query data, and in a second mode of operation to a state of partial completion, generating partially complete result data and skipped query data. Further, the method and system perform one or more network switch functions to route the complete result data, and/or route the partially complete result data and skipped query data, to one or more destination nodes. In addition, an application programming interface (API) is used to define the database function.
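A sketch of the two completion modes and the routing step. The message budget, the database function, and the send callables are hypothetical stand-ins for the switch's interface, which the abstract says is defined through an API:

```python
def process_messages(messages, db_function, budget=None):
    # First mode (budget is None): run to full completion, no skipped data.
    # Second mode: stop applying the function after `budget` messages and
    # treat the remainder as skipped query data.
    results, skipped = [], []
    for i, msg in enumerate(messages):
        if budget is not None and i >= budget:
            skipped.append(msg)
        else:
            results.append(db_function(msg))
    return results, skipped

def route(results, skipped, send_result, send_skipped):
    # Network switch functions: forward result data and any skipped query
    # data to their destination nodes.
    for r in results:
        send_result(r)
    for m in skipped:
        send_skipped(m)
```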
Abstract:
In one embodiment, a method includes determining a number of initial servers in a massively parallel processing (MPP) database cluster and determining an initial bucket configuration of the MPP database cluster, where the initial bucket configuration has a number of initial buckets. The method also includes adding a number of additional servers to the MPP database cluster to produce a number of updated servers, where the updated servers include the initial servers and the additional servers, and creating an updated bucket configuration in accordance with the number of initial servers, the initial bucket configuration, and the number of additional servers, where the updated bucket configuration has a number of updated buckets. Additionally, the method includes redistributing data of the MPP cluster in accordance with the updated bucket configuration.
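One way to picture the reconfiguration, under the assumption (not stated in the abstract) that the updated bucket count is the smallest multiple of the updated server count that is at least the initial bucket count, with buckets placed on servers round-robin:

```python
import math

def updated_bucket_count(initial_servers, initial_buckets, additional_servers):
    updated_servers = initial_servers + additional_servers
    # Assumed rule: smallest multiple of the updated server count that
    # preserves at least the initial number of buckets.
    return math.ceil(initial_buckets / updated_servers) * updated_servers

def redistribute(keys, num_buckets, num_servers):
    # Map each key to a bucket, then place buckets round-robin onto servers.
    return {key: (hash(key) % num_buckets) % num_servers for key in keys}
```

For example, growing a 4-server, 16-bucket cluster by 2 servers yields 18 buckets, 3 per server, so only the buckets that change owners need to move.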
Abstract:
System and method embodiments are provided for using different storage formats for a primary database and its replicas in a database managed replication (DMR) system. As such, the advantages of both formats can be combined with suitable design complexity and implementation. In an embodiment, data is arranged in a sequence of rows and stored in a first storage format at the primary database. The data arranged in the sequence of rows is also stored in a second storage format at the replica database. The sequence of rows is determined according to the first storage format or the second storage format. The first storage format is a row store (RS) and the second storage format is a column store (CS), or vice versa. In an embodiment, the sequence of rows is determined to improve compression efficiency at the CS.
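A toy illustration of keeping the same logical rows in both formats. Sorting by the "city" column before building the column store is an assumed choice of row sequence to improve compression efficiency (equal values end up adjacent); the abstract leaves the exact ordering criterion open:

```python
rows = [
    {"id": 3, "city": "Oslo", "amount": 10},
    {"id": 1, "city": "Lima", "amount": 25},
    {"id": 2, "city": "Oslo", "amount": 40},
]

# Primary database: row store (RS) -- each record is kept together.
row_store = list(rows)

# Replica database: column store (CS) -- rows are ordered by "city" so equal
# values cluster, then each column is stored contiguously.
ordered = sorted(rows, key=lambda r: r["city"])
column_store = {col: [r[col] for r in ordered] for col in ordered[0]}
# column_store["city"] == ["Lima", "Oslo", "Oslo"], which run-length encodes well.
```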
Abstract:
System and method embodiments are provided for adaptive vector size selection for vectorized query execution. The adaptive vector size selection is implemented in two stages. In a query planning stage, a suitable vector size is estimated for a query by a query planner. The planning stage includes analyzing a query plan tree, segmenting the tree into different segments, and assigning, in the query execution plan, an initial vector size to each segment. In a subsequent query execution stage, an execution engine monitors hardware performance indicators and adjusts the vector size according to the monitored hardware performance indicators. Adjusting the vector size includes trying different vector sizes and observing related processor counters to increase or decrease the vector size, wherein the vector size is increased to improve hardware performance according to the processor counters, and wherein the vector size is decreased when the processor counters indicate a decrease in hardware performance.
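A sketch of the execution-stage feedback loop. The read_counters callable stands in for real hardware performance counters (for example, cache misses per tuple), and the doubling/halving factors and size bounds are assumptions:

```python
def tune_vector_size(run_segment, read_counters, initial_size,
                     min_size=64, max_size=65536):
    size = initial_size
    best_cost = float("inf")
    while True:
        run_segment(size)            # execute the plan segment with this vector size
        cost = read_counters()       # lower is better, e.g. misses per tuple
        if cost < best_cost:
            best_cost = cost
            if size * 2 <= max_size:
                size *= 2            # counters improved: try a larger vector
                continue
        elif size // 2 >= min_size:
            size //= 2               # counters degraded: back off to the last good size
        return size
```

The planning-stage estimate from the query planner would supply initial_size per segment; the loop above only refines it while the segment runs.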
Abstract:
In one embodiment, a method for managing database resources includes selecting a first query from a queue of queries and transmitting, by a global resource manager to a portion of a plurality of data nodes, a plurality of reserve resource messages. The method also includes receiving, by the global resource manager from the portion of the plurality of data nodes, a plurality of acknowledgement messages and transmitting, by the global resource manager to a coordinator node, an execute query message when the plurality of acknowledgement messages are positive acknowledgements.
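A sketch of the reserve-then-execute handshake driven by the global resource manager. The node and coordinator interfaces (reserve_resources, release_resources, execute_query) are hypothetical names, and re-queuing on failure is an assumption:

```python
def schedule_next(queue, data_nodes, coordinator):
    query = queue.pop(0)                                   # select the first query
    acks = [node.reserve_resources(query) for node in data_nodes]
    if all(acks):                                          # all positive acknowledgements
        coordinator.execute_query(query)
    else:
        for node, ack in zip(data_nodes, acks):
            if ack:
                node.release_resources(query)              # undo partial reservations
        queue.append(query)                                # try again later
```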
Abstract:
Queries may be processed more efficiently in a massively parallel processing (MPP) database by locally optimizing the global execution plan. The global execution plan and a semantic tree may be provided to MPP data nodes by an MPP coordinator. The MPP data nodes may then use the global execution plan and the semantic tree to generate a local execution plan. Thereafter, the MPP data nodes may select either the global execution plan or the local execution plan in accordance with a cost evaluation.
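A compact sketch of the per-node choice described above. Local plan generation and the cost model are hypothetical placeholders; the abstract only requires that each data node can cost both alternatives and keep the cheaper one:

```python
def choose_plan(global_plan, semantic_tree, local_stats,
                build_local_plan, estimate_cost):
    # Each MPP data node derives its own plan from the semantic tree, costs
    # both candidates against local statistics, and keeps the cheaper plan.
    local_plan = build_local_plan(semantic_tree, local_stats)
    if estimate_cost(local_plan, local_stats) < estimate_cost(global_plan, local_stats):
        return local_plan
    return global_plan
```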