WORKLOAD-AWARE DATA PLACEMENT ADVISOR FOR OLAP DATABASE SYSTEMS

    Publication No.: US20230297573A1

    Publication Date: 2023-09-21

    Application No.: US17699607

    Application Date: 2022-03-21

    Abstract: Embodiments implement a prediction-driven, rather than a trial-driven, approach to automatic data placement recommendations for partitioning data across multiple nodes in a database system. The system is configured to extract workload-specific features of a database workload running on a database system and dataset-specific features of a database hosted on that system. The workload-specific features characterize utilization of the database workload. The dataset-specific features characterize how data is organized within the database. The system identifies a plurality of candidate keys for determining how to partition data stored in the database across nodes. Based at least in part on the workload-specific features, the dataset-specific features, and the plurality of candidate keys, a set of candidate key combinations for partitioning data is generated. Using a machine learning model, the system determines a particular candidate key combination that optimizes query execution performance based on the workload-specific features and the dataset-specific features, and generates data placement commands to allocate the database tables across the nodes.
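
    A minimal sketch of the prediction-driven selection loop the abstract describes, assuming a trained model with a scikit-learn-style predict() method; the feature encoding and all names here are illustrative, not the patented implementation:

        from itertools import combinations

        def encode(combo):
            # Hypothetical key encoding: hash each key name to a numeric feature.
            return [hash(k) % 1000 for k in combo]

        def recommend_placement(workload_feats, dataset_feats, candidate_keys,
                                model, max_keys=2):
            """Score candidate partition-key combinations with an ML model and
            return the combination with the highest predicted benefit."""
            best_combo, best_benefit = None, float("-inf")
            for r in range(1, max_keys + 1):
                for combo in combinations(candidate_keys, r):
                    feats = workload_feats + dataset_feats + encode(combo)
                    benefit = model.predict([feats])[0]  # predicted, not trialed
                    if benefit > best_benefit:
                        best_combo, best_benefit = combo, benefit
            return best_combo  # drives generation of data placement commands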

    DYNAMIC GROUPING OF IN-MEMORY DATA PROCESSING OPERATIONS

    Publication No.: US20180357331A1

    Publication Date: 2018-12-13

    Application No.: US15616777

    Application Date: 2017-06-07

    CPC classification number: G06F16/90335 G06F9/48

    Abstract: Techniques are described herein for grouping of operations in the local memory of a processing unit. The techniques involve adding a first operation for a first leaf operator of a query execution plan to a first pipelined group. The query execution plan includes a set of leaf operators and a set of non-leaf operators; each leaf operator has a respective parent non-leaf operator, and each non-leaf operator has one or more child operators drawn from the leaf operators or from other non-leaf operators. The techniques further involve determining a memory requirement for executing the first operation for the first leaf operator and a second operation for the respective parent non-leaf operator, where the output of the first operation is input to the second operation. The techniques further involve determining whether the memory requirement is satisfied by an amount of local memory. If the memory requirement is satisfied by the amount of local memory, the second operation for the respective parent non-leaf operator is added to the first pipelined group. The techniques further involve assigning the first pipelined group to a first thread, which then executes the group. Executing the first pipelined group involves: storing the first output of the first operation in the local memory of the first thread; using the first output as input for the second operation; storing the second output of the second operation in the local memory; and moving the second output from the local memory to a tier of memory different from the local memory relative to the first thread.
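
    The core grouping decision can be sketched as follows, assuming illustrative operator objects with a parent link and a memory_estimate() method (not the patented interfaces):

        def build_pipelined_groups(leaf_ops, local_memory_bytes):
            """Greedily fuse each leaf operation with ancestor operations while
            the fused pipeline's estimated memory requirement fits in a
            thread's local memory; otherwise the ancestor starts a new group."""
            groups = []
            for leaf in leaf_ops:
                group, op = [leaf], leaf
                while op.parent is not None:
                    required = (sum(o.memory_estimate() for o in group)
                                + op.parent.memory_estimate())
                    if required > local_memory_bytes:
                        break  # requirement not satisfied: stop fusing here
                    group.append(op.parent)  # parent's operation joins the group
                    op = op.parent
                groups.append(group)  # each group is assigned to one thread
            return groups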

    Prediction of buffer pool size for transaction processing workloads

    Publication No.: US11868261B2

    Publication Date: 2024-01-09

    Application No.: US17381072

    Application Date: 2021-07-20

    CPC classification number: G06F12/0842 G06F16/24552 G06F2212/6022

    Abstract: Techniques are described herein for prediction of a buffer pool size (BPS). Before performing BPS prediction, gathered data are used to determine whether a target workload is in a steady state. Historical utilization data gathered while the workload is in a steady state are used to predict object-specific BPS components for database objects, accessed by the target workload, that are identified for BPS analysis based on the shares of the workload's total disk I/O requests attributed to the respective objects. Analysis preference is given to objects that are associated with larger shares of disk I/O activity. An object-specific BPS component is determined based on a coverage function that returns the percentage of the database object's on-disk size that should be available in the buffer pool for that object. The percentage is determined using either a heuristic-based or a machine-learning-based approach.
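
    A minimal sketch of the per-object composition described above; the (name, size, io_share) tuples and the example coverage function are assumptions for illustration:

        def predict_buffer_pool_size(objects, coverage):
            """Sum object-specific BPS components, visiting objects in
            decreasing order of their share of total disk I/O requests.
            `objects` holds (name, size_on_disk_bytes, io_share) tuples."""
            total = 0.0
            for name, size_on_disk, io_share in sorted(
                    objects, key=lambda o: o[2], reverse=True):
                # Coverage: fraction of the object's on-disk size that should
                # be resident in the buffer pool (heuristic- or ML-derived).
                total += coverage(name, io_share) * size_on_disk
            return total

        # Example with a toy heuristic coverage function:
        bps = predict_buffer_pool_size(
            [("orders", 8 << 30, 0.6), ("lineitem", 32 << 30, 0.3)],
            coverage=lambda name, share: min(1.0, 0.2 + share))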

    Automated configuration parameter tuning for database performance

    Publication No.: US11567937B2

    Publication Date: 2023-01-31

    Application No.: US17318972

    Application Date: 2021-05-12

    Abstract: Embodiments implement a prediction-driven, rather than a trial-driven, approach to automate database configuration parameter tuning for a database workload. This approach uses machine learning (ML) models to predict the performance metrics that would result from applying particular database parameter values to a database workload, and does not require live trials on the DBMS managing the workload. Specifically, automatic configuration (AC) ML models are trained, using a training corpus that includes information from workloads being run by DBMSs, to predict performance metrics based on workload features and configuration parameter values. The trained AC-ML models predict performance metrics resulting from applying particular configuration parameter values to a given database workload being automatically tuned. Based on correlating changes to configuration parameter values with changes in predicted performance metrics, an optimization algorithm is used to converge to an optimal set of configuration parameters. The optimal set of configuration parameter values is automatically applied for the given workload.
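
    The predict-then-optimize loop might look like the following sketch, with plain random search standing in for the patent's optimization algorithm and a scikit-learn-style AC-ML model assumed:

        import random

        def tune(workload_feats, param_space, ac_ml_model, iterations=200):
            """Score candidate configuration-parameter settings with a trained
            AC-ML model instead of running live trials on the DBMS."""
            best_params, best_metric = None, float("-inf")
            for _ in range(iterations):
                candidate = {name: random.choice(values)
                             for name, values in param_space.items()}
                feats = workload_feats + list(candidate.values())
                predicted = ac_ml_model.predict([feats])[0]  # e.g. throughput
                if predicted > best_metric:
                    best_params, best_metric = candidate, predicted
            return best_params  # applied automatically to the workload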

    EFFICIENT ADJUSTMENT OF SPIN-LOCKING PARAMETER VALUES

    Publication No.: US20220107933A1

    Publication Date: 2022-04-07

    Application No.: US17060999

    Application Date: 2020-10-01

    Abstract: Systems and methods for adjusting parameters for a spin-lock implementation of concurrency control are described herein. In an embodiment, a system continuously retrieves, from a resource management system, one or more state values defining the state of the resource management system. Based on the one or more state values, the system determines that the resource management system has reached a steady state and, in response, adjusts a plurality of parameters for spin-locking performed by said resource management system to identify optimal values for the plurality of parameters. After adjusting the plurality of parameters, the system detects, based on one or more current state values, a workload change in the resource management system and, in response, readjusts the plurality of parameters for spin-locking performed by said resource management system to identify new optimal values for the parameters.
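
    One way to picture the control loop, with an invented steady-state criterion and placeholder rms/tuner objects (none of these interfaces come from the patent):

        import statistics
        import time

        def is_steady(samples, window=10, tolerance=0.05):
            # Illustrative criterion: steady when the relative spread of the
            # most recent state values falls below a tolerance.
            recent = samples[-window:]
            if len(recent) < window:
                return False
            mean = statistics.mean(recent)
            return mean > 0 and statistics.pstdev(recent) / mean < tolerance

        def control_loop(rms, tuner, poll_interval=1.0):
            samples, tuned = [], False
            while True:
                samples.append(rms.throughput())  # one monitored state value
                steady = is_steady(samples)
                if steady and not tuned:
                    # Steady state reached: search for optimal spin parameters.
                    rms.set_spin_params(tuner.optimize(rms))
                    tuned = True
                elif tuned and not steady:
                    tuned = False  # workload changed: loop around and retune
                time.sleep(poll_interval)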

    Automated configuration parameter tuning for database performance

    Publication No.: US11061902B2

    Publication Date: 2021-07-13

    Application No.: US16298837

    Application Date: 2019-03-11

    Abstract: Embodiments implement a prediction-driven, rather than a trial-driven, approach to automate database configuration parameter tuning for a database workload. This approach uses machine learning (ML) models to predict the performance metrics that would result from applying particular database parameter values to a database workload, and does not require live trials on the DBMS managing the workload. Specifically, automatic configuration (AC) ML models are trained, using a training corpus that includes information from workloads being run by DBMSs, to predict performance metrics based on workload features and configuration parameter values. The trained AC-ML models predict performance metrics resulting from applying particular configuration parameter values to a given database workload being automatically tuned. Based on correlating changes to configuration parameter values with changes in predicted performance metrics, an optimization algorithm is used to converge to an optimal set of configuration parameters. The optimal set of configuration parameter values is automatically applied for the given workload.

    Sparse dictionary tree

    Publication No.: US11023430B2

    Publication Date: 2021-06-01

    Application No.: US15819891

    Application Date: 2017-11-21

    Abstract: Techniques related to a sparse dictionary tree are disclosed. In some embodiments, computing device(s) execute instructions, which are stored on non-transitory storage media, for performing a method. The method comprises storing an encoding dictionary as a token-ordered tree comprising a first node and a second node, which are adjacent nodes. The token-ordered tree maps ordered tokens to ordered codes. The ordered tokens include a first token and a second token. The ordered codes include a first code and a second code, which are non-consecutive codes. The first node maps the first token to the first code. The second node maps the second token to the second code. The encoding dictionary is updated based on inserting a third node between the first node and the second node. The third node maps a third token to a third code that is greater than the first code and less than the second code.
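
    The gap-based code assignment can be illustrated with a sorted list standing in for the patent's token-ordered tree; the gap width of 1000 is an arbitrary illustrative choice:

        import bisect

        class SparseDictionary:
            """Order-preserving encoding dictionary that leaves gaps between
            codes so a new token can be inserted without re-encoding data."""
            GAP = 1000

            def __init__(self, tokens):
                # Initial build: sorted tokens receive widely spaced codes.
                self.entries = [(t, (i + 1) * self.GAP)
                                for i, t in enumerate(sorted(tokens))]

            def insert(self, token):
                i = bisect.bisect(self.entries, (token,))
                lo = self.entries[i - 1][1] if i > 0 else 0
                hi = (self.entries[i][1] if i < len(self.entries)
                      else lo + 2 * self.GAP)
                assert hi - lo > 1, "gap exhausted; rebalancing would be needed"
                code = (lo + hi) // 2  # midpoint keeps token order intact
                self.entries.insert(i, (token, code))
                return code

        d = SparseDictionary(["apple", "cherry"])   # codes 1000 and 2000
        code = d.insert("banana")                   # 1500: between the two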

    Automated provisioning for database performance

    Publication No.: US11782926B2

    Publication Date: 2023-10-10

    Application No.: US17573897

    Application Date: 2022-01-12

    CPC classification number: G06F16/24545 G06F16/217 G06N20/00 G06N20/20

    Abstract: Embodiments utilize trained query performance machine learning (QP-ML) models to predict an optimal compute node cluster size for a given in-memory workload. The QP-ML models include models that predict query task runtimes at various compute node cardinalities, and models that predict network communication time between nodes of the cluster. Embodiments also utilize an analytical model to predict overlap between predicted task runtimes and predicted network communication times. Based on this data, an optimal cluster size is selected for the workload. Embodiments further utilize trained data capacity machine learning (DC-ML) models to predict a minimum number of compute nodes needed to run a workload. The DC-ML models include models that predict the size of the workload dataset in a target data encoding, models that predict the amount of memory needed to run the queries in the workload, and models that predict the memory needed to accommodate changes to the dataset.
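
    A rough sketch of the cluster-sizing decision, with all model interfaces assumed for illustration and the analytical overlap model reduced to a callable returning a fraction in [0, 1]:

        def choose_cluster_size(workload, qp_models, net_model, overlap,
                                min_nodes, max_nodes):
            """Pick the node count minimizing predicted workload time:
            predicted task runtimes plus predicted network communication
            time, discounted by the predicted compute/communication overlap."""
            best_n, best_time = None, float("inf")
            for n in range(min_nodes, max_nodes + 1):
                compute = sum(m.predict_runtime(workload, n) for m in qp_models)
                network = net_model.predict_comm_time(workload, n)
                total = compute + network * (1.0 - overlap(workload, n))
                if total < best_time:
                    best_n, best_time = n, total
            return best_n  # still subject to the DC-ML minimum-node bound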

    Efficient adjustment of spin-locking parameter values

    Publication No.: US11379456B2

    Publication Date: 2022-07-05

    Application No.: US17060999

    Application Date: 2020-10-01

    Abstract: Systems and methods for adjusting parameters for a spin-lock implementation of concurrency control are described herein. In an embodiment, a system continuously retrieves, from a resource management system, one or more state values defining the state of the resource management system. Based on the one or more state values, the system determines that the resource management system has reached a steady state and, in response, adjusts a plurality of parameters for spin-locking performed by said resource management system to identify optimal values for the plurality of parameters. After adjusting the plurality of parameters, the system detects, based on one or more current state values, a workload change in the resource management system and, in response, readjusts the plurality of parameters for spin-locking performed by said resource management system to identify new optimal values for the parameters.

    AUTOMATED PROVISIONING FOR DATABASE PERFORMANCE

    Publication No.: US20220138199A1

    Publication Date: 2022-05-05

    Application No.: US17573897

    Application Date: 2022-01-12

    Abstract: Embodiments utilize trained query performance machine learning (QP-ML) models to predict an optimal compute node cluster size for a given in-memory workload. The QP-ML models include models that predict query task runtimes at various compute node cardinalities, and models that predict network communication time between nodes of the cluster. Embodiments also utilize an analytical model to predict overlap between predicted task runtimes and predicted network communication times. Based on this data, an optimal cluster size is selected for the workload. Embodiments further utilize trained data capacity machine learning (DC-ML) models to predict a minimum number of compute nodes needed to run a workload. The DC-ML models include models that predict the size of the workload dataset in a target data encoding, models that predict the amount of memory needed to run the queries in the workload, and models that predict the memory needed to accommodate changes to the dataset.
