-
Publication No.: US12248444B1
Publication Date: 2025-03-11
Application No.: US18539928
Application Date: 2023-12-14
Applicant: Oracle International Corporation
Inventor: Fotis Savva , Farhan Tauheed , Marc Jolles , Onur Kocberber , Seema Sundara , Nipun Agarwal
IPC: G06F16/20 , G06F16/21 , G06F16/2455 , G06F16/28
Abstract: Auto-parallel-load techniques are provided for automatically loading database objects from an on-disk database system into an in-memory database system. The auto-parallel-load techniques involve a pipeline that includes several components. In one implementation, each of the pipeline components is configured to receive, extract information from, and add information to, a “state object”. One or more of the pipeline components include logic that is based on the output of a corresponding machine learning model. The machine learning models used by the pipeline components may be trained from training sets from which outliers have been excluded, and may be used as the basis for generating linear models that are used during runtime, to produce estimates that affect the parameters of the auto-parallel-load operation.
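The pipeline-of-components pattern described in this abstract, where each component receives, extracts information from, and adds information to a shared "state object", can be sketched as follows. This is a minimal illustration only: the component names, the state fields, and the formulas are hypothetical stand-ins, not taken from the patent.

```python
# Minimal sketch of a pipeline whose components each read from and
# add to a shared "state object" (modeled here as a plain dict).
class PipelineComponent:
    def run(self, state):
        raise NotImplementedError

class EstimateLoadSize(PipelineComponent):
    def run(self, state):
        # Stand-in for a model-based size estimate derived from table stats.
        state["estimated_mb"] = state["row_count"] * state["avg_row_bytes"] / 1e6
        return state

class ChooseParallelism(PipelineComponent):
    def run(self, state):
        # Derive a degree of parallelism from the earlier component's output.
        state["dop"] = max(1, int(state["estimated_mb"] // 256))
        return state

def run_pipeline(components, state):
    for component in components:
        state = component.run(state)
    return state

state = run_pipeline([EstimateLoadSize(), ChooseParallelism()],
                     {"row_count": 10_000_000, "avg_row_bytes": 100})
```

In the patented technique the per-component logic would come from trained machine learning models (or linear models derived from them) rather than the fixed formulas above.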
-
Publication No.: US12229135B2
Publication Date: 2025-02-18
Application No.: US17699607
Application Date: 2022-03-21
Applicant: Oracle International Corporation
Inventor: Urvashi Oswal , Jian Wen , Farhan Tauheed , Onur Kocberber , Seema Sundara , Nipun Agarwal
IPC: G06F16/2453 , G06F11/34 , G06F16/21 , G06F16/22 , G06F16/27
Abstract: Embodiments implement a prediction-driven, rather than a trial-driven, approach to automatic data placement recommendations for partitioning data across multiple nodes in a database system. The system is configured to extract workload-specific features of a database workload running at a database system and dataset-specific features of a database running on the database system. The workload-specific features characterize utilization of the database workload. The dataset-specific features characterize how data is organized within the database. The system identifies a plurality of candidate keys for determining how to partition data stored in the database across nodes. Based at least in part on the workload-specific features, the dataset-specific features, and the plurality of candidate keys, a set of candidate key combinations for partitioning data is generated. Using a machine learning model, the system determines a particular candidate key combination that optimizes query execution performance benefit based on the workload-specific features and the dataset-specific features. Data placement commands are then generated to allocate the database tables across the nodes.
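The enumerate-and-score step of this abstract can be sketched as below. The candidate keys, the feature dictionary, and the `score` function are all illustrative assumptions; in the patent the scoring is done by a trained machine learning model over workload- and dataset-specific features.

```python
from itertools import combinations

# Hypothetical partition-key candidates and workload features.
candidate_keys = ["customer_id", "order_date", "region"]
workload_features = {"customer_id": 0.7, "order_date": 0.2, "region": 0.4}

def score(key_combo, features):
    # Stand-in for an ML model: reward keys the workload filters on often.
    return sum(features.get(k, 0.0) for k in key_combo)

# Generate candidate key combinations (here, of size 1 and 2) and pick
# the combination with the highest predicted performance benefit.
combos = [c for r in (1, 2) for c in combinations(candidate_keys, r)]
best = max(combos, key=lambda c: score(c, workload_features))
```

A real system would then emit data placement commands that partition each table across nodes on the chosen key combination.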
-
Publication No.: US10366124B2
Publication Date: 2019-07-30
Application No.: US15616777
Application Date: 2017-06-07
Applicant: Oracle International Corporation
Inventor: Jian Wen , Sam Idicula , Nitin Kunal , Negar Koochakzadeh , Seema Sundara , Thomas Chang , Aarti Basant , Nipun Agarwal , Farhan Tauheed
IPC: G06F17/30 , G06F16/903 , G06F9/48
Abstract: Techniques are described herein for grouping of operations in local memory of a processing unit. The techniques involve adding a first operation for a first leaf operator of a query execution plan to a first pipelined group. The query execution plan includes a set of leaf operators and a set of non-leaf operators. Each leaf operator of the set of one or more leaf operators has a respective parent non-leaf operator, and each non-leaf operator has one or more child operators from among the set of leaf operators or others of the set of non-leaf operators. The techniques further involve determining a memory requirement of executing the first operation for the first leaf operator and executing a second operation for the respective parent non-leaf operator of the first leaf operator. The output of the first operation is input to the second operation. The techniques further involve determining whether the memory requirement is satisfied by an amount of local memory. If it is determined that the memory requirement is satisfied by the amount of local memory, the second operation for the respective parent non-leaf operator is added to the first pipelined group. The techniques further involve assigning the first pipelined group to a first thread and the first thread executing the first pipelined group. Executing the first pipelined group involves: storing first output of the first operation in the local memory of the first thread; using the first output as input for the second operation; storing second output of the second operation in the local memory; and moving second output from the local memory to a tier of memory different from the local memory relative to the first thread.
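The core admission test of this abstract, adding a parent operator to a leaf's pipelined group only when the combined memory requirement fits in thread-local memory, can be sketched as follows. The operator records, the memory figures, and the budget are illustrative assumptions.

```python
# Sketch: fuse a parent operator into the leaf's pipelined group only if
# the combined working-set estimate fits the thread-local memory budget.
LOCAL_MEMORY_BYTES = 64 * 1024

def build_pipelined_group(leaf, parent, local_memory=LOCAL_MEMORY_BYTES):
    group = [leaf["op"]]
    required = leaf["mem_bytes"] + parent["mem_bytes"]
    if required <= local_memory:
        # Parent consumes leaf output directly from local memory.
        group.append(parent["op"])
    return group

group = build_pipelined_group({"op": "scan", "mem_bytes": 40_000},
                              {"op": "filter", "mem_bytes": 20_000})
```

When the test fails, the leaf's output would instead be spilled to a slower memory tier and the parent scheduled in a separate group.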
-
Publication No.: US20190205446A1
Publication Date: 2019-07-04
Application No.: US15861212
Application Date: 2018-01-03
Applicant: Oracle International Corporation
Inventor: Anantha Kiran Kandukuri , Seema Sundara , Sam Idicula , Pit Fender , Nitin Kunal , Sabina Petride , Georgios Giannikis , Nipun Agarwal
Abstract: Techniques related to distributed relational dictionaries are disclosed. In some embodiments, one or more non-transitory storage media store a sequence of instructions which, when executed by one or more computing devices, cause performance of a method. The method involves generating, by a query optimizer at a distributed database system (DDS), a query execution plan (QEP) for generating a code dictionary and a column of encoded database data. The QEP specifies a sequence of operations for generating the code dictionary. The code dictionary is a database table. The method further involves receiving, at the DDS, a column of unencoded database data from a data source that is external to the DDS. The DDS generates the code dictionary according to the QEP. Furthermore, based on joining the column of unencoded database data with the code dictionary, the DDS generates the column of encoded database data according to the QEP.
-
Publication No.: US20230022884A1
Publication Date: 2023-01-26
Application No.: US17381072
Application Date: 2021-07-20
Applicant: Oracle International Corporation
Inventor: Peyman Faizian , Mayur Bency , Onur Kocberber , Seema Sundara , Nipun Agarwal
IPC: G06F12/0842 , G06F16/22
Abstract: Techniques are described herein for prediction of a buffer pool size (BPS). Before performing BPS prediction, gathered data are used to determine whether a target workload is in a steady state. Historical utilization data gathered while the workload is in a steady state are used to predict object-specific BPS components for database objects, accessed by the target workload, that are identified for BPS analysis based on shares of the total disk I/O requests, for the workload, that are attributed to the respective objects. Preference of analysis is given to objects that are associated with larger shares of disk I/O activity. An object-specific BPS component is determined based on a coverage function that returns a percentage of the database object size (on disk) that should be available in the buffer pool for that database object. The percentage is determined using either a heuristic-based or a machine learning-based approach.
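The per-object composition described here can be sketched as below. The `coverage` heuristic, the I/O threshold, and the object sizes are all illustrative assumptions; the patent allows the coverage percentage to come from either a heuristic or a machine learning model.

```python
# Sketch: sum per-object buffer-pool-size components, analyzing objects
# in descending order of their share of the workload's disk I/O.
def coverage(io_share):
    # Fraction of the on-disk object to keep buffered (toy heuristic).
    return min(1.0, 0.5 + io_share)

def predict_bps(objects, io_threshold=0.05):
    ranked = sorted(objects, key=lambda o: o["io_share"], reverse=True)
    total_mb = 0.0
    for obj in ranked:
        if obj["io_share"] < io_threshold:
            break  # remaining objects have negligible I/O activity
        total_mb += coverage(obj["io_share"]) * obj["size_mb"]
    return total_mb

bps = predict_bps([
    {"name": "orders",   "size_mb": 1000, "io_share": 0.5},
    {"name": "lineitem", "size_mb": 4000, "io_share": 0.25},
    {"name": "nation",   "size_mb": 1,    "io_share": 0.01},
])
```

Ranking by I/O share before applying the threshold mirrors the abstract's preference for objects with larger shares of disk I/O activity.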
-
Publication No.: US11423022B2
Publication Date: 2022-08-23
Application No.: US16016966
Application Date: 2018-06-25
Applicant: Oracle International Corporation
Inventor: Jian Wen , Sam Idicula , Nitin Kunal , Farhan Tauheed , Seema Sundara , Nipun Agarwal , Indu Bhagat
IPC: G06F16/24 , G06F9/50 , G06F16/2453 , G06F16/22
Abstract: Techniques are described herein for building a framework for declarative query compilation using both rule-based and cost-based approaches for database management. The framework involves constructing and using: a set of rule-based properties tables that contain optimization parameters for both logical and physical optimization, a recursive algorithm to form candidate physical query plans that is based on the rule-based tables, and a cost model for estimating the cost of a generated physical query plan that is used with the rule-based properties tables to prune inferior query plans.
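The rule-based enumeration plus cost-based pruning described here can be sketched as follows. The rule table, the operators, and the cost numbers are illustrative assumptions standing in for the patent's properties tables and cost model.

```python
# Sketch: expand each logical operator into its rule-allowed physical
# alternatives, then keep the candidate plan with the lowest modeled cost.
RULES = {"join": ["hash_join", "merge_join"],
         "scan": ["full_scan", "index_scan"]}
COST = {"hash_join": 10, "merge_join": 14, "full_scan": 8, "index_scan": 3}

def candidate_plans(logical_ops):
    plans = [[]]
    for op in logical_ops:
        plans = [p + [phys] for p in plans for phys in RULES[op]]
    return plans

def best_plan(logical_ops):
    return min(candidate_plans(logical_ops),
               key=lambda plan: sum(COST[p] for p in plan))

plan = best_plan(["scan", "join"])
```

A production optimizer would prune inferior partial plans during the recursive expansion rather than enumerating every complete plan as this toy does.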
-
Publication No.: US11256698B2
Publication Date: 2022-02-22
Application No.: US16382085
Application Date: 2019-04-11
Applicant: Oracle International Corporation
Inventor: Sam Idicula , Tomas Karnagel , Jian Wen , Seema Sundara , Nipun Agarwal , Mayur Bency
IPC: G06F16/2453 , G06F16/21 , G06N20/00 , G06N20/20
Abstract: Embodiments utilize trained query performance machine learning (QP-ML) models to predict an optimal compute node cluster size for a given in-memory workload. The QP-ML models include models that predict query task runtimes at various compute node cardinalities, and models that predict network communication time between nodes of the cluster. Embodiments also utilize an analytical model to predict overlap between predicted task runtimes and predicted network communication times. Based on this data, an optimal cluster size is selected for the workload. Embodiments further utilize trained data capacity machine learning (DC-ML) models to predict a minimum number of compute nodes needed to run a workload. The DC-ML models include models that predict the size of the workload dataset in a target data encoding, models that predict the amount of memory needed to run the queries in the workload, and models that predict the memory needed to accommodate changes to the dataset.
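The cluster-size selection loop described here can be sketched as below. The two predictor functions and the overlap fraction are toy stand-ins; in the patent they are trained QP-ML models plus an analytical overlap model.

```python
# Sketch: pick the node count with the lowest predicted workload runtime,
# where part of the network time is hidden behind compute.
def predict_task_time(nodes):
    return 100.0 / nodes          # compute shrinks as nodes are added

def predict_network_time(nodes):
    return 2.0 * (nodes - 1)      # shuffle cost grows with cluster size

def predicted_total(nodes, overlap=0.5):
    compute = predict_task_time(nodes)
    network = predict_network_time(nodes)
    # A fixed fraction of network time overlaps with (is hidden by) compute.
    return compute + (1.0 - overlap) * network

best_nodes = min(range(1, 17), key=predicted_total)
```

The separately described DC-ML models would impose a lower bound on this search: the cluster must also be large enough to hold the encoded dataset and query working memory.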
-
Publication No.: US11169995B2
Publication Date: 2021-11-09
Application No.: US15819193
Application Date: 2017-11-21
Applicant: Oracle International Corporation
Inventor: Pit Fender , Seema Sundara , Benjamin Schlegel , Nipun Agarwal
IPC: G06F7/00 , G06F16/2453 , G06F16/21 , G06F16/22 , G06F16/2455
Abstract: Techniques related to relational dictionaries are disclosed. In some embodiments, one or more non-transitory storage media store a sequence of instructions which, when executed by one or more computing devices, cause performance of a method. The method involves storing a code dictionary comprising a set of tuples. The code dictionary is a database table defined by a database dictionary and comprises columns that are each defined by the database dictionary. The set of tuples maps a set of codes to a set of tokens. The set of tokens are stored in a column of unencoded database data. The method further involves generating encoded database data based on joining the unencoded database data with the set of tuples. Furthermore, the method involves generating decoding database data based on joining the encoded database data with the set of tuples.
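This abstract (like the distributed variant above) treats the code dictionary as an ordinary relational table of (code, token) tuples, so that encoding and decoding are both joins against it. A minimal sketch, with the joins modeled as dict lookups and an illustrative dictionary:

```python
# Sketch of a relational dictionary: a (code, token) table against which
# encoding and decoding are both performed as joins.
dictionary = [(0, "red"), (1, "green"), (2, "blue")]

def encode(tokens, dictionary):
    token_to_code = {tok: code for code, tok in dictionary}  # join on token
    return [token_to_code[t] for t in tokens]

def decode(codes, dictionary):
    code_to_token = dict(dictionary)                          # join on code
    return [code_to_token[c] for c in codes]

codes = encode(["blue", "red", "blue"], dictionary)
tokens = decode(codes, dictionary)
```

Because the dictionary is a regular database table defined by the database dictionary, the same join machinery that serves ordinary queries performs the encoding and decoding.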
-
Publication No.: US20210263934A1
Publication Date: 2021-08-26
Application No.: US17318972
Application Date: 2021-05-12
Applicant: Oracle International Corporation
Inventor: Sam Idicula , Tomas Karnagel , Jian Wen , Seema Sundara , Nipun Agarwal , Mayur Bency
IPC: G06F16/2453 , G06N20/00 , G06F16/21 , G06N20/20
Abstract: Embodiments implement a prediction-driven, rather than a trial-driven, approach to automate database configuration parameter tuning for a database workload. This approach uses machine learning (ML) models to test performance metrics resulting from application of particular database parameters to a database workload, and does not require live trials on the DBMS managing the workload. Specifically, automatic configuration (AC) ML models are trained, using a training corpus that includes information from workloads being run by DBMSs, to predict performance metrics based on workload features and configuration parameter values. The trained AC-ML models predict performance metrics resulting from applying particular configuration parameter values to a given database workload being automatically tuned. Based on correlating changes to configuration parameter values with changes in predicted performance metrics, an optimization algorithm is used to converge to an optimal set of configuration parameters. The optimal set of configuration parameter values is automatically applied for the given workload.
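The trial-free tuning loop described here can be sketched as below. The single parameter, the candidate grid, and the convex `predict_latency` function are illustrative assumptions; the patent's AC-ML models predict metrics from workload features and full configuration-parameter vectors, and a real optimizer would search that space rather than a 1-D grid.

```python
# Sketch: converge on a configuration value by querying a predictive
# model instead of running live trials on the DBMS.
def predict_latency(buffer_mb):
    # Stand-in for a trained model: a convex response surface.
    return (buffer_mb - 512) ** 2 / 1000.0 + 5.0

def tune(lo=64, hi=4096, step=64):
    candidates = range(lo, hi + 1, step)
    return min(candidates, key=predict_latency)

best_buffer_mb = tune()
```

Correlating how the predicted metric moves as each parameter changes is what lets the optimization converge without ever disturbing the live workload.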
-
Publication No.: US11907250B2
Publication Date: 2024-02-20
Application No.: US17871092
Application Date: 2022-07-22
Applicant: Oracle International Corporation
Inventor: Urvashi Oswal , Marc Jolles , Onur Kocberber , Seema Sundara , Nipun Agarwal
IPC: G06F16/24 , G06F16/25 , G06F16/21 , G06F11/34 , G06F16/2458
CPC classification number: G06F16/258 , G06F11/3409 , G06F16/21 , G06F16/2462
Abstract: Techniques are described for executing machine learning models trained for specific operators with feature values that are based on the actual execution of a workload set. The machine learning models generate an estimate of benefit gain/cost for executing operations on data portions in the alternative encoding format. Such data portions may be sorted based on the estimated benefit, in an embodiment. Using cost estimation machine learning models for memory space, the data portions with the most benefits that comply with the existing memory space constraints are recommended and/or are automatically encoded into the alternative encoding format.
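The sort-by-benefit, fit-within-memory selection described here can be sketched as a greedy pass. The portion names, benefit scores, and sizes are illustrative; in the patent both the benefit and the memory cost of each portion come from trained machine learning models.

```python
# Sketch: rank data portions by model-estimated benefit of re-encoding
# and greedily recommend those that fit the memory budget.
def recommend(portions, memory_budget_mb):
    chosen, used_mb = [], 0
    for p in sorted(portions, key=lambda p: p["benefit"], reverse=True):
        if used_mb + p["size_mb"] <= memory_budget_mb:
            chosen.append(p["name"])
            used_mb += p["size_mb"]
    return chosen

picks = recommend([
    {"name": "sales_2023", "benefit": 9.0, "size_mb": 300},
    {"name": "sales_2022", "benefit": 7.5, "size_mb": 400},
    {"name": "audit_log",  "benefit": 1.2, "size_mb": 200},
], memory_budget_mb=600)
```

The greedy pass skips any portion that would exceed the budget and continues down the ranking, which is one simple way to honor the memory constraint.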