-
21.
Publication No.: US12072953B2
Publication Date: 2024-08-27
Application No.: US17349817
Filing Date: 2021-06-16
Applicant: Oracle International Corporation
Inventor: Gaurav Chadha , Sam Idicula , Sandeep Agrawal , Nipun Agarwal
Abstract: Techniques are described herein for performing efficient matrix multiplication in architectures with scratchpad memories or associative caches, using asymmetric allocation of space for the different matrices. The system receives a left matrix and a right matrix. In an embodiment, the system allocates, in a scratchpad memory, asymmetric memory space for tiles of each of the two matrices as well as a dot product matrix. The system then performs dot product matrix multiplication on the tiles of the left and right matrices, storing the resulting dot product values in the corresponding allocated dot product matrix tiles. Finally, the system writes the stored dot product values from the scratchpad memory into main memory.
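The tiling scheme the abstract describes can be sketched in pure Python; the tile sizes `TM`, `TK`, `TN` and the function name are illustrative assumptions, with nested lists standing in for scratchpad tiles and main memory:

```python
def tiled_matmul(A, B, TM=2, TK=3, TN=4):
    """Multiply A (m x k) by B (k x n) using asymmetric tiles.

    TM x TK tiles of A, TK x TN tiles of B, and a TM x TN accumulator
    play the roles of the asymmetrically allocated scratchpad regions.
    """
    m, k = len(A), len(A[0])
    n = len(B[0])
    C = [[0.0] * n for _ in range(m)]          # "main memory" result
    for i0 in range(0, m, TM):
        for j0 in range(0, n, TN):
            rows, cols = min(TM, m - i0), min(TN, n - j0)
            acc = [[0.0] * cols for _ in range(rows)]   # dot product tile
            for k0 in range(0, k, TK):
                for i in range(rows):
                    for j in range(cols):
                        for kk in range(k0, min(k0 + TK, k)):
                            acc[i][j] += A[i0 + i][kk] * B[kk][j0 + j]
            # Write the accumulated tile back to "main memory".
            for i in range(rows):
                for j in range(cols):
                    C[i0 + i][j0 + j] = acc[i][j]
    return C
```

Because the accumulator tile is reused across all `k0` steps before write-back, each result element is written to main memory exactly once, which is the point of holding the dot product tile in the scratchpad.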
-
22.
Publication No.: US11868854B2
Publication Date: 2024-01-09
Application No.: US16426530
Filing Date: 2019-05-30
Applicant: Oracle International Corporation
Inventor: Ali Moharrer , Venkatanathan Varadarajan , Sam Idicula , Sandeep Agrawal , Nipun Agarwal
Abstract: Herein are techniques that train one or more regressors to predict how effective a machine learning model (MLM) would be if trained with new hyperparameters and/or a new dataset. In an embodiment, for each training dataset, a computer derives, from the dataset, values for dataset meta-features. The computer performs, for each hyperparameter configuration (HC) of an MLM, including landmark HCs: configuring the MLM based on the HC, training the MLM based on the dataset, and obtaining an empirical quality score that indicates how effective said training of the MLM was when configured with the HC. A performance tuple is generated that contains: the HC, the values for the dataset meta-features, the empirical quality score, and, for each landmark configuration, the empirical quality score of the landmark configuration and/or the landmark configuration itself. Based on the performance tuples, a regressor is trained to predict an estimated quality score for a given dataset and a given HC.
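A minimal sketch of the performance tuples and a stand-in predictor; the tuple layout mirrors the abstract, but the nearest-neighbour lookup is an assumed placeholder for the trained regressor:

```python
def make_tuple(hc, metafeatures, score, landmark_scores):
    """Build one performance tuple: (HC, meta-feature values,
    empirical quality score, landmark scores)."""
    return (tuple(hc), tuple(metafeatures), score, tuple(landmark_scores))

def predict(tuples, hc, metafeatures):
    """Stand-in regressor: return the quality score of the performance
    tuple whose (HC, meta-features) is closest to the query."""
    def dist(t):
        q = list(t[0]) + list(t[1])
        p = list(hc) + list(metafeatures)
        return sum((a - b) ** 2 for a, b in zip(q, p))
    return min(tuples, key=dist)[2]
```

In the patent the predictor is a trained regressor rather than a lookup; the sketch only shows how the tuples pair inputs with an empirical quality score.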
-
23.
Publication No.: US11720822B2
Publication Date: 2023-08-08
Application No.: US17499945
Filing Date: 2021-10-13
Applicant: Oracle International Corporation
Inventor: Venkatanathan Varadarajan , Sam Idicula , Sandeep Agrawal , Nipun Agarwal
Abstract: Herein, horizontally scalable techniques efficiently configure machine learning algorithms for optimal accuracy without informed inputs. In an embodiment, for each particular hyperparameter and for each epoch, a computer processes the particular hyperparameter. An epoch explores one hyperparameter based on hyperparameter tuples. A respective score is calculated from each tuple. The tuple contains a distinct combination of values, each of which is contained in the value range of a distinct hyperparameter. All values of a tuple that belong to the particular hyperparameter are distinct; all values of a tuple that belong to other hyperparameters are held constant. The value range of the particular hyperparameter is narrowed based on the intersection point of a first line based on the scores and a second line based on the scores. A machine learning algorithm is optimally configured from the repeatedly narrowed value ranges of the hyperparameters. The configured algorithm is invoked to obtain a result.
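The range-narrowing step can be illustrated as follows; fitting each line through two end points and halving the range width are assumptions for the sketch, not the patented method:

```python
def narrow_range(points):
    """Narrow a hyperparameter value range around the intersection of two
    lines fit to (value, score) samples.

    points: list of (value, score), sorted by value, with distinct slopes
    on the two ends (assumed here; a real tuner would fit more robustly).
    """
    (x1, y1), (x2, y2) = points[0], points[1]      # left line
    (x3, y3), (x4, y4) = points[-2], points[-1]    # right line
    m1 = (y2 - y1) / (x2 - x1)
    m2 = (y4 - y3) / (x4 - x3)
    # Solve y1 + m1*(x - x1) = y3 + m2*(x - x3) for x.
    xi = (y3 - y1 + m1 * x1 - m2 * x3) / (m1 - m2)
    lo, hi = points[0][0], points[-1][0]
    width = (hi - lo) / 2                          # halve the range (assumed)
    return max(lo, xi - width / 2), min(hi, xi + width / 2)
```

Repeating this per epoch shrinks each hyperparameter's range toward the score peak, which is the narrowing behaviour the abstract describes.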
-
24.
Publication No.: US11620568B2
Publication Date: 2023-04-04
Application No.: US16388830
Filing Date: 2019-04-18
Applicant: Oracle International Corporation
Inventor: Hesam Fathi Moghadam , Sandeep Agrawal , Venkatanathan Varadarajan , Anatoly Yakovlev , Sam Idicula , Nipun Agarwal
Abstract: Techniques are provided for selection of machine learning algorithms based on performance predictions by using hyperparameter predictors. In an embodiment, for each mini-machine learning model (MML model), a respective hyperparameter predictor set that predicts a respective set of hyperparameter settings for a data set is trained. Each MML model represents a respective reference machine learning model (RML model). Data set samples are generated from the data set. Meta-feature sets are generated, each meta-feature set describing a respective data set sample. A respective target set of hyperparameter settings is generated for each MML model using a hypertuning algorithm. The meta-feature sets and the respective target sets of hyperparameter settings are used to train the respective hyperparameter predictor sets. Each hyperparameter predictor set is used during training and inference to improve the accuracy of automatically selecting an RML model per data set.
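A toy sketch of meta-feature extraction and hyperparameter prediction; the chosen meta-features (size, mean, standard deviation) and the nearest-neighbour lookup are illustrative assumptions standing in for the trained predictor set:

```python
import statistics

def meta_features(sample):
    """Assumed meta-features describing a 1-D data set sample."""
    return (len(sample), statistics.mean(sample), statistics.pstdev(sample))

def train_predictor(samples, target_hyperparams):
    """Pair each sample's meta-feature set with its hypertuned target
    hyperparameter settings (the predictor's training data)."""
    return [(meta_features(s), hp) for s, hp in zip(samples, target_hyperparams)]

def predict_hyperparams(predictor, dataset):
    """Predict settings for a new data set via nearest meta-features."""
    mf = meta_features(dataset)
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(entry[0], mf))
    return min(predictor, key=dist)[1]
```

The patent trains a predictor per MML model; the sketch collapses that to a single table to show how meta-features drive the prediction.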
-
25.
Publication No.: US11562178B2
Publication Date: 2023-01-24
Application No.: US16718164
Filing Date: 2019-12-17
Applicant: Oracle International Corporation
Inventor: Jingxiao Cai , Sandeep Agrawal , Sam Idicula , Venkatanathan Varadarajan , Anatoly Yakovlev , Nipun Agarwal
Abstract: According to an embodiment, a method includes generating a first dataset sample from a dataset, calculating a first validation score for the first dataset sample and a machine learning model, and determining whether the difference between the first validation score and a second validation score satisfies a first criterion. If the difference does not satisfy the first criterion, the method includes generating a second dataset sample from the dataset. If the difference does satisfy the first criterion, the method includes updating a convergence value and determining whether the updated convergence value satisfies a second criterion. If the updated convergence value satisfies the second criterion, the method includes returning the first dataset sample; otherwise, the method includes generating the second dataset sample from the dataset.
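The convergence loop can be sketched as follows; `validate`, `grow`, and the tolerance/patience thresholds are hypothetical stand-ins for the abstract's two criteria:

```python
def sample_until_converged(dataset, validate, grow, tol=0.01, patience=2):
    """Grow a dataset sample until the validation score stabilises.

    validate(sample) -> validation score for the sample and some ML model.
    grow(n) -> next (larger) sample size given the current size n.
    """
    n = grow(0)
    sample = dataset[:n]
    prev = validate(sample)          # score of the current ("first") sample
    streak = 0                       # the running convergence value
    while n < len(dataset):
        n = grow(n)
        candidate = dataset[:n]      # the "second" dataset sample
        score = validate(candidate)
        if abs(score - prev) <= tol:     # first criterion: small score change
            streak += 1                  # update the convergence value
            if streak >= patience:       # second criterion: enough stability
                return sample            # return the first dataset sample
        else:
            streak = 0
        sample, prev = candidate, score
    return sample
```

The payoff is that downstream training can run on the converged sample instead of the full dataset once extra rows stop changing the validation score.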
-
26.
Publication No.: US11544630B2
Publication Date: 2023-01-03
Application No.: US16417145
Filing Date: 2019-05-20
Applicant: Oracle International Corporation
Inventor: Tomas Karnagel , Sam Idicula , Nipun Agarwal
Abstract: The present invention relates to dimensionality reduction for machine learning (ML) models. Herein are techniques that individually rank features and combine features based on their rank to achieve an optimal combination of features that may accelerate training and/or inferencing, prevent overfitting, and/or provide insights into somewhat mysterious datasets. In an embodiment, a computer calculates, for each feature of a training dataset, a relevance score based on: a relevance scoring function, and statistics of values, of the feature, that occur in the training dataset. A rank based on relevance scores of the features is calculated for each feature. A sequence of distinct subsets of the features, based on the ranks of the features, is generated. For each distinct subset of the sequence of distinct feature subsets, a fitness score is generated based on training a machine learning (ML) model that is configured for the distinct subset.
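A minimal sketch of rank-based subset generation; variance is an assumed relevance scoring function, and taking the top-k features for growing k is one simple way to build the sequence of distinct subsets:

```python
import statistics

def variance_relevance(column):
    """Assumed relevance scoring function: population variance of the
    feature's values in the training dataset."""
    return statistics.pvariance(column)

def ranked_subsets(columns):
    """Score and rank each feature, then emit a sequence of distinct
    subsets (top-1, top-2, ..., all features) for fitness evaluation.

    columns: {feature_name: list of values in the training dataset}.
    """
    scores = {name: variance_relevance(col) for name, col in columns.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [ranked[: i + 1] for i in range(len(ranked))]
```

Each emitted subset would then be given a fitness score by training the ML model configured for it, per the abstract.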
-
27.
Publication No.: US11544494B2
Publication Date: 2023-01-03
Application No.: US15884163
Filing Date: 2018-01-30
Applicant: Oracle International Corporation
Inventor: Sandeep Agrawal , Sam Idicula , Venkatanathan Varadarajan , Nipun Agarwal
Abstract: Techniques are provided for selection of machine learning algorithms based on performance predictions by trained algorithm-specific regressors. In an embodiment, a computer derives meta-feature values from an inference dataset by, for each meta-feature, deriving a respective meta-feature value from the inference dataset. For each trainable algorithm and each regression meta-model that is respectively associated with the algorithm, a respective score is calculated by invoking the meta-model based on at least one of: a respective subset of the meta-feature values, or hyperparameter values of a respective subset of hyperparameters of the algorithm. The algorithm(s) are selected based on the respective scores. Based on the inference dataset, the selected algorithm(s) may be invoked to obtain a result. In an embodiment, the trained regressors are distinctly configured artificial neural networks. In an embodiment, the trained regressors are contained within algorithm-specific ensembles. Techniques are also provided for optimal training of regressors and/or ensembles.
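The selection step can be sketched as follows; modelling each regression meta-model as a plain callable from meta-feature values to a score is an assumption for illustration:

```python
def select_algorithm(meta_models, meta_feature_values):
    """Score every candidate algorithm with its regression meta-model
    and return the best-scoring algorithm's name.

    meta_models: {algorithm_name: callable(meta_features) -> predicted score}.
    """
    scores = {name: model(meta_feature_values)
              for name, model in meta_models.items()}
    return max(scores, key=scores.get)
```

In the patent the callables are trained algorithm-specific regressors (e.g. neural networks or ensembles); the sketch only shows the argmax over their predicted scores.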
-
28.
Publication No.: US11531915B2
Publication Date: 2022-12-20
Application No.: US16359256
Filing Date: 2019-03-20
Applicant: Oracle International Corporation
Inventor: Tayler Hetherington , Zahra Zohrevand , Onur Kocberber , Karoon Rashedi Nia , Sam Idicula , Nipun Agarwal
Abstract: Herein are techniques to generate candidate rulesets for machine learning (ML) explainability (MLX) for black-box ML models. In an embodiment, an ML model generates classifications that each associates a distinct example with a label. A decision tree that, based on the classifications, contains tree nodes is received or generated. Each node contains label(s), a condition that identifies a feature of examples, and a split value for the feature. When a node has child nodes, the feature and the split value that are identified by the condition of the node are set to maximize information gain of the child nodes. Candidate rules are generated by traversing the tree. Each rule is built from a combination of nodes in a tree traversal path. Each rule contains a condition of at least one node and is assigned to a rule level. Candidate rules are subsequently optimized into an optimal ruleset for actual use.
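Candidate-rule generation by tree traversal can be sketched as follows; the dictionary encoding of tree nodes and the rule fields are illustrative assumptions:

```python
def extract_rules(node, path=()):
    """Traverse a decision tree and emit one candidate rule per leaf path.

    node: {"feature": name, "split": value, "left": ..., "right": ...}
    for internal nodes, or {"feature": None, "label": label} for leaves.
    Each rule's level is the depth of its traversal path.
    """
    if node.get("feature") is None:  # leaf: the path becomes a rule
        return [{"conditions": path,
                 "label": node["label"],
                 "level": len(path)}]
    rules = []
    rules += extract_rules(node["left"],
                           path + ((node["feature"], "<=", node["split"]),))
    rules += extract_rules(node["right"],
                           path + ((node["feature"], ">", node["split"]),))
    return rules
```

The candidate rules produced this way would then be optimized into the final ruleset, per the abstract.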
-
29.
Publication No.: US11429895B2
Publication Date: 2022-08-30
Application No.: US16384588
Filing Date: 2019-04-15
Applicant: Oracle International Corporation
Inventor: Anatoly Yakovlev , Venkatanathan Varadarajan , Sandeep Agrawal , Hesam Fathi Moghadam , Sam Idicula , Nipun Agarwal
IPC: G06N20/00
Abstract: Herein are techniques for exploring hyperparameters of a machine learning model (MLM) and for training a regressor to predict the time needed to train the MLM based on a hyperparameter configuration and a dataset. In an embodiment that is deployed in production inferencing mode, for each landmark configuration, which contains values for the hyperparameters of an MLM, a computer configures the MLM based on the landmark configuration and measures the time spent training the MLM on a dataset. An already-trained regressor predicts the time needed to train the MLM based on a proposed configuration of the MLM, dataset meta-feature values, and the training durations and hyperparameter values of the landmark configurations of the MLM. When instead in training mode, a regressor in training ingests a training corpus of MLM performance history to learn, by reinforcement, to predict a training time for the MLM for new datasets and/or new hyperparameter configurations.
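A toy stand-in for the training-time predictor; inverse-distance weighting over the measured landmark durations is an assumption, not the patented regressor:

```python
def predict_train_time(landmarks, proposed_hc):
    """Predict training time for a proposed hyperparameter configuration.

    landmarks: list of (hyperparameter_tuple, measured_seconds) pairs,
    i.e. the landmark configurations and their measured training times.
    The prediction is an inverse-distance-weighted average (assumed).
    """
    weights, total = 0.0, 0.0
    for hc, secs in landmarks:
        d = sum((a - b) ** 2 for a, b in zip(hc, proposed_hc))
        w = 1.0 / (1.0 + d)          # closer landmarks weigh more
        weights += w
        total += w * secs
    return total / weights
```

The sketch shows only the inference-mode shape: landmark timings in, a predicted duration for a new configuration out; the patent additionally conditions on dataset meta-feature values.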
-
30.
Publication No.: US20210390466A1
Publication Date: 2021-12-16
Application No.: US17086204
Filing Date: 2020-10-30
Applicant: Oracle International Corporation
Inventor: Venkatanathan Varadarajan , Sandeep R. Agrawal , Hesam Fathi Moghadam , Anatoly Yakovlev , Ali Moharrer , Jingxiao Cai , Sanjay Jinturkar , Nipun Agarwal , Sam Idicula , Nikan Chavoshi
Abstract: A proxy-based automatic non-iterative machine learning (PANI-ML) pipeline is described, which predicts machine learning model configuration performance and outputs an automatically-configured machine learning model for a target training dataset. Techniques described herein use one or more proxy models—which implement a variety of machine learning algorithms and are pre-configured with tuned hyperparameters—to estimate relative performance of machine learning model configuration parameters at various stages of the PANI-ML pipeline. The PANI-ML pipeline implements a radically new approach of rapidly narrowing the search space for machine learning model configuration parameters by performing algorithm selection followed by algorithm-specific adaptive data reduction (i.e., row- and/or feature-wise dataset sampling), and then hyperparameter tuning. Furthermore, because of the one-pass nature of the PANI-ML pipeline and because each stage of the pipeline has convergence criteria by design, the whole PANI-ML pipeline has a novel convergence property that stops the configuration search after one pass.
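The one-pass shape of the pipeline can be sketched as follows; the stage functions (`proxy_score`, `reduce_data`, `tune`) are hypothetical placeholders for the patented components:

```python
def pani_ml_pipeline(dataset, algorithms, proxy_score, reduce_data, tune):
    """One-pass pipeline: algorithm selection, then algorithm-specific
    adaptive data reduction, then hyperparameter tuning. Each stage runs
    exactly once, mirroring the abstract's non-iterative design.
    """
    # Stage 1: pick the algorithm whose proxy model scores best.
    algo = max(algorithms, key=lambda a: proxy_score(a, dataset))
    # Stage 2: row-/feature-wise reduction tailored to the chosen algorithm.
    reduced = reduce_data(algo, dataset)
    # Stage 3: tune hyperparameters on the reduced dataset.
    best_hp = tune(algo, reduced)
    return {"algorithm": algo,
            "dataset_rows": len(reduced),
            "hyperparams": best_hp}
```

Because each stage feeds a narrowed search space to the next, the whole pipeline terminates after this single pass rather than looping back.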