-
Publication Number: US11561923B2
Publication Date: 2023-01-24
Application Number: US17221580
Filing Date: 2021-04-02
Applicant: Oracle International Corporation
Inventor: Navaneeth P. Jamadagni, Ji Eun Jang, Anatoly Yakovlev, Vincent Lee, Guanghua Shu, Mark Semmelmeyer
Abstract: An apparatus includes a first device having a clock signal and configured to communicate, via a data bus, with a second device configured to assert a data strobe signal and a plurality of data bit signals on the data bus. The first device may include a control circuit configured, during a training phase, to determine relative timing between the clock signal, the plurality of data bit signals, and the data strobe signal. The first device may determine, using a first set of sampling operations, a first timing relationship of the plurality of data bit signals relative to the data strobe signal, and determine, using a second set of sampling operations, a second timing relationship of the plurality of data bit signals and the data strobe signal relative to the clock signal. During an operational phase, the control circuit may be configured to use delays based on the first and second timing relationships to sample data from the second device on the data bus.
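The training phase the abstract describes amounts to sweeping sampling delays and centering each sample point inside the stable window it finds. The sketch below is an illustrative software analogy only, assuming a hypothetical list of per-tap capture results; the patented device does this with delay lines and a control circuit in hardware.

```python
# Illustrative sketch: pick a sampling delay at the center of the widest
# stable window, as a delay-training loop might. The tap data is invented.

def find_stable_window(samples):
    """Return (start, end) indices of the longest run of correct samples."""
    best = cur_start = None
    best_len = cur_len = 0
    for i, ok in enumerate(samples):
        if ok:
            if cur_len == 0:
                cur_start = i
            cur_len += 1
            if cur_len > best_len:
                best_len, best = cur_len, (cur_start, i)
        else:
            cur_len = 0
    return best

def center_delay(samples):
    """Pick the delay tap at the center of the stable window (eye center)."""
    window = find_stable_window(samples)
    if window is None:
        raise ValueError("no stable sampling window found")
    return (window[0] + window[1]) // 2

# Hypothetical results of sampling data bits against the strobe across
# 16 delay taps; True means the capture matched the expected pattern.
dq_vs_dqs = [False, False, True, True, True, True, True, True,
             True, True, True, False, False, False, False, False]
print(center_delay(dq_vs_dqs))  # tap 6, the middle of taps 2..10
```

The same centering step would run twice, per the abstract: once for data bits relative to the strobe, and once for strobe-plus-data relative to the clock.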
-
Publication Number: US11429895B2
Publication Date: 2022-08-30
Application Number: US16384588
Filing Date: 2019-04-15
Applicant: Oracle International Corporation
Inventor: Anatoly Yakovlev, Venkatanathan Varadarajan, Sandeep Agrawal, Hesam Fathi Moghadam, Sam Idicula, Nipun Agarwal
IPC: G06N20/00
Abstract: Herein are techniques for exploring hyperparameters of a machine learning model (MLM) and for training a regressor to predict the time needed to train the MLM based on a hyperparameter configuration and a dataset. In an embodiment that is deployed in production inferencing mode, for each landmark configuration, each containing values for hyperparameters of an MLM, a computer configures the MLM based on the landmark configuration and measures time spent training the MLM on a dataset. An already trained regressor predicts the time needed to train the MLM based on a proposed configuration of the MLM, dataset meta-feature values, and the training durations and hyperparameter values of landmark configurations of the MLM. When instead in training mode, a regressor in training ingests a training corpus of MLM performance history to learn, by reinforcement, to predict a training time for the MLM for new datasets and/or new hyperparameter configurations.
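The core idea — predicting training time for a proposed configuration from measured landmark configurations — can be sketched with any regressor over (hyperparameters → seconds) pairs. The landmark data and the simple nearest-neighbour regressor below are illustrative inventions, not the patented regressor or its features (which also include dataset meta-features).

```python
# Illustrative sketch: predict MLM training time from hyperparameter
# configurations using measured "landmark" runs. All numbers are invented.

import math

# Landmark configurations: (hyperparameter vector, measured training seconds),
# e.g. (num_trees, learning_rate) -> seconds.
landmarks = [
    ((10, 0.10), 12.0),
    ((50, 0.10), 48.0),
    ((10, 0.01), 15.0),
    ((50, 0.01), 60.0),
]

def predict_training_time(config, k=2):
    """Predict training time as the mean over the k nearest landmarks."""
    dists = sorted((math.dist(config, cfg), secs) for cfg, secs in landmarks)
    nearest = dists[:k]
    return sum(secs for _, secs in nearest) / k

print(predict_training_time((30, 0.05)))
```

A production regressor would also ingest dataset meta-features and be trained on a corpus of past runs, as the abstract describes; the k-NN average stands in for that learned mapping.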
-
Publication Number: US20220138504A1
Publication Date: 2022-05-05
Application Number: US17083536
Filing Date: 2020-10-29
Applicant: Oracle International Corporation
Inventor: Hesam Fathi Moghadam, Anatoly Yakovlev, Sandeep Agrawal, Venkatanathan Varadarajan, Robert Hopkins, Matteo Casserini, Milos Vasic, Sanjay Jinturkar, Nipun Agarwal
Abstract: In a computer-based embodiment, an ML model is trained to detect outliers. The ML model calculates anomaly scores that include a respective anomaly score for each item in a validation dataset. The anomaly scores are automatically organized by sorting and/or clustering. Based on the organized anomaly scores, a separation is measured that indicates fitness of the ML model. In an embodiment, a computer performs two-clustering of anomaly scores into a first organization that consists of a first normal cluster of anomaly scores and a first anomaly cluster of anomaly scores. The computer performs three-clustering of the same anomaly scores into a second organization that consists of a second normal cluster of anomaly scores, a second anomaly cluster of anomaly scores, and a middle cluster of anomaly scores. A distribution difference between the first organization and the second organization is measured. An ML model is processed based on the distribution difference.
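The two- versus three-clustering comparison operates on one-dimensional anomaly scores, so a minimal sketch can split sorted scores at their largest gaps. Both the gap-based clustering and the cluster-size difference metric below are illustrative stand-ins, not the patented clustering or separation measure.

```python
# Illustrative sketch: cluster 1-D anomaly scores into k groups by cutting
# at the k-1 largest gaps, then compare a 2-clustering against a
# 3-clustering of the same scores. Scores are invented.

def cluster_by_gaps(scores, k):
    """Split sorted scores into k clusters at the k-1 largest gaps."""
    s = sorted(scores)
    cut_idx = sorted(range(len(s) - 1),
                     key=lambda i: s[i + 1] - s[i], reverse=True)[:k - 1]
    clusters, start = [], 0
    for c in sorted(cut_idx):
        clusters.append(s[start:c + 1])
        start = c + 1
    clusters.append(s[start:])
    return clusters

def distribution_difference(org_a, org_b):
    """Crude distribution distance: total change in cluster-size shares
    between the two organizations, padding the shorter one with zeros."""
    sizes_a = sorted(len(c) for c in org_a)
    sizes_b = sorted(len(c) for c in org_b)
    n = sum(sizes_a)
    while len(sizes_a) < len(sizes_b):
        sizes_a.insert(0, 0)
    return sum(abs(a - b) for a, b in zip(sizes_a, sizes_b)) / (2 * n)

scores = [0.1, 0.12, 0.15, 0.2, 0.9, 0.95, 3.0]
two = cluster_by_gaps(scores, 2)    # normal cluster vs anomaly cluster
three = cluster_by_gaps(scores, 3)  # normal, middle, and anomaly clusters
print(distribution_difference(two, three))
```

A small difference suggests the middle cluster added little structure (clean separation); a large one suggests many scores sit between normal and anomalous, indicating a poorer model fit.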
-
Publication Number: US20210390466A1
Publication Date: 2021-12-16
Application Number: US17086204
Filing Date: 2020-10-30
Applicant: Oracle International Corporation
Inventor: Venkatanathan Varadarajan, Sandeep R. Agrawal, Hesam Fathi Moghadam, Anatoly Yakovlev, Ali Moharrer, Jingxiao Cai, Sanjay Jinturkar, Nipun Agarwal, Sam Idicula, Nikan Chavoshi
Abstract: A proxy-based automatic non-iterative machine learning (PANI-ML) pipeline is described, which predicts machine learning model configuration performance and outputs an automatically-configured machine learning model for a target training dataset. Techniques described herein use one or more proxy models—which implement a variety of machine learning algorithms and are pre-configured with tuned hyperparameters—to estimate relative performance of machine learning model configuration parameters at various stages of the PANI-ML pipeline. The PANI-ML pipeline implements a radically new approach of rapidly narrowing the search space for machine learning model configuration parameters by performing algorithm selection followed by algorithm-specific adaptive data reduction (i.e., row- and/or feature-wise dataset sampling), and then hyperparameter tuning. Furthermore, because of the one-pass nature of the PANI-ML pipeline and because each stage of the pipeline has convergence criteria by design, the whole PANI-ML pipeline has a novel convergence property that stops the configuration search after one pass.
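The pipeline shape the abstract describes — algorithm selection, then adaptive data reduction, then hyperparameter tuning, each stage with its own convergence criterion and the whole run completing in one pass — can be sketched as a skeleton. Every function and score below is an illustrative placeholder, not the patented PANI-ML implementation.

```python
# Illustrative one-pass pipeline skeleton in the spirit of the abstract:
# three stages, each with its own stopping rule. All scores are synthetic.

def select_algorithm(proxy_scores):
    """Pick the algorithm whose pre-tuned proxy model scored best."""
    return max(proxy_scores, key=proxy_scores.get)

def reduce_dataset(n_rows, score_fn, tolerance=0.01):
    """Grow a row sample until the proxy score stops improving
    (stage-level convergence criterion)."""
    sample, prev = max(1, n_rows // 16), float("-inf")
    while sample < n_rows:
        score = score_fn(sample)
        if score - prev < tolerance:
            break
        prev, sample = score, min(n_rows, sample * 2)
    return sample

def tune_hyperparameters(candidates, score_fn):
    """Single pass over a shortlist of configurations; keep the best."""
    return max(candidates, key=score_fn)

# Toy run with synthetic proxy scores.
algo = select_algorithm({"gbm": 0.81, "rf": 0.78, "svm": 0.70})
rows = reduce_dataset(10_000, score_fn=lambda n: 0.8 - 1.0 / n)
best = tune_hyperparameters([{"lr": 0.1}, {"lr": 0.01}],
                            score_fn=lambda c: 0.8 + c["lr"])
print(algo, rows, best)
```

Because each stage terminates on its own criterion and feeds a narrowed search space to the next, the whole sequence runs once rather than iterating, mirroring the one-pass convergence property the abstract claims.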
-