-
Publication Number: WO2022103993A1
Publication Date: 2022-05-19
Application Number: PCT/US2021/059030
Application Date: 2021-11-11
Applicant: GOOGLE LLC
Inventor: SOHN, Kihyuk , LI, Chun-Liang , YOON, Jinsung , PFISTER, Tomas, Jon
IPC: G06V10/82 , G06V10/774
Abstract: A method (500) for training a machine learning model (150) includes obtaining a set of training samples (112). For each training sample in the set of training samples, during each of one or more training iterations, the method includes cropping the training sample to generate a first cropped image (140A), cropping the training sample to generate a second cropped image (140B) that is different than the first cropped image, and duplicating a first portion (210) of the second cropped image. The method also includes overlaying the duplicated first portion of the second cropped image on a second portion (220) of the second cropped image to form an augmented second cropped image (140BA). The first portion is different than the second portion. The method also includes training the machine learning model with the first cropped image and the augmented second cropped image.
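The augmentation described in this abstract can be sketched in a few lines of NumPy. The sketch below is illustrative only: it assumes images are H x W x C arrays, the function names (random_crop, duplicate_and_overlay, make_training_pair) and the crop/patch sizes are assumptions rather than details from the patent, and the downstream model-training step is omitted.

```python
import numpy as np

def random_crop(image: np.ndarray, size: int, rng: np.random.Generator) -> np.ndarray:
    """Crop a random size x size window out of an H x W x C image."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return image[top:top + size, left:left + size].copy()

def duplicate_and_overlay(crop: np.ndarray, patch: int, rng: np.random.Generator) -> np.ndarray:
    """Duplicate a first portion of the crop and overlay it on a different second portion."""
    size = crop.shape[0]
    # First portion: the region to duplicate.
    t1, l1 = rng.integers(0, size - patch + 1, size=2)
    # Second portion: a different location to paste onto.
    t2, l2 = t1, l1
    while (t2, l2) == (t1, l1):
        t2, l2 = rng.integers(0, size - patch + 1, size=2)
    augmented = crop.copy()
    augmented[t2:t2 + patch, l2:l2 + patch] = crop[t1:t1 + patch, l1:l1 + patch]
    return augmented

def make_training_pair(sample: np.ndarray, crop_size: int = 64, patch: int = 16, seed: int = 0):
    """Return (first crop, augmented second crop) for one training iteration."""
    rng = np.random.default_rng(seed)
    first = random_crop(sample, crop_size, rng)
    second = random_crop(sample, crop_size, rng)
    return first, duplicate_and_overlay(second, patch, rng)
```

The returned pair would then be fed to whatever training objective the model uses (for example, a contrastive or classification loss distinguishing augmented from unaugmented crops).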
-
Publication Number: WO2021061861A2
Publication Date: 2021-04-01
Application Number: PCT/US2020/052326
Application Date: 2020-09-23
Applicant: GOOGLE LLC
Inventor: ARIK, Sercan, Omer , YOON, Jinsung , PFISTER, Tomas, Jon
Abstract: A method (300) for training a locally interpretable model (190) includes obtaining a set of training samples (130), and training a black-box model (120) using the set of training samples. The method also includes generating, using the trained black-box model and the set of training samples, a set of auxiliary training samples (140) and training a baseline interpretable model (150) using the set of auxiliary training samples. The method also includes training, using the set of auxiliary training samples and the baseline interpretable model, an instance-wise weight estimator model (160). For each auxiliary training sample, the method also includes determining, using the trained instance-wise weight estimator model, a selection probability (170) for the auxiliary training sample. The method also includes selecting, based on the selection probabilities, a subset of auxiliary training samples (140S) and training the locally interpretable model using the subset of auxiliary training samples.
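A highly simplified Python sketch of this pipeline follows. It uses scikit-learn models as stand-ins (a random forest for the black-box model, ridge regression for the baseline and locally interpretable models) and replaces the trained instance-wise weight estimator with a simple distance-based selection-probability heuristic; all function and parameter names (fit_locally_interpretable, n_aux, subset_frac, X_probe) are hypothetical and not taken from the patent.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

def fit_locally_interpretable(X_train, y_train, X_probe, n_aux=500, subset_frac=0.3, seed=0):
    """X_probe is a single 1-D feature vector around which the local model is fit."""
    rng = np.random.default_rng(seed)
    # 1) Train the black-box model on the original training samples.
    black_box = RandomForestRegressor(random_state=seed).fit(X_train, y_train)
    # 2) Auxiliary training samples: inputs labeled with black-box predictions.
    idx = rng.integers(0, len(X_train), size=n_aux)
    X_aux = X_train[idx]
    y_aux = black_box.predict(X_aux)
    # 3) Baseline interpretable model fit on all auxiliary samples.
    baseline = Ridge().fit(X_aux, y_aux)
    # 4) Stand-in for the instance-wise weight estimator: selection probabilities
    #    that favor auxiliary samples close to the probe instance.
    dist = np.linalg.norm(X_aux - X_probe, axis=1)
    probs = np.exp(-dist) / np.exp(-dist).sum()
    # 5) Select a subset of auxiliary samples and fit the locally interpretable model.
    subset = rng.choice(len(X_aux), size=int(subset_frac * n_aux), replace=False, p=probs)
    local_model = Ridge().fit(X_aux[subset], y_aux[subset])
    return black_box, baseline, local_model
```

In the patented method the selection probabilities come from a learned instance-wise weight estimator rather than the fixed heuristic used here; the sketch only shows how the pieces compose.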
-
Publication Number: WO2023086909A1
Publication Date: 2023-05-19
Application Number: PCT/US2022/079674
Application Date: 2022-11-10
Applicant: GOOGLE LLC
Inventor: SOHN, Kihyuk , YOON, Jinsung , LI, Chun-Liang , PFISTER, Tomas Jon , LEE, Chen-Yu
IPC: G06F18/214 , G06N3/0895 , G06V10/50 , G06V10/762 , G06V10/82 , G06V10/74 , G06V10/44
Abstract: A computer-implemented method (500) includes receiving an anomaly clustering request (20) that requests data processing hardware (144) to assign each image (152) of a plurality of images into one of a plurality of groups (302). The method also includes obtaining a plurality of images. For each respective image, the method includes extracting a respective set of patch embeddings (212) from the respective image, determining a distance (212) between the respective set of patch embeddings and each other set of patch embeddings, and assigning the respective image into one of the plurality of groups using the distances between the respective set of patch embeddings and each other set of patch embeddings.
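The grouping step can be illustrated with the sketch below, which substitutes a trivial patch-flattening extractor for the learned patch-embedding network and uses an average nearest-neighbour distance between embedding sets before hierarchical clustering. The set-distance choice, the patch size, and all names are assumptions for illustration only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist, squareform

def patch_embeddings(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Stand-in extractor: flatten non-overlapping patches into embedding vectors.
    (The patent describes learned patch embeddings; this is only for illustration.)"""
    h, w, _ = image.shape
    patches = [image[i:i + patch, j:j + patch].reshape(-1)
               for i in range(0, h - patch + 1, patch)
               for j in range(0, w - patch + 1, patch)]
    return np.stack(patches)

def set_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Average nearest-neighbour distance between two sets of patch embeddings."""
    d = cdist(a, b)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def cluster_images(images, n_groups: int = 2) -> np.ndarray:
    """Assign each image to one of n_groups using pairwise set distances."""
    sets = [patch_embeddings(img) for img in images]
    n = len(sets)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = set_distance(sets[i], sets[j])
    # Agglomerative clustering over the precomputed pairwise distance matrix.
    labels = fcluster(linkage(squareform(dist), method='average'),
                      t=n_groups, criterion='maxclust')
    return labels
```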
-
Publication Number: WO2021055887A1
Publication Date: 2021-03-25
Application Number: PCT/US2020/051678
Application Date: 2020-09-19
Applicant: GOOGLE LLC
Inventor: ARIK, Sercan, Omer , YOON, Jinsung , PFISTER, Tomas, Jon
Abstract: A method (500) includes obtaining a set of training samples (102). During each of a plurality of training iterations, the method includes sampling a batch of training samples from the set of training samples. The method includes, for each training sample, determining, using a data value estimator (120), a selection probability (106). The selection probability for the training sample is based on estimator parameter values (122) of the data value estimator. The method also includes selecting, based on the selection probabilities of each training sample, a subset of training samples from the batch of training samples, and determining, using a predictor model (142) with the subset of training samples, performance measurements (144). The method also includes adjusting model parameter values (143) of the predictor model based on the performance measurements, and updating the estimator parameter values of the data value estimator based on the performance measurements.
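A compact sketch of this training loop is given below, assuming a logistic data value estimator over the raw features, a scikit-learn logistic-regression predictor whose validation accuracy serves as the performance measurement, and a REINFORCE-style update of the estimator parameters. The learning rate, the moving baseline, and all names are illustrative assumptions rather than the patented formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_with_data_valuation(X, y, X_val, y_val, iters=20, batch=64, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])   # estimator parameter values
    baseline = 0.0                 # moving baseline for the REINFORCE-style update
    for _ in range(iters):
        idx = rng.integers(0, len(X), size=batch)
        Xb, yb = X[idx], y[idx]
        # Selection probability per training sample from the data value estimator.
        probs = 1.0 / (1.0 + np.exp(-Xb @ theta))
        picks = rng.random(batch) < probs
        if picks.sum() < 2 or len(np.unique(yb[picks])) < 2:
            continue               # need at least two classes to fit the predictor
        # Train the predictor model on the selected subset and measure performance.
        predictor = LogisticRegression(max_iter=1000).fit(Xb[picks], yb[picks])
        perf = predictor.score(X_val, y_val)
        # Update the estimator parameters in the direction that makes the
        # observed selection more likely when performance beats the baseline.
        log_grad = Xb.T @ (picks.astype(float) - probs)
        theta += 0.01 * (perf - baseline) * log_grad
        baseline = 0.9 * baseline + 0.1 * perf
    return theta
```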
-
Publication Number: WO2022251462A1
Publication Date: 2022-12-01
Application Number: PCT/US2022/031087
Application Date: 2022-05-26
Applicant: GOOGLE LLC
Inventor: LI, Chun-liang , YOON, Jinsung , SOHN, Kihyuk , ARIK, Sercan, Omer
Abstract: Aspects of the disclosure provide for methods, systems, and apparatus, including computer-readable storage media, for anomaly detection using a machine learning framework trained entirely on unlabeled training data including both anomalous and non-anomalous training examples. A self-supervised one-class classifier (STOC) refines the training data to exclude anomalous training examples, using an ensemble of machine learning models. The ensemble of models is retrained on the refined training data. The STOC can also use the refined training data to train a representation learning model to generate one or more feature values for each training example, which can be processed by the trained ensemble of models and eventually used for training an output classifier model to predict whether input data is indicative of anomalous or non-anomalous data.
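The refine-then-retrain idea can be sketched as follows, using scikit-learn isolation forests as the ensemble members and omitting the representation-learning and final output-classifier stages described in the abstract. The contamination ratio, ensemble size, and function names are assumptions, not details from the patent.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def refine_training_data(X, contamination=0.1, n_models=5, seed=0):
    """Drop the examples the ensemble agrees are most anomalous (refinement step)."""
    scores = np.zeros(len(X))
    for m in range(n_models):
        model = IsolationForest(random_state=seed + m).fit(X)
        scores += -model.score_samples(X)          # higher = more anomalous
    keep = scores < np.quantile(scores, 1.0 - contamination)
    return X[keep]

def train_stoc_like(X, contamination=0.1, n_models=5, seed=0):
    X_refined = refine_training_data(X, contamination, n_models, seed)
    # Retrain the ensemble on the refined (approximately anomaly-free) data;
    # the averaged anomaly score stands in for the output classifier here.
    ensemble = [IsolationForest(random_state=seed + m).fit(X_refined)
                for m in range(n_models)]
    def score(X_new):
        return np.mean([-m.score_samples(X_new) for m in ensemble], axis=0)
    return score
```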
-
Publication Number: WO2022169954A1
Publication Date: 2022-08-11
Application Number: PCT/US2022/015085
Application Date: 2022-02-03
Applicant: GOOGLE LLC
Inventor: ARIK, Sercan Omer , SEO, Sungyong , JIN, Minho , YOON, Jinsung , PFISTER, Tomas
Abstract: The present disclosure provides a method to integrate prior knowledge (referred to as rules) into deep learning in a way that can be controlled at inference without retraining or tuning the model. Deep Neural Networks with Controllable Rule Representations (DNN-CRR) incorporate a rule encoder into the model architecture, coupled with a corresponding rule-based objective that enables a shared representation to be used in decision making by learning both the original task and the rule. DNN-CRR is agnostic to data type and encoder architecture and can be applied to any kind of rule defined for inputs and/or outputs, which is valuable in real-world domains where incorporating rules is critical, such as prediction tasks in Physics, Retail, and Healthcare.
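A minimal PyTorch sketch of the controllable-rule idea follows: a rule encoder and a data encoder produce a shared representation mixed by a rule-strength coefficient alpha that is sampled during training and can be set freely at inference, so rule adherence is adjustable without retraining. The layer sizes, the simple convex loss weighting, and the rule_penalty callback are placeholders chosen for illustration, not the patented formulation.

```python
import torch
import torch.nn as nn

class DNNWithControllableRule(nn.Module):
    """Network conditioned on a rule-strength coefficient alpha in [0, 1]."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.data_encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.rule_encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
        # Shared representation: convex mix of data and rule encodings, weighted by alpha.
        z = (1 - alpha) * self.data_encoder(x) + alpha * self.rule_encoder(x)
        return self.head(z)

def training_step(model, optimizer, x, y, rule_penalty):
    """One step: sample alpha, mix the task loss with the rule-based objective.
    y must have shape (batch, 1); rule_penalty(x, pred) returns per-example violations."""
    alpha = torch.rand(x.size(0), 1)            # random rule strength per example
    pred = model(x, alpha)
    task_loss = nn.functional.mse_loss(pred, y)
    rule_loss = rule_penalty(x, pred).mean()    # e.g. violation of a monotonicity rule
    loss = (1 - alpha.mean()) * task_loss + alpha.mean() * rule_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference, passing a fixed alpha (for example alpha = 1.0 for strict rule adherence or 0.0 for the purely data-driven prediction) controls the trade-off without any retraining.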
-