-
1.
Publication No.: US20240256889A1
Publication Date: 2024-08-01
Application No.: US18565510
Filing Date: 2021-05-31
Applicant: Robert Bosch GmbH, Tsinghua University
Inventor: Hang Su, Jun Zhu, Tianyu Pang, Xiao Yang, Yinpeng Dong, Zhijie Deng, Ze Cheng
IPC: G06N3/094
CPC classification number: G06N3/094
Abstract: A method for deep learning. The method includes: receiving, by a deep learning model, a plurality of samples and a plurality of labels corresponding to the plurality of samples; adversarially augmenting, by the deep learning model, the plurality of samples based on a threat model; and assigning, by the deep learning model, a low predictive confidence to one or more of the adversarially augmented samples whose labels have become noisy as a result of the adversarial augmentation under the threat model.
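The flow described in this abstract can be sketched with a toy stand-in model. The following is an illustrative sketch only, not the patented method: numpy, a logistic-regression "model", an FGSM-style L-infinity perturbation standing in for the threat model, and the rule for shrinking confidence toward the uniform distribution are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the deep learning model: logistic regression.
w = rng.normal(size=3)

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-x @ w))

def fgsm_augment(x, y, eps=0.3):
    """L-infinity threat model (assumption): perturb x by eps times the
    sign of the input gradient of the logistic loss (an FGSM-style step)."""
    p = predict_proba(x)
    grad_x = (p - y) * w          # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

# Receive a plurality of samples and corresponding labels.
X = rng.normal(size=(8, 3))
y = (X @ w > 0).astype(float)

conf = []
for xi, yi in zip(X, y):
    # Adversarially augment each sample under the threat model.
    x_adv = fgsm_augment(xi, yi)
    p_adv = predict_proba(x_adv)
    # If the perturbation flips the predicted class, the (unchanged)
    # label is now "noisy" for x_adv: assign a low predictive
    # confidence by shrinking the prediction toward 0.5.
    label_noisy = (p_adv > 0.5) != bool(yi)
    c = 0.5 + 0.1 * (p_adv - 0.5) if label_noisy else p_adv
    conf.append(float(c))
```

The down-weighting rule is one simple way to realize "low predictive confidence"; the patent does not specify this particular rule.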
-
2.
Publication No.: US20240086716A1
Publication Date: 2024-03-14
Application No.: US18263576
Filing Date: 2021-02-26
Applicant: Robert Bosch GmbH, Tsinghua University
Inventor: Hang Su, Jun Zhu, Zhijie Deng, Ze Cheng
IPC: G06N3/094
CPC classification number: G06N3/094
Abstract: A method for training a deep neural network (DNN) capable of adversarial detection. The DNN is configured with a plurality of sets of weight candidates. The method includes inputting training data selected from a training data set to the DNN. The method further includes calculating, based on the training data, a first term indicating a difference between a variational posterior probability distribution and a true posterior probability distribution of the DNN. The method further includes perturbing the training data to generate perturbed training data, and calculating a second term indicating a quantification of predictive uncertainty on the perturbed training data. The method further includes updating the plurality of sets of weight candidates of the DNN based on augmenting the summation of the first term and the second term.
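The two-term objective in this abstract can be sketched with a small stand-in model. This is a toy illustration under stated assumptions, not the patented training procedure: an ensemble of linear models plays the role of the weight candidates, the first term is approximated by a data NLL plus a weight penalty (as in a standard ELBO), the second term by the prediction variance across candidates, and the sign convention for combining the two terms is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Plurality of sets of weight candidates": an ensemble of linear
# models stands in for posterior samples of the Bayesian DNN.
W = 0.5 * rng.normal(size=(5, 3))     # 5 weight candidates, 3 features

def predict(W, X):
    return 1.0 / (1.0 + np.exp(-X @ W.T))   # (n_samples, n_candidates)

# Training data selected from the training data set, plus a
# perturbed copy (sign-noise perturbation is an assumption).
X = rng.normal(size=(16, 3))
y = (X[:, 0] > 0).astype(float)
X_pert = X + 0.1 * np.sign(rng.normal(size=X.shape))

def first_term(W):
    """Proxy for the variational-vs-true posterior gap: data NLL plus
    a weight penalty playing the role of the KL term in an ELBO."""
    p = predict(W, X).mean(axis=1).clip(1e-6, 1 - 1e-6)
    nll = -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()
    return nll + 0.01 * (W ** 2).sum()

def second_term(W):
    """Quantifies predictive uncertainty on the perturbed data as the
    variance across the weight candidates' predictions."""
    return predict(W, X_pert).var(axis=1).mean()

def objective(W):
    # Fit the clean data while encouraging high uncertainty on the
    # perturbed data (this sign convention is an assumption).
    return first_term(W) - second_term(W)

# One crude finite-difference update of all weight candidates.
g = np.zeros_like(W)
h = 1e-5
base = objective(W)
for idx in np.ndindex(W.shape):
    Wp = W.copy()
    Wp[idx] += h
    g[idx] = (objective(Wp) - base) / h
W_new = W - 0.1 * g
```

A real implementation would backpropagate through the DNN rather than use finite differences; the sketch only shows how the two terms enter one update of the weight candidates.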
-
3.
Publication No.: US20240037390A1
Publication Date: 2024-02-01
Application No.: US18249162
Filing Date: 2020-10-15
Applicant: Robert Bosch GmbH, Tsinghua University
Inventor: Jun Zhu, Zhijie Deng, Yinpeng Dong, Chao Zhang, Kevin Yang
Abstract: A method for training a weight-sharing neural network with stochastic architectures is disclosed. The method includes (i) selecting a mini-batch from a plurality of mini-batches, a training data set for a task being grouped into the plurality of mini-batches and each of the plurality of mini-batches comprising a plurality of instances; (ii) stochastically selecting a plurality of network architectures of the neural network for the selected mini-batch; (iii) obtaining a loss for each instance of the selected mini-batch by applying the instance to one of the plurality of network architectures; and (iv) updating shared weights of the neural network based on the loss for each instance of the selected mini-batch.
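Steps (i)-(iv) of this abstract can be sketched with a tiny stand-in network. This is a toy illustration, not the patented method: numpy, a two-layer MLP "supernet", and binary masks over hidden units standing in for the stochastically sampled architectures are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared weights of a tiny two-layer "supernet" (assumption: the
# weight-sharing network is stood in for by an MLP whose hidden
# units can be switched on or off by a sampled architecture mask).
W1 = 0.1 * rng.normal(size=(4, 8))
W2 = 0.1 * rng.normal(size=(8, 1))

# (i) one mini-batch selected from the grouped training data set
X = rng.normal(size=(8, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

lr = 0.05
gW1, gW2 = np.zeros_like(W1), np.zeros_like(W2)
for xi, yi in zip(X, y):
    # (ii) stochastically select an architecture for this instance:
    # a binary mask over hidden units plays the role of one of the
    # plurality of sampled network architectures.
    arch = (rng.random(8) < 0.5).astype(float)
    # (iii) obtain a loss for this instance under its architecture
    h = np.maximum(xi @ W1, 0.0) * arch
    out = h @ W2
    err = out - yi                   # gradient of 0.5 * squared error
    # accumulate gradients through this instance's architecture only
    gW2 += np.outer(h, err)
    dh = (W2 @ err) * arch * (xi @ W1 > 0)
    gW1 += np.outer(xi, dh)

# (iv) update the shared weights based on the per-instance losses
W1 -= lr * gW1 / len(X)
W2 -= lr * gW2 / len(X)
```

Because every sampled architecture reuses W1 and W2, each per-instance gradient only touches the weights its mask activates, while the update itself is applied to the single shared set of weights.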
-