-
Publication No.: US20220092407A1
Publication Date: 2022-03-24
Application No.: US17029506
Filing Date: 2020-09-23
Inventors: Pin-Yu Chen, Sijia Liu, Chia-Yu Chen, I-Hsin Chung, Tsung-Yi Ho, Yun-Yun Tsai
Abstract: Transfer learning in machine learning can include receiving a machine learning model. Target domain training data for reprogramming the machine learning model using transfer learning can be received. The target domain training data can be transformed by performing a transformation function on the target domain training data. Output labels of the machine learning model can be mapped to target labels associated with the target domain training data. The transformation function can be trained by optimizing a parameter of the transformation function. The machine learning model can be reprogrammed based on input data transformed by the transformation function and a mapping of the output labels to target labels.
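The following is a minimal sketch of the reprogramming idea the abstract describes, assuming a frozen image classifier as the received model; the padding-style input transformation with a learnable additive program and the many-to-one averaging of output labels are illustrative assumptions, not the patent's exact formulation.

```python
import torch
import torch.nn as nn

class InputTransform(nn.Module):
    """Trainable transformation: embed a small target-domain image in a larger
    source-sized frame and add a learnable universal perturbation (the 'program')."""
    def __init__(self, source_size=224, target_size=32):
        super().__init__()
        self.program = nn.Parameter(torch.zeros(3, source_size, source_size))
        self.pad = (source_size - target_size) // 2
        self.source_size = source_size
        self.target_size = target_size

    def forward(self, x_target):
        batch = x_target.size(0)
        canvas = torch.zeros(batch, 3, self.source_size, self.source_size,
                             device=x_target.device)
        p, t = self.pad, self.target_size
        canvas[:, :, p:p + t, p:p + t] = x_target   # place target image at the center
        return canvas + self.program                 # add the learnable program

def map_labels(source_logits, label_map):
    """Map source-model output labels to target labels by averaging the logits
    of the source classes assigned to each target class."""
    # label_map[k] is the list of source class indices mapped to target class k.
    return torch.stack(
        [source_logits[:, idx].mean(dim=1) for idx in label_map], dim=1)
```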
-
Publication No.: US12061991B2
Publication Date: 2024-08-13
Application No.: US17029506
Filing Date: 2020-09-23
Inventors: Pin-Yu Chen, Sijia Liu, Chia-Yu Chen, I-Hsin Chung, Tsung-Yi Ho, Yun-Yun Tsai
IPC Classification: G06N3/094, G06N3/08, G06N3/096, G06N20/00, G06F18/213, G06F18/2134, G06F18/214
CPC Classification: G06N3/094, G06N3/08, G06N3/096, G06N20/00, G06F18/213, G06F18/21347, G06F18/214
Abstract: Transfer learning in machine learning can include receiving a machine learning model. Target domain training data for reprogramming the machine learning model using transfer learning can be received. The target domain training data can be transformed by performing a transformation function on the target domain training data. Output labels of the machine learning model can be mapped to target labels associated with the target domain training data. The transformation function can be trained by optimizing a parameter of the transformation function. The machine learning model can be reprogrammed based on input data transformed by the transformation function and a mapping of the output labels to target labels.
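This grant shares its abstract with the application above. As a complement to the earlier sketch, the following hedged example shows how the transformation parameter might be optimized while the received model stays frozen; the Adam optimizer, cross-entropy loss, and the `InputTransform`/`map_labels` helpers from the previous sketch are assumptions.

```python
import torch
import torch.nn.functional as F

def train_reprogramming(source_model, transform, label_map, loader, epochs=10, lr=0.01):
    """Optimize only the transformation parameters; the source model stays frozen."""
    source_model.eval()
    for p in source_model.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.Adam(transform.parameters(), lr=lr)
    for _ in range(epochs):
        for x_target, y_target in loader:
            logits = source_model(transform(x_target))     # frozen source model
            target_logits = map_labels(logits, label_map)   # output-to-target label mapping
            loss = F.cross_entropy(target_logits, y_target)
            optimizer.zero_grad()
            loss.backward()                                  # gradients reach only the transform
            optimizer.step()
    return transform
```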
-
Publication No.: US20220366231A1
Publication Date: 2022-11-17
Application No.: US17241790
Filing Date: 2021-04-27
Inventors: Yada Zhu, Sijia Liu, Aparna Gupta, Sai Radhakrishna Manikant Sarma Palepu, Koushik Kar, Lucian Popa, Kumar Bhaskaran, Nitin Gaur
Abstract: A graph neural network can be built and trained to predict a risk of an entity. A multi-relational graph network can include a first graph network and a second graph network. The first graph network can include a first set of nodes and a first set of edges connecting some of the nodes in the first set. The second graph network can include a second set of nodes and a second set of edges connecting some of the nodes in the second set. The first set of nodes and the second set of nodes can represent entities, the first set of edges can represent a first relationship between the entities, and the second set of edges can represent a second relationship between the entities. A graph convolutional network (GCN) can be structured to incorporate the multi-relational graph network, and trained to predict a risk associated with a given entity.
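A hedged sketch of how a two-relation graph convolution could combine the two edge sets the abstract describes; the row normalization, summed aggregation, and the sigmoid risk head mentioned in the closing comment are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoRelationGCNLayer(nn.Module):
    """Graph convolution over two relations: each relation has its own adjacency
    matrix and weight matrix, and the messages are summed."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_rel1 = nn.Linear(in_dim, out_dim, bias=False)
        self.w_rel2 = nn.Linear(in_dim, out_dim, bias=False)
        self.w_self = nn.Linear(in_dim, out_dim)

    @staticmethod
    def normalize(adj):
        # Row-normalize so each node averages over its neighbors.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return adj / deg

    def forward(self, x, adj1, adj2):
        h1 = self.normalize(adj1) @ self.w_rel1(x)    # messages over relation 1
        h2 = self.normalize(adj2) @ self.w_rel2(x)    # messages over relation 2
        return torch.relu(h1 + h2 + self.w_self(x))   # combine with self features

# A per-entity risk score could then come from a small head, e.g.
# risk = torch.sigmoid(nn.Linear(out_dim, 1)(node_embeddings)).
```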
-
Publication No.: US11341598B2
Publication Date: 2022-05-24
Application No.: US16894343
Filing Date: 2020-06-05
Inventors: Ao Liu, Sijia Liu, Abhishek Bhandwaldar, Chuang Gan, Lirong Xia, Qi Cheng Li
Abstract: Interpretation maps of deep neural networks are provided that use Rényi differential privacy to guarantee the robustness of the interpretation. In one aspect, a method for generating interpretation maps with guaranteed robustness includes: perturbing an original digital image by adding Gaussian noise to the original digital image to obtain m noisy images; providing the m noisy images as input to a deep neural network; interpreting output from the deep neural network to obtain m noisy interpretations corresponding to the m noisy images; thresholding the m noisy interpretations to obtain a top-k of the m noisy interpretations; and averaging the top-k of the m noisy interpretations to produce an interpretation map with certifiable robustness.
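A minimal sketch of the perturb, interpret, threshold, and average pipeline the abstract outlines; plain gradient saliency stands in for the interpretation method, and the noise scale and top-k fraction are assumptions.

```python
import torch

def robust_interpretation_map(model, image, m=50, sigma=0.1, k_frac=0.1):
    """Average the top-k thresholded versions of m noisy saliency maps
    (sketch of the perturb -> interpret -> threshold -> average pipeline)."""
    model.eval()
    maps = []
    for _ in range(m):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0)).max()           # top-class score
        grad, = torch.autograd.grad(score, noisy)
        saliency = grad.abs().sum(dim=0)                  # per-pixel saliency
        k = max(1, int(k_frac * saliency.numel()))
        thresh = saliency.flatten().topk(k).values.min()
        maps.append((saliency >= thresh).float())         # keep only the top-k pixels
    return torch.stack(maps).mean(dim=0)                  # averaged interpretation map
```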
-
Publication No.: US20210334646A1
Publication Date: 2021-10-28
Application No.: US16861019
Filing Date: 2020-04-28
Inventors: Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Chuang Gan
Abstract: A method of utilizing a computing device to optimize weights within a neural network to withstand adversarial attacks includes receiving, by the computing device, a neural network for optimization. The method further includes determining, by the computing device, one or more robustness bounds for weights within the neural network on a region-by-region basis. The robustness bounds indicate values beyond which the neural network generates an erroneous output when an adversarial attack is performed on the neural network. The computing device further averages all robustness bounds on a region-by-region basis, and additionally optimizes weights for adversarial-proofing the neural network based at least in part on the averaged robustness bounds.
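A hedged sketch of how averaged per-region bounds might feed into weight optimization; treating each layer as a region and using its spectral norm as a stand-in for the robustness bound is purely an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def averaged_region_bound(model):
    """Treat each linear/conv layer as a 'region' and use its spectral norm as a
    stand-in robustness bound, then average across regions."""
    bounds = []
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.flatten(1)
            bounds.append(torch.linalg.matrix_norm(w, ord=2))   # largest singular value
    return torch.stack(bounds).mean()

def robust_training_step(model, optimizer, x, y, bound_weight=0.01):
    """Standard classification loss plus a penalty on the averaged bound,
    nudging weights toward configurations that are harder to attack."""
    loss = F.cross_entropy(model(x), y) + bound_weight * averaged_region_bound(model)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```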
-
Publication No.: US20210216859A1
Publication Date: 2021-07-15
Application No.: US16742346
Filing Date: 2020-01-14
Inventors: Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Chuang Gan, Akhilan Boopathy
Abstract: Embodiments relate to a system, program product, and method to support a convolutional neural network (CNN). A class-specific discriminative image region is localized to interpret a prediction of a CNN and to apply a class activation map (CAM) function to received input data. First and second attacks are generated on the CNN with respect to the received input data. The first attack generates first perturbed data and a corresponding first CAM, and the second attack generates second perturbed data and a corresponding second CAM. An interpretability discrepancy is measured to quantify one or more differences between the first CAM and the second CAM. The measured interpretability discrepancy is applied to the CNN in response to an inconsistency between the first CAM and the second CAM, and functions to strengthen the CNN against an adversarial attack.
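A hedged sketch of the CAM computation and one plausible interpretability-discrepancy measure; the standard CAM recipe over final convolutional features, the mean absolute difference, and the regularizer usage described in the closing comment are assumptions.

```python
import torch

def class_activation_map(features, classifier_weights, class_idx):
    """Weighted sum of the final conv feature maps (standard CAM recipe)."""
    # features: (C, H, W); classifier_weights: (num_classes, C)
    cam = torch.einsum('c,chw->hw', classifier_weights[class_idx], features)
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-8)                     # normalize to [0, 1]

def interpretability_discrepancy(cam_a, cam_b):
    """Quantify disagreement between two CAMs; one of many plausible measures."""
    return (cam_a - cam_b).abs().mean()

# During training, perturbed inputs from the two attacks would each yield a CAM via
# class_activation_map, and interpretability_discrepancy(cam1, cam2) could be added
# to the classification loss as a regularizer that penalizes inconsistent CAMs.
```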
-
Publication No.: US11875489B2
Publication Date: 2024-01-16
Application No.: US17363054
Filing Date: 2021-06-30
Inventors: Quanfu Fan, Sijia Liu, Richard Chen, Rameswar Panda
CPC Classification: G06T5/50, G06T7/90, G06V20/58, G06T2207/10024, G06T2207/20081
Abstract: A hybrid-distance adversarial patch generator can be trained to generate a hybrid adversarial patch effective at multiple distances. The hybrid patch can be inserted into multiple sample images, each depicting an object, to simulate inclusion of the hybrid patch at multiple distances. The multiple sample images can then be used to train an object detection model to detect the objects.
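A hedged sketch of simulating inclusion of a patch at multiple distances by pasting it into an image at several scales; the scale set, placement, and bilinear resizing are assumptions.

```python
import torch
import torch.nn.functional as F

def paste_patch_at_scales(image, patch, scales=(1.0, 0.5, 0.25), top_left=(10, 10)):
    """Simulate viewing the same patch from several distances by pasting it into
    copies of the image at decreasing sizes."""
    out = []
    y0, x0 = top_left
    for s in scales:
        h = max(1, int(patch.shape[1] * s))
        w = max(1, int(patch.shape[2] * s))
        resized = F.interpolate(patch.unsqueeze(0), size=(h, w),
                                mode='bilinear', align_corners=False)[0]
        img = image.clone()
        img[:, y0:y0 + h, x0:x0 + w] = resized          # overwrite pixels with the patch
        out.append(img)
    return out  # one image per simulated distance, usable as detector training data
```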
-
Publication No.: US20230113168A1
Publication Date: 2023-04-13
Application No.: US17499815
Filing Date: 2021-10-12
Inventors: Songtao Lu, Lior Horesh, Pin-Yu Chen, Sijia Liu, Tianyi Chen
Abstract: A reinforcement learning system includes a plurality of agents, each agent having an individual reward function and one or more safety constraints that involve joint actions of the agents, wherein each agent maximizes a team-average long-term return in performing the joint actions, subject to the safety constraints, and participates in operating a physical system. A peer-to-peer communication network is configured to connect the plurality of agents. A distributed constrained Markov decision process (D-CMDP) model is implemented over the peer-to-peer communication network and is configured to perform policy optimization using a decentralized policy gradient (PG) method, wherein the participation of each agent in operating the physical system is based on the D-CMDP model.
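A hedged sketch of a single decentralized constrained policy-gradient iteration with peer-to-peer consensus; the doubly stochastic mixing matrix W and the Lagrangian primal-dual handling of the safety constraints are assumptions, not the patent's exact D-CMDP formulation.

```python
def decentralized_cpg_step(thetas, lambdas, reward_grads, cost_grads, costs, W,
                           lr=0.01, lr_dual=0.01, budget=1.0):
    """One sketch iteration of a decentralized constrained policy-gradient update.

    thetas       : list of per-agent policy parameter vectors (NumPy arrays or torch tensors)
    lambdas      : list of per-agent dual variables (floats) for the safety constraints
    reward_grads : list of local policy-gradient estimates of the return
    cost_grads   : list of local policy-gradient estimates of the safety cost
    costs        : list of estimated long-term safety costs (floats)
    W            : doubly stochastic mixing matrix (list of lists) for the peer-to-peer network
    """
    n = len(thetas)
    new_thetas, new_lambdas = [], []
    for i in range(n):
        # Consensus: mix neighbors' parameters over the communication graph.
        mixed = sum(W[i][j] * thetas[j] for j in range(n))
        # Primal ascent on the local Lagrangian (reward gradient minus penalized cost gradient).
        new_thetas.append(mixed + lr * (reward_grads[i] - lambdas[i] * cost_grads[i]))
        # Dual ascent: tighten the multiplier when the safety budget is exceeded.
        new_lambdas.append(max(0.0, lambdas[i] + lr_dual * (costs[i] - budget)))
    return new_thetas, new_lambdas
```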
-
Publication No.: US20220261626A1
Publication Date: 2022-08-18
Application No.: US17170343
Filing Date: 2021-02-08
Inventors: Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Chuang Gan, Songtao Lu
Abstract: Scalable distributed adversarial training techniques for robust deep neural networks are provided. In one aspect, a method for adversarial training of a deep neural network-based model by distributed computing machines M includes: obtaining adversarial perturbation-modified training examples for samples in a local dataset D(i); computing gradients of a local cost function f_i with respect to parameters θ of the deep neural network-based model using the adversarial perturbation-modified training examples; transmitting the gradients of the local cost function f_i to a server, which aggregates them and transmits an aggregated gradient back to the distributed computing machines M; and updating the parameters θ of the deep neural network-based model stored at each of the distributed computing machines M based on the aggregated gradient received from the server. A method for distributed adversarial training of a deep neural network-based model by the server is also provided.
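A hedged sketch of one worker-and-server round; PGD stands in for the adversarial perturbation generation and the server-side aggregation is a plain average, both assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_examples(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Generate adversarial perturbation-modified training examples (PGD stand-in)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back to the eps-ball
    return x_adv

def local_gradients(model, x, y):
    """Worker side: gradients of the local cost on adversarial examples."""
    model.zero_grad()
    F.cross_entropy(model(pgd_examples(model, x, y)), y).backward()
    return [p.grad.detach().clone() for p in model.parameters()]

def server_aggregate(all_worker_grads):
    """Server side: average corresponding gradients from all workers."""
    return [torch.stack(g).mean(dim=0) for g in zip(*all_worker_grads)]

def apply_update(model, agg_grads, lr=0.1):
    """Worker side: update the local copy of the parameters with the aggregated gradient."""
    with torch.no_grad():
        for p, g in zip(model.parameters(), agg_grads):
            p -= lr * g
```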
-
Publication No.: US20220067505A1
Publication Date: 2022-03-03
Application No.: US17005144
Filing Date: 2020-08-27
Inventors: Ao Liu, Sijia Liu, Bo Wu, Lirong Xia, Qi Cheng Li, Chuang Gan
Abstract: Interpretation maps of convolutional neural networks having certifiable robustness using Rényi differential privacy are provided. In one aspect, a method for generating an interpretation map includes: adding generalized Gaussian noise to an image x to obtain T noisy images, wherein the generalized Gaussian noise constitutes perturbations to the image x; providing the T noisy images as input to a convolutional neural network; calculating T noisy interpretations of output from the convolutional neural network corresponding to the T noisy images; re-scaling the T noisy interpretations using a scoring vector ν to obtain T re-scaled noisy interpretations; and generating the interpretation map using the T re-scaled noisy interpretations, wherein the interpretation map is robust against the perturbations.
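A hedged sketch of the generalized Gaussian sampling and the scoring-vector re-scaling; the Gamma-based sampler, the gradient-saliency stand-in for the interpretation, and the rank-based reading of the scoring vector ν (assumed to have at least as many entries as the interpretation has pixels) are assumptions.

```python
import torch

def generalized_gaussian_noise(shape, alpha=0.1, beta=2.0):
    """Sample noise with density proportional to exp(-|z/alpha|**beta) via the Gamma
    trick: |Z| = alpha * G**(1/beta) with G ~ Gamma(1/beta, 1) and a random sign."""
    gamma = torch.distributions.Gamma(1.0 / beta, 1.0).sample(shape)
    sign = torch.randint(0, 2, shape).float() * 2 - 1
    return sign * alpha * gamma.pow(1.0 / beta)

def rescale_with_scores(interpretation, v):
    """Re-scale one noisy interpretation: rank its entries and replace each entry
    with the score v[rank] (one plausible reading of the scoring vector nu)."""
    flat = interpretation.flatten()
    order = flat.argsort(descending=True)
    rescaled = torch.empty_like(flat)
    rescaled[order] = v[: flat.numel()]
    return rescaled.view_as(interpretation)

def interpretation_map(model, x, v, T=50, alpha=0.1, beta=2.0):
    """Perturb -> interpret (gradient saliency stand-in) -> re-scale -> aggregate."""
    maps = []
    for _ in range(T):
        noisy = (x + generalized_gaussian_noise(x.shape, alpha, beta)).requires_grad_(True)
        score = model(noisy.unsqueeze(0)).max()
        grad, = torch.autograd.grad(score, noisy)
        maps.append(rescale_with_scores(grad.abs().sum(dim=0), v))
    return torch.stack(maps).mean(dim=0)
```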