-
Publication Number: US20230289473A1
Publication Date: 2023-09-14
Application Number: US18009765
Filing Date: 2021-06-17
Applicant: The Trustees of Princeton University
Inventor: Sanjeev ARORA, Kai LI, Yangsibo HUANG, Zhao SONG, Danqi CHEN
CPC classification number: G06F21/6254, G06F21/602, G06N3/098
Abstract: According to various embodiments, a method for encrypting image data for a neural network is disclosed. The method includes mixing the image data with other datapoints to form mixed data and applying a pixel-wise random mask to the mixed data to form encrypted data. According to various embodiments, a method for encrypting text data for a neural network for natural language processing is also disclosed. The method includes encoding each text datapoint via a pretrained text encoder to form encoded datapoints; mixing the encoded datapoints with other encoded datapoints to form mixed data; applying a random mask to the mixed data to form encrypted data; and incorporating the encrypted data into training a classifier of the neural network and fine-tuning the text encoder.
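The mix-and-mask steps in the abstract lend themselves to a compact illustration. Below is a minimal Python/NumPy sketch of the image branch; the number of mixed datapoints `k`, the convex Dirichlet mixing weights, and the ±1 sign mask are illustrative assumptions rather than claim language. The text variant applies the same two steps to pretrained-encoder outputs instead of pixels.

```python
import numpy as np

def encrypt_datapoint(image, pool, k=4, rng=None):
    """Hedged sketch of the abstract's two steps: (1) mix the datapoint
    with others, (2) apply a pixel-wise random mask. `image` and each
    entry of `pool` are float arrays of the same shape; k and the
    +/-1 sign mask are assumptions chosen for illustration."""
    if rng is None:
        rng = np.random.default_rng()
    # Step 1: mix with k-1 other datapoints using random convex weights.
    idx = rng.choice(len(pool), size=k - 1, replace=False)
    weights = rng.dirichlet(np.ones(k))
    mixed = weights[0] * image
    for w, i in zip(weights[1:], idx):
        mixed = mixed + w * pool[i]
    # Step 2: apply a pixel-wise random sign mask to form the encrypted data.
    mask = rng.choice(np.array([-1.0, 1.0]), size=mixed.shape)
    return mask * mixed
```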
-
Publication Number: US20250148260A1
Publication Date: 2025-05-08
Application Number: US18838096
Filing Date: 2023-02-14
Applicant: The Trustees of Princeton University
Inventor: Karthik NARASIMHAN, Vishvak MURAHARI, Carlos JIMENEZ, Runzhe YANG, Ameet DESHPANDE, Yushan SU, Kai LI
IPC: G06N3/04
Abstract: Disclosed is a technique for improving the throughput of a neural network, using multiplexing and demultiplexing of information. Specifically, the multiplexing may include receiving a plurality of inputs, generating transformed inputs by performing, via a multiplexing layer, a transformation to each input of the plurality of inputs, and combining the transformed inputs into a single compact representation of the plurality of inputs. The demultiplexing may include receiving an output from a neural network, generating a plurality of values by converting, via a demultiplexing layer, the output back into independent representations, and producing predictions for each input based on the plurality of values. Further improvements may be seen when pretraining of the neural network and/or high-throughput transformers are incorporated.
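As a rough illustration of the multiplex-forward-demultiplex pipeline, here is a hedged PyTorch sketch. The per-input linear transforms, averaging as the combining step, and the per-input decoders with a shared prediction head are assumptions chosen for brevity; the patent describes the transformation and recovery steps more generally.

```python
import torch
import torch.nn as nn

class Multiplexer(nn.Module):
    """Sketch: transform each of N inputs, then combine them into a
    single compact representation (here, by averaging)."""
    def __init__(self, n_inputs: int, dim: int):
        super().__init__()
        self.transforms = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_inputs)])

    def forward(self, inputs):  # inputs: list of N tensors, each (batch, dim)
        transformed = [t(x) for t, x in zip(self.transforms, inputs)]
        return torch.stack(transformed).mean(dim=0)  # (batch, dim)

class Demultiplexer(nn.Module):
    """Sketch: convert the network's shared output back into N
    independent representations and produce a prediction for each."""
    def __init__(self, n_inputs: int, dim: int, n_classes: int):
        super().__init__()
        self.decoders = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_inputs)])
        self.head = nn.Linear(dim, n_classes)

    def forward(self, shared):  # shared: (batch, dim) output of the backbone
        return [self.head(d(shared)) for d in self.decoders]

# Usage: one backbone pass serves four inputs (shapes are assumptions).
backbone = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
mux, demux = Multiplexer(4, 256), Demultiplexer(4, 256, n_classes=10)
inputs = [torch.randn(8, 256) for _ in range(4)]
logits = demux(backbone(mux(inputs)))  # list of 4 tensors, each (8, 10)
```

Because the backbone runs once on the combined representation rather than once per input, throughput scales with the number of multiplexed inputs, which is the improvement the abstract targets.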
-