-
Publication Number: US20240144006A1
Publication Date: 2024-05-02
Application Number: US18407299
Filing Date: 2024-01-08
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
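The encoder self-attention sub-layer described above can be illustrated with a minimal NumPy sketch. This is not the patented implementation; the projection matrices `w_q`, `w_k`, `w_v` and dimensions are illustrative stand-ins for whatever parameters a concrete encoder subnetwork would learn.

```python
import numpy as np

def encoder_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over encoder subnetwork inputs.

    x: (seq_len, d_model) -- one input vector per input position.
    For each particular input position, the query is derived from the
    subnetwork input at that position, and attention is applied over
    the subnetwork inputs at all positions.
    """
    q = x @ w_q                               # queries, one per position
    k = x @ w_k                               # keys
    v = x @ w_v                               # values
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)           # (seq_len, seq_len)
    # row-wise softmax over positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                        # one output per position
```

Each row of the attention-weight matrix sums to one, so every output position is a convex combination of the value vectors across all input positions.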
-
Publication Number: US20240020491A1
Publication Date: 2024-01-18
Application Number: US18374071
Filing Date: 2023-09-28
Applicant: Google LLC
Inventor: Zhifeng Chen , Macduff Richard Hughes , Yonghui Wu , Michael Schuster , Xu Chen , Llion Owen Jones , Niki J. Parmar , George Foster , Orhan Firat , Ankur Bapna , Wolfgang Macherey , Melvin Jose Johnson Premkumar
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoding vectors are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
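The multi-headed attention module in this abstract produces several attention context vectors per encoding vector. A minimal sketch, assuming a simple split of the model dimension into heads (the head count, dimensions, and lack of learned projections are simplifications, not the patented design):

```python
import numpy as np

def multi_head_context(encodings, query, num_heads=4):
    """Compute one attention context vector per head.

    encodings: (src_len, d) outputs of the bidirectional encoder.
    query:     (d,) decoder state used to attend over the encodings.
    Returns:   (num_heads, d // num_heads), one context per head.
    """
    src_len, d = encodings.shape
    head_dim = d // num_heads
    enc = encodings.reshape(src_len, num_heads, head_dim)
    q = query.reshape(num_heads, head_dim)
    contexts = []
    for h in range(num_heads):
        scores = enc[:, h, :] @ q[h] / np.sqrt(head_dim)  # (src_len,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                          # softmax
        contexts.append(weights @ enc[:, h, :])           # (head_dim,)
    return np.stack(contexts)
```

The decoder would consume the per-head contexts (e.g. concatenated) when producing each output vector in the target language.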
-
Publication Number: US11816884B2
Publication Date: 2023-11-14
Application Number: US17867242
Filing Date: 2022-07-18
Applicant: Google LLC
Inventor: Noam M. Shazeer , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Niki J. Parmar , Ashish Teku Vaswani
IPC: G06V10/82 , G06N3/084 , G06N3/04 , G06T3/40 , G06F18/28 , G06F18/213 , G06F18/21 , G06V10/77 , G06V10/56
CPC classification number: G06V10/82 , G06F18/213 , G06F18/217 , G06F18/28 , G06N3/04 , G06N3/084 , G06T3/4053 , G06V10/56 , G06V10/7715
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output image. In one aspect, one of the methods includes generating the output image intensity value by intensity value according to a generation order of pixel-color channel pairs from the output image, comprising, for each particular generation order position in the generation order: generating a current output image representation of a current output image, processing the current output image representation using a decoder neural network to generate a probability distribution over possible intensity values for the pixel-color channel pair at the particular generation order position, wherein the decoder neural network includes one or more local masked self-attention sub-layers; and selecting an intensity value for the pixel-color channel pair at the particular generation order position using the probability distribution.
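The generation loop the abstract describes can be sketched as follows. The `decoder` callable here is a stand-in for the masked self-attention decoder network; raster generation order, the `levels` parameter, and the sampling step are illustrative assumptions.

```python
import numpy as np

def generate_image(decoder, height, width, channels=3, levels=256, rng=None):
    """Generate an output image one intensity value at a time.

    Visits pixel-color channel pairs in raster generation order. At
    each position, `decoder(image, (y, x, c))` maps the current
    partial image to a probability distribution over the `levels`
    possible intensity values, and a value is sampled from it.
    """
    rng = rng if rng is not None else np.random.default_rng()
    image = np.zeros((height, width, channels), dtype=np.int64)
    for y in range(height):
        for x in range(width):
            for c in range(channels):
                probs = decoder(image, (y, x, c))
                image[y, x, c] = rng.choice(levels, p=probs)
    return image
```

In a real model the decoder's masking would ensure each distribution depends only on intensity values earlier in the generation order.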
-
Publication Number: US11809834B2
Publication Date: 2023-11-07
Application Number: US17459041
Filing Date: 2021-08-27
Applicant: Google LLC
Inventor: Zhifeng Chen , Macduff Richard Hughes , Yonghui Wu , Michael Schuster , Xu Chen , Llion Owen Jones , Niki J. Parmar , George Foster , Orhan Firat , Ankur Bapna , Wolfgang Macherey , Melvin Jose Johnson Premkumar
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoding vectors are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
-
Publication Number: US20210390410A1
Publication Date: 2021-12-16
Application Number: US17347416
Filing Date: 2021-06-14
Applicant: Google LLC
Inventor: Ashish Teku Vaswani , Prajit Ramachandran , Aravind Srinivas Lakshminarayanan , Blake Alan Hechtman , Niki J. Parmar
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images using a computer vision neural network that has one or more local self-attention layers. Each local self-attention layer is configured to apply one or more local self-attention mechanisms to the layer input to the local self-attention layer.
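A local self-attention mechanism over a 2D feature map can be sketched as below. To keep the example minimal, each position's feature vector serves as its own query, key, and value (no learned projections), and windows are clipped at the borders; these are simplifications, not the patented layer.

```python
import numpy as np

def local_self_attention(x, window=3):
    """Local self-attention over a feature map.

    x: (h, w, d). Each position attends only to positions inside a
    window-by-window neighbourhood centred on it.
    """
    h, w, d = x.shape
    r = window // 2
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            # gather the local neighbourhood (clipped at the borders)
            patch = x[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            keys = patch.reshape(-1, d)
            scores = keys @ x[i, j] / np.sqrt(d)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()              # softmax over window
            out[i, j] = weights @ keys
    return out
```

Restricting attention to a local window keeps the cost per position proportional to the window size rather than to the full spatial extent of the feature map.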
-
Publication Number: US10956819B2
Publication Date: 2021-03-23
Application Number: US16988518
Filing Date: 2020-08-07
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
-
Publication Number: US20210019624A1
Publication Date: 2021-01-21
Application Number: US16988535
Filing Date: 2020-08-07
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
-
Publication Number: US20210019623A1
Publication Date: 2021-01-21
Application Number: US16988518
Filing Date: 2020-08-07
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
-
Publication Number: US20200372358A1
Publication Date: 2020-11-26
Application Number: US16988547
Filing Date: 2020-08-07
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
-