-
Publication Number: US20250118064A1
Publication Date: 2025-04-10
Application Number: US18913134
Filing Date: 2024-10-11
Applicant: Google LLC
Inventor: Noam M. Shazeer , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Niki J. Parmar , Ashish Teku Vaswani
IPC: G06V10/82 , G06F18/21 , G06F18/213 , G06F18/28 , G06N3/04 , G06N3/084 , G06T3/4053 , G06V10/56 , G06V10/77
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output image. In one aspect, one of the methods includes generating the output image intensity value by intensity value according to a generation order of pixel-color channel pairs from the output image, comprising, for each particular generation order position in the generation order: generating a current output image representation of a current output image, processing the current output image representation using a decoder neural network to generate a probability distribution over possible intensity values for the pixel-color channel pair at the particular generation order position, wherein the decoder neural network includes one or more local masked self-attention sub-layers; and selecting an intensity value for the pixel-color channel pair at the particular generation order position using the probability distribution.
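The generation loop this abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented method: the real decoder is a neural network with local masked self-attention sub-layers, whereas `decoder_distribution` below is a hypothetical stand-in that merely returns some distribution over the 256 possible intensity values. The raster-scan generation order over (pixel, color-channel) pairs is one plausible choice of ordering.

```python
import numpy as np

H, W, CHANNELS, LEVELS = 2, 2, 3, 256
rng = np.random.default_rng(0)

def decoder_distribution(current_image, position):
    """Stand-in for the decoder neural network: maps the current partial
    image to a probability distribution over possible intensity values."""
    logits = rng.normal(size=LEVELS)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

image = np.zeros((H, W, CHANNELS), dtype=np.int64)
# Generation order: every (row, column, channel) pair in raster order.
order = [(r, c, ch) for r in range(H) for c in range(W) for ch in range(CHANNELS)]

for pos, (r, c, ch) in enumerate(order):
    probs = decoder_distribution(image, pos)       # distribution over 0..255
    image[r, c, ch] = rng.choice(LEVELS, p=probs)  # select one intensity value

print(image.shape)  # (2, 2, 3)
```

Each intensity value is selected from the decoder's distribution conditioned on everything generated so far, which is what makes the process autoregressive.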
-
Publication Number: US20240193926A1
Publication Date: 2024-06-13
Application Number: US18388178
Filing Date: 2023-11-08
Applicant: Google LLC
Inventor: Noam M. Shazeer , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Niki J. Parmar , Ashish Teku Vaswani
IPC: G06V10/82 , G06F18/21 , G06F18/213 , G06F18/28 , G06N3/04 , G06N3/084 , G06T3/4053 , G06V10/56 , G06V10/77
CPC classification number: G06V10/82 , G06F18/213 , G06F18/217 , G06F18/28 , G06N3/04 , G06N3/084 , G06T3/4053 , G06V10/56 , G06V10/7715
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output image. In one aspect, one of the methods includes generating the output image intensity value by intensity value according to a generation order of pixel-color channel pairs from the output image, comprising, for each particular generation order position in the generation order: generating a current output image representation of a current output image, processing the current output image representation using a decoder neural network to generate a probability distribution over possible intensity values for the pixel-color channel pair at the particular generation order position, wherein the decoder neural network includes one or more local masked self-attention sub-layers; and selecting an intensity value for the pixel-color channel pair at the particular generation order position using the probability distribution.
-
Publication Number: US11893483B2
Publication Date: 2024-02-06
Application Number: US16988547
Filing Date: 2020-08-07
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
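The encoder self-attention sub-layer described above can be sketched as scaled dot-product attention, in which the query for each input position is derived from that position's subnetwork input and attention is applied over all encoder subnetwork inputs. The projection matrices and dimensions here are illustrative placeholders, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
SEQ, D = 4, 8                            # number of input positions, model width

X = rng.normal(size=(SEQ, D))            # encoder subnetwork inputs, one row per position
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries derived from each position's own input

scores = Q @ K.T / np.sqrt(D)            # each position attends over all input positions
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions

output = weights @ V                     # a respective output for each input position
```

Row `i` of `output` is a weighted combination of all positions' values, with weights determined by how strongly position `i`'s query matches each key.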
-
Publication Number: US20230076971A1
Publication Date: 2023-03-09
Application Number: US17867242
Filing Date: 2022-07-18
Applicant: Google LLC
Inventor: Noam M. Shazeer , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Niki J. Parmar , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output image. In one aspect, one of the methods includes generating the output image intensity value by intensity value according to a generation order of pixel-color channel pairs from the output image, comprising, for each particular generation order position in the generation order: generating a current output image representation of a current output image, processing the current output image representation using a decoder neural network to generate a probability distribution over possible intensity values for the pixel-color channel pair at the particular generation order position, wherein the decoder neural network includes one or more local masked self-attention sub-layers; and selecting an intensity value for the pixel-color channel pair at the particular generation order position using the probability distribution.
-
Publication Number: US11113602B2
Publication Date: 2021-09-07
Application Number: US16932422
Filing Date: 2020-07-17
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
-
Publication Number: US20200372357A1
Publication Date: 2020-11-26
Application Number: US16932422
Filing Date: 2020-07-17
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
-
Publication Number: US10789427B2
Publication Date: 2020-09-29
Application Number: US16689025
Filing Date: 2019-11-19
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Ashish Teku Vaswani
IPC: G06F40/284 , G06K9/62 , G06N3/04 , G06N3/08
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training a machine learning model to perform multiple machine learning tasks from multiple machine learning domains. One system includes a machine learning model that includes multiple input modality neural networks corresponding to respective different modalities and being configured to map received data inputs of the corresponding modality to mapped data inputs from a unified representation space; an encoder neural network configured to process mapped data inputs from the unified representation space to generate respective encoder data outputs; a decoder neural network configured to process encoder data outputs to generate respective decoder data outputs from the unified representation space; and multiple output modality neural networks corresponding to respective different modalities and being configured to map decoder data outputs to data outputs of the corresponding modality.
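The architecture this abstract describes routes every modality through a shared encoder-decoder pair via a unified representation space. The sketch below shows only the data flow; the plain matrix multiplications and `tanh` stand-ins are hypothetical simplifications of the modality networks and the encoder/decoder neural networks, and the modality names and widths are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
UNIFIED = 16  # width of the unified representation space

# Input modality networks: map modality-specific inputs into the unified space.
input_nets = {"text": rng.normal(size=(32, UNIFIED)),
              "image": rng.normal(size=(64, UNIFIED))}
# Output modality networks: map decoder outputs back to each modality.
output_nets = {"text": rng.normal(size=(UNIFIED, 32)),
               "image": rng.normal(size=(UNIFIED, 64))}

def encoder(x):   # stand-in for the shared encoder neural network
    return np.tanh(x)

def decoder(h):   # stand-in for the shared decoder neural network
    return np.tanh(h)

def run(modality_in, modality_out, data):
    mapped = data @ input_nets[modality_in]     # into the unified space
    decoded = decoder(encoder(mapped))          # shared encoder and decoder
    return decoded @ output_nets[modality_out]  # back out to a modality

text_batch = rng.normal(size=(5, 32))
out = run("text", "image", text_batch)
print(out.shape)  # (5, 64)
```

Because the encoder and decoder operate only on the unified space, the same core model can serve any input-output modality pairing once the per-modality mapping networks exist.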
-
Publication Number: US10719764B2
Publication Date: 2020-07-21
Application Number: US16559392
Filing Date: 2019-09-03
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
-
Publication Number: US20200089755A1
Publication Date: 2020-03-19
Application Number: US16689025
Filing Date: 2019-11-19
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training a machine learning model to perform multiple machine learning tasks from multiple machine learning domains. One system includes a machine learning model that includes multiple input modality neural networks corresponding to respective different modalities and being configured to map received data inputs of the corresponding modality to mapped data inputs from a unified representation space; an encoder neural network configured to process mapped data inputs from the unified representation space to generate respective encoder data outputs; a decoder neural network configured to process encoder data outputs to generate respective decoder data outputs from the unified representation space; and multiple output modality neural networks corresponding to respective different modalities and being configured to map decoder data outputs to data outputs of the corresponding modality.
-
Publication Number: US10452978B2
Publication Date: 2019-10-22
Application Number: US16021971
Filing Date: 2018-06-28
Applicant: Google LLC
Inventor: Noam M. Shazeer , Aidan Nicholas Gomez , Lukasz Mieczyslaw Kaiser , Jakob D. Uszkoreit , Llion Owen Jones , Niki J. Parmar , Illia Polosukhin , Ashish Teku Vaswani
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.