-
Publication No.: US20230419555A1
Publication Date: 2023-12-28
Application No.: US18461292
Filing Date: 2023-09-05
Applicant: Google LLC
Inventor: David Charles Minnen , Saurabh Singh
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for channel-wise autoregressive entropy models. In one aspect, a method includes processing data using a first encoder neural network to generate a latent representation of the data. The latent representation of the data is processed by a quantizer and a second encoder neural network to generate a quantized latent representation of the data and a latent representation of an entropy model. The latent representation of the data is further processed into a plurality of slices of quantized latent representations of the data, wherein the slices are arranged in an ordinal sequence. A hyperprior processing network generates hyperprior parameters and a compressed representation of the hyperprior parameters. For each slice, a corresponding compressed representation is generated using a corresponding slice processing network, wherein a combination of the compressed representations forms a compressed representation of the data.
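The slice ordering described above can be sketched in a few lines of numpy. This is a minimal illustration, not the patented implementation: `slice_net` is a hypothetical placeholder for the learned per-slice processing network, and the channel-axis split is one plausible way to form the ordinal sequence of slices.

```python
import numpy as np

def split_into_slices(latent, num_slices):
    """Split a quantized latent of shape (C, H, W) into an ordinal
    sequence of channel slices."""
    return np.array_split(latent, num_slices, axis=0)

def encode_channelwise(latent, num_slices, slice_net):
    """Channel-wise autoregressive pass: each slice is compressed by its
    slice-processing network, conditioned on all previously coded slices.
    `slice_net` is a stand-in for the learned per-slice network."""
    decoded_so_far, compressed = [], []
    for s in split_into_slices(latent, num_slices):
        context = np.concatenate(decoded_so_far, axis=0) if decoded_so_far else None
        compressed.append(slice_net(s, context))
        decoded_so_far.append(s)
    # Combining these per-slice outputs forms the compressed representation.
    return compressed
```

Each slice after the first sees a strictly larger context, which is what makes the model autoregressive across channels rather than across spatial positions.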
-
Publication No.: US20230237332A1
Publication Date: 2023-07-27
Application No.: US18175125
Filing Date: 2023-02-27
Applicant: GOOGLE LLC
Inventor: Abhinav Shrivastava , Saurabh Singh , Johannes Ballé , Sami Ahmad Abu-El-Haija , Nicholas Milo Johnston , George Dan Toderici
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, by a neural network (NN), a dataset for generating features from the dataset. A first set of features is computed from the dataset using at least a feature layer of the NN. The first set of features i) is characterized by a measure of informativeness; and ii) is computed such that the first set of features is compressible into a second set of features that is smaller in size than the first set of features and that has the same measure of informativeness as the first set of features. The second set of features is generated from the first set of features using a compression method that compresses the first set of features to generate the second set of features.
-
Publication No.: US20230154051A1
Publication Date: 2023-05-18
Application No.: US17919460
Filing Date: 2020-04-17
Applicant: Google LLC
Inventor: Danhang Tang , Saurabh Singh , Cem Keskin , Phillip Andrew Chou , Christian Haene , Mingsong Dou , Sean Ryan Francesco Fanello , Jonathan Taylor , Andrea Tagliasacchi , Philip Lindsley Davidson , Yinda Zhang , Onur Gonen Guleryuz , Shahram Izadi , Sofien Bouaziz
IPC: G06T9/00
Abstract: Systems and methods are directed to encoding and/or decoding of the textures/geometry of a three-dimensional volumetric representation. An encoding computing system can obtain voxel blocks from a three-dimensional volumetric representation of an object. The encoding computing system can encode voxel blocks with a machine-learned voxel encoding model to obtain encoded voxel blocks. The encoding computing system can decode the encoded voxel blocks with a machine-learned voxel decoding model to obtain reconstructed voxel blocks. The encoding computing system can generate a reconstructed mesh representation of the object based at least in part on the one or more reconstructed voxel blocks. The encoding computing system can encode textures associated with the voxel blocks according to an encoding scheme and based at least in part on the reconstructed mesh representation of the object to obtain encoded textures.
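The encode/decode round trip in this abstract can be illustrated with a toy stand-in for the machine-learned voxel models. The uniform quantizer below is an assumption chosen only to make the round trip concrete; the patent's encoder and decoder are learned neural networks, not fixed quantizers.

```python
import numpy as np

def encode_voxel_block(block, code_bits=4):
    """Toy stand-in for the machine-learned voxel encoding model: uniformly
    quantize truncated signed-distance values in [-1, 1] to integer codes."""
    levels = 2 ** code_bits
    clipped = np.clip(block, -1.0, 1.0)
    return np.round((clipped + 1.0) / 2.0 * (levels - 1)).astype(np.int32)

def decode_voxel_block(codes, code_bits=4):
    """Toy stand-in for the machine-learned voxel decoding model, producing
    the reconstructed block used to build the reconstructed mesh."""
    levels = 2 ** code_bits
    return codes.astype(np.float64) / (levels - 1) * 2.0 - 1.0
```

The key structural point survives the simplification: textures are encoded against the *reconstructed* geometry (decode of the encode), so the texture coder sees exactly what the receiving decoder will see.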
-
Publication No.: US11250595B2
Publication Date: 2022-02-15
Application No.: US16617484
Filing Date: 2018-05-29
Applicant: GOOGLE LLC
Inventor: Michele Covell , Damien Vincent , David Charles Minnen , Saurabh Singh , Sung Jin Hwang , Nicholas Johnston , Joel Eric Shor , George Dan Toderici
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for image compression and reconstruction. An image encoder system receives a request to generate an encoded representation of an input image that has been partitioned into a plurality of tiles and generates the encoded representation of the input image. To generate the encoded representation, the system processes a context for each tile using a spatial context prediction neural network that has been trained to process context for an input tile and generate an output tile that is a prediction of the input tile. The system determines a residual image between the particular tile and the output tile generated by the spatial context prediction neural network by processing the context for the particular tile, and generates a set of binary codes for the particular tile by encoding the residual image using an encoder neural network.
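The predict-then-code-the-residual step can be sketched as follows. Averaging the left and top neighbor tiles is a deliberately naive placeholder predictor; the patented system trains a spatial context prediction neural network for this role.

```python
import numpy as np

def predict_tile(left, top):
    """Toy spatial-context predictor: average the left and top neighbor
    tiles. The patented encoder uses a trained neural network instead."""
    return (left + top) / 2.0

def tile_residual(tile, left, top):
    """Residual image between the tile and its context-based prediction;
    this residual is what the encoder network turns into binary codes."""
    return tile - predict_tile(left, top)
```

Because the decoder can form the same prediction from already-decoded tiles, only the (typically low-energy) residual needs to be transmitted.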
-
Publication No.: US11177823B2
Publication Date: 2021-11-16
Application No.: US15985340
Filing Date: 2018-05-21
Applicant: Google LLC
Inventor: David Charles Minnen , Michele Covell , Saurabh Singh , Sung Jin Hwang , George Dan Toderici
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, an encoder neural network processes data to generate an output including a representation of the data as an ordered collection of code symbols. The ordered collection of code symbols is entropy encoded using one or more code symbol probability distributions. A compressed representation of the data is determined based on the entropy encoded representation of the collection of code symbols and data indicating the code symbol probability distributions used to entropy encode the collection of code symbols. In another aspect, a compressed representation of the data is decoded to determine the collection of code symbols representing the data. A reconstruction of the data is determined by processing the collection of code symbols by a decoder neural network.
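The cost of entropy-encoding the ordered collection of code symbols under a given symbol probability distribution can be made concrete with the standard information-theoretic lower bound. This sketch computes that ideal length; it is not the patent's coder, which would use an actual arithmetic or range coder approaching this bound.

```python
import numpy as np

def entropy_coded_bits(symbols, probs):
    """Ideal entropy-coded length, in bits, of an ordered collection of
    code symbols under a code-symbol probability distribution: the sum of
    -log2 p(symbol), the lower bound an arithmetic coder approaches."""
    return float(sum(-np.log2(probs[s]) for s in symbols))
```

The compressed representation then consists of this bitstream plus whatever data identifies the probability distributions used, as the abstract notes.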
-
Publication No.: US11670010B2
Publication Date: 2023-06-06
Application No.: US17578794
Filing Date: 2022-01-19
Applicant: Google LLC
Inventor: David Charles Minnen , Saurabh Singh , Johannes Balle , Troy Chinen , Sung Jin Hwang , Nicholas Johnston , George Dan Toderici
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, a method comprises: processing data using an encoder neural network to generate a latent representation of the data; processing the latent representation of the data using a hyper-encoder neural network to generate a latent representation of an entropy model; generating an entropy encoded representation of the latent representation of the entropy model; generating an entropy encoded representation of the latent representation of the data using the latent representation of the entropy model; and determining a compressed representation of the data from the entropy encoded representations of: (i) the latent representation of the data and (ii) the latent representation of the entropy model used to entropy encode the latent representation of the data.
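The two-part bitstream in this abstract can be sketched as a rate computation. Assuming a Gaussian conditional entropy model whose parameters come from the hyper-decoder (a common choice in hyperprior-based compression, not stated in the abstract itself), the total size is the coded hyper-latent plus the latent coded under the predicted parameters.

```python
import math

def _gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def latent_bits(value, mu, sigma):
    """Bits to entropy-code one quantized latent value under a Gaussian
    entropy model whose (mu, sigma) are decoded from the hyper-latent."""
    p = _gaussian_cdf(value + 0.5, mu, sigma) - _gaussian_cdf(value - 0.5, mu, sigma)
    return -math.log2(max(p, 1e-12))

def compressed_size_bits(latent, entropy_params, hyper_bits):
    """Two-part bitstream: the entropy-encoded hyper-latent plus the
    latent entropy-encoded with the parameters it predicts."""
    return hyper_bits + sum(latent_bits(v, mu, s)
                            for v, (mu, s) in zip(latent, entropy_params))
```

Spending a few bits on the hyper-latent pays off whenever the predicted parameters fit the latent much more tightly than any fixed prior could.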
-
Publication No.: US20240220863A1
Publication Date: 2024-07-04
Application No.: US18409520
Filing Date: 2024-01-10
Applicant: Google LLC
Inventor: Deniz Oktay , Saurabh Singh , Johannes Balle , Abhinav Shrivastava
Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
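The rate-error trade-off described above can be written down directly. This sketch assumes a discrete learned prior over quantized reparameterized weights (the abstract's learned probability model); the names `rate_bits` and `training_objective` are illustrative, not from the patent.

```python
import math

def rate_bits(quantized_params, prior):
    """Entropy penalty: bits to arithmetic-code the quantized parameter
    representation under the learned prior (symbol -> probability)."""
    return sum(-math.log2(prior[w]) for w in quantized_params)

def training_objective(task_loss, quantized_params, prior, lam):
    """Joint rate-error objective: model error plus lam times the coding
    cost of the reparameterized weights; lam is the hyperparameter that
    sets the rate-error trade-off."""
    return task_loss + lam * rate_bits(quantized_params, prior)
```

During training the same rate term acts as a differentiable entropy penalty (via a continuous relaxation); after training it is realized exactly by arithmetic coding, so accuracy and compressibility are optimized jointly end-to-end.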
-
Publication No.: US11907818B2
Publication Date: 2024-02-20
Application No.: US18165211
Filing Date: 2023-02-06
Applicant: Google LLC
Inventor: Deniz Oktay , Saurabh Singh , Johannes Balle , Abhinav Shrivastava
Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
-
Publication No.: US11610124B2
Publication Date: 2023-03-21
Application No.: US16666689
Filing Date: 2019-10-29
Applicant: Google LLC
Inventor: Abhinav Shrivastava , Saurabh Singh , Johannes Balle , Sami Ahmad Abu-El-Haija , Nicholas Johnston , George Dan Toderici
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, by a neural network (NN), a dataset for generating features from the dataset. A first set of features is computed from the dataset using at least a feature layer of the NN. The first set of features i) is characterized by a measure of informativeness; and ii) is computed such that the first set of features is compressible into a second set of features that is smaller in size than the first set of features and that has the same measure of informativeness as the first set of features. The second set of features is generated from the first set of features using a compression method that compresses the first set of features to generate the second set of features.
-
Publication No.: US20220084255A1
Publication Date: 2022-03-17
Application No.: US17021688
Filing Date: 2020-09-15
Applicant: Google LLC
Inventor: David Charles Minnen , Saurabh Singh
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for channel-wise autoregressive entropy models. In one aspect, a method includes processing data using a first encoder neural network to generate a latent representation of the data. The latent representation of the data is processed by a quantizer and a second encoder neural network to generate a quantized latent representation of the data and a latent representation of an entropy model. The latent representation of the data is further processed into a plurality of slices of quantized latent representations of the data, wherein the slices are arranged in an ordinal sequence. A hyperprior processing network generates hyperprior parameters and a compressed representation of the hyperprior parameters. For each slice, a corresponding compressed representation is generated using a corresponding slice processing network, wherein a combination of the compressed representations forms a compressed representation of the data.
-