CHANNEL-WISE AUTOREGRESSIVE ENTROPY MODELS FOR IMAGE COMPRESSION

    Publication number: US20230419555A1

    Publication date: 2023-12-28

    Application number: US18461292

    Application date: 2023-09-05

    Applicant: Google LLC

    CPC classification number: G06T9/002 G06F17/18 G06N3/08 G06N3/045

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for channel-wise autoregressive entropy models. In one aspect, a method includes processing data using a first encoder neural network to generate a latent representation of the data. The latent representation of the data is processed by a quantizer and a second encoder neural network to generate a quantized latent representation of the data and a latent representation of an entropy model. The quantized latent representation of the data is further processed into a plurality of slices, wherein the slices are arranged in an ordinal sequence. A hyperprior processing network generates hyperprior parameters and a compressed representation of the hyperprior parameters. For each slice, a corresponding compressed representation is generated using a corresponding slice processing network, wherein a combination of the compressed representations forms a compressed representation of the data.
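
    The slice-conditioning step lends itself to a short sketch. Below is a minimal, illustrative PyTorch rendering of channel-wise autoregressive conditioning, assuming Gaussian entropy-model parameters; the names SliceParamNet and ChannelARModel and all layer sizes are invented for illustration and are not taken from the patent.

```python
# Illustrative sketch only: slice a latent channel-wise and condition each
# slice's entropy parameters on the hyperprior plus earlier slices.
import torch
import torch.nn as nn

class SliceParamNet(nn.Module):
    """Predicts (mean, scale) for one slice from hyperprior + earlier slices."""
    def __init__(self, hyper_ch, context_ch, slice_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(hyper_ch + context_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * slice_ch, 3, padding=1),
        )

    def forward(self, hyper, context):
        mean, scale = self.net(torch.cat([hyper, context], dim=1)).chunk(2, dim=1)
        return mean, nn.functional.softplus(scale) + 1e-6

class ChannelARModel(nn.Module):
    """Slices are processed in an ordinal sequence; slice i sees slices 0..i-1."""
    def __init__(self, latent_ch=32, hyper_ch=16, num_slices=4):
        super().__init__()
        sc = latent_ch // num_slices
        self.nets = nn.ModuleList(
            SliceParamNet(hyper_ch, i * sc, sc) for i in range(num_slices)
        )
        self.num_slices = num_slices

    def forward(self, y, hyper):
        decoded, params = [], []
        for i, s in enumerate(y.chunk(self.num_slices, dim=1)):
            ctx = torch.cat(decoded, dim=1) if decoded else y[:, :0]
            mean, scale = self.nets[i](hyper, ctx)
            q = mean + torch.round(s - mean)   # quantize residual around the mean
            decoded.append(q)
            params.append((mean, scale))       # per-slice entropy-coder inputs
        return torch.cat(decoded, dim=1), params

model = ChannelARModel()
y, hyper = torch.randn(2, 32, 8, 8), torch.randn(2, 16, 8, 8)
quantized, params = model(y, hyper)
```

    Note that torch.round blocks gradients; published learned-compression training typically substitutes additive uniform noise for rounding, a detail this sketch omits.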

    LEARNING COMPRESSIBLE FEATURES
    Invention Publication

    Publication number: US20230237332A1

    Publication date: 2023-07-27

    Application number: US18175125

    Application date: 2023-02-27

    Applicant: Google LLC

    CPC classification number: G06N3/08 G06F17/15 G06F18/24 G06N3/063 G06N3/082

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, by a neural network (NN), a dataset for generating features from the dataset. A first set of features is computed from the dataset using at least a feature layer of the NN. The first set of features i) is characterized by a measure of informativeness and ii) is computed such that the first set of features is compressible into a second set of features that is smaller in size than the first set of features and that has the same measure of informativeness as the first set of features. The second set of features is generated from the first set of features using a compression method that compresses the first set of features to generate the second set of features.
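
    As a rough illustration of the trade-off the abstract describes, the sketch below trains a feature layer with a task loss (informativeness) plus a sparsity penalty standing in for a true entropy/rate term; the model, the random stand-in batch, and the choice of penalty are all assumptions, not the patented method.

```python
# Illustrative only: keep features informative (task loss) while nudging them
# toward compressibility (sparsity penalty as a crude stand-in for a rate term).
import torch
import torch.nn as nn

features = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
head = nn.Linear(256, 10)
opt = torch.optim.Adam([*features.parameters(), *head.parameters()], lr=1e-3)
lam = 1e-3                          # informativeness/compressibility trade-off

x = torch.randn(64, 1, 28, 28)      # stand-in batch
y = torch.randint(0, 10, (64,))
for _ in range(10):
    f = features(x)
    task_loss = nn.functional.cross_entropy(head(f), y)  # informativeness
    rate_proxy = f.abs().mean()                          # compressibility surrogate
    loss = task_loss + lam * rate_proxy
    opt.zero_grad(); loss.backward(); opt.step()
```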

    Tiled image compression using neural networks

    Publication number: US11250595B2

    Publication date: 2022-02-15

    Application number: US16617484

    Application date: 2018-05-29

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for image compression and reconstruction. An image encoder system receives a request to generate an encoded representation of an input image that has been partitioned into a plurality of tiles and generates the encoded representation of the input image. To generate the encoded representation, the system processes a context for each tile using a spatial context prediction neural network that has been trained to process context for an input tile and generate an output tile that is a prediction of the input tile. For each particular tile, the system determines a residual image between the particular tile and the output tile that the spatial context prediction neural network generates by processing the context for the particular tile, and generates a set of binary codes for the particular tile by encoding the residual image using an encoder neural network.
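
    The per-tile flow can be sketched compactly. In the sketch below, a trivial mean extrapolation stands in for the trained spatial context prediction network, and the raw residual stands in for the binary codes; both substitutions are assumptions made purely for illustration.

```python
# Illustrative only: per-tile predict-then-code-the-residual flow.
import numpy as np

def predict_tile(context: np.ndarray) -> np.ndarray:
    # Stand-in for the trained spatial context prediction network:
    # extrapolate the mean of the already-decoded neighboring pixels.
    return np.full((8, 8), context.mean(), dtype=context.dtype)

def encode_tile(tile: np.ndarray, context: np.ndarray) -> np.ndarray:
    residual = tile - predict_tile(context)
    return residual        # the encoder NN would turn this into binary codes

def decode_tile(residual: np.ndarray, context: np.ndarray) -> np.ndarray:
    return predict_tile(context) + residual

image = np.random.rand(16, 16).astype(np.float32)
context, tile = image[:8, :8], image[8:, 8:]
assert np.allclose(decode_tile(encode_tile(tile, context), context), tile)
```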

    Data compression by local entropy encoding

    Publication number: US11177823B2

    Publication date: 2021-11-16

    Application number: US15985340

    Application date: 2018-05-21

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, an encoder neural network processes data to generate an output including a representation of the data as an ordered collection of code symbols. The ordered collection of code symbols is entropy encoded using one or more code symbol probability distributions. A compressed representation of the data is determined based on the entropy encoded representation of the collection of code symbols and data indicating the code symbol probability distributions used to entropy encode the collection of code symbols. In another aspect, a compressed representation of the data is decoded to determine the collection of code symbols representing the data. A reconstruction of the data is determined by processing the collection of code symbols by a decoder neural network.
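
    As a concrete stand-in for the entropy-coding step, the sketch below builds a Huffman prefix code from a code symbol probability distribution and codes a collection of symbols with it. The symbol values are invented, and a production system would more likely use arithmetic or range coding; this is only meant to show symbols plus a distribution yielding a compressed bitstream.

```python
# Illustrative only: entropy-code symbols with a prefix code derived from
# their probability distribution (stand-in for arithmetic/range coding).
import heapq
from collections import Counter

def huffman_code(probs):
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c0.items()}
        merged.update({s: "1" + b for s, b in c1.items()})
        heapq.heappush(heap, (p0 + p1, next_id, merged))
        next_id += 1
    return heap[0][2]

symbols = [0, 1, 1, 2, 0, 0, 3, 1, 0, 0]      # stand-in encoder output
probs = {s: c / len(symbols) for s, c in Counter(symbols).items()}
code = huffman_code(probs)
bitstream = "".join(code[s] for s in symbols)
# Compressed representation = bitstream + data indicating the distribution.
print(len(bitstream), "bits:", bitstream)
```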

    Compression of Machine-Learned Models via Entropy Penalized Weight Reparameterization

    Publication number: US20240220863A1

    Publication date: 2024-07-04

    Application number: US18409520

    Application date: 2024-01-10

    Applicant: Google LLC

    CPC classification number: G06N20/00 G06N3/08

    Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
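
    A minimal sketch of the reparameterization idea follows, assuming a factorized Gaussian prior, uniform-noise relaxation of quantization, and a single linear layer; the class ReparamLinear and every dimension are illustrative choices, not the patented architecture.

```python
# Illustrative only: trainable parameters live in a "latent" space; a decoder
# maps them to layer weights, and a rate penalty on the noisy latent
# approximates the post-training coding cost.
import torch
import torch.nn as nn

class ReparamLinear(nn.Module):
    def __init__(self, in_f, out_f, latent_dim=64):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(latent_dim))
        self.decode = nn.Linear(latent_dim, in_f * out_f + out_f)  # latent -> weights
        self.log_scale = nn.Parameter(torch.zeros(latent_dim))     # learned prior
        self.in_f, self.out_f = in_f, out_f

    def rate(self):
        # Differentiable proxy for coding cost: negative log-likelihood (in bits)
        # of the uniform-noise-perturbed latent under the learned prior.
        noisy = self.latent + torch.rand_like(self.latent) - 0.5
        prior = torch.distributions.Normal(0.0, self.log_scale.exp())
        return -prior.log_prob(noisy).sum() / torch.log(torch.tensor(2.0))

    def forward(self, x):
        wb = self.decode(self.latent)
        w = wb[: self.in_f * self.out_f].view(self.out_f, self.in_f)
        b = wb[self.in_f * self.out_f:]
        return x @ w.t() + b

layer = ReparamLinear(16, 4)
x = torch.randn(8, 16)
error = layer(x).pow(2).mean()               # stand-in task error term
loss = error + 1e-3 * layer.rate()           # error + lambda * rate
loss.backward()
```

    The hyperparameter multiplying rate() is the rate-error trade-off the abstract refers to: larger values favor a smaller compressed model at the cost of accuracy.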

    Compression of machine-learned models via entropy penalized weight reparameterization

    Publication number: US11907818B2

    Publication date: 2024-02-20

    Application number: US18165211

    Application date: 2023-02-06

    Applicant: Google LLC

    CPC classification number: G06N20/00 G06N3/08

    Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
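
    The post-training step, compressing the parameter representation with arithmetic coding, can be approximated by the ideal code length of the rounded latent under the learned prior, which an arithmetic coder approaches to within a few bits. The latent values and prior scales below are stand-ins.

```python
# Illustrative only: ideal arithmetic-coded size of a rounded latent under a
# learned factorized Gaussian prior (stand-in values throughout).
import torch

latent = torch.randn(64)                     # trained latent weight representation
scale = torch.rand(64) + 0.5                 # learned prior scales
prior = torch.distributions.Normal(0.0, scale)

q = torch.round(latent)                      # quantize after training
p = prior.cdf(q + 0.5) - prior.cdf(q - 0.5)  # probability of each integer bin
bits = -torch.log2(p.clamp_min(1e-12)).sum() # length an arithmetic coder approaches
print(f"~{bits.item():.0f} bits for {latent.numel()} latent parameters")
```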

    Learning compressible features
    Invention Grant

    Publication number: US11610124B2

    Publication date: 2023-03-21

    Application number: US16666689

    Application date: 2019-10-29

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, by a neural network (NN), a dataset for generating features from the dataset. A first set of features is computed from the dataset using at least a feature layer of the NN. The first set of features i) is characterized by a measure of informativeness and ii) is computed such that the first set of features is compressible into a second set of features that is smaller in size than the first set of features and that has the same measure of informativeness as the first set of features. The second set of features is generated from the first set of features using a compression method that compresses the first set of features to generate the second set of features.
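
    One concrete (assumed) instance of such a compression method: quantize the first feature set to 8 bits and entropy-code the bytes with zlib, producing a second, smaller set from which the first is recovered up to quantization error. The quantization scheme is an illustrative choice, not taken from the patent.

```python
# Illustrative only: compress a feature set by 8-bit quantization + deflate.
import zlib
import numpy as np

feats = np.random.randn(1024, 256).astype(np.float32)  # first set of features
scale = np.abs(feats).max() / 127.0
q = np.round(feats / scale).astype(np.int8)             # 4x smaller already
second = zlib.compress(q.tobytes(), 9)                  # entropy-coded bytes
print(f"{feats.nbytes} bytes -> {len(second)} bytes")

# Decompression recovers q exactly; feats ~ q * scale up to quantization error.
restored = np.frombuffer(zlib.decompress(second), dtype=np.int8).reshape(q.shape)
assert (restored == q).all()
```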

    CHANNEL-WISE AUTOREGRESSIVE ENTROPY MODELS FOR IMAGE COMPRESSION

    Publication number: US20220084255A1

    Publication date: 2022-03-17

    Application number: US17021688

    Application date: 2020-09-15

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for channel-wise autoregressive entropy models. In one aspect, a method includes processing data using a first encoder neural network to generate a latent representation of the data. The latent representation of the data is processed by a quantizer and a second encoder neural network to generate a quantized latent representation of the data and a latent representation of an entropy model. The quantized latent representation of the data is further processed into a plurality of slices, wherein the slices are arranged in an ordinal sequence. A hyperprior processing network generates hyperprior parameters and a compressed representation of the hyperprior parameters. For each slice, a corresponding compressed representation is generated using a corresponding slice processing network, wherein a combination of the compressed representations forms a compressed representation of the data.
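
    Complementing the slice sketch above, the hyperprior path can be sketched as follows: a second encoder summarizes the latent into a quantized hyper-latent, which is decoded into per-element Gaussian entropy-model parameters. All layer shapes here are illustrative assumptions.

```python
# Illustrative only: hyperprior produces the entropy-model parameters
# (per-element Gaussian means/scales) for coding the quantized latent.
import torch
import torch.nn as nn

latent_ch, hyper_ch = 32, 8
hyper_enc = nn.Conv2d(latent_ch, hyper_ch, 3, stride=2, padding=1)       # 2nd encoder
hyper_dec = nn.ConvTranspose2d(hyper_ch, 2 * latent_ch, 4, stride=2, padding=1)

y = torch.randn(1, latent_ch, 16, 16)        # latent from the first encoder
z = torch.round(hyper_enc(y))                # quantized hyper-latent (side info)
mean, scale = hyper_dec(z).chunk(2, dim=1)   # entropy-model parameters
scale = nn.functional.softplus(scale) + 1e-6

q = torch.round(y - mean) + mean             # quantized latent
p = torch.distributions.Normal(mean, scale)
bits = -torch.log2((p.cdf(q + 0.5) - p.cdf(q - 0.5)).clamp_min(1e-12)).sum()
print(f"~{bits.item():.0f} bits for the latent (plus the cost of coding z)")
```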
