SPARSE RECOVERY AUTOENCODER
    14.
    Invention Application

    Publication Number: US20190385063A1

    Publication Date: 2019-12-19

    Application Number: US16442203

    Filing Date: 2019-06-14

    Applicant: GOOGLE LLC

    Abstract: A sparse dataset is encoded using a data-driven learned sensing matrix. For example, an example method includes receiving a dataset of sparse vectors with dimension d from a requesting process, initializing an encoding matrix of dimension k×d, selecting a subset of sparse vectors from the dataset, and updating the encoding matrix via machine learning. Updating the encoding matrix includes using a linear encoder to generate an encoded vector of dimension k for each vector in the subset, the linear encoder using the encoding matrix, using a non-linear decoder to decode each of the encoded vectors, the non-linear decoder using a transpose of the encoding matrix in a projected subgradient, and adjusting the encoding matrix using back propagation. The method also includes returning an embedding of each sparse vector in the dataset of sparse vectors, the embedding being generated with the updated encoding matrix.
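    The abstract above describes a linear encoder paired with a projected-subgradient decoder that reuses the transpose of the encoding matrix. The sketch below is a hedged illustration of that idea only; the function names, initialization, step size, and iteration count are assumptions, not details taken from the patent.

        # Illustrative sketch (not the patented implementation) of a linear
        # encoder and a projected-subgradient decoder built around the
        # encoding matrix A (shape k x d) and its transpose.
        import jax.numpy as jnp

        def encode(A, x):
            """Linear encoder: map a d-dimensional sparse vector to k dimensions."""
            return A @ x

        def decode(A, y, num_steps=10, step_size=0.1):
            """Non-linear decoder: projected-subgradient steps toward a sparse x
            with A @ x close to y, using A.T as an (approximate) projection."""
            x = A.T @ y                              # initialize from the transpose
            for _ in range(num_steps):
                x = x - step_size * jnp.sign(x)      # subgradient of the l1 norm
                x = x - A.T @ (A @ x - y)            # transpose-based correction
            return x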

    Sparse recovery autoencoder
    18.
    Invention Grant

    Publication Number: US12033080B2

    Publication Date: 2024-07-09

    Application Number: US16442203

    Filing Date: 2019-06-14

    Applicant: GOOGLE LLC

    CPC classification number: G06N3/084 G06F17/16 G06N3/02 G06N3/045

    Abstract: A sparse dataset is encoded using a data-driven learned sensing matrix. For example, an example method includes receiving a dataset of sparse vectors with dimension d from a requesting process, initializing an encoding matrix of dimension k×d, selecting a subset of sparse vectors from the dataset, and updating the encoding matrix via machine learning. Updating the encoding matrix includes using a linear encoder to generate an encoded vector of dimension k for each vector in the subset, the linear encoder using the encoding matrix, using a non-linear decoder to decode each of the encoded vectors, the non-linear decoder using a transpose of the encoding matrix in a projected subgradient, and adjusting the encoding matrix using back propagation. The method also includes returning an embedding of each sparse vector in the dataset of sparse vectors, the embedding being generated with the updated encoding matrix.
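    The same abstract also covers adjusting the encoding matrix with back propagation over reconstruction of a sampled subset. Below is a hedged training-loop sketch under those assumptions; it reuses the hypothetical encode/decode helpers from the previous sketch, and the squared-error loss, batch size, and learning rate are illustrative choices, not claims from the patent.

        # Illustrative training loop (assumptions only): sample a subset of the
        # sparse dataset, encode/decode it, and update the k x d matrix A by
        # gradient descent (back propagation) on the reconstruction error.
        import jax
        import jax.numpy as jnp

        def reconstruction_loss(A, batch):
            recon = jax.vmap(lambda x: decode(A, encode(A, x)))(batch)
            return jnp.mean(jnp.sum((recon - batch) ** 2, axis=-1))

        def train_encoding_matrix(dataset, k, num_iters=100, lr=1e-2, batch_size=64):
            d = dataset.shape[1]
            key = jax.random.PRNGKey(0)
            A = jax.random.normal(key, (k, d)) / jnp.sqrt(k)   # initialize k x d matrix
            grad_fn = jax.grad(reconstruction_loss)
            for _ in range(num_iters):
                key, sub = jax.random.split(key)
                idx = jax.random.choice(sub, dataset.shape[0],
                                        shape=(batch_size,), replace=False)
                A = A - lr * grad_fn(A, dataset[idx])          # back-propagation update
            return A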

    Controlled adaptive optimization
    19.
    Invention Grant

    Publication Number: US11775823B2

    Publication Date: 2023-10-03

    Application Number: US17014139

    Filing Date: 2020-09-08

    Applicant: Google LLC

    CPC classification number: G06N3/08 G06N3/045

    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
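    The abstract above hinges on keeping the effective learning rate non-increasing across iterations. One standard way to obtain that property in an Adam-style optimizer is to divide by a running maximum of the second-moment estimate (as in AMSGrad); the sketch below uses that device purely for illustration and is an assumption, not necessarily the technique claimed here.

        # Illustrative adaptive update whose effective learning rate
        # lr / (sqrt(v_max) + eps) can only shrink, because v_max never decreases.
        import jax.numpy as jnp

        def adaptive_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
            m, v, v_max = state
            m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
            v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
            v_max = jnp.maximum(v_max, v)               # monotone, so the effective
                                                        # learning rate is non-increasing
            param = param - lr * m / (jnp.sqrt(v_max) + eps)
            return param, (m, v, v_max)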

    Multiscale Quantization for Fast Similarity Search

    Publication Number: US20230123941A1

    Publication Date: 2023-04-20

    Application Number: US18081376

    Filing Date: 2022-12-14

    Applicant: Google LLC

    Abstract: The present disclosure provides systems and methods that include or otherwise leverage use of a multiscale quantization model that is configured to provide a quantized dataset. In particular, the multiscale quantization model can receive and perform vector quantization of a first dataset. The multiscale quantization model can generate a residual dataset based at least in part on a result of the vector quantization. The multiscale quantization model can apply a rotation matrix to the residual dataset to generate a rotated residual dataset that includes a plurality of rotated residuals. The multiscale quantization model can perform reparameterization of each rotated residual in the rotated residual dataset into a direction component and a scale component. The multiscale quantization model can perform product quantization of the direction components of the plurality of rotated residuals, and perform scalar quantization of the scale components of the plurality of rotated residuals.
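    The abstract walks through a concrete encoding pipeline: coarse vector quantization, a residual, a rotation, a direction/scale split, product quantization of the direction, and scalar quantization of the scale. The sketch below mirrors those steps under stated assumptions; the codebook shapes, the uniform scale grid, and the equal-size chunking of the direction are illustrative, not taken from the patent.

        # Illustrative per-vector encoding path (assumptions only): the inputs
        # coarse_codebook (c x d), R (d x d rotation), pq_codebooks (list of
        # sub-codebooks), and scale_grid (1-D) are hypothetical, pre-trained inputs.
        import jax.numpy as jnp

        def multiscale_quantize(x, coarse_codebook, R, pq_codebooks, scale_grid):
            # 1. Vector quantization against the coarse codebook.
            c = jnp.argmin(jnp.sum((coarse_codebook - x) ** 2, axis=1))
            residual = x - coarse_codebook[c]
            # 2. Rotate the residual with the rotation matrix.
            r = R @ residual
            # 3. Reparameterize into a scale (norm) and a unit-norm direction.
            scale = jnp.linalg.norm(r)
            direction = r / (scale + 1e-12)
            # 4. Product quantization of the direction, one sub-codebook per chunk.
            chunks = jnp.split(direction, len(pq_codebooks))
            pq_codes = [jnp.argmin(jnp.sum((cb - chunk) ** 2, axis=1))
                        for cb, chunk in zip(pq_codebooks, chunks)]
            # 5. Scalar quantization of the scale against a 1-D grid.
            s = jnp.argmin(jnp.abs(scale_grid - scale))
            return c, pq_codes, s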
