Systems and methods for weighted quantization

    Publication number: US11775589B2

    Publication date: 2023-10-03

    Application number: US17001850

    Application date: 2020-08-25

    Applicant: Google LLC

    CPC classification numbers: G06F16/906; G06F16/24578; G06F16/258; H03M7/30

    Abstract: Generally, the present disclosure is directed to systems and methods of quantizing a database with respect to a novel loss or quantization error function which applies a weight to an error measurement of quantized elements respectively corresponding to the datapoints in the database. The weight is determined based on the magnitude of an inner product between the respective datapoints and a query compared therewith. In contrast to previous work, embodiments of the proposed loss function are responsive to the expected magnitude of an inner product between the respective datapoints and a query compared therewith and can prioritize error reduction for higher-ranked pairings of the query and the datapoints. Thus, the systems and methods of the present disclosure provide solutions to some of the problems with traditional quantization approaches, which regard all error as equally impactful.
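A minimal sketch of the idea described in the abstract, under assumptions not stated in the patent text: the weight is taken as the average magnitude of the inner product between each datapoint and a sample of queries, and the error measurement is the squared quantization residual. The function name and signature are hypothetical.

```python
import numpy as np

def weighted_quantization_loss(datapoints, quantized, queries, weight_fn=None):
    """Hypothetical sketch of a query-aware quantization loss: each
    datapoint's squared quantization error is weighted by the expected
    magnitude of its inner product with the queries."""
    residuals = datapoints - quantized                 # per-datapoint quantization error
    # expected |<x, q>| over the query sample, used as the error weight
    scores = np.abs(datapoints @ queries.T).mean(axis=1)
    weights = scores if weight_fn is None else weight_fn(scores)
    per_point_error = np.sum(residuals ** 2, axis=1)   # squared quantization error
    return float(np.sum(weights * per_point_error))
```

Under this weighting, error on datapoints that score highly against queries dominates the loss, so the quantizer prioritizes exactly the higher-ranked query-datapoint pairings the abstract describes.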

    Spherical random features for polynomial kernels

    Publication number: US11636384B1

    Publication date: 2023-04-25

    Application number: US16595093

    Application date: 2019-10-07

    Applicant: Google LLC

    Abstract: Implementations provide for use of spherical random features for polynomial kernels and large-scale learning. An example method includes receiving a polynomial kernel, approximating the polynomial kernel by generating a nonlinear randomized feature map, and storing the nonlinear feature map. Generating the nonlinear randomized feature map includes determining optimal coefficient values and standard deviation values for the polynomial kernel, determining an optimal probability distribution of vector values for the polynomial kernel based on a sum of Gaussian kernels that use the optimal coefficient values, selecting a sample of the vectors, and determining the nonlinear randomized feature map using the sampled vectors. Another example method includes normalizing a first feature vector for a data item, transforming the first feature vector into a second feature vector using a feature map that approximates a polynomial kernel with an explicit nonlinear feature map, and providing the second feature vector to a support vector machine.
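For illustration, a generic randomized feature map of the kind the abstract builds on (random Fourier features for a Gaussian kernel on unit-normalized inputs). Note this is an assumption-laden stand-in: the patented method instead optimizes the coefficient values and sampling distribution so the sum of Gaussian kernels matches a given polynomial kernel on the sphere.

```python
import numpy as np

def random_feature_map(x, n_features=512, sigma=1.0, rng=None):
    """Illustrative nonlinear randomized feature map: z(x) @ z(y)
    approximates exp(-||x - y||^2 / (2 * sigma^2)) for unit-norm inputs.
    A fixed seed is used so x and y see the same random projection."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = x.shape[-1]
    x = x / np.linalg.norm(x, axis=-1, keepdims=True)       # project onto the unit sphere
    w = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # random projection directions
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)
```

The explicit map lets a linear model (e.g., the support vector machine mentioned in the abstract) act on `z(x)` while behaving like a nonlinear kernel machine on `x`.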

    Adaptive Optimization with Improved Convergence

    Publication number: US20230113984A1

    Publication date: 2023-04-13

    Application number: US18081403

    Application date: 2022-12-14

    Applicant: Google LLC

    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive learning rate while also ensuring that the learning rate is non-increasing.
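One well-known way to make an Adam-style adaptive learning rate non-increasing is to divide by the running maximum of the second-moment estimate (as in AMSGrad); the sketch below illustrates that mechanism only, and is not asserted to be the claimed method.

```python
import numpy as np

def amsgrad_step(param, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad-style update. Because v_hat is a running maximum, the
    per-coordinate effective learning rate lr / sqrt(v_hat) never increases
    across iterations."""
    m = b1 * state["m"] + (1 - b1) * grad            # first-moment estimate
    v = b2 * state["v"] + (1 - b2) * grad ** 2       # second-moment estimate
    v_hat = np.maximum(state["v_hat"], v)            # non-decreasing denominator
    state.update(m=m, v=v, v_hat=v_hat)
    return param - lr * m / (np.sqrt(v_hat) + eps)
```

Taking the maximum rather than the raw exponential average is what restores the convergence guarantees that plain Adam can lose when its effective learning rate grows between steps.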

    Kernelized Classifiers in Neural Networks

    Publication number: US20220366260A1

    Publication date: 2022-11-17

    Application number: US17245892

    Application date: 2021-04-30

    Applicant: Google LLC

    Abstract: A method includes receiving, by a computing device, training data to train a neural network, wherein the training data comprises a plurality of inputs and a plurality of corresponding labels. The method also includes mapping, by a representation learner of the neural network, the plurality of inputs to a plurality of feature vectors. The method additionally includes training a kernelized classification layer of the neural network to perform nonlinear classification of an input feature vector into one of a plurality of classes, wherein the kernelized classification layer is based on a kernel which enables the nonlinear classification, and wherein the kernel is selected from a space of positive definite kernels based on application of a nonlinear softmax loss function to the plurality of feature vectors and the plurality of corresponding labels. The method further includes outputting a trained neural network comprising the representation learner and the trained kernelized classification layer.
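A toy sketch of a kernelized classification layer: each class is scored by a positive definite kernel between the input feature vector and a learned per-class prototype, followed by a softmax. The RBF kernel and the prototype parameterization here are assumptions for illustration; the patent selects the kernel itself from a space of positive definite kernels via the nonlinear softmax loss.

```python
import numpy as np

def kernelized_class_probs(feature, prototypes, gamma=1.0):
    """Illustrative kernelized classification layer: RBF-kernel scores
    between a feature vector (from the representation learner) and one
    learned prototype per class, normalized by a softmax."""
    d2 = np.sum((prototypes - feature) ** 2, axis=1)  # squared distance to each prototype
    scores = np.exp(-gamma * d2)                      # positive definite (RBF) kernel values
    e = np.exp(scores - scores.max())                 # numerically stable softmax
    return e / e.sum()
```

Because the kernel is nonlinear in the feature vector, the decision boundaries in feature space are nonlinear even though the layer sits on top of a fixed representation.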

    Federated Learning with Only Positive Labels

    Publication number: US20210326757A1

    Publication date: 2021-10-21

    Application number: US17227851

    Application date: 2021-04-12

    Applicant: Google LLC

    Abstract: Generally, the present disclosure is directed to systems and methods that perform spreadout regularization to enable learning of a multi-class classification model in the federated setting, where each user has access to the positive data associated with only a limited number of classes (e.g., a single class). Examples of such settings include decentralized training of face recognition models or speaker identification models, where, in addition to the user-specific facial images and voice samples, the class embeddings for the users also constitute sensitive information that cannot be shared with other users.
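A minimal sketch of a spreadout-style penalty, under the assumption (not stated in the abstract) that it penalizes positive pairwise cosine similarity between distinct class embeddings; the exact form used in the patent may differ.

```python
import numpy as np

def spreadout_regularizer(class_embeddings):
    """Illustrative spreadout penalty: pushes distinct L2-normalized class
    embeddings apart by penalizing positive pairwise inner products."""
    w = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    g = w @ w.T                                       # pairwise cosine similarities
    np.fill_diagonal(g, 0.0)                          # ignore self-similarity
    return float(np.sum(np.maximum(g, 0.0) ** 2))     # hinge on positive similarity
```

In the only-positive-labels setting described above, each user sees no negative examples, so without such a regularizer all class embeddings can collapse onto one point; the penalty supplies the repulsive force that negatives would normally provide.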
