FLEXIBLE MACHINE LEARNING MODEL COMPRESSION

    Publication No.: US20250148357A1

    Publication Date: 2025-05-08

    Application No.: US18504016

    Filing Date: 2023-11-07

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing a machine learning model having a plurality of parameters. In one aspect, one of the methods includes obtaining trained values of a set of parameters for at least a portion of a machine learning model; identifying one or more dense ranges for the trained values; determining the least number of bits required to represent each trained value within the one or more dense ranges; identifying a second format having a range that is smaller than a range of the first format; and generating a compressed version of the at least a portion of the machine learning model.
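The compression flow the abstract outlines (find a dense range of trained values, then pick the least number of bits that can represent values in that range) can be sketched roughly as follows. Everything here is illustrative: `compress_weights`, the 99% coverage quantiles, and the quantization-step tolerance are assumptions, not the patent's actual criteria or formats.

```python
import numpy as np

def compress_weights(values: np.ndarray, coverage: float = 0.99):
    """Rough sketch: locate a dense range covering most trained values,
    then pick the least number of bits whose quantization step over that
    range stays below a chosen tolerance (hypothetical criterion)."""
    lo = np.quantile(values, (1 - coverage) / 2)
    hi = np.quantile(values, 1 - (1 - coverage) / 2)
    dense = values[(values >= lo) & (values <= hi)]
    tol = 1e-2  # assumed acceptable quantization step
    for bits in range(1, 17):
        step = (hi - lo) / (2 ** bits - 1)
        if step <= tol:
            break
    # Map each dense value onto an integer level in the narrower format.
    levels = np.round((dense - lo) / step).astype(np.uint16)
    return lo, hi, step, bits, levels

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=10_000)
lo, hi, step, bits, q = compress_weights(w)
```

Values outside the dense range would need separate handling (for example, keeping them in the original wider format), which this sketch omits.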

    TRAINING NEURAL NETWORKS USING DISTRIBUTED BATCH NORMALIZATION

    Publication No.: US20240378416A1

    Publication Date: 2024-11-14

    Application No.: US18444267

    Filing Date: 2024-02-16

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including instructions encoded on storage media, for performing reduction of gradient vectors for distributed training of a neural network. One of the methods includes receiving, at each of the plurality of devices, a respective batch; performing, by each device, a forward pass comprising, for each batch normalization layer: generating, by each of the devices, a respective output of the corresponding other layer for each training example in the batch, determining, by each of the devices, a per-replica mean and a per-replica variance; determining, for each sub-group, a distributed mean and a distributed variance from the per-replica means and the per-replica variances for the devices in the sub-group; and applying, by each device, batch normalization to the respective outputs of the corresponding other layer generated by the device using the distributed mean and the distributed variance for the sub-group to which the device belongs.
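Combining per-replica statistics into distributed statistics, as the abstract describes, admits a closed form: the sub-group mean is the average of the per-replica means (for equal per-device batch sizes), and the sub-group variance follows from E[x²] − E[x]² with E[x²] = var + mean² per replica. A rough NumPy sketch; `distributed_batch_norm` is a hypothetical helper, and equal per-device batch sizes are assumed:

```python
import numpy as np

def distributed_batch_norm(per_device_acts, eps=1e-5):
    """Sketch: each device computes per-replica mean/variance, the
    sub-group combines them into distributed statistics, and each
    device normalizes its own activations with the shared statistics."""
    means = [x.mean(axis=0) for x in per_device_acts]
    variances = [x.var(axis=0) for x in per_device_acts]
    # Distributed mean: average of per-replica means (equal batch sizes).
    dist_mean = np.mean(means, axis=0)
    # Distributed variance via E[x^2] - E[x]^2.
    dist_var = np.mean([v + m ** 2 for v, m in zip(variances, means)],
                       axis=0) - dist_mean ** 2
    return [(x - dist_mean) / np.sqrt(dist_var + eps)
            for x in per_device_acts]
```

With equal batch sizes this reproduces exactly the statistics of the concatenated sub-group batch, so each device normalizes as if it had seen every example in its sub-group.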

    Training neural networks using distributed batch normalization

    Publication No.: US11907825B2

    Publication Date: 2024-02-20

    Application No.: US16659543

    Filing Date: 2019-10-21

    Applicant: Google LLC

    CPC classification number: G06N3/044 G06N3/04 G06N3/08 G06N3/084 G06V10/82

    Abstract: Methods, systems, and apparatus, including instructions encoded on storage media, for performing reduction of gradient vectors for distributed training of a neural network. One of the methods includes receiving, at each of the plurality of devices, a respective batch; performing, by each device, a forward pass comprising, for each batch normalization layer: generating, by each of the devices, a respective output of the corresponding other layer for each training example in the batch, determining, by each of the devices, a per-replica mean and a per-replica variance; determining, for each sub-group, a distributed mean and a distributed variance from the per-replica means and the per-replica variances for the devices in the sub-group; and applying, by each device, batch normalization to the respective outputs of the corresponding other layer generated by the device using the distributed mean and the distributed variance for the sub-group to which the device belongs.

    APPROXIMATE K NEAREST NEIGHBORS ON HARDWARE ACCELERATORS

    Publication No.: US20230418797A1

    Publication Date: 2023-12-28

    Application No.: US18341697

    Filing Date: 2023-06-26

    Applicant: Google LLC

    CPC classification number: G06F16/2237 G06F16/285

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing a kNN computation using a hardware accelerator. One of the methods includes obtaining a set of one or more query vectors; obtaining a set of database vectors; and performing, on a hardware accelerator and for each query vector in the set, a search for the k most similar database vectors to the query vector, comprising: computing, by circuitry of the hardware accelerator and for each query vector, a respective similarity value between the query vector and each database vector; and for each query vector, identifying, by the hardware accelerator and for each bin, (i) an index of the most similar database vector within the bin and (ii) the respective similarity value for the most similar database vector within the bin.
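The two-stage search in the abstract (per-bin winners on the accelerator, then a final selection among the winners) can be mimicked in NumPy as a rough sketch. `knn_bin_search`, the contiguous bin layout, and the host-side top-k step are assumptions for illustration, not the accelerator's actual circuitry:

```python
import numpy as np

def knn_bin_search(queries, database, num_bins, k):
    """Sketch: split database vectors into bins, keep (index, similarity)
    of the most similar vector per bin, then take the top-k among the
    bin winners. Approximate: true neighbors sharing a bin may be lost."""
    sims = queries @ database.T  # dot-product similarity, shape (Q, N)
    bins = np.array_split(np.arange(database.shape[0]), num_bins)
    # (i) index of the most similar database vector within each bin.
    bin_idx = np.stack([b[np.argmax(sims[:, b], axis=1)] for b in bins],
                       axis=1)
    # (ii) the corresponding similarity value for each bin winner.
    bin_val = np.take_along_axis(sims, bin_idx, axis=1)
    top = np.argsort(-bin_val, axis=1)[:, :k]
    return np.take_along_axis(bin_idx, top, axis=1)
```

Because each bin contributes at most one candidate, the result is exact for the single nearest neighbor but only approximate for k > 1 when several of the true top-k fall in the same bin.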
