Sparse finetuning for artificial neural networks

    Publication number: US11586890B2

    Publication date: 2023-02-21

    Application number: US16720380

    Filing date: 2019-12-19

    Applicant: Arm Limited

    Abstract: The present disclosure advantageously provides a hardware accelerator for an artificial neural network (ANN), including a communication bus interface, a memory, a controller, and at least one processing engine (PE). The communication bus interface is configured to receive a plurality of finetuned weights associated with the ANN, receive input data, and transmit output data. The memory is configured to store the plurality of finetuned weights, the input data and the output data. The PE is configured to receive the input data, execute an ANN model using a plurality of fixed weights associated with the ANN and the plurality of finetuned weights, and generate the output data. Each finetuned weight corresponds to a fixed weight.
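The abstract states only that each finetuned weight corresponds to a fixed weight; it does not spell out how the processing engine combines the two sets during execution. The following minimal Python sketch illustrates one plausible reading, in which a sparse set of finetuned weights overrides the corresponding fixed weights in a single layer computation. All identifiers (sparse_layer, fixed_weights, finetuned) are illustrative assumptions, not taken from the patent, and the substitution rule itself is an assumption.

# Minimal sketch (not from the patent text): one fully connected layer executed
# with frozen pretrained weights, a sparse subset of which is overridden by
# finetuned values. Whether finetuned weights replace or are added to the fixed
# weights is an assumption; the abstract only states a one-to-one correspondence.
import numpy as np

def sparse_layer(x, fixed_weights, finetuned):
    """Apply one layer using fixed weights overridden by sparse finetuned weights.

    x             : (in_features,) input vector
    fixed_weights : (out_features, in_features) frozen pretrained weights
    finetuned     : dict mapping (row, col) -> finetuned value; each entry
                    corresponds to exactly one fixed weight
    """
    w = fixed_weights.copy()
    for (row, col), value in finetuned.items():
        w[row, col] = value  # assumed substitution of the corresponding fixed weight
    return w @ x

# Usage: 4 inputs, 3 outputs, two weights finetuned, the rest left fixed.
rng = np.random.default_rng(0)
fixed = rng.standard_normal((3, 4)).astype(np.float32)
tuned = {(0, 1): 0.25, (2, 3): -0.10}
y = sparse_layer(rng.standard_normal(4).astype(np.float32), fixed, tuned)
print(y)

Because only the sparse dictionary of finetuned weights needs to be transferred and stored, this reading is consistent with the accelerator receiving the finetuned weights over the communication bus interface while the fixed weights remain unchanged.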

    Sparse Finetuning for Artificial Neural Networks

    Publication number: US20210192323A1

    Publication date: 2021-06-24

    Application number: US16720380

    Filing date: 2019-12-19

    Applicant: Arm Limited

    Abstract: The present disclosure advantageously provides a hardware accelerator for an artificial neural network (ANN), including a communication bus interface, a memory, a controller, and at least one processing engine (PE). The communication bus interface is configured to receive a plurality of finetuned weights associated with the ANN, receive input data, and transmit output data. The memory is configured to store the plurality of finetuned weights, the input data and the output data. The PE is configured to receive the input data, execute an ANN model using a plurality of fixed weights associated with the ANN and the plurality of finetuned weights, and generate the output data. Each finetuned weight corresponds to a fixed weight.
