-
Publication Number: US11586890B2
Publication Date: 2023-02-21
Application Number: US16720380
Filing Date: 2019-12-19
Applicant: Arm Limited
Inventor: Paul Nicholas Whatmough , Chuteng Zhou
Abstract: The present disclosure advantageously provides a hardware accelerator for an artificial neural network (ANN), including a communication bus interface, a memory, a controller, and at least one processing engine (PE). The communication bus interface is configured to receive a plurality of finetuned weights associated with the ANN, receive input data, and transmit output data. The memory is configured to store the plurality of finetuned weights, the input data and the output data. The PE is configured to receive the input data, execute an ANN model using a plurality of fixed weights associated with the ANN and the plurality of finetuned weights, and generate the output data. Each finetuned weight corresponds to a fixed weight.
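The abstract above describes a processing engine that combines fixed weights baked into the accelerator with finetuned weights received over the communication bus, with a one-to-one correspondence between the two. A minimal sketch of that idea, assuming the combination is additive (the patent does not specify the operation, and `pe_forward` and its parameters are hypothetical names):

```python
import numpy as np

def pe_forward(x, fixed_w, finetuned_w):
    """One processing-engine step: combine each fixed weight with its
    corresponding finetuned weight before the multiply-accumulate.
    The additive combination is an illustrative assumption."""
    effective_w = fixed_w + finetuned_w  # each finetuned weight corresponds to a fixed weight
    return x @ effective_w

x = np.array([[1.0, 2.0]])                  # input data received by the PE
fixed_w = np.array([[0.5], [0.25]])         # weights fixed in the accelerator
finetuned_w = np.array([[0.01], [-0.02]])   # corrections received over the bus interface
y = pe_forward(x, fixed_w, finetuned_w)     # output data
```

Under this reading, only the small finetuned corrections need to be transferred and stored per deployment, while the bulk of the weights stay fixed in hardware.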
-
Publication Number: US20240046065A1
Publication Date: 2024-02-08
Application Number: US17817142
Filing Date: 2022-08-03
Applicant: Arm Limited
Inventor: Hokchhay Tann , Ramon Matas Navarro , Igor Fedorov , Chuteng Zhou , Paul Nicholas Whatmough , Matthew Mattina
IPC: G06N3/04
CPC classification number: G06N3/04
Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to determine options for decisions in connection with design features of a computing device. In a particular implementation, design options for two or more design decisions of a neural network processing device may be identified based, at least in part, on a combination of a definition of available computing resources and one or more predefined performance constraints.
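The abstract describes identifying combinations of design options that jointly satisfy a resource definition and performance constraints. A toy sketch of that search, where the option sets, cost model, and performance model are all hypothetical illustrations rather than the claimed method:

```python
from itertools import product

# Hypothetical design-decision options for a neural network processing device.
options = {
    "bit_width": [4, 8, 16],
    "num_engines": [1, 2, 4],
}

def memory_kb(cfg):
    """Assumed resource-cost model: memory grows with precision and engine count."""
    return cfg["bit_width"] * cfg["num_engines"] * 8

def latency_ms(cfg):
    """Assumed performance model: latency falls as engines are added."""
    return 16.0 / cfg["num_engines"]

def feasible_configs(available_memory_kb, max_latency_ms):
    """Identify option combinations satisfying both the available computing
    resources and the predefined performance constraint."""
    keys = list(options)
    configs = [dict(zip(keys, vals)) for vals in product(*options.values())]
    return [c for c in configs
            if memory_kb(c) <= available_memory_kb
            and latency_ms(c) <= max_latency_ms]

found = feasible_configs(available_memory_kb=128, max_latency_ms=8.0)
```

With these toy models, the latency bound rules out single-engine configurations and the memory bound rules out the widest precision/engine combinations, leaving a small feasible set to choose among.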
-
Publication Number: US20230042271A1
Publication Date: 2023-02-09
Application Number: US17394048
Filing Date: 2021-08-04
Applicant: Arm Limited
Inventor: Igor Fedorov , Ramon Matas Navarro , Chuteng Zhou , Hokchhay Tann , Paul Nicholas Whatmough , Matthew Mattina
Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to select options for decisions in connection with design features of a computing device. In a particular implementation, design options for two or more design decisions of a neural network processing device may be selected based, at least in part, on a combination of function values that are computed based, at least in part, on a tensor expressing sample neural network weights.
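Here the selection is driven by function values computed from a tensor of sample weights. One plausible instance, purely for illustration (the error metric and candidate options are assumptions, not the patented function): score each candidate bit width by the quantization error it induces on the sample weight tensor.

```python
import numpy as np

def quant_error(weights, bits):
    """Mean squared error of uniform symmetric quantization at `bits` bits,
    computed over a tensor of sample neural network weights."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    q = np.round(weights / scale) * scale
    return float(np.mean((weights - q) ** 2))

rng = np.random.default_rng(0)
sample_weights = rng.standard_normal(256)   # stand-in sample weight tensor
errors = {b: quant_error(sample_weights, b) for b in (4, 8, 16)}
best_bits = min(errors, key=errors.get)     # select the option by its function value
```

The same pattern generalizes: any design decision whose quality can be scored as a function of sample weights can be selected by comparing those scores, possibly combined with resource costs as in the related application above.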
-
Publication Number: US20210192323A1
Publication Date: 2021-06-24
Application Number: US16720380
Filing Date: 2019-12-19
Applicant: Arm Limited
Inventor: Paul Nicholas Whatmough , Chuteng Zhou
Abstract: The present disclosure advantageously provides a hardware accelerator for an artificial neural network (ANN), including a communication bus interface, a memory, a controller, and at least one processing engine (PE). The communication bus interface is configured to receive a plurality of finetuned weights associated with the ANN, receive input data, and transmit output data. The memory is configured to store the plurality of finetuned weights, the input data and the output data. The PE is configured to receive the input data, execute an ANN model using a plurality of fixed weights associated with the ANN and the plurality of finetuned weights, and generate the output data. Each finetuned weight corresponds to a fixed weight.