-
Publication Number: US20240046065A1
Publication Date: 2024-02-08
Application Number: US17817142
Application Date: 2022-08-03
Applicant: Arm Limited
Inventor: Hokchhay Tann , Ramon Matas Navarro , Igor Fedorov , Chuteng Zhou , Paul Nicholas Whatmough , Matthew Mattina
IPC: G06N3/04
CPC classification number: G06N3/04
Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to determine options for decisions in connection with design features of a computing device. In a particular implementation, design options for two or more design decisions of a neural network processing device may be identified based, at least in part, on a combination of a definition of available computing resources and one or more predefined performance constraints.
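A minimal sketch of the idea this abstract describes: candidate design options are enumerated and kept only if they satisfy both a resource definition and a performance constraint. All names, the two example design decisions, and the cost formulas are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: identifying feasible design options for a neural
# network processing device from a resource definition plus a performance
# constraint. DesignOption, Resources, and the cost models are illustrative.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Resources:
    sram_kib: int          # available on-chip SRAM (assumed resource definition)
    macs_per_cycle: int    # multiply-accumulate throughput

@dataclass(frozen=True)
class DesignOption:
    num_channels: int      # design decision 1: layer width
    bit_width: int         # design decision 2: weight precision

def identify_options(resources, max_latency_cycles):
    """Return design options satisfying both the resource definition
    and the predefined performance constraint."""
    feasible = []
    for channels, bits in product([16, 32, 64, 128], [4, 8, 16]):
        weight_kib = channels * channels * bits / 8 / 1024   # toy weight footprint
        latency = channels * channels // resources.macs_per_cycle
        if weight_kib <= resources.sram_kib and latency <= max_latency_cycles:
            feasible.append(DesignOption(channels, bits))
    return feasible

print(identify_options(Resources(sram_kib=64, macs_per_cycle=256),
                       max_latency_cycles=40))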
-
Publication Number: US11886987B2
Publication Date: 2024-01-30
Application Number: US16451205
Application Date: 2019-06-25
Applicant: Arm Limited
Inventor: Shidhartha Das , Matthew Mattina , Glen Arnold Rosendale , Fernando Garcia Redondo
Abstract: A multiply-accumulate method and architecture are disclosed. The architecture includes a plurality of networks of non-volatile memory elements arranged in tiled columns. Logic digitally modulates the equivalent conductance of individual networks among the plurality of networks to map the equivalent conductance of each individual network to a single weight within the neural network. A first partial selection of weights within the neural network is mapped into the equivalent conductances of the networks in the columns to enable the computation of multiply-and-accumulate operations by mixed-signal computation. The logic updates the mappings to select a second partial selection of weights to compute additional multiply-and-accumulate operations and repeats the mapping and computation operations until all computations for the neural network are completed.
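A behavioral sketch of the mapping the abstract describes: each weight is represented by the equivalent conductance of a small network of binary non-volatile elements, and a column produces a multiply-accumulate result as a current sum. The element conductances, network size, and binary-weighted branch structure below are assumptions for illustration.

```python
# Hypothetical sketch: digitally modulating a network of binary non-volatile
# elements so its equivalent conductance encodes one neural network weight,
# then computing a column MAC as a current sum (Ohm's law / KCL). Values
# such as G_ON/G_OFF and BITS are illustrative assumptions.
import numpy as np

G_ON, G_OFF = 1e-4, 1e-7           # element conductances in siemens (assumed)
BITS = 4                           # elements per network (assumed)

def program_network(weight_code):
    """Enable binary-weighted parallel branches so the network's equivalent
    conductance encodes an integer weight code."""
    branches = [(weight_code >> b) & 1 for b in range(BITS)]
    return sum((G_ON if on else G_OFF) * (1 << b) for b, on in enumerate(branches))

def column_mac(voltages, weight_codes):
    """One tiled column: summed current of all networks driven by the inputs."""
    return sum(v * program_network(w) for v, w in zip(voltages, weight_codes))

x = np.array([0.2, 0.5, 0.1])      # input activations applied as voltages
w = [3, 7, 12]                     # quantized weights mapped into this column
print(column_mac(x, w))            # mixed-signal-style multiply-accumulate
```

Remapping a second partial selection of weights, as the abstract describes, would amount to calling `program_network` with new codes and repeating the column computation.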
-
Publication Number: US11693796B2
Publication Date: 2023-07-04
Application Number: US17334960
Application Date: 2021-05-31
Applicant: Arm Limited
Inventor: Paul Nicholas Whatmough , Zhi-Gang Liu , Supreet Jeloka , Saurabh Pijuskumar Sinha , Matthew Mattina
CPC classification number: G06F13/1668 , G06F13/4004 , G06F7/5443 , G06F15/8046 , G06N3/063
Abstract: Various implementations described herein are directed to a device having a multi-layered logic structure with a first logic layer and a second logic layer arranged vertically in a stacked configuration. The device may have a memory array that provides data, and also, the device may have an inter-layer data bus that vertically couples the memory array to the multi-layered logic structure. The inter-layer data bus may provide multiple data paths to the first logic layer and the second logic layer for reuse of the data provided by the memory array.
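A toy behavioral model of the data-reuse idea: each row of the memory array is read once and delivered over the inter-layer bus to both stacked logic layers, each of which consumes the same data. The shapes, weights, and fan-out model are illustrative assumptions, not the patent's circuit.

```python
# Hypothetical sketch: one memory read fans out over a vertical inter-layer
# bus to two logic layers, so the fetched data is reused. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
memory_array = rng.standard_normal((4, 8))    # data held in the memory array

def inter_layer_bus(tile):
    """Model the vertical bus: one read, multiple data paths (fan-out)."""
    return tile, tile                          # same data to both logic layers

w0 = rng.standard_normal((8, 3))              # weights local to logic layer 0
w1 = rng.standard_normal((8, 3))              # weights local to logic layer 1

outputs = []
for row in memory_array:                      # each row fetched exactly once
    to_layer0, to_layer1 = inter_layer_bus(row)
    outputs.append((to_layer0 @ w0, to_layer1 @ w1))  # both layers reuse it
print(len(outputs), outputs[0][0].shape)
```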
-
Publication Number: US20230076138A1
Publication Date: 2023-03-09
Application Number: US17470470
Application Date: 2021-09-09
Applicant: Arm Limited
Inventor: Paul Nicholas Whatmough , Zhi-Gang Liu , Matthew Mattina
Abstract: A matrix multiplication system and method are provided. The system includes a memory that stores one or more weight tensors, a processor and a matrix multiply accelerator (MMA). The processor converts each weight tensor into an encoded block set that is stored in the memory. Each encoded block set includes a number of encoded blocks, and each encoded block includes a data field and an index field. The MMA converts each encoded block set into a reconstructed weight tensor, and convolves each reconstructed weight tensor and an input data tensor to generate an output data matrix.
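A minimal sketch of the encoded-block representation the abstract describes: the weight tensor is split into fixed-size blocks, each storing its non-zero values (data field) and a bitmask of their positions (index field), from which the accelerator can reconstruct the tensor. The block size and bitmask encoding are assumptions.

```python
# Hypothetical sketch: encode a weight tensor into blocks with a data field
# (non-zero values) and an index field (position bitmask), then reconstruct.
# BLOCK and the bitmask layout are illustrative assumptions.
import numpy as np

BLOCK = 4

def encode(weights):
    """Flatten a weight tensor into (index, data) encoded blocks."""
    flat = weights.ravel()
    blocks = []
    for i in range(0, flat.size, BLOCK):
        chunk = flat[i:i + BLOCK]
        index = sum(1 << j for j, v in enumerate(chunk) if v != 0)  # index field
        data = chunk[chunk != 0]                                    # data field
        blocks.append((index, data))
    return blocks

def reconstruct(blocks, shape):
    """What the matrix multiply accelerator would do before convolving."""
    flat = np.zeros(int(np.prod(shape)))
    for b, (index, data) in enumerate(blocks):
        pos = [j for j in range(BLOCK) if index >> j & 1]
        flat[b * BLOCK + np.array(pos, dtype=int)] = data
    return flat.reshape(shape)

w = np.array([[0.0, 1.5, 0.0, -2.0], [0.0, 0.0, 0.25, 0.0]])
assert np.array_equal(reconstruct(encode(w), w.shape), w)
```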
-
Publication Number: US20230042271A1
Publication Date: 2023-02-09
Application Number: US17394048
Application Date: 2021-08-04
Applicant: Arm Limited
Inventor: Igor Fedorov , Ramon Matas Navarro , Chuteng Zhou , Hokchhay Tann , Paul Nicholas Whatmough , Matthew Mattina
Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to select options for decisions in connection with design features of a computing device. In a particular implementation, design options for two or more design decisions of a neural network processing device may be selected based, at least in part, on a combination of function values that are computed based, at least in part, on a tensor expressing sample neural network weights.
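A sketch of what selection driven by function values over a sample-weight tensor might look like: each candidate pair of design decisions is scored by a function computed from sampled weights, and the best-scoring option is selected. The scoring function (quantize, prune, measure retained fidelity) is purely an illustrative assumption.

```python
# Hypothetical sketch: scoring (bit width, sparsity) design decisions with a
# function value computed from a tensor of sample neural network weights.
# The proxy score below is an assumption for illustration, not the patent's.
import numpy as np

rng = np.random.default_rng(1)
sample_weights = rng.standard_normal((64, 64))   # tensor of sample weights

def score(weights, bit_width, sparsity):
    """Toy function value: quantize, prune, and measure retained fidelity."""
    step = (weights.max() - weights.min()) / (2 ** bit_width - 1)
    q = np.round(weights / step) * step
    cutoff = np.quantile(np.abs(q), sparsity)
    q[np.abs(q) < cutoff] = 0.0
    return -np.linalg.norm(weights - q)          # higher is better

options = [(b, s) for b in (4, 6, 8) for s in (0.5, 0.75, 0.9)]
best = max(options, key=lambda o: score(sample_weights, *o))
print("selected design decisions (bit width, sparsity):", best)
```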
-
Publication Number: US20210287078A1
Publication Date: 2021-09-16
Application Number: US16818302
Application Date: 2020-03-13
Applicant: Arm Limited
Inventor: Zhi-Gang Liu , Matthew Mattina , John Fremont Brown, III
Abstract: The present disclosure advantageously provides an Optical Hardware Accelerator (OHA) for an Artificial Neural Network (ANN) that includes a communication bus interface, a memory, a controller, and an optical computing engine (OCE). The OCE is configured to execute an ANN model with ANN weights. Each ANN weight includes a quantized phase shift value θi and a phase shift value ϕi. The OCE includes a digital-to-optical (D/O) converter configured to generate input optical signals based on the input data, an optical neural network (ONN) configured to generate output optical signals based on the input optical signals, and an optical-to-digital (O/D) converter configured to generate the output data based on the output optical signals. The ONN includes a plurality of optical units (OUs), and each OU includes an optical multiply and accumulate (OMAC) module.
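A toy model of the D/O → ONN → O/D signal flow the abstract describes: inputs are encoded as optical field amplitudes, each weight is applied through a quantized phase shift θ and a phase ϕ, and detection recovers the accumulated product. This illustrates the data flow only; the weight-to-phase mapping and readout below are assumptions, not the patent's interferometer design.

```python
# Hypothetical sketch of an optical multiply-and-accumulate (OMAC) path:
# D/O amplitude encoding, phase-shift weighting, O/D readout. PHASE_LEVELS
# and the cos(theta)*exp(i*phi) weight model are illustrative assumptions.
import numpy as np

PHASE_LEVELS = 16                                  # theta quantization (assumed)

def quantize_phase(theta):
    step = 2 * np.pi / PHASE_LEVELS
    return np.round(theta / step) * step

def omac(inputs, thetas, phis):
    """Sum of coherently weighted optical fields."""
    fields = inputs.astype(complex)                # D/O: amplitude encoding
    weights = np.cos(quantize_phase(thetas)) * np.exp(1j * phis)
    return np.real(np.sum(fields * weights))       # O/D: homodyne-style readout

x = np.array([0.3, 0.8, 0.5])
theta = np.array([0.2, 1.1, 2.5])                  # quantized phase shift values
phi = np.array([0.0, np.pi, 0.0])                  # phase shift values
print(omac(x, theta, phi))
```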
-
Publication Number: US11501151B2
Publication Date: 2022-11-15
Application Number: US16885704
Application Date: 2020-05-28
Applicant: Arm Limited
Inventor: Paul Nicholas Whatmough , Zhi-Gang Liu , Matthew Mattina
Abstract: The present disclosure advantageously provides a pipelined accumulator that includes a data selector configured to receive a sequence of operands to be summed, an input register coupled to the data selector, an output register, coupled to the data selector, configured to store a sequence of partial sums and output a final sum, and a multi-stage add module coupled to the input register and the output register. The multi-stage add module is configured to store a sequence of partial sums and a final sum in a redundant format, and perform back-to-back accumulation into the output register.
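The redundant format mentioned in the abstract is commonly realized as carry-save form, where partial sums are kept as a (sum, carry) pair so each accumulation needs only bitwise logic, with a single carry-propagating add at the end. The sketch below illustrates that general technique under an assumed 32-bit datapath; it is not the patent's exact multi-stage add structure.

```python
# Hypothetical sketch: accumulating in a redundant (carry-save) format to
# allow back-to-back accumulation without a full carry chain per operand.
MASK = (1 << 32) - 1          # 32-bit datapath (assumed)

def csa_step(s, c, operand):
    """3:2 compress: fold one operand into the redundant (sum, carry) pair
    using only bitwise logic, i.e. no carry propagation."""
    new_s = s ^ c ^ operand
    new_c = ((s & c) | (s & operand) | (c & operand)) << 1
    return new_s & MASK, new_c & MASK

def accumulate(operands):
    s, c = 0, 0
    for op in operands:                  # one operand per cycle, back to back
        s, c = csa_step(s, c, op & MASK)
    return (s + c) & MASK                # single carry-propagate add at the end

vals = [3, 10, 250, 7, 99]
assert accumulate(vals) == sum(vals)
```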
-
Publication Number: US20220179658A1
Publication Date: 2022-06-09
Application Number: US17674503
Application Date: 2022-02-17
Applicant: Arm Limited
Inventor: Matthew Mattina , Shidhartha Das , Glen Arnold Rosendale , Fernando Garcia Redondo
Abstract: A method and apparatus for performing refactored multiply-and-accumulate operations is provided. A summing array includes a plurality of non-volatile memory elements arranged in columns. Each non-volatile memory element in the summing array is programmed to a high resistance state or a low resistance state based on weights of a neural network. The summing array is configured to generate a summed signal for each column based, at least in part, on a plurality of input signals. A multiplying array is coupled to the summing array, and includes a plurality of non-volatile memory elements. Each non-volatile memory element in the multiplying array is programmed to a different conductance level based on the weights of the neural network. The multiplying array is configured to generate an output signal based, at least in part, on the summed signals from the summing array.
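A numeric sketch of the refactoring the abstract describes: binary (high/low resistance) elements in the summing array first group the inputs sharing a weight level, and the multiplying array then scales each group's summed signal by that level's conductance. The grouping of weights into a small set of discrete levels is an assumption for illustration.

```python
# Hypothetical sketch: refactored MAC = binary summing array (HRS/LRS
# selection per column) followed by a multi-level multiplying array.
import numpy as np

levels = np.array([0.25, 0.5, 1.0])               # conductance levels (assumed)
weights = np.array([0.5, 1.0, 0.25, 0.5, 1.0])    # each weight equals some level
inputs = np.array([0.1, 0.4, 0.2, 0.3, 0.5])

# Summing array: one column per level; an element is low-resistance (1) when
# the row's weight matches the column's level, high-resistance (0) otherwise.
select = (weights[:, None] == levels[None, :]).astype(float)
summed = inputs @ select                           # summed signal per column

# Multiplying array: scale each column's summed signal by its conductance.
output = summed @ levels
assert np.isclose(output, inputs @ weights)
```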
-
Publication Number: US20210097130A1
Publication Date: 2021-04-01
Application Number: US16585265
Application Date: 2019-09-27
Applicant: Arm Limited
Inventor: Zhi-Gang Liu , Matthew Mattina , Paul Nicholas Whatmough
Abstract: The present disclosure advantageously provides a system and method for efficiently multiplying matrices with elements that have a value of 0. A bitmap is generated for each matrix. Each bitmap includes a bit position for each matrix element. The value of each bit is set to 0 when the value of the corresponding matrix element is 0, and to 1 when the value of the corresponding matrix element is not 0. Each matrix is compressed into a compressed matrix, which will have fewer elements with a value of 0 than the original matrix. Each bitmap is then adjusted based on the corresponding compressed matrix. The compressed matrices are then multiplied to generate an output matrix. For each element i,j in the output matrix, a dot product of the ith row of the first compressed matrix and the jth column of the second compressed matrix is calculated based on the bitmaps.
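A minimal sketch of the bitmap scheme the abstract describes: bitmaps mark non-zero positions, the matrices are stored compressed (zeros removed), and each dot product multiplies only at positions where both bitmaps have a 1. Dense numpy storage of the bitmaps is used here purely for readability.

```python
# Hypothetical sketch: bitmap-guided multiplication of compressed matrices.
import numpy as np

def compress_rows(m):
    """Bitmap plus row-compressed values (zeros removed)."""
    return (m != 0), [row[row != 0] for row in m]

def sparse_matmul(a, b):
    bm_a, rows_a = compress_rows(a)
    bm_bt, cols_b = compress_rows(b.T)        # compress B by columns
    out = np.zeros((a.shape[0], b.shape[1]))
    for i in range(a.shape[0]):
        for j in range(b.shape[1]):
            acc, ka, kb = 0.0, 0, 0
            for p in range(a.shape[1]):       # walk both bitmaps together
                if bm_a[i, p] and bm_bt[j, p]:
                    acc += rows_a[i][ka] * cols_b[j][kb]
                ka += bm_a[i, p]              # advance compressed indices
                kb += bm_bt[j, p]
            out[i, j] = acc
    return out

a = np.array([[1.0, 0.0, 2.0], [0.0, 0.0, 3.0]])
b = np.array([[0.0, 4.0], [5.0, 0.0], [6.0, 7.0]])
assert np.allclose(sparse_matmul(a, b), a @ b)
```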
-
Publication Number: US20210089888A1
Publication Date: 2021-03-25
Application Number: US16836110
Application Date: 2020-03-31
Applicant: Arm Limited
Inventor: Dibakar Gope , Jesse Garrett Beu , Paul Nicholas Whatmough , Matthew Mattina
Abstract: The present disclosure advantageously provides a system including a memory, a processor, and circuitry to execute one or more mixed-precision layers of an artificial neural network (ANN), each mixed-precision layer including high-precision weight filters and low-precision weight filters. The circuitry is configured to perform one or more calculations on an input feature map having a plurality of input channels (cin) using the high-precision weight filters to create a high-precision output feature map having a first number of output channels (k), perform one or more calculations on the input feature map using the low-precision weight filters to create a low-precision output feature map having a second number of output channels (cout−k), and concatenate the high-precision output feature map and the low-precision output feature map to create a unified output feature map having a plurality of output channels (cout).
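A compact sketch of such a mixed-precision layer: k output channels come from full-precision filters, the remaining cout − k from quantized filters, and the two maps are concatenated. The 4-bit quantizer and the use of 1x1 convolutions (so the example stays a plain matrix product) are illustrative assumptions.

```python
# Hypothetical sketch: a mixed-precision layer producing k high-precision
# channels and cout - k low-precision channels, then concatenating them.
import numpy as np

cin, cout, k = 8, 6, 2
rng = np.random.default_rng(2)
x = rng.standard_normal((cin, 16, 16))                 # input feature map

w_hi = rng.standard_normal((k, cin))                   # high-precision filters
w_lo = rng.standard_normal((cout - k, cin))

def quantize4(w):
    scale = np.abs(w).max() / 7                        # symmetric 4-bit range
    return np.round(w / scale).clip(-8, 7) * scale

y_hi = np.einsum('oc,chw->ohw', w_hi, x)               # k full-precision channels
y_lo = np.einsum('oc,chw->ohw', quantize4(w_lo), x)    # cout - k cheap channels
y = np.concatenate([y_hi, y_lo], axis=0)               # unified output map
print(y.shape)                                         # (cout, 16, 16)
```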