DATA PROCESSING SYSTEMS
    Invention Application

    Publication No.: US20250138897A1

    Publication Date: 2025-05-01

    Application No.: US18499014

    Filing Date: 2023-10-31

    Applicant: Arm Limited

    Abstract: In a data processing system, a command stream is provided to a processing resource to cause it to perform a processing task for an application executing on a host processor. The command stream comprises a sequence of commands for execution by the processing resource to perform the processing operations for the task, together with one or more data save indicators that indicate data to be saved. In response to the processing resource receiving a request to suspend processing of the task, the data indicated by one of the one or more data save indicators in the command stream is stored in memory.
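
    A minimal sketch of the scheme described in this abstract, using assumed Python names (Command, DataSaveIndicator, ProcessingResource and their fields are illustrative, not Arm's implementation): processing commands are interleaved with data save indicators, and when a suspend request arrives the processing resource stores only the data named by the next indicator it reaches.

        # Illustrative sketch only; all names and fields are assumptions.
        from dataclasses import dataclass, field
        from typing import Callable


        @dataclass
        class Command:
            """A single processing command for the processing resource."""
            run: Callable[[], None]


        @dataclass
        class DataSaveIndicator:
            """Names the data to be saved if the task is suspended at this point."""
            data_keys: list


        @dataclass
        class ProcessingResource:
            state: dict = field(default_factory=dict)   # working data for the task
            memory: dict = field(default_factory=dict)  # backing store for saved data
            suspend_requested: bool = False

            def execute(self, command_stream):
                for entry in command_stream:
                    if isinstance(entry, Command):
                        entry.run()                     # perform a processing operation
                    elif isinstance(entry, DataSaveIndicator) and self.suspend_requested:
                        # Suspend: save only the data indicated by the save indicator.
                        self.memory.update(
                            {k: self.state[k] for k in entry.data_keys if k in self.state})
                        return "suspended"
                return "completed"


        # Usage: the host-side driver would normally build the command stream.
        resource = ProcessingResource()
        stream = [
            Command(lambda: resource.state.update(partial_sum=42)),
            DataSaveIndicator(data_keys=["partial_sum"]),
            Command(lambda: resource.state.update(result=84)),
        ]
        resource.suspend_requested = True
        print(resource.execute(stream), resource.memory)  # suspended {'partial_sum': 42}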

    Specializing neural networks for heterogeneous systems

    Publication No.: US11620516B2

    Publication Date: 2023-04-04

    Application No.: US16724849

    Filing Date: 2019-12-23

    Applicant: Arm Limited

    Abstract: The present disclosure advantageously provides a heterogeneous system, and a method for generating an artificial neural network (ANN) for a heterogeneous system. The heterogeneous system includes a plurality of processing units coupled to a memory configured to store an input volume. The plurality of processing units includes first and second processing units. The first processing unit includes a first processor and is configured to execute a first ANN, and the second processing unit includes a second processor and is configured to execute a second ANN. The first and second ANNs each include an input layer, at least one processor-optimized hidden layer and an output layer. The second ANN hidden layers are different from the first ANN hidden layers.
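
    As an illustration of the idea (an assumption-laden sketch, not the patented method), the fragment below uses PyTorch to build two networks that share the same input and output shapes while their hidden layers are specialized per processing unit; build_specialized_ann and the particular layer widths are hypothetical choices.

        # Illustrative sketch: per-processor hidden layers behind common input/output shapes.
        import torch
        import torch.nn as nn


        def build_specialized_ann(hidden_sizes, in_features=128, out_features=10):
            """Build an ANN whose hidden layers are chosen for a particular processor."""
            layers, prev = [], in_features
            for width in hidden_sizes:                    # processor-optimized hidden layers
                layers += [nn.Linear(prev, width), nn.ReLU()]
                prev = width
            layers.append(nn.Linear(prev, out_features))  # common output layer
            return nn.Sequential(*layers)


        # First processing unit: e.g. a CPU favouring fewer, wider layers.
        ann_cpu = build_specialized_ann(hidden_sizes=[256])
        # Second processing unit: e.g. an accelerator favouring a deeper, narrower stack.
        ann_npu = build_specialized_ann(hidden_sizes=[64, 64, 64])

        x = torch.randn(1, 128)                    # the shared input volume from memory
        print(ann_cpu(x).shape, ann_npu(x).shape)  # both produce the same output shape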

    Methods and apparatus for context switching

    Publication No.: US12288091B2

    Publication Date: 2025-04-29

    Application No.: US17474568

    Filing Date: 2021-09-14

    Applicant: Arm Limited

    Abstract: Aspects of the present disclosure relate to an apparatus comprising execution circuitry, which comprises at least one execution unit to execute program instructions, and control circuitry. The control circuitry receives a stream of processing instructions and issues each received instruction to one of said at least one execution unit. Responsive to determining that a first type of context switch is to be performed from an initial context to a new context, issuing continues until a pre-emption point in the stream of processing instructions is reached. Responsive to reaching the pre-emption point, state information is stored and execution switches to the new context. Responsive to determining that a context switch is to be performed to return from the new context to the initial context, the processing status is restored from the state information, and the stream of processing instructions is continued.
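
    A minimal sketch, under assumed names (Instruction, ControlCircuitry and is_preemption_point are hypothetical, not the claimed design), of the control flow described above: instructions keep issuing until a pre-emption point is reached, state information is then saved and the new context entered, and returning restores the saved state and resumes the original stream where it left off.

        # Illustrative sketch of pre-emption-point context switching.
        from dataclasses import dataclass, field
        from typing import Optional


        @dataclass
        class Instruction:
            op: str
            is_preemption_point: bool = False


        @dataclass
        class ControlCircuitry:
            registers: dict = field(default_factory=dict)
            saved_state: Optional[dict] = None
            saved_position: int = 0

            def run(self, stream, switch_requested=False):
                i = 0
                while i < len(stream):
                    insn = stream[i]
                    self.registers["last_op"] = insn.op   # issue to an execution unit
                    i += 1
                    if switch_requested and insn.is_preemption_point:
                        # First type of context switch: only honoured at a pre-emption
                        # point, so issuing continues up to this instruction first.
                        self.saved_state = dict(self.registers)
                        self.saved_position = i
                        return "switched_to_new_context"
                return "stream_complete"

            def restore_and_resume(self, stream):
                # Return from the new context: restore processing status and continue.
                self.registers = dict(self.saved_state)
                return self.run(stream[self.saved_position:])


        ctrl = ControlCircuitry()
        stream = [Instruction("add"),
                  Instruction("mul", is_preemption_point=True),
                  Instruction("store")]
        print(ctrl.run(stream, switch_requested=True))   # switched_to_new_context
        print(ctrl.restore_and_resume(stream))           # stream_complete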

    Specializing Neural Networks for Heterogeneous Systems

    Publication No.: US20210192337A1

    Publication Date: 2021-06-24

    Application No.: US16724849

    Filing Date: 2019-12-23

    Applicant: Arm Limited

    Abstract: The present disclosure advantageously provides a heterogeneous system, and a method for generating an artificial neural network (ANN) for a heterogeneous system. The heterogeneous system includes a plurality of processing units coupled to a memory configured to store an input volume. The plurality of processing units includes first and second processing units. The first processing unit includes a first processor and is configured to execute a first ANN, and the second processing unit includes a second processor and is configured to execute a second ANN. The first and second ANNs each include an input layer, at least one processor-optimized hidden layer and an output layer. The second ANN hidden layers are different from the first ANN hidden layers.

    Compression of neural network activation data

    Publication No.: US11948069B2

    Publication Date: 2024-04-02

    Application No.: US16518444

    Filing Date: 2019-07-22

    Applicant: Arm Limited

    CPC classification number: G06N3/063 H03M7/70

    Abstract: A processor arranged to compress neural network activation data comprises an input module for obtaining neural network activation data. The processor also comprises a block creation module arranged to split the neural network activation data into a plurality of blocks, and a metadata generation module for generating metadata associated with at least one of the plurality of blocks. Based on the generated metadata, a selection module selects a compression scheme for each of the plurality of blocks, and a compression module applies the selected compression scheme to the corresponding block to produce compressed neural network activation data. An output module is also provided for outputting the compressed neural network activation data.
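
    A hedged sketch of the pipeline the abstract describes (illustrative functions, not the claimed processor design): activation data is split into blocks, per-block metadata is generated, and a compression scheme is selected per block from that metadata. The zero-fraction metadata, the run-length scheme and the zlib fallback are assumptions chosen for illustration.

        # Illustrative sketch: metadata-driven per-block compression of activations.
        import zlib


        def create_blocks(activations, block_size=64):
            """Split flat activation data into fixed-size blocks."""
            return [activations[i:i + block_size]
                    for i in range(0, len(activations), block_size)]


        def generate_metadata(block):
            """Per-block metadata: here, just the fraction of zero activations."""
            return {"zero_fraction": block.count(0) / len(block)}


        def select_scheme(metadata):
            """Choose a compression scheme for a block from its metadata."""
            return "rle" if metadata["zero_fraction"] > 0.5 else "zlib"


        def compress_block(block, scheme):
            if scheme == "rle":
                # Simple run-length encoding as (value, run_length) pairs.
                out, prev, run = [], block[0], 1
                for v in block[1:]:
                    if v == prev:
                        run += 1
                    else:
                        out.append((prev, run))
                        prev, run = v, 1
                out.append((prev, run))
                return out
            return zlib.compress(bytes(block))


        activations = [0] * 70 + list(range(1, 59))   # mostly-zero activation data
        compressed = []
        for block in create_blocks(activations):
            scheme = select_scheme(generate_metadata(block))
            compressed.append((scheme, compress_block(block, scheme)))
        print([s for s, _ in compressed])             # ['rle', 'zlib']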
