SNOOP FILTER FOR CACHE COHERENCY IN A DATA PROCESSING SYSTEM

    Publication No.: US20190079868A1

    Publication Date: 2019-03-14

    Application No.: US16189070

    Filing Date: 2018-11-13

    Applicant: Arm Limited

    Abstract: A data processing system having two or more processors that access a shared data resource, and a method of operation thereof. Data stored in a local cache is marked as being in a ‘UniqueDirty’, ‘SharedDirty’, ‘UniqueClean’, ‘SharedClean’ or ‘Invalid’ state. A snoop filter monitors access by the processors to the shared data resource, and includes snoop filter control logic and a snoop filter cache configured to maintain cache coherency. The snoop filter cache does not identify any local cache that stores the block of data in a ‘SharedDirty’ state, resulting in a smaller snoop filter cache size and simpler snoop control logic. The data processing system may be defined by instructions of a Hardware Description Language.
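
    The abstract describes a mechanism rather than code; purely as a hypothetical illustration (not Arm's implementation), the C fragment below shows how a snoop filter entry could track which local caches may hold a block without recording which sharer, if any, holds it in the ‘SharedDirty’ state. All type, field and function names are invented for this sketch.

        #include <stdint.h>
        #include <stdbool.h>

        /* Hypothetical line states named after the abstract's terms. */
        enum line_state {
            STATE_INVALID,
            STATE_UNIQUE_CLEAN,
            STATE_UNIQUE_DIRTY,
            STATE_SHARED_CLEAN,
            STATE_SHARED_DIRTY
        };

        /* Illustrative snoop filter entry: one presence bit per local cache,
         * but no field naming a 'SharedDirty' owner. Omitting that owner
         * field is what keeps the entry small. */
        #define MAX_CACHES 8            /* matches the width of 'presence' */

        struct snoop_filter_entry {
            uint64_t tag;               /* address tag of the tracked block   */
            uint8_t  presence;          /* bit i set: cache i may hold a copy */
            bool     held_unique;       /* block held Unique by one cache     */
        };

        /* On a coherent access, snoop every cache whose presence bit is set;
         * because no single dirty owner is identified, the control logic
         * never has to pick one responder. */
        static uint8_t caches_to_snoop(const struct snoop_filter_entry *e)
        {
            return e->presence;
        }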

    PERIPHERAL COMPONENT HANDLING OF MEMORY READ REQUESTS

    Publication No.: US20230267081A1

    Publication Date: 2023-08-24

    Application No.: US17678174

    Filing Date: 2022-02-23

    Applicant: Arm Limited

    Abstract: Peripheral components, data processing systems and methods of operating such peripheral components and data processing systems are disclosed. The systems comprise an interconnect comprising a system cache, a peripheral component coupled to the interconnect, and a memory coupled to the interconnect. The peripheral component has a memory access request queue for queuing memory access requests in a receipt order. Memory access requests are issued to the interconnect in the receipt order. A memory read request is not issued to the interconnect until a completion response for all older memory write requests has been received from the interconnect. The peripheral component is responsive to receipt of a memory read request to issue a memory read prefetch request comprising a physical address to the interconnect and the interconnect is responsive to the memory read prefetch request to cause data associated with the physical address in the memory to be cached in the system cache.
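        
    Purely as an illustration of the ordering rule in the abstract (with hypothetical names and a hypothetical interconnect hook, not Arm's interface), the sketch below issues a read prefetch as soon as a read is received, but holds the read itself back until completion responses have arrived for all older writes in the receipt-order queue.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stddef.h>

        /* Hypothetical queued request; field names are illustrative only. */
        struct mem_request {
            bool     is_write;
            bool     completed;     /* write completion seen from interconnect */
            uint64_t phys_addr;
        };

        /* Hypothetical hook standing in for the interconnect interface. */
        void issue_prefetch_to_interconnect(uint64_t phys_addr);

        /* On receipt of a read, immediately send a read prefetch carrying the
         * physical address so the interconnect can pull the line into the
         * system cache while the read waits its turn in the queue. */
        void on_read_received(const struct mem_request *rd)
        {
            issue_prefetch_to_interconnect(rd->phys_addr);
        }

        /* The read at position idx may be issued to the interconnect only when
         * every older write in the queue has received its completion. */
        bool read_may_issue(const struct mem_request q[], size_t idx)
        {
            for (size_t i = 0; i < idx; i++) {
                if (q[i].is_write && !q[i].completed)
                    return false;
            }
            return true;
        }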

    DATA ELISION

    Publication No.: US20230236992A1

    Publication Date: 2023-07-27

    Application No.: US17580920

    Filing Date: 2022-01-21

    Applicant: Arm Limited

    CPC classification number: G06F13/16

    Abstract: In response to determining circuitry determining that a portion of data to be sent to a recipient over an interconnect has a predetermined value, data sending circuitry performs data elision to: omit sending at least one data FLIT corresponding to the portion of data having the predetermined value; and send a data-elision-specifying FLIT specifying data-elision information indicating to the recipient that sending of the at least one data FLIT has been omitted and that the recipient can proceed assuming the portion of data has the predetermined value. The data-elision-specifying FLIT is a FLIT other than a write request FLIT for initiating a memory write transaction sequence. This helps to conserve data FLIT bandwidth for other data not having the predetermined value.
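
    As a loose sketch of the elision decision only (assuming, for illustration, a 32-byte portion, an all-zero predetermined value, and invented FLIT structures and send hooks that are not taken from the patent), the fragment below omits the data FLIT when the portion matches the predetermined value and sends a data-elision-specifying FLIT instead.

        #include <stdint.h>
        #include <string.h>

        #define PORTION_BYTES 32        /* hypothetical portion size */

        struct data_flit         { uint8_t payload[PORTION_BYTES]; };
        struct elision_info_flit { uint8_t elided_portion_mask;    };

        /* Hypothetical send hooks for the two FLIT kinds. */
        void send_data_flit(const struct data_flit *f);
        void send_elision_flit(const struct elision_info_flit *f);

        /* If the portion equals the predetermined value, skip its data FLIT and
         * send a (non-write-request) FLIT telling the recipient to proceed as
         * if the portion had that value; otherwise send the data normally. */
        void send_portion(const uint8_t portion[PORTION_BYTES], unsigned idx)
        {
            static const uint8_t predetermined[PORTION_BYTES]; /* all zeros */

            if (memcmp(portion, predetermined, PORTION_BYTES) == 0) {
                struct elision_info_flit e = { .elided_portion_mask = 1u << idx };
                send_elision_flit(&e);
            } else {
                struct data_flit d;
                memcpy(d.payload, portion, PORTION_BYTES);
                send_data_flit(&d);
            }
        }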

    CIRCUITRY AND METHOD

    Publication No.: US20210306414A1

    Publication Date: 2021-09-30

    Application No.: US16828207

    Filing Date: 2020-03-24

    Applicant: Arm Limited

    Abstract: Circuitry comprises a set of data handling nodes comprising: two or more master nodes each having respective storage circuitry to hold copies of data items from a main memory, each copy of a data item being associated with indicator information to indicate a coherency state of the respective copy, the indicator information being configured to indicate at least whether that copy has been updated more recently than the data item held by the main memory; a home node to serialise data access operations and to control coherency amongst data items held by the set of data handling nodes so that data written to a memory address is consistent with data read from that memory address in response to a subsequent access request; and one or more slave nodes including the main memory; in which: a requesting node of the set of data handling nodes is configured to communicate a conditional request to a target node of the set of data handling nodes in respect of a copy of a given data item at a given memory address, the conditional request being associated with an execution condition and being a request that the copy of the given data item is written to a destination node of the data handling nodes; and the target node is configured, in response to the conditional request: (i) when the outcome of the execution condition is successful, to write the data item to the destination node and to communicate a completion-success indicator to the requesting node; and (ii) when the outcome of the execution condition is a failure, to communicate a completion-failure indicator to the requesting node.
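
    The core of the abstract is the target node's two-way response to a conditional request. A compact, hypothetical rendering of that behaviour is sketched below in C; the message hooks and node identifiers are invented for illustration and do not reflect the actual protocol.

        #include <stdbool.h>
        #include <stdint.h>

        /* Illustrative completion outcomes. */
        enum completion { COMPLETION_SUCCESS, COMPLETION_FAILURE };

        /* Hypothetical hooks standing in for interconnect messages. */
        bool evaluate_execution_condition(uint64_t addr);
        void write_copy_to_destination(uint64_t addr, int destination_node);
        void send_completion(int requesting_node, enum completion c);

        /* Target-node behaviour: write the copy of the given data item to the
         * destination node and report success only if the execution condition
         * holds; otherwise report failure without writing. */
        void handle_conditional_request(uint64_t addr,
                                        int requesting_node,
                                        int destination_node)
        {
            if (evaluate_execution_condition(addr)) {
                write_copy_to_destination(addr, destination_node);
                send_completion(requesting_node, COMPLETION_SUCCESS);
            } else {
                send_completion(requesting_node, COMPLETION_FAILURE);
            }
        }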

    WRITING ZERO DATA

    Publication No.: US20210103460A1

    Publication Date: 2021-04-08

    Application No.: US16592979

    Filing Date: 2019-10-04

    Applicant: Arm Limited

    Abstract: Apparatuses, methods of operating apparatuses, interconnects for connecting apparatuses to one another, and methods of operating the interconnects are disclosed. A master apparatus can issue an individual all-zero-data write transaction specifying a data storage location to the interconnect, which conveys the individual all-zero-data write transaction to a target device which writes all-zero-data at the data storage location. No write data is conveyed with the individual all-zero-data write transaction, so that the individual all-zero-data write transaction may be used to clear the data storage location without adding to congestion of a write data channel in the interconnect.
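
    To make the bandwidth point concrete, the sketch below (a hypothetical request encoding and hook, not Arm's interface) clears a region with a single all-zero-data write request that carries only an address and length, so nothing travels on the write data channel.

        #include <stdint.h>

        /* Hypothetical request encoding; fields are illustrative only. */
        struct write_request {
            uint64_t addr;
            uint32_t len_bytes;
            int      is_zero_write;  /* 1: target writes zeros, no payload sent */
        };

        void send_request(const struct write_request *r);  /* hypothetical hook */

        /* Clear a data storage location by issuing an individual all-zero-data
         * write transaction rather than streaming zero payload beats. */
        void clear_region(uint64_t addr, uint32_t len_bytes)
        {
            struct write_request r = { .addr = addr,
                                       .len_bytes = len_bytes,
                                       .is_zero_write = 1 };
            send_request(&r);
        }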

    SYSTEM, METHOD AND APPARATUS FOR EXECUTING INSTRUCTIONS

    Publication No.: US20200057640A1

    Publication Date: 2020-02-20

    Application No.: US16103995

    Filing Date: 2018-08-16

    Applicant: Arm Limited

    Abstract: A system, apparatus and method for ordering a sequence of processing transactions. The method includes accessing, from a memory, a program sequence of operations that are to be executed. Instructions are received, some of them having an identifier, or mnemonic, that is used to distinguish those identified operations from other operations that do not have an identifier, or mnemonic. The mnemonic indicates a distribution of the execution of the program sequence of operations. The program sequence of operations is grouped based on the mnemonic such that certain operations are separated from other operations.
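
    The grouping step can be pictured with a short, hypothetical C fragment (the operation record and its fields are invented for illustration and are not the patent's representation): operations carrying the identifier, or mnemonic, are separated from those that do not carry it, preserving program order within each group.

        #include <stdbool.h>
        #include <stddef.h>

        /* Hypothetical operation record. */
        struct operation {
            int  opcode;
            bool has_mnemonic;   /* identifier indicating distributed execution */
        };

        /* Partition the program sequence: operations tagged with the mnemonic go
         * to 'tagged', the rest to 'untagged', keeping program order within each
         * group. Returns the number of tagged operations. */
        size_t group_by_mnemonic(const struct operation *seq, size_t n,
                                 struct operation *tagged,
                                 struct operation *untagged)
        {
            size_t nt = 0, nu = 0;
            for (size_t i = 0; i < n; i++) {
                if (seq[i].has_mnemonic)
                    tagged[nt++] = seq[i];
                else
                    untagged[nu++] = seq[i];
            }
            return nt;
        }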
