Cache for Storing Coherent and Non-Coherent Data

    Publication Number: US20220382679A1

    Publication Date: 2022-12-01

    Application Number: US17331806

    Application Date: 2021-05-27

    Applicant: Arm Limited

    Abstract: The present disclosure advantageously provides a system cache and a method for storing coherent data and non-coherent data in a system cache. A transaction is received from a source in a system, the transaction including at least a memory address, the source having a location in a coherent domain or a non-coherent domain of the system, the coherent domain including shareable data and the non-coherent domain including non-shareable data. Whether the memory address is stored in a cache line is determined, and, when the memory address is not determined to be stored in a cache line, a cache line is allocated to the transaction including setting a state bit of the allocated cache line based on the source location to indicate whether shareable or non-shareable data is stored in the allocated cache line, and the transaction is processed.
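
    As a rough illustration of the allocation step the abstract describes, the sketch below (not the patented implementation; names such as SystemCache and Domain, and the 64-byte line size, are assumptions) tags each newly allocated line with a state bit derived from whether the requesting source sits in the coherent or non-coherent domain.

```cpp
// Minimal sketch, not the patented implementation: a cache that tags each
// allocated line with a state bit recording whether the data is shareable
// (source in the coherent domain) or non-shareable (non-coherent domain).
#include <cstdint>
#include <iostream>
#include <unordered_map>

enum class Domain { Coherent, NonCoherent };   // location of the transaction source

struct CacheLine {
    uint64_t tag;
    bool shareable;   // state bit: true when the data came from the coherent domain
};

class SystemCache {
public:
    // Return the line holding 'address', allocating one on a miss.
    CacheLine& access(uint64_t address, Domain source) {
        uint64_t tag = address >> 6;                 // 64-byte lines (assumed)
        auto it = lines_.find(tag);
        if (it != lines_.end()) return it->second;   // hit: reuse the existing line
        // Miss: allocate and set the state bit from the source's domain.
        CacheLine line{tag, source == Domain::Coherent};
        return lines_.emplace(tag, line).first->second;
    }

private:
    std::unordered_map<uint64_t, CacheLine> lines_;  // fully associative for simplicity
};

int main() {
    SystemCache cache;
    std::cout << "0x1000 shareable: " << cache.access(0x1000, Domain::Coherent).shareable << "\n";
    std::cout << "0x2000 shareable: " << cache.access(0x2000, Domain::NonCoherent).shareable << "\n";
    return 0;
}
```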

    SNOOP FILTER WITH IMPRECISE ENCODING

    Publication Number: US20220308999A1

    Publication Date: 2022-09-29

    Application Number: US17215435

    Application Date: 2021-03-29

    Applicant: Arm Limited

    Abstract: An apparatus comprises snoop filter storage circuitry to store snoop filter entries corresponding to addresses and comprising sharer information. Control circuitry selects which sharers, among a plurality of sharers capable of holding cached data, should be issued with snoop requests corresponding to a target address, based on the sharer information of the snoop filter entry corresponding to the target address. The control circuitry is capable of setting a given snoop filter entry corresponding to a given address to an imprecise encoding in which the sharer information provides an imprecise description of which sharers hold cached data corresponding to the given address, and the given snoop filter entry comprises at least one sharer count value indicative of a number of sharers holding cached data corresponding to said address.
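
    A hedged sketch of the two entry encodings described above: a precise entry carries a sharer bit-vector, while an imprecise entry carries only an indicative sharer count, so snoops are sent conservatively to every sharer. The 8-sharer system size and all names are illustrative assumptions, not the patented design.

```cpp
// Hedged sketch: precise entries name their sharers exactly; imprecise
// entries carry only a sharer count, so snoops go to all sharers.
#include <bitset>
#include <cstddef>
#include <iostream>
#include <vector>

constexpr std::size_t kNumSharers = 8;

struct SnoopFilterEntry {
    bool imprecise = false;
    std::bitset<kNumSharers> sharers;   // which sharers hold the line (precise only)
    unsigned sharerCount = 0;           // indicative count of sharers (imprecise only)
};

// Select which sharers should receive snoop requests for the target address.
std::vector<std::size_t> selectSnoopTargets(const SnoopFilterEntry& entry) {
    std::vector<std::size_t> targets;
    for (std::size_t i = 0; i < kNumSharers; ++i) {
        // Imprecise: exact sharers are unknown, so snoop everyone; the count
        // only indicates roughly how many copies exist.
        if (entry.imprecise || entry.sharers.test(i)) targets.push_back(i);
    }
    return targets;
}

int main() {
    SnoopFilterEntry precise;
    precise.sharers.set(2);
    precise.sharers.set(5);

    SnoopFilterEntry rough;
    rough.imprecise = true;
    rough.sharerCount = 3;   // "roughly three sharers hold this line"

    std::cout << "precise snoops: " << selectSnoopTargets(precise).size() << "\n";
    std::cout << "imprecise snoops: " << selectSnoopTargets(rough).size() << "\n";
    return 0;
}
```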

    Interconnect resource allocation
    Invention Grant

    Publication Number: US11431649B1

    Publication Date: 2022-08-30

    Application Number: US17214028

    Application Date: 2021-03-26

    Applicant: Arm Limited

    Abstract: The present disclosure advantageously provides a method and system for allocating shared resources for an interconnect. A request is received at a home node from a request node over an interconnect, where the request represents a beginning of a transaction with a resource in communication with the home node, and the request has a traffic class defined by a user-configurable mapping based on one or more transaction attributes. The traffic class of the request is determined. A resource capability for the traffic class is determined based on user configurable traffic class-based resource capability data. Whether a home node transaction table has an available entry for the request is determined based on the resource capability for the traffic class.
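
    The admission check might look roughly like the following, where a configurable attribute-to-class mapping and per-class entry caps stand in for the traffic class-based resource capability data; every name here (Request, HomeNode, trafficClassOf) is hypothetical.

```cpp
// Illustrative sketch, not Arm's implementation: a configurable mapping turns
// transaction attributes into a traffic class, and a per-class entry cap
// bounds how many transaction-table entries that class may occupy.
#include <iostream>
#include <map>

struct Request {
    bool isWrite;
    int priority;   // example transaction attributes feeding the mapping
};

// User-configurable mapping from transaction attributes to a traffic class.
int trafficClassOf(const Request& r) {
    if (r.priority > 5) return 0;    // high-priority class
    return r.isWrite ? 1 : 2;        // ordinary writes vs. reads
}

class HomeNode {
public:
    HomeNode() : capability_{{0, 8}, {1, 4}, {2, 4}} {}   // entries allowed per class

    // Returns true if the transaction table has an available entry for the
    // request's traffic class; otherwise the request would be retried.
    bool tryAllocate(const Request& r) {
        int tc = trafficClassOf(r);
        if (inUse_[tc] >= capability_[tc]) return false;  // class at its capability
        ++inUse_[tc];
        return true;
    }

private:
    std::map<int, int> capability_;   // configured resource capability per class
    std::map<int, int> inUse_;        // current transaction-table occupancy per class
};

int main() {
    HomeNode home;
    Request req{true, 7};
    std::cout << (home.tryAllocate(req) ? "accepted" : "retried") << "\n";
    return 0;
}
```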

    System, method and apparatus for executing instructions

    Publication Number: US11409530B2

    Publication Date: 2022-08-09

    Application Number: US16103995

    Application Date: 2018-08-16

    Applicant: Arm Limited

    Abstract: A system, apparatus and method for ordering a sequence of processing transactions. The method includes accessing, from a memory, a program sequence of operations that are to be executed. Instructions are received, some of them carrying an identifier, or mnemonic, that distinguishes those identified operations from operations that do not carry one. The mnemonic indicates a distribution of the execution of the program sequence of operations. The program sequence of operations is grouped based on the mnemonic such that certain operations are separated from other operations.
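
    A minimal sketch of the grouping step, assuming each operation optionally carries a mnemonic string: operations with the identifier are partitioned from those without. The Operation structure and the mnemonic value are illustrative assumptions.

```cpp
// Minimal sketch: partition a program sequence of operations into those that
// carry a distinguishing mnemonic and those that do not.
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct Operation {
    std::string name;
    std::optional<std::string> mnemonic;   // identifier marking distributed execution
};

int main() {
    std::vector<Operation> program = {
        {"load",  std::nullopt},
        {"mul",   "GRP_A"},
        {"add",   std::nullopt},
        {"store", "GRP_A"},
    };

    std::vector<Operation> identified, others;
    for (const auto& op : program)
        (op.mnemonic ? identified : others).push_back(op);   // group by mnemonic

    std::cout << "identified: " << identified.size()
              << ", others: " << others.size() << "\n";
    return 0;
}
```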

    Apparatus and method for handling ordered transactions

    Publication Number: US11256646B2

    Publication Date: 2022-02-22

    Application Number: US16685082

    Application Date: 2019-11-15

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided for handling ordered transactions. The apparatus has a plurality of completer elements to process transactions, a requester element to issue a sequence of ordered transactions, and an interconnect providing, for each completer element, a communication channel between that completer element and the requester element for transfer of signals between that completer element and the requester element in either direction. A given completer element that is processing a given transaction in the sequence is arranged to issue a response signal to the requester element over its associated communication channel that comprises an ordered channel indication to identify whether the associated communication channel has an ordered channel property. The ordered channel property guarantees that processing of transactions issued by the requester element over the associated communication channel in a given order will be completed by the given completer element in the same given order. The requester element is then responsive to the ordered channel indication to control timing of issuance from the requester element of at least one signal relating to one or more transactions after the given transaction in the sequence. By such an approach, the ordering flow adopted for ordered transactions can be varied by the requester element in dependence on the presence or absence of an ordered channel, whilst enabling interconnect-agnostic requester element designs to be utilised.
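
    The requester's timing decision could be sketched as below, under assumed names (Response, canIssueNext): when the response carries the ordered-channel indication, a signal for a later transaction in the sequence can be issued without waiting for the current transaction to complete. This is a simplification of the flow the abstract describes, not the claimed mechanism itself.

```cpp
// Simplified sketch: the requester only delays signals for later transactions
// when the completer's response lacks the ordered-channel indication.
#include <iostream>

struct Response {
    bool orderedChannel;   // completer guarantees in-order completion on this channel
};

class Requester {
public:
    // May the next transaction in the ordered sequence be issued now, given
    // the response for the current one and whether it has completed?
    bool canIssueNext(const Response& resp, bool currentCompleted) const {
        if (resp.orderedChannel)
            return true;             // ordered channel preserves the issue order
        return currentCompleted;     // otherwise wait for the current transaction
    }
};

int main() {
    Requester rq;
    std::cout << "ordered channel, current still pending:   "
              << rq.canIssueNext({true}, false) << "\n";
    std::cout << "unordered channel, current still pending: "
              << rq.canIssueNext({false}, false) << "\n";
    return 0;
}
```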

    Message protocol for a data processing system

    Publication Number: US11074206B1

    Publication Date: 2021-07-27

    Application Number: US17036225

    Application Date: 2020-09-29

    Applicant: Arm Limited

    Abstract: The present disclosure advantageously provides a method and system for transferring data over at least one interconnect. A request node, coupled to an interconnect, receives a first write burst from a first device over a first connection, divides the first write burst into an ordered sequence of smaller write requests based on the size of the first write burst, and sends the ordered sequence of write requests to a home node coupled to the interconnect. The home node generates an ordered sequence of write transactions based on the ordered sequence of write requests, and sends the ordered sequence of write transactions to a write combiner coupled to the home node. The write combiner combines the ordered sequence of write transactions into a second write burst that is the same size as the first write burst, and sends the second write burst to a second device over a second connection.
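
    One way to picture the split-and-recombine flow, with a 256-byte burst and 64-byte interconnect writes as assumed sizes (splitBurst and combineWrites are illustrative helpers, not the protocol's message names):

```cpp
// Illustrative sketch: a write burst is divided into an ordered sequence of
// smaller writes and reassembled by a write combiner into a burst of the
// original size.
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <iostream>
#include <vector>

using Burst = std::vector<uint8_t>;

// Request node: divide a burst into fixed-size write requests, in order.
std::vector<Burst> splitBurst(const Burst& burst, size_t chunkSize) {
    std::vector<Burst> writes;
    for (size_t off = 0; off < burst.size(); off += chunkSize) {
        size_t end = std::min(off + chunkSize, burst.size());
        writes.emplace_back(burst.begin() + off, burst.begin() + end);
    }
    return writes;
}

// Write combiner: reassemble the ordered sequence into one burst.
Burst combineWrites(const std::vector<Burst>& writes) {
    Burst combined;
    for (const auto& w : writes)
        combined.insert(combined.end(), w.begin(), w.end());
    return combined;
}

int main() {
    Burst original(256, 0xAB);                 // 256-byte burst from the first device
    auto writes = splitBurst(original, 64);    // 64-byte writes over the interconnect
    Burst rebuilt = combineWrites(writes);     // burst delivered to the second device
    assert(rebuilt == original);
    std::cout << "rebuilt burst size: " << rebuilt.size() << "\n";
    return 0;
}
```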

    Apparatus and method for handling cache maintenance operations

    Publication Number: US10970225B1

    Publication Date: 2021-04-06

    Application Number: US16591827

    Application Date: 2019-10-03

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided for handling cache maintenance operations. The apparatus has a plurality of requester elements for issuing requests and at least one completer element for processing such requests. A cache hierarchy is provided having a plurality of levels of cache to store cached copies of data associated with addresses in memory. A requester element may be arranged to issue a cache maintenance operation request specifying a memory address range in order to cause a block of data associated with the specified memory address range to be pushed through at least one level of the cache hierarchy to a determined visibility point in order to make that block of data visible to one or more other requester elements. The given requester element may be arranged to detect when there is a need to issue a write request prior to the cache maintenance operation request in order to cause a write operation to be performed in respect of data within the specified memory address range, and in that event to generate a combined write and cache maintenance operation request to be issued instead of the write request and a subsequent cache maintenance operation request. A recipient completer element that receives the combined write and cache maintenance operation request may then be arranged to initiate processing of the cache maintenance operation required by the combined write and cache maintenance operation request without waiting for the write operation to complete. This can significantly reduce latency in the handling of cache maintenance operations, and can provide for reduced bandwidth utilisation.
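
    A minimal sketch of the requester-side combining decision, under assumed structures (Request, AddressRange, buildCmoRequest): if a pending write overlaps the address range of the cache maintenance operation (CMO), a single combined request replaces the write followed by a separate CMO request.

```cpp
// Minimal sketch: fold a pending write into the CMO request when the write
// targets the address range the CMO will push through the cache hierarchy.
#include <cstdint>
#include <iostream>
#include <optional>

struct AddressRange { uint64_t base, size; };

struct Request {
    enum class Kind { Write, CMO, CombinedWriteCMO } kind;
    AddressRange range;
};

bool overlaps(const AddressRange& a, const AddressRange& b) {
    return a.base < b.base + b.size && b.base < a.base + a.size;
}

// Requester side: issue a combined write-and-CMO request when a write to the
// specified range would otherwise precede the CMO.
Request buildCmoRequest(const AddressRange& cmoRange,
                        const std::optional<Request>& pendingWrite) {
    if (pendingWrite && pendingWrite->kind == Request::Kind::Write &&
        overlaps(pendingWrite->range, cmoRange))
        return {Request::Kind::CombinedWriteCMO, cmoRange};
    return {Request::Kind::CMO, cmoRange};
}

int main() {
    std::optional<Request> pending = Request{Request::Kind::Write, {0x1000, 64}};
    Request r = buildCmoRequest({0x1000, 128}, pending);
    std::cout << "combined request issued: "
              << (r.kind == Request::Kind::CombinedWriteCMO) << "\n";
    return 0;
}
```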

    High-performance streaming of ordered write stashes to enable optimized data sharing between I/O masters and CPUs

    Publication Number: US10452593B1

    Publication Date: 2019-10-22

    Application Number: US16027490

    Application Date: 2018-07-05

    Applicant: Arm Limited

    Abstract: A data processing network and method of operation thereof are provided for efficient transfer of ordered data from a Request Node to a target node. The Request Node sends write requests to a Home Node, and the Home Node responds to a first write request when resources have been allocated at the Home Node. The Request Node then sends the data to be written. The Home Node also responds with a completion message when a coherency action has been performed at the Home Node. The Request Node acknowledges receipt of the completion message with a completion acknowledgement message that is not sent until completion messages have been received for all write requests older than the first write request for the ordered data, thereby maintaining data order. Following receipt of the completion acknowledgement for the first write request, the Home Node sends the data to be written to the target node.
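
    The acknowledgement-ordering rule might be sketched as follows (RequestNode, issueWrite, and onCompletion are illustrative names, not the protocol's): a write's completion acknowledgement is released only once completions have been received for it and for every older write in the ordered stream.

```cpp
// Hedged sketch: hold back each write's completion acknowledgement until
// completions have arrived for all older writes in the ordered stream.
#include <iostream>
#include <vector>

class RequestNode {
public:
    // Register a new write request; returns its position in the ordered stream.
    size_t issueWrite() { completed_.push_back(false); return completed_.size() - 1; }

    // Record a completion from the home node; returns the indices of writes
    // whose acknowledgement may now be sent (all older writes are complete).
    std::vector<size_t> onCompletion(size_t idx) {
        completed_[idx] = true;
        std::vector<size_t> ackable;
        while (nextAck_ < completed_.size() && completed_[nextAck_])
            ackable.push_back(nextAck_++);
        return ackable;
    }

private:
    std::vector<bool> completed_;   // completion received for each issued write
    size_t nextAck_ = 0;            // oldest write not yet acknowledged
};

int main() {
    RequestNode rn;
    size_t w0 = rn.issueWrite(), w1 = rn.issueWrite();
    // w1 completes first, but w0 is older and still pending: nothing ackable yet.
    std::cout << "acks after w1 completes: " << rn.onCompletion(w1).size() << "\n";
    // w0 completes: both w0 and w1 can now be acknowledged, preserving order.
    std::cout << "acks after w0 completes: " << rn.onCompletion(w0).size() << "\n";
    return 0;
}
```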
