METHODS AND APPARATUS FOR COMMUNICATING BETWEEN NODE DEVICES

    Publication No.: US20230029897A1

    Publication Date: 2023-02-02

    Application No.: US17380112

    Filing Date: 2021-07-20

    Applicant: Arm Limited

    Abstract: Aspects of the present disclosure relate to an interconnect comprising interfaces to communicate with respective requester and receiver node devices, and home nodes. Each home node is configured to: receive requests from one or more requester nodes, each request comprising a target address corresponding to a target receiver node; and transmit each said request to the corresponding target receiver node. Mapping circuitry is configured to: associate each of the home nodes with a given home node cluster; perform a first hashing of the target address of a given request, to determine a target cluster; perform a second hashing of the target address, to determine a target home node within said target cluster; and direct the given request to the target home node.
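
    A minimal Python sketch of the two-stage hashing described in the abstract. The fold hash, cluster count, and nodes-per-cluster values are assumptions for illustration only; the abstract does not specify the hash functions or the topology.

        # Illustrative two-stage home-node selection; the hash and sizes are invented.
        NUM_CLUSTERS = 4           # assumed number of home node clusters
        NODES_PER_CLUSTER = 8      # assumed home nodes per cluster

        def fold_hash(value: int, bits: int) -> int:
            """XOR-fold a value down to 'bits' bits (stand-in for an unspecified hash)."""
            mask = (1 << bits) - 1
            result = 0
            while value:
                result ^= value & mask
                value >>= bits
            return result

        def select_home_node(target_address: int) -> tuple:
            """First hashing picks the target cluster; second hashing picks a node in it."""
            cluster = fold_hash(target_address, 16) % NUM_CLUSTERS
            node = fold_hash(target_address >> 6, 16) % NODES_PER_CLUSTER  # ignore line offset
            return cluster, node

        cluster, node = select_home_node(0x8000_4A3F_1C40)
        print(f"request routed to home node {node} in cluster {cluster}")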

    APPARATUS AND METHOD FOR HANDLING STASH REQUESTS

    Publication No.: US20220327057A1

    Publication Date: 2022-10-13

    Application No.: US17225614

    Filing Date: 2021-04-08

    Applicant: Arm Limited

    Abstract: An apparatus and method for handling stash requests are described. The apparatus has a processing element with an associated storage structure that is used to store data for access by the processing element, and an interface for coupling the processing element to interconnect circuitry. Stash request handling circuitry is also provided that, in response to a stash request targeting the storage structure being received at the interface from the interconnect circuitry, causes a block of data associated with the stash request to be stored within the storage structure. The stash request identifies a given address that needs translating into a corresponding physical address in memory, and also identifies an address space key. Address translation circuitry is used to convert the given address identified by the stash request into the corresponding physical address by performing an address translation that is dependent on the address space key identified by the stash request. The stash request handling circuitry is then responsive to the corresponding physical address determined by the address translation circuitry to cause the block of data to be stored at a location within the storage structure associated with the physical address.
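
    A rough functional model of the stash flow in Python, assuming a per-key page-table dictionary stands in for the address translation circuitry; the table contents, page size, and storage-structure layout are illustrative assumptions, not details from the abstract.

        # Functional model: translate the stash address under its address space key,
        # then place the data block at the location implied by the physical address.
        PAGE_SIZE = 4096

        # Hypothetical per-key translation tables: key -> {virtual page -> physical page}.
        translation_tables = {0x11: {0x400: 0x9A00}}

        # Hypothetical storage structure (e.g. a cache) indexed by physical address.
        storage_structure = {}

        def translate(address: int, address_space_key: int) -> int:
            """Convert the stash address to a physical address via the key-selected table."""
            page, offset = divmod(address, PAGE_SIZE)
            physical_page = translation_tables[address_space_key][page]
            return physical_page * PAGE_SIZE + offset

        def handle_stash_request(address: int, address_space_key: int, block: bytes) -> None:
            """Store the stashed block at the location given by the translated address."""
            physical_address = translate(address, address_space_key)
            storage_structure[physical_address] = block

        handle_stash_request(0x400080, 0x11, b"payload")
        print(hex(next(iter(storage_structure))))   # physical location of the stashed block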

    TECHNIQUE FOR HANDLING PROTOCOL CONVERSION

    Publication No.: US20220283972A1

    Publication Date: 2022-09-08

    Application No.: US17189781

    Filing Date: 2021-03-02

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided for handling protocol conversion. The apparatus has interconnect circuitry for routing messages between components coupled to the interconnect circuitry in a manner that conforms to a first communication protocol. Protocol conversion circuitry is coupled between the interconnect circuitry and an external communication path, for converting messages between the first communication protocol and a second communication protocol that has a layered architecture comprising multiple layers. The protocol conversion circuitry has a gateway component forming one of the components coupled to the interconnect circuitry, and a controller coupled with the gateway component and used to control connection with the external communication path. For a selected layer of the multiple layers, the protocol conversion circuitry provides, within the gateway component, upper selected layer circuitry to implement a first portion of functionality of the selected layer, where the first portion comprises at least protocol dependent functionality of the selected layer. It also provides, within the controller, lower selected layer circuitry to implement a remaining portion of the functionality of the selected layer, the remaining portion comprising only protocol independent functionality of the selected layer.
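
    An illustrative Python split of one selected layer across the two components, assuming the protocol-dependent part is header construction and the protocol-independent part is CRC framing; the actual layer functionality and field layout are not given in the abstract.

        # Illustrative split of one selected layer: protocol-dependent work in the gateway,
        # protocol-independent work in the controller. Framing and fields are invented.
        import zlib

        class UpperSelectedLayer:
            """In the gateway: understands the second protocol's message semantics."""
            def build_packet(self, opcode: int, payload: bytes) -> bytes:
                header = opcode.to_bytes(1, "big") + len(payload).to_bytes(2, "big")
                return header + payload

        class LowerSelectedLayer:
            """In the controller: protocol-independent framing and integrity check."""
            def frame(self, packet: bytes) -> bytes:
                crc = zlib.crc32(packet).to_bytes(4, "big")
                return packet + crc + b"\x7e"   # delimiter closes the frame

        def convert_outbound(opcode: int, payload: bytes) -> bytes:
            """A message leaving the interconnect passes the gateway half, then the controller half."""
            return LowerSelectedLayer().frame(UpperSelectedLayer().build_packet(opcode, payload))

        print(convert_outbound(0x2A, b"data").hex())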

    CACHE RETENTION DATA MANAGEMENT

    Publication No.: US20200174947A1

    Publication Date: 2020-06-04

    Application No.: US16327501

    Filing Date: 2016-10-19

    Applicant: ARM LIMITED

    Abstract: A data processing system (2) incorporates a first exclusive cache memory (8, 10) and a second exclusive cache memory (14). A snoop filter (18) located together with the second exclusive cache memory on one side of the communication interface (12) serves to track entries within the first exclusive cache memory. The snoop filter includes retention data storage circuitry to store retention data for controlling retention of cache entries within the second exclusive cache memory. Retention data transfer circuitry (20) serves to transfer the retention data between the retention data storage circuitry within the snoop filter and the second exclusive cache memory as the cache entries concerned are transferred between the second exclusive cache memory and the first exclusive cache memory.
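
    A toy Python model of the retention-data round trip, assuming the retention data is a single age counter and that the caches and snoop filter can be represented as dictionaries; the real structures and transfer mechanism are hardware and are not specified here.

        # Retention metadata parks in the snoop filter while a line sits in the first
        # (inner) exclusive cache, and is restored when the line returns to the second
        # (outer) exclusive cache. The single-counter retention value is illustrative.
        inner_cache = {}     # address -> data                (first exclusive cache)
        outer_cache = {}     # address -> (data, retention)   (second exclusive cache)
        snoop_filter = {}    # address -> parked retention value

        def move_to_inner(address: int) -> None:
            """Line leaves the outer cache; park its retention data in the snoop filter."""
            data, retention = outer_cache.pop(address)
            snoop_filter[address] = retention
            inner_cache[address] = data

        def evict_to_outer(address: int) -> None:
            """Line returns to the outer cache; restore the parked retention data."""
            data = inner_cache.pop(address)
            outer_cache[address] = (data, snoop_filter.pop(address))

        outer_cache[0x1000] = (b"line", 3)   # retention value 3, e.g. an age counter
        move_to_inner(0x1000)
        evict_to_outer(0x1000)
        print(outer_cache[0x1000])           # -> (b'line', 3): retention survives the trip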

    CACHE MAINTENANCE OPERATIONS IN A DATA PROCESSING SYSTEM

    Publication No.: US20200133865A1

    Publication Date: 2020-04-30

    Application No.: US16173213

    Filing Date: 2018-10-29

    Applicant: Arm Limited

    Abstract: An interconnect system and method of operating the system are disclosed. A master device has access to a cache and a slave device has an associated data storage device for long-term storage of data items. The master device can initiate a cache maintenance operation in the interconnect system with respect to a data item temporarily stored in the cache, causing action to be taken by the slave device with respect to storage of the data item in the data storage device. For long latency operations the master device can issue a separated cache maintenance request specifying the data item and the slave device. In response, an intermediate device signals an acknowledgement response indicating that it has taken on responsibility for completion of the cache maintenance operation and issues the separated cache maintenance request to the slave device. The slave device signals the acknowledgement response to the intermediate device and, on completion of the cache maintenance operation with respect to the data item stored in the data storage device, signals a completion response to the master device.
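
    An event-ordering sketch of the separated flow in Python. The message names and the single-function call chain are simplifications; the point is only who acknowledges whom and when completion reaches the master.

        # Separated cache maintenance: the intermediate device accepts responsibility up
        # front, and the slave later signals completion directly to the master.
        log = []

        def master_issue_separated_cmo(address: int) -> None:
            log.append(f"master -> intermediate: separated CMO for {hex(address)}")
            intermediate_handle(address)

        def intermediate_handle(address: int) -> None:
            # Acknowledge immediately, taking on responsibility for completion.
            log.append("intermediate -> master: acknowledgement (responsibility accepted)")
            log.append(f"intermediate -> slave: separated CMO for {hex(address)}")
            slave_handle(address)

        def slave_handle(address: int) -> None:
            log.append("slave -> intermediate: acknowledgement")
            # ... long-latency write-back of the data item to the storage device ...
            log.append("slave -> master: completion response")

        master_issue_separated_cmo(0x2000)
        print("\n".join(log))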

    HIGH-PERFORMANCE STREAMING OF ORDERED WRITE STASHES TO ENABLE OPTIMIZED DATA SHARING BETWEEN I/O MASTERS AND CPUS

    Publication No.: US20190340147A1

    Publication Date: 2019-11-07

    Application No.: US16027490

    Filing Date: 2018-07-05

    Applicant: Arm Limited

    Abstract: A data processing network and method of operation thereof are provided for efficient transfer of ordered data from a Request Node to a target node. The Request Node sends write requests to a Home Node and the Home Node responds to a first write request when resources have been allocated at the Home Node. The Request Node then sends the data to be written. The Home Node also responds with a completion message when a coherency action has been performed at the Home Node. The Request Node acknowledges receipt of the completion message with a completion acknowledgement message that is not sent until completion messages have been received for all write requests older than the first write request for the ordered data, thereby maintaining data order. Following receipt of the completion acknowledgement for the first write request, the Home Node sends the data to be written to the target node.
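
    A Python sketch of the ordering rule on the Request Node side: the completion acknowledgement for a write is released only once completions for all older writes in the stream have arrived. The message and identifier names are simplifications.

        # Ordered write stream: acknowledgements are released strictly in issue order.
        from collections import OrderedDict

        pending = OrderedDict()   # write id -> completion received?
        sent_acks = []

        def issue_write(write_id: int) -> None:
            pending[write_id] = False

        def receive_completion(write_id: int) -> None:
            pending[write_id] = True
            release_acks()

        def release_acks() -> None:
            # Walk in issue order; stop at the first write still awaiting completion.
            for write_id, done in list(pending.items()):
                if not done:
                    break
                sent_acks.append(write_id)   # completion acknowledgement may now be sent
                del pending[write_id]

        for wid in (1, 2, 3):
            issue_write(wid)
        receive_completion(2)   # a younger write completes first: nothing released yet
        receive_completion(1)   # now writes 1 and 2 can both be acknowledged, in order
        print(sent_acks)        # -> [1, 2]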

    READ TRANSACTION TRACKER LIFETIMES IN A COHERENT INTERCONNECT SYSTEM

    Publication No.: US20180225206A1

    Publication Date: 2018-08-09

    Application No.: US15427435

    Filing Date: 2017-02-08

    Applicant: ARM Limited

    Abstract: Apparatus, and a corresponding method of operating the apparatus, in a coherent interconnect system comprising a requesting master device and a data-storing slave device are provided. The apparatus maintains records of coherency protocol transactions received from the requesting master device whilst completion of the coherency protocol transactions is pending, and is responsive to reception of a read transaction from the requesting master device for a data item stored in the data-storing slave device to issue a direct memory transfer request to the data-storing slave device. A read acknowledgement trigger is added to the direct memory transfer request and, in response to reception of a read acknowledgement signal from the data-storing slave device, the record created by reception of the read transaction is updated to reflect completion of the direct memory transfer request. The lifetime for which the apparatus needs to maintain the record is thus reduced, despite the read transaction being satisfied by a direct memory transfer. A corresponding data-storing slave device and method of operating the data-storing slave device are also provided.
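
    A Python sketch of the tracker-lifetime idea, assuming simple dictionaries for the transaction records and message passing by function call; the direct data transfer from the slave to the requesting master is not modelled.

        # The record for a read is created on reception and retired when the slave's read
        # acknowledgement (requested via a trigger on the direct memory transfer) returns.
        records = {}   # transaction id -> state

        def receive_read(txn_id: int, address: int) -> dict:
            records[txn_id] = "awaiting-read-ack"
            # The direct memory transfer request carries a read acknowledgement trigger.
            return {"type": "DMT", "txn": txn_id, "addr": address, "read_ack_trigger": True}

        def slave_process(dmt: dict) -> dict:
            # The slave sends the data straight to the requesting master (not modelled) and,
            # because the trigger is set, returns a read acknowledgement to the apparatus.
            assert dmt["read_ack_trigger"]
            return {"type": "ReadAck", "txn": dmt["txn"]}

        def receive_read_ack(ack: dict) -> None:
            # The record's lifetime ends here, not at the requesting master's response.
            del records[ack["txn"]]

        receive_read_ack(slave_process(receive_read(7, 0x3000)))
        print(records)   # -> {}: the tracker entry is freed early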

    PROGRESSIVE FINE TO COARSE GRAIN SNOOP FILTER

    Publication No.: US20180004663A1

    Publication Date: 2018-01-04

    Application No.: US15196266

    Filing Date: 2016-06-29

    Applicant: ARM Limited

    Abstract: A data processing system includes a snoop filter organized as a number of lines, each storing an address tag associated with the address of data stored in one or more caches of the system, a coherency state of the data, and presence data. A snoop controller sends snoop messages in response to data access requests. The presence data is configurable in a first format, in which the value of a bit in the presence data is indicative of a subset of the nodes for which at least one node in the subset has a copy of the data in its local cache, and in a second format, in which the presence data comprises a unique identifier of a node having a copy of the data in its local cache. The snoop controller sends snoop messages to the nodes indicated by the presence data.
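
    A Python sketch of how the two presence-data formats could drive snoop targeting, assuming a fixed group size for the coarse bit-vector format; the grouping and encoding are illustrative, not taken from the abstract.

        # Coarse format: each presence bit covers a group of nodes, any of which may hold
        # the line. Precise format: the presence data is a single node identifier.
        NODES_PER_GROUP = 4

        def snoop_targets_coarse(presence_bits: int, num_nodes: int) -> list:
            """Snoop every node in every group whose presence bit is set."""
            targets = []
            for node in range(num_nodes):
                if presence_bits & (1 << (node // NODES_PER_GROUP)):
                    targets.append(node)
            return targets

        def snoop_targets_precise(node_id: int) -> list:
            """Snoop exactly the one node named by the presence data."""
            return [node_id]

        print(snoop_targets_coarse(0b10, num_nodes=16))   # -> [4, 5, 6, 7]
        print(snoop_targets_precise(9))                   # -> [9]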
