TRANSFER PROTOCOL IN A DATA PROCESSING NETWORK

    Publication No.: US20190342034A1

    Publication Date: 2019-11-07

    Application No.: US16027864

    Filing Date: 2018-07-05

    Applicant: Arm Limited

    Abstract: In a data processing network comprising one or more Request Nodes and a Home Node coupled via a coherent interconnect, a Request Node requests data from the Home Node. The requested data is sent, via the interconnect, to the Request Node in a plurality of data beats, where a first data beat of the plurality of data beats is received at a first time and a last data beat is received at a second time. Responsive to receiving the first data beat, the Request Node sends an acknowledgement message to the Home Node. Upon receipt of the acknowledgement message, the Home Node frees resources allocated to the read transaction. In addition, the Home Node is configured to allow snoop requests for the data to be sent to the Request Node before all beats of the requested data have been received by the Request Node.
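
    The early-acknowledgement read flow described above can be sketched as a small software model. The sketch below is a hypothetical Python illustration only: the class names (HomeNode, RequestNode), the tracker dictionary, and the beat-splitting logic are assumptions made for clarity, not details taken from the patent or from any Arm interconnect specification.

        from dataclasses import dataclass, field

        @dataclass
        class HomeNode:
            # One tracker entry is allocated per outstanding read transaction.
            trackers: dict = field(default_factory=dict)

            def handle_read(self, txn_id: int, data: bytes, beat_size: int):
                # Allocate a tracker and split the requested data into beats.
                self.trackers[txn_id] = {"state": "data_sent"}
                return [data[i:i + beat_size] for i in range(0, len(data), beat_size)]

            def handle_ack(self, txn_id: int):
                # On the requester's acknowledgement, free the tracker early,
                # even though later beats may still be in flight.
                del self.trackers[txn_id]

        @dataclass
        class RequestNode:
            received: list = field(default_factory=list)

            def receive_beat(self, beat: bytes, is_first: bool, home: HomeNode, txn_id: int):
                self.received.append(beat)
                if is_first:
                    # Acknowledge on the FIRST beat rather than the last, so the
                    # Home Node can release its resources (and allow snoops to be
                    # forwarded) while the remaining beats are still outstanding.
                    home.handle_ack(txn_id)

        home, rn = HomeNode(), RequestNode()
        beats = home.handle_read(txn_id=1, data=b"0123456789abcdef" * 4, beat_size=16)
        for i, beat in enumerate(beats):
            rn.receive_beat(beat, is_first=(i == 0), home=home, txn_id=1)
        assert 1 not in home.trackers and len(rn.received) == 4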

    APPARATUS AND METHOD FOR HANDLING CACHE MAINTENANCE OPERATIONS

    Publication No.: US20210103525A1

    Publication Date: 2021-04-08

    Application No.: US16591827

    Filing Date: 2019-10-03

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided for handling cache maintenance operations. The apparatus has a plurality of requester elements for issuing requests and at least one completer element for processing such requests. A cache hierarchy is provided having a plurality of levels of cache to store cached copies of data associated with addresses in memory. A requester element may be arranged to issue a cache maintenance operation request specifying a memory address range in order to cause a block of data associated with the specified memory address range to be pushed through at least one level of the cache hierarchy to a determined visibility point, making that block of data visible to one or more other requester elements. The requester element may further be arranged to detect when a write request needs to be issued prior to the cache maintenance operation request in order to cause a write operation to be performed in respect of data within the specified memory address range, and in that event to generate a combined write and cache maintenance operation request that is issued instead of the write request and a subsequent cache maintenance operation request. A recipient completer element that receives the combined write and cache maintenance operation request may then be arranged to initiate processing of the cache maintenance operation required by the combined request without waiting for the write operation to complete. This can significantly reduce latency in the handling of cache maintenance operations and can reduce bandwidth utilisation.
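
    A hypothetical sketch of the combined write and cache-maintenance-operation (CMO) handling follows. The names CombinedWriteCMO and Completer, and the dictionary standing in for downstream storage, are illustrative assumptions; the only point being modelled is the ordering, in which the CMO is initiated before the write completes.

        from dataclasses import dataclass

        @dataclass
        class CombinedWriteCMO:
            addr_range: range
            write_data: bytes

        class Completer:
            def __init__(self):
                self.memory = {}
                self.log = []

            def handle_combined(self, req: CombinedWriteCMO):
                # Start the CMO (pushing the range toward the visibility point)
                # as soon as the combined request arrives, rather than
                # serialising it behind the write's completion.
                self.log.append("cmo_started")
                self.perform_write(req)
                self.log.append("cmo_completed")

            def perform_write(self, req: CombinedWriteCMO):
                for offset, byte in zip(req.addr_range, req.write_data):
                    self.memory[offset] = byte
                self.log.append("write_done")

        completer = Completer()
        completer.handle_combined(CombinedWriteCMO(range(0x1000, 0x1004), b"\xde\xad\xbe\xef"))
        print(completer.log)   # ['cmo_started', 'write_done', 'cmo_completed']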

    ACCESS CONTROL
    Invention Application (Pending, Published)

    Publication No.: US20180357178A1

    Publication Date: 2018-12-13

    Application No.: US15620017

    Filing Date: 2017-06-12

    Applicant: ARM LIMITED

    CPC classification number: G06F12/1036 G06F12/0802 G06F12/1425 G06F13/1668

    Abstract: Access control circuitry comprises: a detector to detect a memory address translation between a virtual memory address in a virtual memory address space and a physical memory address in a physical memory address space, provided in response to a translation request by further circuitry; an address translation memory to store data representing a set of physical memory addresses previously provided to the further circuitry in response to translation requests by the further circuitry; an interface to receive a physical memory address from the further circuitry for a memory access by the further circuitry; and a comparator to compare the physical memory address received from the further circuitry with the set of physical memory addresses stored by the address translation memory, and to permit access by the further circuitry to a physical address that is included in the stored set of physical memory addresses.
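
    As a rough software analogue of the scheme above, the sketch below records the physical addresses previously returned as translation results and accepts accesses only to members of that set. TranslationFilter and its method names are illustrative placeholders, not names used in the patent.

        class TranslationFilter:
            def __init__(self):
                self.provided_pa = set()   # models the address translation memory

            def record_translation(self, virtual_addr: int, physical_addr: int):
                # The detector observed a VA->PA translation being returned to the
                # further circuitry; remember the physical address it was given.
                self.provided_pa.add(physical_addr)

            def check_access(self, physical_addr: int) -> bool:
                # The comparator: permit the access only if this physical address
                # was previously provided in response to a translation request.
                return physical_addr in self.provided_pa

        f = TranslationFilter()
        f.record_translation(0x7fff0000, 0x2000)
        assert f.check_access(0x2000) is True    # translated earlier: permitted
        assert f.check_access(0x3000) is False   # never provided: rejected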

    BARRIER TRANSACTIONS IN INTERCONNECTS
    Invention Application (Granted)

    Publication No.: US20140040516A1

    Publication Date: 2014-02-06

    Application No.: US13960128

    Filing Date: 2013-08-06

    Applicant: ARM LIMITED

    CPC classification number: G06F13/362 G06F13/1621 G06F13/1689 G06F13/364

    Abstract: Interconnect circuitry is configured to provide data routes via which at least one initiator device may access at least one recipient device. The circuitry includes: at least one input for receiving transaction requests from the at least one initiator device; at least one output for outputting transaction requests to the at least one recipient device; and at least one path for transmitting transaction requests between the at least one input and the at least one output. Also included is control circuitry for routing the received transaction requests from the at least one input to the at least one output, the control circuitry responding to a barrier transaction request by maintaining an ordering of at least some transaction requests with respect to the barrier transaction request within a stream of transaction requests passing along one of the at least one paths. A barrier transaction request includes an indicator of the transaction requests whose ordering is to be maintained.

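    The barrier behaviour described above can be approximated with the toy reordering model below. The Request and Barrier dataclasses, and the use of a 'domain' field as the barrier's indicator, are assumptions for illustration; real interconnect barrier semantics are more involved than this single-queue sketch.

        from dataclasses import dataclass

        @dataclass
        class Request:
            txn_id: int
            domain: str      # which initiator/stream the request belongs to

        @dataclass
        class Barrier:
            domain: str      # indicator of the requests whose ordering must be kept

        def reorder(stream):
            # Requests in the barrier's domain stay on their side of the barrier;
            # requests from other domains (or issued before any barrier) may bypass it.
            out, held = [], []
            barrier_domain = None
            for item in stream:
                if isinstance(item, Barrier):
                    barrier_domain = item.domain
                    held.append(item)
                elif barrier_domain is not None and item.domain == barrier_domain:
                    held.append(item)    # ordering maintained relative to the barrier
                else:
                    out.append(item)     # free to be reordered past the barrier
            return out + held

        stream = [Request(1, "A"), Barrier("A"), Request(2, "B"), Request(3, "A")]
        print(reorder(stream))
        # Request 2 (domain B) bypasses the barrier; request 3 stays behind it.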

    TECHNIQUE FOR HANDLING PROTOCOL CONVERSION

    Publication No.: US20220283972A1

    Publication Date: 2022-09-08

    Application No.: US17189781

    Filing Date: 2021-03-02

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided for handling protocol conversion. The apparatus has interconnect circuitry for routing messages between components coupled to the interconnect circuitry in a manner that conforms to a first communication protocol. Protocol conversion circuitry is coupled between the interconnect circuitry and an external communication path, for converting messages between the first communication protocol and a second communication protocol that has a layered architecture comprising multiple layers. The protocol conversion circuitry has a gateway component forming one of the components coupled to the interconnect circuitry, and a controller coupled with the gateway component and used to control connection with the external communication path. For a selected layer of the multiple layers, the protocol conversion circuitry provides, within the gateway component, upper selected layer circuitry to implement a first portion of functionality of the selected layer, where the first portion comprises at least protocol dependent functionality of the selected layer. It also provides, within the controller, lower selected layer circuitry to implement a remaining portion of the functionality of the selected layer, the remaining portion comprising only protocol independent functionality of the selected layer.
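
    The layer split described above is illustrated by the hypothetical sketch below, which divides one layer's work between a protocol-dependent upper portion in the gateway (message encoding) and a protocol-independent lower portion in the controller (generic CRC framing). The class names, the chosen split, and the packet format are assumptions for illustration only, not the partition defined in the patent.

        import zlib

        class GatewayUpperLayer:
            """Protocol-dependent half of the selected layer: knows the message
            format of the second communication protocol."""
            def encode(self, opcode: int, payload: bytes) -> bytes:
                header = bytes([opcode, len(payload)])
                return header + payload

        class ControllerLowerLayer:
            """Protocol-independent half of the selected layer: generic framing
            that does not depend on the second protocol's message semantics."""
            def frame(self, packet: bytes) -> bytes:
                crc = zlib.crc32(packet).to_bytes(4, "little")
                return packet + crc

        gateway, controller = GatewayUpperLayer(), ControllerLowerLayer()
        packet = gateway.encode(opcode=0x2A, payload=b"hello")
        flit = controller.frame(packet)
        print(flit.hex())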

    CACHE MAINTENANCE OPERATIONS IN A DATA PROCESSING SYSTEM

    Publication No.: US20200133865A1

    Publication Date: 2020-04-30

    Application No.: US16173213

    Filing Date: 2018-10-29

    Applicant: Arm Limited

    Abstract: An interconnect system and a method of operating the system are disclosed. A master device has access to a cache, and a slave device has an associated data storage device for long-term storage of data items. The master device can initiate a cache maintenance operation in the interconnect system with respect to a data item temporarily stored in the cache, causing action to be taken by the slave device with respect to storage of the data item in the data storage device. For long-latency operations the master device can issue a separated cache maintenance request specifying the data item and the slave device. In response, an intermediate device signals an acknowledgement response indicating that it has taken on responsibility for completion of the cache maintenance operation and issues the separated cache maintenance request to the slave device. The slave device signals the acknowledgement response to the intermediate device and, on completion of the cache maintenance operation with respect to the data item stored in the data storage device, signals a completion response to the master device.
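
    The message flow of the separated cache maintenance request can be traced with the hypothetical sketch below. Master, Intermediate, Slave and the string message tags are illustrative placeholders; the sketch only shows that the master receives an early acknowledgement from the intermediate device and the final completion response from the slave device.

        class Master:
            def __init__(self):
                self.responses = []
            def receive(self, msg):
                self.responses.append(msg)

        class Slave:
            def maintain(self, addr, intermediate, master):
                intermediate.receive("ack_from_slave")   # acknowledge to the intermediate device
                # ... long-latency write-back to the backing storage would happen here ...
                master.receive(("comp", addr))           # completion goes straight to the master

        class Intermediate:
            def __init__(self, slave):
                self.slave = slave
                self.log = []
            def receive(self, msg):
                self.log.append(msg)
            def separated_cmo(self, addr, master):
                # Take on responsibility for completion and acknowledge immediately,
                # then forward the separated request to the slave device.
                master.receive("ack_from_intermediate")
                self.slave.maintain(addr, self, master)

        master = Master()
        intermediate = Intermediate(Slave())
        intermediate.separated_cmo(0x800000, master)
        print(master.responses)   # acknowledgement arrives before the completion response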
