    Cache content management
    Invention Grant

    Publication Number: US11256623B2

    Publication Date: 2022-02-22

    Application Number: US15427459

    Filing Date: 2017-02-08

    Applicant: ARM Limited

    Abstract: Apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item be retrieved and brought into cache. Since the target device can therefore decide whether or not to respond to the trigger, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
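
    The request/trigger split described above lends itself to a small model: the hub merely forwards a hint, and the target is free to act on it or drop it. The sketch below uses hypothetical type and function names (PrepopTrigger, target_accepts) that are not taken from the patent; it only illustrates the decision flow.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical message carrying the address named in a cache
 * pre-population trigger (the type and field names are illustrative). */
typedef struct {
    unsigned long addr;
} PrepopTrigger;

/* Target-side policy: the target is free to ignore the trigger, so it
 * never receives cached data unsolicited. */
static bool target_accepts(const PrepopTrigger *t, bool cache_has_room) {
    (void)t;
    return cache_has_room; /* e.g. only prefetch when capacity allows */
}

/* Hub side: on a pre-population request from a master, transmit a trigger
 * to the target; the target then decides whether to issue its own read to
 * bring the data item into its cache. */
static void hub_forward_trigger(unsigned long addr, bool cache_has_room) {
    PrepopTrigger trig = { addr };
    if (target_accepts(&trig, cache_has_room))
        printf("target requests 0x%lx and caches it\n", trig.addr);
    else
        printf("target ignores trigger for 0x%lx\n", trig.addr);
}

int main(void) {
    hub_forward_trigger(0x1000, true);  /* trigger acted upon */
    hub_forward_trigger(0x2000, false); /* trigger ignored    */
    return 0;
}
```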

    Apparatus and method to schedule time-sensitive tasks

    Publication Number: US10817336B2

    Publication Date: 2020-10-27

    Application Number: US15194928

    Filing Date: 2016-06-28

    Applicant: ARM LIMITED

    Abstract: There is provided an apparatus comprising scheduling circuitry, which selects a task as a selected task to be performed from a plurality of queued tasks, each having an associated priority, in dependence on the associated priority of each queued task. Escalating circuitry increases the associated priority of each of the plurality of queued tasks after a period of time. The plurality of queued tasks comprises a time-sensitive task having an associated deadline and in response to the associated deadline being reached, the scheduling circuitry selects the time-sensitive task as the selected task to be performed.
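
    As a rough illustration of the scheduling policy described above, the sketch below models priority-based selection with periodic escalation and a deadline override. The Task fields and the tick-based escalation interval are assumptions made for illustration, not the claimed circuitry.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative queued task: a priority that the escalation step raises over
 * time, plus an optional deadline for a time-sensitive task. */
typedef struct {
    const char *name;
    int priority;      /* higher value = scheduled sooner */
    bool has_deadline;
    int deadline;      /* tick at which the task must be selected */
} Task;

/* Select the task to run at the given tick: a time-sensitive task whose
 * deadline has been reached wins outright, otherwise the highest priority. */
static int select_task(const Task *q, int n, int tick) {
    int best = 0;
    for (int i = 0; i < n; i++) {
        if (q[i].has_deadline && tick >= q[i].deadline)
            return i;
        if (q[i].priority > q[best].priority)
            best = i;
    }
    return best;
}

/* Escalation step: periodically raise every queued task's priority so that
 * low-priority tasks do not starve. */
static void escalate(Task *q, int n) {
    for (int i = 0; i < n; i++)
        q[i].priority++;
}

int main(void) {
    Task q[] = {
        { "bulk-copy", 5, false, 0 },
        { "telemetry", 1, true,  3 },  /* time-sensitive */
    };
    /* Only selection is modelled; dequeueing the chosen task is omitted. */
    for (int tick = 0; tick < 5; tick++) {
        printf("tick %d: run %s\n", tick, q[select_task(q, 2, tick)].name);
        escalate(q, 2);
    }
    return 0;
}
```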

    Writing zero data
    Invention Grant

    Publication Number: US11188377B2

    Publication Date: 2021-11-30

    Application Number: US16592979

    Filing Date: 2019-10-04

    Applicant: Arm Limited

    Abstract: Apparatuses, methods of operating apparatuses, interconnects for connecting apparatuses to one another, and methods of operating the interconnects are disclosed. A master apparatus can issue an individual all-zero-data write transaction specifying a data storage location to the interconnect, which conveys the individual all-zero-data write transaction to a target device which writes all-zero-data at the data storage location. No write data is conveyed with the individual all-zero-data write transaction, so that the individual all-zero-data write transaction may be used to clear the data storage location without adding to congestion of a write data channel in the interconnect.
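
    The key point of the abstract is that an all-zero-data write carries no payload on the write data channel. A minimal sketch, assuming a hypothetical WriteReq message with an all_zero flag, shows how a target might clear the location locally instead of consuming a data beat.

```c
#include <stdio.h>
#include <string.h>

#define LINE_BYTES 64

/* Illustrative write request: when all_zero is set, no payload accompanies
 * the request, so nothing travels on the write data channel. */
typedef struct {
    unsigned long addr;
    int all_zero;                       /* 1 = individual all-zero-data write */
    unsigned char payload[LINE_BYTES];  /* unused when all_zero is set */
} WriteReq;

static unsigned char memory[2 * LINE_BYTES];

/* Target side: an all-zero-data write clears the location locally instead
 * of consuming a data beat from the interconnect. */
static void target_write(const WriteReq *req) {
    if (req->all_zero)
        memset(&memory[req->addr], 0, LINE_BYTES);
    else
        memcpy(&memory[req->addr], req->payload, LINE_BYTES);
}

int main(void) {
    memset(memory, 0xFF, sizeof memory);
    WriteReq zero = { .addr = 0, .all_zero = 1 };
    target_write(&zero);
    printf("byte 0 after zero-write: %d\n", memory[0]); /* prints 0 */
    return 0;
}
```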

    Forwarding responses to snoop requests

    Publication Number: US11159636B2

    Publication Date: 2021-10-26

    Application Number: US15427384

    Filing Date: 2017-02-08

    Applicant: ARM Limited

    Abstract: A data processing apparatus is provided, which includes receiving circuitry to receive a snoop request in respect of requested data on behalf of a requesting node. The snoop request includes an indication as to whether forwarding is to occur. Transmitting circuitry transmits a response to the snoop request and cache circuitry caches at least one data value. When forwarding is to occur and the at least one data value includes the requested data, the response includes the requested data and the transmitting circuitry transmits the response to the requesting node.
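
    A small sketch of the forwarding decision described above, using assumed SnoopReq and CacheLine types: when the request asks for forwarding and the snooped cache holds the requested line, the data response goes straight to the requesting node.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative snoop request: 'forward' indicates whether a cached copy
 * should be returned directly to the requesting node. */
typedef struct {
    unsigned long addr;
    bool forward;
} SnoopReq;

/* Illustrative cache line held by the snooped node. */
typedef struct {
    unsigned long addr;
    int value;
    bool valid;
} CacheLine;

/* If forwarding is requested and the requested data is cached locally, the
 * response carrying the data goes straight to the requesting node rather
 * than back through the node that issued the snoop. */
static void handle_snoop(const CacheLine *line, const SnoopReq *req) {
    if (req->forward && line->valid && line->addr == req->addr)
        printf("forward data %d for 0x%lx directly to requester\n",
               line->value, req->addr);
    else
        printf("respond without data for 0x%lx\n", req->addr);
}

int main(void) {
    CacheLine line = { 0x40, 42, true };
    SnoopReq hit  = { 0x40, true };
    SnoopReq miss = { 0x80, true };
    handle_snoop(&line, &hit);   /* requested data present: forward it */
    handle_snoop(&line, &miss);  /* not cached: no forwarding          */
    return 0;
}
```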

    Interconnect circuitry and a method of operating such interconnect circuitry

    Publication Number: US10963409B2

    Publication Date: 2021-03-30

    Application Number: US15241461

    Filing Date: 2016-08-19

    Applicant: ARM Limited

    Abstract: An interconnect circuit, and a method of operation of such an interconnect circuit, are provided. The interconnect circuitry has a first interface for coupling to a master device and a second interface for coupling to a slave device. Transactions are performed between the master device and the slave device, where each transaction comprises one or more transfers, and each transfer comprises a request and a response. A first connection path between the first interface and the second interface is provided that comprises a first plurality of pipeline stages. The first connection path forms a default path for propagation of the requests and responses of the transfers. A second connection path is also provided between the first interface and the second interface that comprises a second plurality of pipeline stages, where the second plurality is less than the first plurality. Path selection circuitry is then used to determine presence of a fast path condition. In the presence of the fast path condition, the path selection circuitry causes at least one of the request and the response for one or more transfers to be propagated via the second connection path. This can significantly reduce the latency associated with the handling of transfers within the interconnect circuitry, and hence improve the overall performance of the interconnect circuitry.
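
    The latency benefit comes purely from the difference in pipeline depth between the two paths. The toy model below, with assumed stage counts and an idle-path check standing in for the fast path condition, shows the selection at its simplest.

```c
#include <stdbool.h>
#include <stdio.h>

/* Assumed pipeline depths: the fast path has fewer stages than the
 * default path, so a transfer routed onto it completes in fewer cycles. */
enum { DEFAULT_STAGES = 4, FAST_STAGES = 1 };

/* Path selection: when the fast path condition holds (modelled here simply
 * as the fast path being available), route the transfer down the shorter
 * connection path. */
static int route_transfer(bool fast_path_condition) {
    return fast_path_condition ? FAST_STAGES : DEFAULT_STAGES;
}

int main(void) {
    printf("latency via fast path:    %d stage(s)\n", route_transfer(true));
    printf("latency via default path: %d stage(s)\n", route_transfer(false));
    return 0;
}
```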

    Read transaction tracker lifetimes in a coherent interconnect system

    Publication Number: US10795820B2

    Publication Date: 2020-10-06

    Application Number: US15427435

    Filing Date: 2017-02-08

    Applicant: ARM Limited

    Abstract: Apparatus and a corresponding method of operating the apparatus, in a coherent interconnect system comprising a requesting master device and a data-storing slave device, are provided. The apparatus maintains records of coherency protocol transactions received from the requesting master device whilst completion of the coherency protocol transactions is pending, and is responsive to reception of a read transaction from the requesting master device for a data item stored in the data-storing slave device to issue a direct memory transfer request to the data-storing slave device. A read acknowledgement trigger is added to the direct memory transfer request and, in response to reception of a read acknowledgement signal from the data-storing slave device, a record created by reception of the read transaction is updated to reflect completion of the direct memory transfer request. The lifetime for which the apparatus needs to maintain the record is thus reduced, despite the read transaction being satisfied by a direct memory transfer. A corresponding data-storing slave device and method of operating the data-storing slave device are also provided.
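
    A sketch of the shortened tracker lifetime, with assumed record and handler names: the hub-side apparatus retires its record as soon as the slave's read acknowledgement arrives, rather than waiting on the data transfer itself, which completes directly between the slave and the requesting master.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative tracker record kept while a read transaction is pending. */
typedef struct {
    unsigned long addr;
    bool pending;
} TrackerRecord;

/* On a read, create a record and issue a direct memory transfer request to
 * the slave with a read acknowledgement trigger attached. */
static void hub_handle_read(TrackerRecord *rec, unsigned long addr) {
    rec->addr = addr;
    rec->pending = true;
    printf("direct memory transfer for 0x%lx issued with ack trigger\n", addr);
}

/* When the slave's read acknowledgement arrives, retire the record: the
 * tracker lifetime ends here even though the data moves directly from the
 * slave to the requesting master. */
static void hub_handle_read_ack(TrackerRecord *rec) {
    rec->pending = false;
    printf("record for 0x%lx retired on read acknowledgement\n", rec->addr);
}

int main(void) {
    TrackerRecord rec;
    hub_handle_read(&rec, 0x3000);
    hub_handle_read_ack(&rec);
    return 0;
}
```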

    Cache maintenance operations in a data processing system

    Publication Number: US10783080B2

    Publication Date: 2020-09-22

    Application Number: US16173213

    Filing Date: 2018-10-29

    Applicant: Arm Limited

    Abstract: An interconnect system and a method of operating the system are disclosed. A master device has access to a cache and a slave device has an associated data storage device for long-term storage of data items. The master device can initiate a cache maintenance operation in the interconnect system with respect to a data item temporarily stored in the cache, causing action to be taken by the slave device with respect to storage of the data item in the data storage device. For long-latency operations the master device can issue a separated cache maintenance request specifying the data item and the slave device. In response, an intermediate device signals an acknowledgement response indicating that it has taken on responsibility for completion of the cache maintenance operation and issues the separated cache maintenance request to the slave device. The slave device signals the acknowledgement response to the intermediate device and, on completion of the cache maintenance operation with respect to the data item stored in the data storage device, signals a completion response to the master device.
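
    The three-party handshake in the abstract can be traced as a sequence of messages. The sketch below uses plain function calls and log lines as stand-ins for the actual interconnect messages; the names are illustrative only.

```c
#include <stdio.h>

/* Trace of the separated cache maintenance exchange; the function calls and
 * log lines stand in for interconnect messages and are illustrative only. */

static void slave_handle(unsigned long addr) {
    printf("slave: acknowledgement to intermediate for 0x%lx\n", addr);
    /* ...long-latency action on the backing data storage device... */
    printf("slave: completion response to master for 0x%lx\n", addr);
}

static void intermediate_handle(unsigned long addr) {
    /* Early acknowledgement: the intermediate device takes on responsibility
     * for completion so the master is not held up by the long latency. */
    printf("intermediate: acknowledgement to master for 0x%lx\n", addr);
    printf("intermediate: separated request forwarded to slave for 0x%lx\n", addr);
    slave_handle(addr);
}

static void master_issue(unsigned long addr) {
    printf("master: separated cache maintenance request for 0x%lx\n", addr);
    intermediate_handle(addr);
}

int main(void) {
    master_issue(0x5000);
    return 0;
}
```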

    Runtime configuration of a data processing system

    Publication Number: US10732854B2

    Publication Date: 2020-08-04

    Application Number: US15977153

    Filing Date: 2018-05-11

    Applicant: Arm Limited

    Abstract: A data processing system and a method of runtime configuration of the data processing system are disclosed. The data processing system comprises a plurality of home nodes, and for a data store associated with a slave node in the data processing system, for each home node of the plurality of home nodes a modified size of the data store is determined. The modified size is based on a storage capacity of the data store and at least one additional property of the data processing system. A chosen home node of the plurality of home nodes is selected which satisfies a minimization criterion for the modified size, and the chosen home node is paired with the slave node.
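
    The pairing step reduces to evaluating a modified size per home node and taking the minimum. The sketch below assumes a hypothetical additional property (a per-home-node distance weight) and an additive combination purely for illustration; the patent does not specify either.

```c
#include <stdio.h>

/* Assumed additional per-home-node property used to derive the modified
 * size; the property and its name are purely illustrative. */
typedef struct {
    const char *name;
    unsigned long distance_weight;
} HomeNode;

/* Modified size of the slave's data store as seen from one home node:
 * raw storage capacity combined with the additional property (additive
 * here only for the sake of the example). */
static unsigned long modified_size(unsigned long capacity, const HomeNode *hn) {
    return capacity + hn->distance_weight;
}

/* Select the home node that minimises the modified size; this is the node
 * that gets paired with the slave node at runtime. */
static int choose_home_node(unsigned long capacity, const HomeNode *hn, int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (modified_size(capacity, &hn[i]) < modified_size(capacity, &hn[best]))
            best = i;
    return best;
}

int main(void) {
    HomeNode nodes[] = { { "HN0", 64 }, { "HN1", 16 }, { "HN2", 32 } };
    int chosen = choose_home_node(1024, nodes, 3);
    printf("pair slave data store with %s\n", nodes[chosen].name);
    return 0;
}
```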
