Eviction control for an address translation cache

    Publication No.: US10102143B2

    Publication Date: 2018-10-16

    Application No.: US15294031

    Application Date: 2016-10-14

    Applicant: ARM LIMITED

    Abstract: A data processing system 2 includes an address translation cache 12 to store a plurality of address translation entries. Eviction control circuitry 10 selects a victim entry for eviction from address translation cache 12 using an eviction control parameter. The address translation cache 12 can store multiple different types of entry corresponding to respective different levels of address translation within a multiple-level page table walk. The different types of entry have different eviction control parameters assigned at the time of allocation. Eviction from the address translation cache is dependent upon the entry type, as well as the subsequent accesses to the entry concerned and the other entries within the address translation cache.
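
    As a rough illustration of the mechanism described above, the C++ sketch below models a translation cache whose entries carry an eviction control parameter assigned at allocation according to entry type; the per-type initial values and the ageing-on-hit policy are assumptions made for the example, not details taken from the patent.

```cpp
// Minimal model of type-dependent eviction control for a translation cache.
// The per-type initial parameters and the ageing policy are assumptions.
#include <array>
#include <cstdint>

enum class EntryType { FinalLevel, Intermediate, TopLevel };

struct TlbEntry {
    bool      valid = false;
    EntryType type  = EntryType::FinalLevel;
    uint64_t  tag   = 0;
    uint8_t   evictCtl = 0;   // eviction control parameter, set at allocation
};

class TranslationCache {
public:
    // Assumed per-type initial values: entries for higher page-table levels
    // are costlier to refetch, so they start "stickier".
    static uint8_t initialParam(EntryType t) {
        switch (t) {
            case EntryType::FinalLevel:   return 1;
            case EntryType::Intermediate: return 2;
            case EntryType::TopLevel:     return 3;
        }
        return 1;
    }

    void allocate(uint64_t tag, EntryType t) {
        TlbEntry& victim = selectVictim();
        victim = {true, t, tag, initialParam(t)};
    }

    // A hit refreshes the entry's parameter and ages every other entry, so
    // eviction depends on entry type and on subsequent accesses.
    void onHit(TlbEntry& hit) {
        hit.evictCtl = initialParam(hit.type);
        for (TlbEntry& e : entries_)
            if (e.valid && &e != &hit && e.evictCtl > 0)
                --e.evictCtl;
    }

private:
    TlbEntry& selectVictim() {
        TlbEntry* best = &entries_[0];
        for (TlbEntry& e : entries_) {
            if (!e.valid) return e;                     // free slot first
            if (e.evictCtl < best->evictCtl) best = &e; // least "sticky" entry
        }
        return *best;
    }

    std::array<TlbEntry, 8> entries_{};
};
```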

    Cache content management
    Invention Grant

    Publication No.: US11256623B2

    Publication Date: 2022-02-22

    Application No.: US15427459

    Application Date: 2017-02-08

    Applicant: ARM Limited

    Abstract: An apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item is retrieved and brought into its cache. Since the target device can therefore decide whether or not to respond to the trigger, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
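
    The request/trigger flow described above can be pictured with the following C++ sketch; the message structures, field names and target-side policy hook are illustrative assumptions, not the actual coherency-protocol transaction encoding.

```cpp
// Sketch of the pre-population request/trigger flow. The structures and the
// acceptance policy are illustrative assumptions for this example only.
#include <cstdint>
#include <functional>
#include <vector>

struct PrePopulateRequest { uint64_t addr; int targetId; };
struct PrePopulateTrigger { uint64_t addr; };

struct TargetDevice {
    int id;
    std::function<bool(uint64_t)> wantsData;  // target's own acceptance policy
    std::vector<uint64_t> cache;

    // The trigger is only a hint: the target may fetch the data or ignore it.
    void onTrigger(const PrePopulateTrigger& t) {
        if (wantsData && wantsData(t.addr))
            cache.push_back(t.addr);          // stands in for a read request
    }
};

struct HubDevice {
    std::vector<TargetDevice*> targets;

    // The hub never pushes data; it forwards a trigger naming the data item,
    // so the target never receives cached data unsolicited.
    void onRequest(const PrePopulateRequest& req) {
        for (TargetDevice* t : targets)
            if (t->id == req.targetId)
                t->onTrigger(PrePopulateTrigger{req.addr});
    }
};
```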

    Operation cache compression
    Invention Grant

    Publication No.: US11237974B2

    Publication Date: 2022-02-01

    Application No.: US16552001

    Application Date: 2019-08-27

    Applicant: Arm Limited

    Abstract: A data processing apparatus is provided. The data processing apparatus includes fetch circuitry to fetch instructions from storage circuitry. Decode circuitry decodes each of the instructions into one or more operations and provides the one or more operations to one or more execution units. The decode circuitry is adapted to decode at least one of the instructions into a plurality of operations. Cache circuitry caches the one or more operations and at least one entry of the cache circuitry is a compressed entry that represents the plurality of operations.
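
    A simplified software model of a compressed operation-cache entry is sketched below; the template-plus-packed-registers encoding and the expansion rules are assumptions chosen purely for illustration.

```cpp
// Illustrative model of a compressed operation-cache entry: one entry encodes
// the several operations an instruction decodes into and is expanded on a hit.
#include <cstdint>
#include <vector>

struct MicroOp { uint16_t opcode; uint8_t dst, src1, src2; };

struct CompressedEntry {
    uint64_t tag;         // address of the cached instruction
    uint16_t templateId;  // identifies the shape of the micro-op sequence
    uint8_t  regs[4];     // packed register fields shared by the sequence
};

// Expansion reproduces the plurality of operations the decoder would emit.
std::vector<MicroOp> expand(const CompressedEntry& e) {
    std::vector<MicroOp> ops;
    switch (e.templateId) {
        case 1:  // e.g. a load-pair style instruction becomes two loads
            ops.push_back({0x10, e.regs[0], e.regs[2], 0});
            ops.push_back({0x10, e.regs[1], e.regs[2], 4});
            break;
        default: // an uncompressed entry maps to a single operation
            ops.push_back({e.templateId, e.regs[0], e.regs[1], e.regs[2]});
            break;
    }
    return ops;
}
```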

    Apparatus and method for speculative execution of instructions

    Publication No.: US11003454B2

    Publication Date: 2021-05-11

    Application No.: US16514124

    Application Date: 2019-07-17

    Applicant: Arm Limited

    Abstract: Apparatuses for data processing and methods of data processing are provided. A data processing apparatus performs data processing operations in response to a sequence of instructions, including performing speculative execution of at least some of the sequence of instructions. In response to a branch instruction, the data processing apparatus predicts whether the branch is taken or not taken, and further speculative instruction execution is based on that prediction. A path speculation cost is calculated in dependence on a number of recently flushed instructions, and a rate at which speculatively executed instructions are issued may be modified based on the path speculation cost.
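
    The following C++ sketch illustrates the general idea of deriving a path speculation cost from recently flushed instructions and using it to cap the speculative issue rate; the window size and the cost-to-rate mapping are illustrative assumptions.

```cpp
// Minimal model of throttling speculative issue by a "path speculation cost".
#include <cstddef>
#include <deque>

class SpeculationThrottle {
public:
    // Record how many in-flight instructions a mispredict flush discarded.
    void onFlush(std::size_t flushedInstructions) {
        recentFlushes_.push_back(flushedInstructions);
        if (recentFlushes_.size() > kWindow) recentFlushes_.pop_front();
    }

    // The cost rises with the number of recently flushed instructions.
    std::size_t pathSpeculationCost() const {
        std::size_t cost = 0;
        for (std::size_t n : recentFlushes_) cost += n;
        return cost;
    }

    // Cap on speculatively issued instructions per cycle, reduced as the
    // cost grows; the thresholds below are arbitrary example values.
    std::size_t issueLimitPerCycle() const {
        const std::size_t cost = pathSpeculationCost();
        if (cost > 256) return 1;
        if (cost > 64)  return 2;
        return 4;
    }

private:
    static constexpr std::size_t kWindow = 8;  // flush events tracked
    std::deque<std::size_t> recentFlushes_;
};
```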

    Apparatus and method for processing an ownership upgrade request for cached data that is issued in relation to a conditional store operation

    Publication No.: US10761987B2

    Publication Date: 2020-09-01

    Application No.: US16202171

    Application Date: 2018-11-28

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided for processing ownership upgrade requests in relation to cached data. The apparatus has a plurality of processing units, at least some of which have associated cache storage. A coherent interconnect couples the plurality of processing units with memory, the coherent interconnect having a snoop unit used to implement a cache coherency protocol when a request received by the coherent interconnect identifies a cacheable memory address within the memory. Contention management circuitry is provided to control contended access to a memory address by two or more processing units within the plurality of processing units. The coherent interconnect may receive, from a first processing unit having an associated cache storage, an ownership upgrade request specifying a target memory address, the ownership upgrade request indicating that a copy of the data at the target memory address, as held in a shared state in the first processing unit's associated cache storage at the time the ownership upgrade request was issued, is required to have its state changed from the shared state to a unique state prior to the first processing unit performing a write operation to the data. The coherent interconnect is arranged to process the ownership upgrade request by referencing the snoop unit in order to determine whether the first processing unit's associated cache storage is identified as still holding a copy of the data at the target memory address at the time the ownership upgrade request is processed. If so, a pass condition is identified for the ownership upgrade request independent of information held by the contention management circuitry for the target memory address.
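
    The pass/fail decision described above can be summarised in the following C++ sketch, which assumes a simple snoop-filter presence check; the structures and names are illustrative, not the patent's implementation.

```cpp
// Sketch of deciding an ownership upgrade request via a snoop-filter lookup.
#include <cstdint>
#include <set>
#include <utility>

enum class UpgradeResult { Pass, Fail };

struct SnoopFilter {
    // (processing unit id, cache line address) pairs believed to be cached.
    std::set<std::pair<int, uint64_t>> presence;

    bool stillHolds(int unitId, uint64_t addr) const {
        return presence.count({unitId, addr}) != 0;
    }
};

struct CoherentInterconnect {
    SnoopFilter snoop;

    // If the requester's shared copy is still present when the request is
    // processed, the upgrade passes without consulting contention-management
    // state for that address; otherwise it fails and the requester must
    // refetch the line (modelled here simply as a failure result).
    UpgradeResult processOwnershipUpgrade(int requesterId, uint64_t addr) {
        if (snoop.stillHolds(requesterId, addr))
            return UpgradeResult::Pass;
        return UpgradeResult::Fail;
    }
};
```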

    Apparatus and method for operating an issue queue

    Publication No.: US11327791B2

    Publication Date: 2022-05-10

    Application No.: US16546752

    Application Date: 2019-08-21

    Applicant: Arm Limited

    Abstract: An apparatus provides an issue queue having a first section and a second section. Each entry in each section stores operation information identifying an operation to be performed. Allocation circuitry allocates each item of received operation information to an entry in the first section or the second section. Selection circuitry selects from the issue queue, during a given selection iteration, an operation from amongst the operations whose required source operands are available. Availability update circuitry updates source operand availability for each entry whose operation information identifies as a source operand a destination operand of the selected operation in the given selection iteration. A deferral mechanism inhibits from selection, during a next selection iteration, any operation associated with an entry in the second section whose source operands are now available due to that operation having as a source operand the destination operand of the selected operation in the given selection iteration.
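
    A much-simplified C++ model of the two-section issue queue with a one-cycle deferral for second-section wake-ups is given below; the entry fields and the pick-first-ready selection policy are assumptions made for the example.

```cpp
// Simplified model of a two-section issue queue with wake-up deferral.
#include <cstddef>
#include <vector>

struct IqEntry {
    bool valid = false;
    bool inSecondSection = false;   // set by the allocation circuitry
    int  srcReady = 0;              // source operands currently available
    int  srcNeeded = 2;
    int  destReg = -1;
    int  srcRegs[2] = {-1, -1};
    bool deferred = false;          // woken this cycle while in section two
};

struct IssueQueue {
    std::vector<IqEntry> entries;

    // Select one ready operation; section-two entries woken in the previous
    // iteration are inhibited from selection this time round.
    int select() const {
        for (std::size_t i = 0; i < entries.size(); ++i) {
            const IqEntry& e = entries[i];
            if (e.valid && e.srcReady == e.srcNeeded && !e.deferred)
                return static_cast<int>(i);
        }
        return -1;
    }

    // Broadcast the selected operation's destination. Second-section entries
    // that become ready only because of this broadcast are marked deferred so
    // they cannot issue in the very next selection iteration.
    void wakeup(int selected) {
        for (IqEntry& e : entries) e.deferred = false;
        if (selected < 0) return;
        const int dest = entries[selected].destReg;
        entries[selected].valid = false;
        for (IqEntry& e : entries) {
            if (!e.valid) continue;
            for (int s : e.srcRegs) {
                if (s != dest) continue;
                ++e.srcReady;
                if (e.inSecondSection && e.srcReady == e.srcNeeded)
                    e.deferred = true;
            }
        }
    }
};
```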

    Correlated addresses and prefetching

    Publication No.: US11263138B2

    Publication Date: 2022-03-01

    Application No.: US16176686

    Application Date: 2018-10-31

    Applicant: Arm Limited

    Abstract: An apparatus is provided that includes cache circuitry that comprises a plurality of cache lines. The cache circuitry treats one or more of the cache lines as trace lines each having correlated addresses and each being tagged by a trigger address. Prefetch circuitry causes data at the correlated addresses stored in the trace lines to be prefetched.
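
    The trace-line idea can be modelled roughly as follows in C++; the trace-line capacity and the training policy are illustrative assumptions.

```cpp
// Sketch of trace-line driven prefetching: a trace line is tagged by a trigger
// address and records correlated addresses; a later access to the trigger
// causes those addresses to be prefetched.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct TraceLine {
    std::vector<uint64_t> correlated;   // addresses observed after the trigger
};

class TracePrefetcher {
public:
    // Record that `addr` followed `trigger` in the observed access stream.
    void train(uint64_t trigger, uint64_t addr) {
        TraceLine& line = traceLines_[trigger];
        if (line.correlated.size() < 4)      // assumed trace-line capacity
            line.correlated.push_back(addr);
    }

    // On an access to a trigger address, return the addresses to prefetch.
    std::vector<uint64_t> onAccess(uint64_t addr) const {
        auto it = traceLines_.find(addr);
        if (it == traceLines_.end()) return {};
        return it->second.correlated;
    }

private:
    std::unordered_map<uint64_t, TraceLine> traceLines_;  // tagged by trigger
};
```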
