Cache retention data management
    21.
    Invention grant

    Publication (Announcement) No.: US11200177B2

    Publication (Announcement) Date: 2021-12-14

    Application No.: US16327501

    Application Date: 2016-10-19

    Applicant: ARM LIMITED

    Abstract: A data processing system (2) incorporates a first exclusive cache memory (8, 10) and a second exclusive cache memory (14). A snoop filter (18) located together with the second exclusive cache memory on one side of a communication interface (12) serves to track entries within the first exclusive cache memory. The snoop filter includes retention data storage circuitry to store retention data for controlling retention of cache entries within the second exclusive cache memory. Retention data transfer circuitry (20) serves to transfer the retention data between the retention data storage circuitry within the snoop filter and the second exclusive cache memory as the cache entries concerned are transferred between the second exclusive cache memory and the first exclusive cache memory.
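
    The following Python sketch is an editorial illustration, not the patented circuitry: it models how a line's retention ("age") value can be parked in the snoop filter while the line lives in the first exclusive cache, then restored when the line migrates back to the second exclusive cache. All class and field names are assumptions.

```python
# Minimal behavioural sketch of retention data surviving a round trip
# between two exclusive cache levels; names and structures are illustrative.

class ExclusiveHierarchy:
    def __init__(self):
        self.l1 = {}            # first exclusive cache: addr -> data
        self.l2 = {}            # second exclusive cache: addr -> (data, retention)
        self.snoop_filter = {}  # tracks lines held in L1: addr -> retention

    def promote_to_l1(self, addr):
        # Exclusivity removes the line from L2; its retention value is
        # parked in the snoop filter entry that now tracks it.
        data, retention = self.l2.pop(addr)
        self.l1[addr] = data
        self.snoop_filter[addr] = retention

    def evict_from_l1(self, addr):
        # The saved retention value travels back down with the line, so the
        # L2 replacement policy does not have to restart from scratch.
        data = self.l1.pop(addr)
        retention = self.snoop_filter.pop(addr)
        self.l2[addr] = (data, retention)


h = ExclusiveHierarchy()
h.l2[0x80] = ("payload", 3)   # line with an accumulated retention value of 3
h.promote_to_l1(0x80)
h.evict_from_l1(0x80)
print(h.l2[0x80])             # ('payload', 3): retention survived the round trip
```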

    Shortcut path for a branch target buffer

    Publication (Announcement) No.: US10817298B2

    Publication (Announcement) Date: 2020-10-27

    Application No.: US15335741

    Application Date: 2016-10-27

    Applicant: ARM LIMITED

    Abstract: An apparatus comprises a branch target buffer (BTB) to store predicted target addresses of branch instructions. In response to a fetch block address identifying a fetch block comprising two or more program instructions, the BTB performs a lookup to identify whether it stores one or more predicted target addresses for one or more branch instructions in the fetch block. When the BTB is identified in the lookup as storing predicted target addresses for more than one branch instruction in said fetch block, branch target selecting circuitry selects a next fetch block address from among the multiple predicted target addresses returned in the lookup. A shortcut path bypassing the branch target selecting circuitry is provided to forward a predicted target address identified in the lookup as the next fetch block address when a predetermined condition is satisfied.
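
    A minimal functional sketch of the lookup flow described above, added for illustration; the "predetermined condition" is assumed here to be "the lookup returned exactly one predicted target", and all names are hypothetical.

```python
# Sketch of a BTB lookup in which a single-hit result takes a shortcut path
# around the branch target selector; structures and sizes are assumptions.

def next_fetch_block(btb, fetch_block_addr, select_target):
    """btb maps a fetch block address to a list of (branch_offset, target)."""
    hits = btb.get(fetch_block_addr, [])
    if len(hits) == 1:
        # Shortcut path: forward the single predicted target directly,
        # bypassing the branch target selecting circuitry.
        return hits[0][1]
    if hits:
        # Several predicted branches in the fetch block: let the selector choose.
        return select_target(hits)
    # No predicted branch: fall through to the next sequential fetch block.
    return fetch_block_addr + 0x10

def pick_first_branch(hits):
    # Illustrative selector: take the target of the earliest branch in the block.
    return min(hits)[1]

btb = {0x1000: [(0x4, 0x2000)], 0x2000: [(0x0, 0x3000), (0x8, 0x4000)]}
print(hex(next_fetch_block(btb, 0x1000, pick_first_branch)))  # 0x2000 via the shortcut
print(hex(next_fetch_block(btb, 0x2000, pick_first_branch)))  # 0x3000 via the selector
```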

    Scheduling in a data processing apparatus

    Publication (Announcement) No.: US10754687B2

    Publication (Announcement) Date: 2020-08-25

    Application No.: US16005811

    Application Date: 2018-06-12

    Applicant: Arm Limited

    Abstract: There is provided a data processing apparatus that includes processing circuitry for executing a plurality of instructions. Storage circuitry stores a plurality of entries, each entry relating to an instruction in the plurality of instructions and including a dependency field. The dependency field stores a data dependency of that instruction on a previous instruction in the plurality of instructions. Scheduling circuitry schedules the execution of the plurality of instructions in an order that depends on each data dependency. When the previous instruction is a single-cycle instruction, the dependency field includes a reference to one of the entries that relates to the previous instruction; otherwise, the dependency field includes an indication of an output destination of the previous instruction.
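
    An illustrative software model of the two dependency-field encodings described in the abstract; the entry indices, register names and wake-up rules are assumptions, not the patented scheduler.

```python
# Each entry either names the producer's entry (single-cycle producer) or the
# producer's destination register (multi-cycle producer); readiness is then
# checked against issued entries or written registers respectively.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Entry:
    name: str
    dest: str                        # this instruction's output destination
    dep_entry: Optional[int] = None  # producer is single-cycle: index of its entry
    dep_reg: Optional[str] = None    # otherwise: producer's output destination register

def ready(entry, issued_entries, written_regs):
    if entry.dep_entry is not None:
        # Single-cycle producer: wake up as soon as the referenced entry issues.
        return entry.dep_entry in issued_entries
    if entry.dep_reg is not None:
        # Multi-cycle producer: wait until its destination register is written back.
        return entry.dep_reg in written_regs
    return True  # no data dependency

entries = [
    Entry("add r1, r0, r0", dest="r1"),
    Entry("sub r2, r1, r0", dest="r2", dep_entry=0),
    Entry("mul r3, r0, r0", dest="r3"),
    Entry("add r4, r3, r0", dest="r4", dep_reg="r3"),
]

issued, written = {0}, set()               # entry 0 has issued, nothing written back yet
print(ready(entries[1], issued, written))  # True: chained off the single-cycle add
print(ready(entries[3], issued, written))  # False: must wait for the multiply's r3
```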

    Data processing
    25.
    Invention grant

    Publication (Announcement) No.: US10310862B2

    Publication (Announcement) Date: 2019-06-04

    Application No.: US15464727

    Application Date: 2017-03-21

    Applicant: ARM Limited

    Abstract: Data processing circuitry comprises out-of-order instruction execution circuitry to execute program instructions in an instruction execution order; a data store, to store information on a set of instructions for which execution has been initiated, the data store providing ordering information indicating the relative position of each instruction in the set of instructions with respect to a program code order; commit circuitry to commit the results of instructions executed by the instruction execution circuitry; one or more cumulative status registers configured to be set in response to a respective condition generated by execution of an instruction and then to remain set until an unset instruction is executed; and an identifier store, to store for at least those of the one or more cumulative status registers which are not currently set, an identifier of an instruction which is earliest in the program code order in the set of instructions and which generated a condition to set that cumulative status register.
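
    A rough software analogy of the cumulative-status tracking described above, using a single cumulative flag as the example; the structures and the commit/flush hooks are assumptions made for this sketch.

```python
# For a cumulative status register that is not yet set, the identifier store
# remembers the earliest (program-order) in-flight instruction that raised
# the condition; the flag becomes architectural when that instruction commits.

class CumulativeStatusTracker:
    def __init__(self):
        self.flag_set = False        # architectural cumulative status register
        self.earliest_setter = None  # identifier store: earliest in-flight setter

    def on_execute(self, instr_id, raises_condition):
        # Called out of program order as instructions execute; lower id = older.
        if raises_condition and not self.flag_set:
            if self.earliest_setter is None or instr_id < self.earliest_setter:
                self.earliest_setter = instr_id

    def on_commit(self, instr_id):
        # Called in program order; the flag is only set once the earliest
        # recorded setter has committed.
        if self.earliest_setter is not None and instr_id >= self.earliest_setter:
            self.flag_set = True
            self.earliest_setter = None

    def on_flush(self, oldest_flushed_id):
        # Squash a speculative setter that is younger than the flush point.
        if self.earliest_setter is not None and self.earliest_setter >= oldest_flushed_id:
            self.earliest_setter = None

t = CumulativeStatusTracker()
t.on_execute(7, raises_condition=True)   # a younger instruction raises the condition first
t.on_execute(3, raises_condition=True)   # an older instruction also raises it
t.on_commit(3)
print(t.flag_set)                        # True: set when the earliest setter commits
```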

    Cache hierarchy management
    26.
    Invention grant

    Publication (Announcement) No.: US10268581B2

    Publication (Announcement) Date: 2019-04-23

    Application No.: US15479348

    Application Date: 2017-04-05

    Applicant: ARM Limited

    Abstract: A cache hierarchy and a method of operating the cache hierarchy are disclosed. The cache hierarchy comprises a first cache level comprising an instruction cache, and predecoding circuitry to perform a predecoding operation on instructions having a first encoding format retrieved from memory to generate predecoded instructions having a second encoding format for storage in the instruction cache. The cache hierarchy further comprises a second cache level comprising a cache, and the first cache level instruction cache comprises cache control circuitry to control an eviction procedure for the instruction cache in which a predecoded instruction having the second encoding format which is evicted from the instruction cache is stored at the second cache level in the second encoding format. This enables the latency and power cost of the predecoding operation to be avoided when the predecoded instruction is later retrieved from the second cache level for storage in the first cache level instruction cache again.
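
    A behavioural sketch of the eviction and refill flow the abstract describes; predecode() is a stand-in transformation and the cache structures are illustrative only.

```python
# Lines enter the L1 instruction cache in the predecoded (second) format and
# keep that format when spilled to the second cache level, so a later refill
# from L2 skips the predecoding step.

def predecode(raw):
    # Placeholder for the real predecoding operation.
    return ("predecoded", raw)

class ICacheHierarchy:
    def __init__(self, memory):
        self.memory = memory   # addr -> instruction in the first (memory) format
        self.l1 = {}           # instruction cache: addr -> second (predecoded) format
        self.l2 = {}           # second cache level: addr -> either format

    def fetch(self, addr):
        if addr in self.l1:
            return self.l1[addr]
        line = self.l2.get(addr)
        if isinstance(line, tuple) and line[0] == "predecoded":
            # Refill of an already-predecoded line: no predecode latency/power cost.
            self.l1[addr] = line
            return line
        # Miss (or L2 holds the raw format): predecode on the way into L1.
        self.l1[addr] = predecode(self.memory[addr])
        return self.l1[addr]

    def evict_from_l1(self, addr):
        # Keep the second (predecoded) encoding when spilling to the second level.
        self.l2[addr] = self.l1.pop(addr)

h = ICacheHierarchy({0x40: 0xE2800001})
h.fetch(0x40)            # predecoding happens once here
h.evict_from_l1(0x40)
print(h.fetch(0x40))     # served from L2 already predecoded; predecode skipped
```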

    Cache bypass
    27.
    Invention grant

    Publication (Announcement) No.: US10185663B2

    Publication (Announcement) Date: 2019-01-22

    Application No.: US15427409

    Application Date: 2017-02-08

    Applicant: ARM Limited

    Abstract: A data processing apparatus is provided including a memory hierarchy having a plurality of cache levels including a forwarding cache level, at least one bypassed cache level, and a receiver cache level. The forwarding cache level forwards a data access request relating to a given data value to the receiver cache level, inhibiting the at least one bypassed cache level from responding to the data access request. The receiver cache level includes presence determination circuitry for performing a determination as to whether the given data value is present in the at least one bypassed cache level. In response to the determination indicating that the data value is present in the at least one bypassed cache level, one of the at least one bypassed cache level is made to respond to the data access request.
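
    A toy model of the bypass flow, added for illustration; the level roles and the snoop-filter-like presence tracker on the receiver side are assumptions.

```python
# The forwarding level sends a request straight to the receiver level, inhibiting
# the bypassed level; the receiver's presence determination decides whether the
# bypassed level must respond after all.

class BypassHierarchy:
    def __init__(self):
        self.bypassed = {}       # bypassed cache level contents
        self.receiver = {}       # receiver cache level contents
        self.presence = set()    # receiver-side record of what the bypassed level holds

    def access(self, addr):
        # Request forwarded directly to the receiver level.
        if addr in self.presence:
            # Presence determination: the bypassed level holds the line, so it
            # is made to respond to the data access request.
            return ("from bypassed level", self.bypassed[addr])
        return ("from receiver level", self.receiver.get(addr))

h = BypassHierarchy()
h.receiver[0x100] = "cold line"
h.bypassed[0x200] = "warm line"
h.presence.add(0x200)
print(h.access(0x100))   # ('from receiver level', 'cold line'): bypassed level never consulted
print(h.access(0x200))   # ('from bypassed level', 'warm line'): made to respond
```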
