-
Publication No.: US10102143B2
Publication Date: 2018-10-16
Application No.: US15294031
Application Date: 2016-10-14
Applicant: ARM LIMITED
Inventor: Barry Duane Williamson , Michael Filippo , Abhishek Raja , Adrian Montero , Miles Robert Dooley
IPC: G06F12/08 , G06F12/1045 , G06F12/128 , G06F12/1009
Abstract: A data processing system 2 includes an address translation cache 12 to store a plurality of address translation entries. Eviction control circuitry 10 selects a victim entry for eviction from address translation cache 12 using an eviction control parameter. The address translation cache 12 can store multiple different types of entry corresponding to respective different levels of address translation within a multiple-level page table walk. The different types of entry have different eviction control parameters assigned at the time of allocation. Eviction from the address translation cache is dependent upon the entry type, as well as the subsequent accesses to the entry concerned and the other entries within the address translation cache.
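The eviction scheme described above can be illustrated with a short Python sketch; the entry types, weight values and replacement heuristic below are assumptions for illustration, not the patented circuitry. Entries corresponding to different page-table levels are allocated with different eviction control parameters, hits reinforce an entry, and the victim is the entry with the lowest parameter.

    # Hypothetical eviction control parameters assigned at allocation time,
    # one per entry type (levels of the multiple-level page table walk).
    INITIAL_WEIGHT = {"leaf": 1, "level2_table": 2, "level1_table": 3}

    class TranslationCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = {}  # virtual page -> [entry_type, weight, translation]

        def allocate(self, vpage, entry_type, translation):
            if len(self.entries) >= self.capacity:
                # Victim selection: evict the entry with the lowest weight.
                victim = min(self.entries, key=lambda k: self.entries[k][1])
                del self.entries[victim]
            self.entries[vpage] = [entry_type, INITIAL_WEIGHT[entry_type], translation]

        def lookup(self, vpage):
            entry = self.entries.get(vpage)
            if entry is None:
                return None
            entry[1] += 1  # subsequent accesses make the entry harder to evict
            return entry[2]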
-
Publication No.: US11256623B2
Publication Date: 2022-02-22
Application No.: US15427459
Application Date: 2017-02-08
Applicant: ARM Limited
Inventor: Phanindra Kumar Mannava , Bruce James Mathewson , Jamshed Jalal , Klas Magnus Bruce , Michael Filippo , Paul Gilbert Meyer , Alex James Waugh , Geoffray Matthieu Lacourba
IPC: G06F12/0831 , G06F12/0808
Abstract: Apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item is retrieved and brought into cache. Since the target device can therefore decide whether to respond to the trigger or not, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
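A minimal Python sketch of the hub/target interaction described in the abstract; class and method names are hypothetical and the actual set of coherency protocol transactions is not modelled. The hub forwards only a trigger, and the target decides whether to fetch the data into its own cache.

    class TargetDevice:
        def __init__(self, accept_triggers=True):
            self.accept_triggers = accept_triggers
            self.cache = {}

        def on_prepopulation_trigger(self, address, read_from_memory):
            # The target may ignore the trigger; it never receives unsolicited data.
            if self.accept_triggers:
                self.cache[address] = read_from_memory(address)

    class HubDevice:
        def __init__(self, memory):
            self.memory = memory

        def handle_prepopulation_request(self, address, target):
            # Respond to the requesting master's request by sending only a trigger.
            target.on_prepopulation_trigger(address, lambda a: self.memory.get(a))

    memory = {0x1000: "payload"}
    hub = HubDevice(memory)
    target = TargetDevice(accept_triggers=True)
    hub.handle_prepopulation_request(0x1000, target)  # target pulls the data itself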
-
Publication No.: US11237974B2
Publication Date: 2022-02-01
Application No.: US16552001
Application Date: 2019-08-27
Applicant: Arm Limited
Inventor: Michael Brian Schinzler , Michael Filippo
IPC: G06F12/0875 , G06F9/30 , G06F9/28
Abstract: A data processing apparatus is provided. The data processing apparatus includes fetch circuitry to fetch instructions from storage circuitry. Decode circuitry decodes each of the instructions into one or more operations and provides the one or more operations to one or more execution units. The decode circuitry is adapted to decode at least one of the instructions into a plurality of operations. Cache circuitry caches the one or more operations and at least one entry of the cache circuitry is a compressed entry that represents the plurality of operations.
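As a rough illustration only, the Python sketch below caches several decoded micro-ops cracked from one instruction as a single compressed entry and re-expands them on a hit; the instruction, its micro-op encodings and the compression format are invented, not taken from the patent.

    def decode(instruction):
        # Hypothetical example: a load-pair instruction cracks into two load micro-ops.
        if instruction == "LDP x0, x1, [sp]":
            return [("LOAD", "x0", "sp", 0), ("LOAD", "x1", "sp", 8)]
        return [("OP", instruction)]

    class OpCache:
        def __init__(self):
            self.entries = {}

        def fill(self, pc, micro_ops):
            if len(micro_ops) > 1:
                # Compressed entry: one template opcode plus the per-op fields.
                template = micro_ops[0][0]
                fields = [op[1:] for op in micro_ops]
                self.entries[pc] = ("compressed", template, fields)
            else:
                self.entries[pc] = ("plain", micro_ops[0])

        def read(self, pc):
            kind, *body = self.entries[pc]
            if kind == "compressed":
                template, fields = body
                return [(template, *f) for f in fields]  # re-expand the micro-ops
            return [body[0]]

    cache = OpCache()
    cache.fill(0x40, decode("LDP x0, x1, [sp]"))
    assert cache.read(0x40) == decode("LDP x0, x1, [sp]")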
-
Publication No.: US11003454B2
Publication Date: 2021-05-11
Application No.: US16514124
Application Date: 2019-07-17
Applicant: Arm Limited
Inventor: Michael Brian Schinzler , Michael Filippo , Yasuo Ishii
Abstract: Apparatuses for data processing and methods of data processing are provided. A data processing apparatus performs data processing operations in response to a sequence of instructions including performing speculative execution of at least some of the sequence of instructions. In response to a branch instruction, the data processing apparatus predicts whether the branch is taken or not taken, and further speculative instruction execution is based on that prediction. A path speculation cost is calculated in dependence on a number of recently flushed instructions, and a rate at which speculatively executed instructions are issued may be modified based on the path speculation cost.
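A minimal sketch of the throttling idea in Python; the cost formula, the decay factor and the issue budget calculation are assumptions, since the abstract does not specify them. Recently flushed instructions raise a path speculation cost, which lowers the speculative issue budget until the cost decays.

    class SpeculationThrottle:
        def __init__(self, max_issue_per_cycle=4, decay=0.9):
            self.max_issue = max_issue_per_cycle
            self.decay = decay
            self.flush_cost = 0.0  # decayed count of recently flushed instructions

        def on_flush(self, flushed_instruction_count):
            self.flush_cost += flushed_instruction_count

        def issue_budget(self):
            # A higher recent flush cost means fewer speculative issues this cycle.
            budget = max(1, int(self.max_issue / (1.0 + self.flush_cost / 16.0)))
            self.flush_cost *= self.decay  # forget old mispredictions over time
            return budget

    throttle = SpeculationThrottle()
    throttle.on_flush(32)            # a costly branch misprediction
    print(throttle.issue_budget())   # reduced speculative issue rate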
-
Publication No.: US10761987B2
Publication Date: 2020-09-01
Application No.: US16202171
Application Date: 2018-11-28
Applicant: Arm Limited
Inventor: Jamshed Jalal , Mark David Werkheiser , Michael Filippo , Klas Magnus Bruce , Paul Gilbert Meyer
IPC: G06F12/0815 , G06F13/16
Abstract: An apparatus and method are provided for processing ownership upgrade requests in relation to cached data. The apparatus has a plurality of processing units, at least some of which have associated cache storage. A coherent interconnect couples the plurality of processing units with memory, the coherent interconnect having a snoop unit used to implement a cache coherency protocol when a request received by the coherent interconnect identifies a cacheable memory address within the memory. Contention management circuitry is provided to control contended access to a memory address by two or more processing units within the plurality of processing units. The coherent interconnect may receive, from a first processing unit having an associated cache storage, an ownership upgrade request specifying a target memory address, the ownership upgrade request indicating that a copy of data at the target memory address, as held in a shared state in the first processing unit's associated cache storage at the time the ownership upgrade request was issued, is required to have its state changed from the shared state to a unique state prior to the first processing unit performing a write operation to the data. The coherent interconnect is arranged to process the ownership upgrade request by referencing the snoop unit in order to determine whether the first processing unit's associated cache storage is identified as still holding a copy of the data at the target memory address at the time the ownership upgrade request is processed. If so, a pass condition is identified for the ownership upgrade request independent of information held by the contention management circuitry for the target memory address.
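The Python sketch below is a loose illustration of the pass condition described above, not the patented circuitry or any real coherency protocol; structure names are hypothetical and snoop invalidations of the other sharers are elided.

    class SnoopDirectory:
        def __init__(self):
            self.sharers = {}  # address -> set of unit ids holding a copy

        def holds(self, unit, address):
            return unit in self.sharers.get(address, set())

    class CoherentInterconnect:
        def __init__(self):
            self.directory = SnoopDirectory()
            self.contended = set()  # addresses under contention management

        def ownership_upgrade(self, unit, address):
            if self.directory.holds(unit, address):
                # Pass condition: the requester's shared copy survived, so the
                # upgrade passes independent of the contention tracker's state.
                self.directory.sharers[address] = {unit}  # line now unique
                return "pass"
            # The shared copy was lost meanwhile; fall back to contention handling.
            self.contended.add(address)
            return "retry_as_full_request"

    ic = CoherentInterconnect()
    ic.directory.sharers[0x200] = {"cpu0", "cpu1"}
    print(ic.ownership_upgrade("cpu0", 0x200))  # "pass": cpu0 still held a copy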
-
Publication No.: US10713187B2
Publication Date: 2020-07-14
Application No.: US16521621
Application Date: 2019-07-25
Applicant: ARM Limited
Inventor: Michael Filippo , Jamshed Jalal , Klas Magnus Bruce , Paul Gilbert Meyer , David Joseph Hawkins , Phanindra Kumar Mannava , Joseph Michael Pusdesris
IPC: G06F13/00 , G06F13/16 , G06F13/364 , G06F12/0864 , G06F13/42 , G06F13/40 , G06F12/0831 , G06F12/0844
Abstract: A memory controller comprises memory access circuitry configured to initiate a data access of data stored in a memory in response to a data access hint message received from another node in data communication with the memory controller; to access data stored in the memory in response to a data access request received from another node in data communication with the memory controller and to provide the accessed data as a data access response to the data access request.
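A small Python model of the hint behaviour, using invented message names and latencies: a data access hint starts the memory access early and produces no response, so a later data access request to the same address completes with less remaining latency.

    class MemoryController:
        def __init__(self, memory, latency_cycles=100):
            self.memory = memory
            self.latency = latency_cycles
            self.in_flight = {}  # address -> cycle at which the data will be ready

        def on_access_hint(self, address, now):
            # A hint carries no response; it only starts the access early.
            self.in_flight.setdefault(address, now + self.latency)

        def on_access_request(self, address, now):
            ready_at = self.in_flight.pop(address, now + self.latency)
            wait = max(0, ready_at - now)  # shorter if a hint arrived earlier
            return self.memory[address], wait

    mc = MemoryController({0x80: 42})
    mc.on_access_hint(0x80, now=0)
    data, wait = mc.on_access_request(0x80, now=90)
    print(data, wait)  # 42 with only 10 cycles of latency left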
-
Publication No.: US09996471B2
Publication Date: 2018-06-12
Application No.: US15194902
Application Date: 2016-06-28
Applicant: ARM Limited
Inventor: Ali Saidi , Kshitij Sudan , Andrew Joseph Rushing , Andreas Hansson , Michael Filippo
IPC: G06F12/0871 , G06F12/0873 , G06F12/0895
CPC classification number: G06F12/0871 , G06F12/0868 , G06F12/0873 , G06F12/0895 , G06F2212/305 , G06F2212/401 , G06F2212/466
Abstract: Cache line data and metadata are compressed and stored in first and, optionally, second memory regions, the metadata including an address tag. When the compressed data fit entirely within a primary block in the first memory region, both data and metadata are retrieved in a single memory access. Otherwise, overflow data is stored in an overflow block in the second memory region. The first and second memory regions may be located in the same row of a DRAM, for example, or in different regions of a DRAM and may be configured to enable standard DRAM components to be used. Compression and decompression logic circuits may be included in a memory controller.
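A minimal sketch of the storage layout in Python, under stated assumptions: the block size, the tag width and the use of zlib as the compressor are placeholders. Compressed line data plus its address tag are written to a primary block, and anything that does not fit spills to an overflow block in a second region, which costs a second access on readback.

    import zlib

    PRIMARY_BLOCK_BYTES = 64

    def store_line(tag, line_bytes, primary_region, overflow_region, index):
        payload = tag.to_bytes(8, "little") + zlib.compress(line_bytes)
        primary_region[index] = payload[:PRIMARY_BLOCK_BYTES]
        if len(payload) > PRIMARY_BLOCK_BYTES:
            # Does not fit: the remainder needs an overflow block (second access).
            overflow_region[index] = payload[PRIMARY_BLOCK_BYTES:]
        else:
            overflow_region.pop(index, None)

    def load_line(primary_region, overflow_region, index):
        payload = primary_region[index] + overflow_region.get(index, b"")
        tag = int.from_bytes(payload[:8], "little")
        return tag, zlib.decompress(payload[8:])

    primary, overflow = {}, {}
    store_line(0xABCD, bytes(64), primary, overflow, index=3)  # zeros compress well
    tag, data = load_line(primary, overflow, 3)                # single-block case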
-
Publication No.: US11663014B2
Publication Date: 2023-05-30
Application No.: US16550612
Application Date: 2019-08-26
Applicant: Arm Limited
Inventor: Abhishek Raja , Rakesh Shaji Lal , Michael Filippo , Glen Andrew Harris , Vasu Kudaravalli , Huzefa Moiz Sanjeliwala , Jason Setter
CPC classification number: G06F9/3842 , G06F9/30043 , G06F9/30094 , G06F9/30101 , G06F9/3857 , G06F9/3861
Abstract: A data processing apparatus is provided that comprises fetch circuitry to fetch an instruction stream comprising a plurality of instructions, including a status updating instruction, from storage circuitry. Status storage circuitry stores a status value. Execution circuitry executes the instructions, wherein at least some of the instructions are executed in an order other than in the instruction stream. For the status updating instruction, the execution circuitry is adapted to update the status value based on execution of the status updating instruction. Flush circuitry flushes, when the status storage circuitry is updated, following instructions that appear after the status updating instruction in the instruction stream.
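A rough Python model of the flush behaviour; the data structures and the program-order numbering are assumptions for illustration. When a status-updating instruction actually changes the stored status value, every in-flight instruction that is younger in program order is flushed.

    class ReorderModel:
        def __init__(self):
            self.status = 0
            self.in_flight = {}  # program-order position -> instruction

        def dispatch(self, order, instruction):
            self.in_flight[order] = instruction

        def execute_status_update(self, order, new_status):
            self.in_flight.pop(order, None)
            if new_status == self.status:
                return []  # status unchanged, nothing to flush
            self.status = new_status
            flushed = [o for o in self.in_flight if o > order]
            for o in flushed:
                del self.in_flight[o]  # these must be refetched after the update
            return flushed

    model = ReorderModel()
    for position, op in enumerate(["ADD", "STATUS_UPDATE", "MUL", "SUB"]):
        model.dispatch(position, op)
    print(model.execute_status_update(order=1, new_status=1))  # [2, 3] flushed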
-
Publication No.: US11327791B2
Publication Date: 2022-05-10
Application No.: US16546752
Application Date: 2019-08-21
Applicant: Arm Limited
Inventor: Michael David Achenbach , Robert Greg McDonald , Nicholas Andrew Pfister , Kelvin Domnic Goveas , Michael Filippo , Abhishek Raja , Zachary Allen Kingsbury
Abstract: An apparatus provides an issue queue having a first section and a second section. Each entry in each section stores operation information identifying an operation to be performed. Allocation circuitry allocates each item of received operation information to an entry in the first section or the second section. Selection circuitry selects from the issue queue, during a given selection iteration, an operation from amongst the operations whose required source operands are available. Availability update circuitry updates source operand availability for each entry whose operation information identifies as a source operand a destination operand of the selected operation in the given selection iteration. A deferral mechanism inhibits from selection, during a next selection iteration, any operation associated with an entry in the second section whose source operands are now available due to that operation having as a source operand the destination operand of the selected operation in the given selection iteration.
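The deferral mechanism can be sketched in Python as follows; the section layout, the entry fields and the single-iteration deferral flag are assumptions for illustration. A wakeup caused by the operation selected this cycle takes effect immediately for first-section entries but is inhibited for one further selection iteration for second-section entries.

    class IssueQueue:
        def __init__(self):
            self.sections = {1: [], 2: []}  # per-section lists of entry records
            self.available = set()          # source operands whose values are known

        def allocate(self, section, op, srcs, dst):
            self.sections[section].append(
                {"op": op, "srcs": set(srcs), "dst": dst, "deferred": False})

        def select(self):
            previously_deferred = [e for e in self.sections[2] if e["deferred"]]
            chosen = None
            for section in (1, 2):
                for entry in self.sections[section]:
                    if entry["srcs"] <= self.available and not entry["deferred"]:
                        chosen = (section, entry)
                        break
                if chosen:
                    break
            for entry in previously_deferred:
                entry["deferred"] = False  # deferral lasts a single iteration
            if chosen is None:
                return None
            section, entry = chosen
            self.sections[section].remove(entry)
            self._wake(entry["dst"])
            return entry["op"]

        def _wake(self, dst):
            self.available.add(dst)
            # Second-section entries woken only now by this result sit out the
            # next selection iteration.
            for entry in self.sections[2]:
                if dst in entry["srcs"] and entry["srcs"] <= self.available:
                    entry["deferred"] = True

    q = IssueQueue()
    q.available.update({"r1", "r2"})
    q.allocate(1, "mul r3, r1, r2", ["r1", "r2"], "r3")
    q.allocate(2, "add r4, r3, r1", ["r3", "r1"], "r4")
    print(q.select())  # the mul issues and wakes the add, which is deferred
    print(q.select())  # None: the add is ready but inhibited this iteration
    print(q.select())  # the add issues now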
-
Publication No.: US11263138B2
Publication Date: 2022-03-01
Application No.: US16176686
Application Date: 2018-10-31
Applicant: Arm Limited
Inventor: Joseph Michael Pusdesris , Miles Robert Dooley , Michael Filippo
IPC: G06F12/0862
Abstract: An apparatus is provided that includes cache circuitry that comprises a plurality of cache lines. The cache circuitry treats one or more of the cache lines as trace lines each having correlated addresses and each being tagged by a trigger address. Prefetch circuitry causes data at the correlated addresses stored in the trace lines to be prefetched.
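A minimal Python sketch of the trace-line idea, with invented names and a fixed trace depth: a trigger address tags a recorded list of correlated addresses, and a later access to the trigger address prefetches that list.

    class TraceLinePrefetcher:
        def __init__(self, trace_depth=4):
            self.trace_lines = {}     # trigger address -> correlated addresses
            self.trace_depth = trace_depth
            self.recording_for = None
            self.recorded = []

        def on_access(self, address, prefetch):
            if address in self.trace_lines:
                for correlated in self.trace_lines[address]:
                    prefetch(correlated)  # replay the trace stored in the trace line
            if self.recording_for is not None:
                self.recorded.append(address)
                if len(self.recorded) == self.trace_depth:
                    self.trace_lines[self.recording_for] = self.recorded
                    self.recording_for, self.recorded = None, []
            elif address not in self.trace_lines:
                # Start recording a new trace tagged by this trigger address.
                self.recording_for, self.recorded = address, []

    prefetched = []
    p = TraceLinePrefetcher()
    for a in [0x100, 0x104, 0x1A0, 0x240, 0x300, 0x100]:
        p.on_access(a, prefetched.append)
    print([hex(a) for a in prefetched])  # the trace replays on the second 0x100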