-
Publication No.: US20180253387A1
Publication Date: 2018-09-06
Application No.: US15446235
Application Date: 2017-03-01
Applicant: ARM Limited
Inventor: Huzefa Moiz SANJELIWALA , Klas Magnus BRUCE , Leigang KOU , Michael FILIPPO , Miles Robert DOOLEY , Matthew Andrew RAFACZ
IPC: G06F12/12 , G06F12/0897
CPC classification number: G06F12/0897 , G06F12/0862 , G06F2212/1028 , G06F2212/1041 , G06F2212/60
Abstract: A data processing apparatus is provided that includes a plurality of storage elements. Receiving circuitry receives a plurality of incoming data beats from cache circuitry and stores the incoming data beats in the storage elements. At least one existing data beat in the storage elements is replaced by an equal number of the incoming data beats belonging to a different cache line of the cache circuitry. The existing data beats stored in said plurality of storage elements form an incomplete cache line.
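As a rough illustration of the beat-replacement idea described above, here is a minimal C++ sketch (my own model, not taken from the patent): a small buffer holds beats tagged with the cache line they belong to, and an incoming beat from a different line can displace an existing beat, leaving the previously held line incomplete. The names `BeatBuffer` and `Beat` and the four-beat line size are assumptions for illustration.

```cpp
// Illustrative model only: a beat buffer whose existing, incomplete cache line
// can be overwritten beat-by-beat by incoming beats of a different cache line.
#include <array>
#include <cstdint>
#include <iostream>
#include <optional>

struct Beat {
    uint64_t line_id;   // which cache line this beat belongs to
    uint32_t beat_idx;  // position of the beat within its cache line
    uint64_t data;      // payload (simplified to one word per beat)
};

class BeatBuffer {
public:
    static constexpr std::size_t kBeats = 4;  // beats per cache line (assumed)

    // Store an incoming beat; it may replace an existing beat that belongs
    // to a different (incomplete) cache line occupying the same slot.
    void receive(const Beat& in) {
        auto& slot = slots_[in.beat_idx % kBeats];
        if (slot && slot->line_id != in.line_id) {
            std::cout << "replacing beat of line " << slot->line_id
                      << " with beat of line " << in.line_id << "\n";
        }
        slot = in;
    }

    // The buffer may hold an incomplete line: fewer than kBeats beats of it.
    std::size_t beats_of(uint64_t line_id) const {
        std::size_t n = 0;
        for (const auto& s : slots_)
            if (s && s->line_id == line_id) ++n;
        return n;
    }

private:
    std::array<std::optional<Beat>, kBeats> slots_;
};

int main() {
    BeatBuffer buf;
    buf.receive({/*line*/ 1, /*beat*/ 0, 0xAA});  // partial fill of line 1
    buf.receive({1, 1, 0xBB});
    buf.receive({2, 1, 0xCC});                    // beat of line 2 displaces a beat of line 1
    std::cout << "beats of line 1 held: " << buf.beats_of(1) << "\n";
    std::cout << "beats of line 2 held: " << buf.beats_of(2) << "\n";
}
```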
-
Publication No.: US20180225219A1
Publication Date: 2018-08-09
Application No.: US15427409
Application Date: 2017-02-08
Applicant: ARM Limited
Inventor: Jamshed JALAL , Michael FILIPPO , Bruce James MATHEWSON , Phanindra Kumar MANNAVA
IPC: G06F12/0888 , G06F12/0811 , G06F12/0862 , G06F12/0831 , G06F12/128
CPC classification number: G06F12/0888 , G06F12/0811 , G06F12/0831 , G06F12/0862 , G06F12/12 , G06F12/128 , G06F2201/885 , G06F2212/502 , G06F2212/602 , G06F2212/6046
Abstract: A data processing apparatus is provided including a memory hierarchy having a plurality of cache levels including a forwarding cache level, at least one bypassed cache level, and a receiver cache level. The forwarding cache level forwards a data access request relating to a given data value to the receiver cache level, inhibiting the at least one bypassed cache level from responding to the data access request. The receiver cache level includes presence determination circuitry for performing a determination as to whether the given data value is present in the at least one bypassed cache level. In response to the determination indicating that the data value is present in the at least one bypassed cache level, one of the at least one bypassed cache level is made to respond to the data access request.
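The forwarding/bypass arrangement above can be sketched roughly in C++ (an illustrative model under my own assumptions, not the patent's design): the forwarding level routes the request straight to the receiver level, which consults presence-determination state to decide whether a bypassed level should respond after all. The class names and the use of a simple set as the presence filter are illustrative.

```cpp
// Illustrative model of a bypassed cache level and a receiver level with a
// presence filter that can redirect a request back to the bypassed level.
#include <cstdint>
#include <iostream>
#include <unordered_set>

struct BypassedLevel {
    std::unordered_set<uint64_t> lines;  // lines actually held (assumed model)
    void respond(uint64_t addr) {
        std::cout << "bypassed level responds for 0x" << std::hex << addr << std::dec << "\n";
    }
};

struct ReceiverLevel {
    // Presence-determination state: which lines *may* live in the bypassed level.
    std::unordered_set<uint64_t> presence;
    void handle(uint64_t addr, BypassedLevel& bypassed) {
        if (presence.count(addr)) {
            bypassed.respond(addr);  // redirect back to the bypassed level
        } else {
            std::cout << "receiver level services 0x" << std::hex << addr << std::dec << "\n";
        }
    }
};

struct ForwardingLevel {
    // The forwarding level skips the bypassed level entirely.
    void forward(uint64_t addr, ReceiverLevel& receiver, BypassedLevel& bypassed) {
        receiver.handle(addr, bypassed);
    }
};

int main() {
    BypassedLevel l2;
    ReceiverLevel l3;
    ForwardingLevel l1;
    l2.lines.insert(0x100);
    l3.presence.insert(0x100);  // receiver knows 0x100 may be in the bypassed level
    l1.forward(0x100, l3, l2);  // redirected to the bypassed level
    l1.forward(0x200, l3, l2);  // serviced by the receiver level
}
```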
-
Publication No.: US20170212844A1
Publication Date: 2017-07-27
Application No.: US15002648
Application Date: 2016-01-21
Applicant: ARM LIMITED
Inventor: Michael John WILLIAMS , Michael FILIPPO , Hazim SHAFI
CPC classification number: G06F12/1027 , G06F3/0611 , G06F3/0653 , G06F3/0673 , G06F11/3409 , G06F11/3466 , G06F12/1009 , G06F2212/1024 , G06F2212/68
Abstract: An apparatus includes processing circuitry to process instructions, some of which may require addresses to be translated. The apparatus also includes address translation circuitry to translate addresses in response to instructions processed by the processing circuitry. Furthermore, the apparatus includes translation latency measuring circuitry to measure a latency of at least part of an address translation process performed by the address translation circuitry in response to a given instruction.
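A hedged sketch of the measurement idea, assuming a software model in C++ in which the page-table walk is faked with a fixed delay; only the structure of measuring per-instruction translation latency is shown, and the names used here are not from the patent.

```cpp
// Illustrative model: time the (faked) translation performed for one instruction.
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

// Stand-in for a page-table walk (assumption: fixed artificial delay).
uint64_t walk_page_table(uint64_t va) {
    std::this_thread::sleep_for(std::chrono::microseconds(50));
    return va ^ 0xFFFF000000000000ULL;  // fake physical address
}

struct TranslationSample {
    uint64_t va;
    uint64_t pa;
    std::chrono::nanoseconds latency;
};

TranslationSample translate_and_measure(uint64_t va) {
    auto start = std::chrono::steady_clock::now();
    uint64_t pa = walk_page_table(va);
    auto end = std::chrono::steady_clock::now();
    return {va, pa, std::chrono::duration_cast<std::chrono::nanoseconds>(end - start)};
}

int main() {
    auto s = translate_and_measure(0x7f0000001000ULL);
    std::cout << "translation took " << s.latency.count() << " ns\n";
}
```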
-
Publication No.: US20210019150A1
Publication Date: 2021-01-21
Application No.: US16514124
Application Date: 2019-07-17
Applicant: Arm Limited
Inventor: Michael Brian SCHINZLER , Michael FILIPPO , Yasuo ISHII
Abstract: Apparatuses for data processing and methods of data processing are provided. A data processing apparatus performs data processing operations in response to a sequence of instructions, including speculative execution of at least some of those instructions. In response to a branch instruction, the data processing apparatus predicts whether the branch is taken or not taken, and further speculative instruction execution is based on that prediction. A path speculation cost is calculated in dependence on a number of recently flushed instructions, and a rate at which speculatively executed instructions are issued may be modified based on the path speculation cost.
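The cost/throttle interaction might be modelled as follows; the window size, the cost function (a simple sum of recent flush counts), and the issue-width thresholds are all assumptions for illustration, not values from the patent.

```cpp
// Illustrative model: derive a "path speculation cost" from recent flush
// counts and reduce the speculative issue rate as the cost grows.
#include <cstddef>
#include <deque>
#include <iostream>
#include <numeric>

class SpeculationThrottle {
public:
    // Record how many in-flight instructions were flushed on a misprediction.
    void record_flush(std::size_t flushed) {
        history_.push_back(flushed);
        if (history_.size() > kWindow) history_.pop_front();
    }

    // Path speculation cost: here simply the sum of recent flush counts.
    std::size_t path_speculation_cost() const {
        return std::accumulate(history_.begin(), history_.end(), std::size_t{0});
    }

    // Issue width for speculative instructions, reduced as the cost grows.
    std::size_t issue_rate() const {
        std::size_t cost = path_speculation_cost();
        if (cost > 64) return 1;
        if (cost > 16) return 2;
        return 4;  // full assumed issue width
    }

private:
    static constexpr std::size_t kWindow = 8;  // flushes remembered (assumed)
    std::deque<std::size_t> history_;
};

int main() {
    SpeculationThrottle t;
    std::cout << "issue rate: " << t.issue_rate() << "\n";  // no flushes yet
    t.record_flush(40);
    t.record_flush(30);
    std::cout << "issue rate after costly flushes: " << t.issue_rate() << "\n";
}
```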
-
Publication No.: US20190188149A1
Publication Date: 2019-06-20
Application No.: US15848397
Application Date: 2017-12-20
Applicant: Arm Limited
Inventor: ABHISHEK RAJA , Michael FILIPPO
IPC: G06F12/1036 , G06F12/1045 , G06F12/109 , G06F15/78 , G06F12/0891
CPC classification number: G06F12/1036 , G06F12/0891 , G06F12/1063 , G06F12/109 , G06F15/7839
Abstract: An apparatus and method are provided for determining address translation data to be stored within an address translation cache. The apparatus comprises an address translation cache having a plurality of entries, where each entry stores address translation data used when converting a virtual address into a corresponding physical address of a memory system. Control circuitry is used to perform an allocation process to determine the address translation data to be stored in each entry. Via an interface of the apparatus, access requests are received from a request source, where each access request identifies a virtual address. Prefetch circuitry is responsive to a contiguous access condition being detected from the access requests received by the interface, to retrieve one or more descriptors from a page table, where each descriptor is associated with a virtual page, in order to produce candidate coalesced address translation data relating to multiple contiguous virtual pages. At an appropriate point, the prefetch circuitry triggers the control circuitry to allocate, into a selected entry of the address translation cache, coalesced address translation data that is derived from the candidate coalesced address translation data. Such an approach has been found to provide a particularly efficient mechanism for creating coalesced address translation data for allocating into the address translation cache, without impacting the latency of the servicing of ongoing requests from the request source.
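A minimal C++ sketch of the coalescing step, assuming 4 KiB pages and requiring both virtual and physical contiguity before a candidate coalesced entry is produced; the `Descriptor` and `CoalescedEntry` structures are illustrative, not the patent's formats.

```cpp
// Illustrative model: build one candidate coalesced translation from several
// descriptors covering contiguous virtual pages, if they are also physically
// contiguous.
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

constexpr uint64_t kPageSize = 4096;

struct Descriptor {         // one page-table descriptor (simplified)
    uint64_t vpage;         // virtual page number
    uint64_t pframe;        // physical frame number
};

struct CoalescedEntry {
    uint64_t base_vpage;
    uint64_t base_pframe;
    unsigned pages;         // number of contiguous pages covered
};

// Try to coalesce descriptors for contiguous virtual pages.
std::optional<CoalescedEntry> coalesce(const std::vector<Descriptor>& d) {
    if (d.empty()) return std::nullopt;
    for (std::size_t i = 1; i < d.size(); ++i) {
        bool virt_contig = d[i].vpage == d[0].vpage + i;
        bool phys_contig = d[i].pframe == d[0].pframe + i;
        if (!virt_contig || !phys_contig) return std::nullopt;
    }
    return CoalescedEntry{d[0].vpage, d[0].pframe, static_cast<unsigned>(d.size())};
}

int main() {
    std::vector<Descriptor> walk = {{100, 700}, {101, 701}, {102, 702}, {103, 703}};
    if (auto e = coalesce(walk)) {
        std::cout << "coalesced entry covers " << e->pages << " pages, "
                  << e->pages * kPageSize << " bytes\n";
    }
}
```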
-
Publication No.: US20180293166A1
Publication Date: 2018-10-11
Application No.: US15479348
Application Date: 2017-04-05
Applicant: ARM Limited
Inventor: Michael FILIPPO , Klas Magnus BRUCE , Vasu KUDARAVALLI , Adam GEORGE , Muhammad Umar FAROOQ , Joseph Michael PUSDESRIS
IPC: G06F12/0811 , G06F12/0875 , G06F12/0891 , G06F12/0815 , G06F11/10
CPC classification number: G06F12/0811 , G06F11/1064 , G06F12/0815 , G06F12/0875 , G06F12/0891 , G06F2212/452 , G06F2212/62
Abstract: A cache hierarchy and a method of operating the cache hierarchy are disclosed. The cache hierarchy comprises a first cache level comprising an instruction cache, and predecoding circuitry to perform a predecoding operation on instructions having a first encoding format retrieved from memory to generate predecoded instructions having a second encoding format for storage in the instruction cache. The cache hierarchy further comprises a second cache level comprising a cache and the first cache level instruction cache comprises cache control circuitry to control an eviction procedure for the instruction cache in which a predecoded instruction having the second encoding format which is evicted from the instruction cache is stored at the second cache level in the second encoding format. This enables the latency and power cost of the predecoding operation to be avoided when the predecoded instruction is then retrieved from the second cache level for storage in the first level instruction cache again.
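The evict-in-second-format idea can be sketched as below (an assumption-laden software model, not the cache's actual organisation): the second level may hold either encoding, and a refetch of an instruction that was evicted from the L1 instruction cache in predecoded form skips the predecode step.

```cpp
// Illustrative model: L1 I-cache holds predecoded instructions; eviction keeps
// the predecoded (second) format in L2, so refetch avoids predecoding again.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <variant>

using RawInsn = uint32_t;        // first encoding format (as fetched from memory)
struct PredecodedInsn {          // second encoding format (assumed wider)
    uint32_t raw;
    bool is_branch;              // example of information added by predecode
};

PredecodedInsn predecode(RawInsn r) {
    std::cout << "predecode cost paid for 0x" << std::hex << r << std::dec << "\n";
    return {r, (r & 0xFC000000u) == 0x14000000u};  // toy branch test
}

// L2 can hold either format; the L1 instruction cache holds only predecoded instructions.
std::unordered_map<uint64_t, std::variant<RawInsn, PredecodedInsn>> l2;
std::unordered_map<uint64_t, PredecodedInsn> l1i;

PredecodedInsn fetch(uint64_t pc, RawInsn from_memory) {
    if (auto it = l1i.find(pc); it != l1i.end()) return it->second;
    if (auto it = l2.find(pc); it != l2.end()) {
        if (auto* p = std::get_if<PredecodedInsn>(&it->second))
            return l1i[pc] = *p;                  // no predecode needed again
        return l1i[pc] = predecode(std::get<RawInsn>(it->second));
    }
    return l1i[pc] = predecode(from_memory);
}

void evict_from_l1i(uint64_t pc) {               // keep the second format in L2
    if (auto it = l1i.find(pc); it != l1i.end()) {
        l2[pc] = it->second;
        l1i.erase(it);
    }
}

int main() {
    fetch(0x1000, 0x14000010);   // predecode happens once
    evict_from_l1i(0x1000);
    fetch(0x1000, 0x14000010);   // refetched from L2 already predecoded
}
```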
-
Publication No.: US20180239714A1
Publication Date: 2018-08-23
Application No.: US15437581
Application Date: 2017-02-21
Applicant: ARM Limited
Inventor: ABHISHEK RAJA , Michael FILIPPO
IPC: G06F12/1045 , G06F12/0864 , G06F12/1009
CPC classification number: G06F12/1027 , G06F12/0848 , G06F12/0864 , G06F12/1009 , G06F2212/1044 , G06F2212/502 , G06F2212/6032 , G06F2212/651 , G06F2212/652 , G06F2212/681
Abstract: An apparatus and method are provided for making efficient use of address translation cache resources. The apparatus has an address translation cache having a plurality of entries, where each entry is used to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. Each item of address translation data has a page size indication for a page within the memory system that is associated with that address translation data. Allocation circuitry performs an allocation process to determine the address translation data to be stored in each entry. Further, mode control circuitry is used to switch a mode of operation of the apparatus between a non-skewed mode and at least one skewed mode, dependent on a page size analysis operation. The address translation cache is organised as a plurality of portions, and in the non-skewed mode the allocation circuitry is arranged, when performing the allocation process, to permit the address translation data to be allocated to any of the plurality of portions. In contrast, when in the at least one skewed mode, the allocation circuitry is arranged to reserve at least one portion for allocation of address translation data associated with pages of a first page size and at least one other portion for allocation of address translation data associated with pages of a second page size different to the first page size.
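A rough C++ model of the mode switch, under assumed parameters (four cache portions and a simple both-sizes-common rule standing in for the page size analysis operation): in the non-skewed mode any portion may be allocated, while the skewed mode reserves portions per page size.

```cpp
// Illustrative model: switch an address translation cache between non-skewed
// and skewed allocation based on the observed mix of page sizes.
#include <cstddef>
#include <iostream>

enum class Mode { NonSkewed, Skewed };

struct AllocationPolicy {
    Mode mode = Mode::NonSkewed;
    std::size_t small_pages_seen = 0;
    std::size_t large_pages_seen = 0;

    void observe(bool is_large_page) {
        (is_large_page ? large_pages_seen : small_pages_seen)++;
        // Page-size analysis: skew only when both sizes are common (assumed rule).
        bool mixed = small_pages_seen > 16 && large_pages_seen > 16;
        mode = mixed ? Mode::Skewed : Mode::NonSkewed;
    }

    // Which of 4 cache portions may receive an allocation of this page size.
    bool portion_allowed(std::size_t portion, bool is_large_page) const {
        if (mode == Mode::NonSkewed) return true;   // any portion
        return is_large_page ? (portion >= 2)       // portions 2-3 for large pages
                             : (portion < 2);       // portions 0-1 for small pages
    }
};

int main() {
    AllocationPolicy p;
    for (int i = 0; i < 20; ++i) { p.observe(false); p.observe(true); }
    std::cout << "mode is skewed: " << (p.mode == Mode::Skewed) << "\n";
    std::cout << "large page may use portion 0: " << p.portion_allowed(0, true) << "\n";
}
```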
-
Publication No.: US20180225216A1
Publication Date: 2018-08-09
Application No.: US15427421
Application Date: 2017-02-08
Applicant: ARM Limited
Inventor: Michael FILIPPO , Jamshed JALAL , Klas Magnus BRUCE , Alex James WAUGH , Geoffray LACOURBA , Paul Gilbert MEYER , Bruce James MATHEWSON , Phanindra Kumar MANNAVA
IPC: G06F12/0862 , G06F15/78
CPC classification number: G06F12/0862 , G06F11/34 , G06F12/0811 , G06F12/0833 , G06F15/7825 , G06F2212/502 , G06F2212/507
Abstract: Data processing apparatus comprises a data access requesting node; data access circuitry to receive a data access request from the data access requesting node and to route the data access request for fulfilment by one or more data storage nodes selected from a group of two or more data storage nodes; and indication circuitry to provide a source indication to the data access requesting node, to indicate an attribute of the one or more data storage nodes which fulfilled the data access request; the data access requesting node being configured to vary its operation in response to the source indication.
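A minimal sketch of how a requesting node might vary its operation based on a source indication; the `Source` categories and the prefetch-distance reaction are illustrative assumptions, not behaviour specified in the abstract.

```cpp
// Illustrative model: the response carries an indication of which kind of
// storage node fulfilled the request, and the requester adapts to it.
#include <cstdint>
#include <iostream>

enum class Source { LocalCache, PeerCache, Memory };

struct Response {
    uint64_t data;
    Source source;   // attribute of the storage node that fulfilled the request
};

struct Requester {
    int far_hits = 0;
    void on_response(const Response& r) {
        if (r.source == Source::Memory) ++far_hits; else far_hits = 0;
        if (far_hits >= 4)
            std::cout << "many memory-sourced fills: increase prefetch distance\n";
    }
};

int main() {
    Requester req;
    for (int i = 0; i < 4; ++i)
        req.on_response({0xDEADBEEF, Source::Memory});
}
```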
-
Publication No.: US20180225214A1
Publication Date: 2018-08-09
Application No.: US15427459
Application Date: 2017-02-08
Applicant: ARM Limited
Inventor: Phanindra Kumar MANNAVA , Bruce James MATHEWSON , Jamshed JALAL , Klas Magnus BRUCE , Michael FILIPPO , Paul Gilbert MEYER , Alex James WAUGH , Geoffray Matthieu LACOURBA
IPC: G06F12/0831 , G06F9/46
CPC classification number: G06F12/0835 , G06F12/0808 , G06F2212/1024 , G06F2212/621
Abstract: Apparatus, and a corresponding method of operating a hub device and a target device in a coherent interconnect system, are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions, specifying the at least one data item, to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item is retrieved and brought into cache. Since the target device can decide whether or not to respond to the trigger, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
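The request/trigger split can be sketched as follows (class and method names are my own, not names from the coherency protocol): the hub never pushes data, it only forwards a trigger that the target is free to act on or ignore.

```cpp
// Illustrative model: a pre-population request becomes a trigger at the hub;
// the target decides whether to fetch the suggested data itself.
#include <cstdint>
#include <iostream>

struct Target {
    bool willing = true;   // target's own policy (assumed)
    void on_prepopulate_trigger(uint64_t addr) {
        if (!willing) { std::cout << "target ignores trigger\n"; return; }
        std::cout << "target requests 0x" << std::hex << addr << std::dec
                  << " and fills its own cache\n";
    }
};

struct Hub {
    void on_prepopulate_request(uint64_t addr, Target& target) {
        // The hub does not send data; it only forwards a trigger.
        target.on_prepopulate_trigger(addr);
    }
};

int main() {
    Hub hub;
    Target t;
    hub.on_prepopulate_request(0x4000, t);   // requesting master suggested 0x4000
    t.willing = false;
    hub.on_prepopulate_request(0x8000, t);   // target may decline
}
```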
-
Publication No.: US20200057640A1
Publication Date: 2020-02-20
Application No.: US16103995
Application Date: 2018-08-16
Applicant: Arm Limited
Inventor: Curtis Glenn DUNHAM , Pavel SHAMIS , Jamshed JALAL , Michael FILIPPO
IPC: G06F9/30
Abstract: A system, apparatus and method for ordering a sequence of processing transactions. The method includes accessing, from a memory, a program sequence of operations that are to be executed. Instructions are received, some of which carry an identifier, or mnemonic, that distinguishes the identified operations from operations that do not carry one. The mnemonic indicates a distribution of the execution of the program sequence of operations. The program sequence of operations is grouped based on the mnemonic such that certain operations are separated from other operations.
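An illustrative sketch of grouping operations by their identifier, assuming the mnemonic is modelled as an optional string attached to each operation; operations without a mnemonic stay in the ordinary stream. This is my own simplification, not the patent's mechanism.

```cpp
// Illustrative model: separate identified operations from the rest of the
// program sequence by grouping on their attached mnemonic.
#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <vector>

struct Operation {
    std::string name;
    std::optional<std::string> mnemonic;  // absent for ordinary operations
};

int main() {
    std::vector<Operation> program = {
        {"load A", "grpX"}, {"add", std::nullopt},
        {"store B", "grpX"}, {"mul", std::nullopt}, {"send C", "grpY"}};

    std::map<std::string, std::vector<std::string>> groups;
    std::vector<std::string> ungrouped;
    for (const auto& op : program) {
        if (op.mnemonic) groups[*op.mnemonic].push_back(op.name);
        else ungrouped.push_back(op.name);
    }

    for (const auto& [key, ops] : groups) {
        std::cout << key << ":";
        for (const auto& n : ops) std::cout << " [" << n << "]";
        std::cout << "\n";
    }
    std::cout << "unidentified:";
    for (const auto& n : ungrouped) std::cout << " [" << n << "]";
    std::cout << "\n";
}
```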