-
Publication No.: US10853262B2
Publication Date: 2020-12-01
Application No.: US16342644
Filing Date: 2017-11-29
Applicant: ARM LIMITED
IPC: G06F9/26 , G06F12/1009 , G11C11/408 , G06F12/1027 , G11C8/06 , G11C11/4096
Abstract: Memory address translation apparatus comprises page table access circuitry to access a page table to retrieve translation data defining an address translation between an initial memory address and a corresponding output memory address; a translation data buffer to store one or more instances of the translation data, comprising: an array of storage locations arranged in rows and columns; a row buffer comprising a plurality of entries; and comparison circuitry responsive to a key value dependent upon at least the initial memory address, to compare the key value with information stored in each of at least one key entry, each key entry having an associated value entry for storing at least a representation of a corresponding output memory address, and to identify which of the at least one key entry, if any, is a matching key entry storing information matching the key value; and output circuitry to output, when there is a matching key entry, at least the representation of the output memory address.
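A minimal Python sketch of the key/value lookup the abstract describes, assuming a single active row whose entries pair a key field (derived from the initial address) with a value field (a representation of the output address); the class name, key derivation and field widths are illustrative assumptions, not taken from the patent.

```python
# Illustrative model: a row buffer holding (key, value) entry pairs.
# A key derived from the initial (input) address is compared against
# every key entry; on a match, the associated value entry (a
# representation of the output address) is returned.

class RowBufferTLB:
    def __init__(self, entries_per_row):
        # Each slot holds a (key, output_address) pair; None means empty.
        self.row = [None] * entries_per_row

    def fill(self, slot, key, output_address):
        self.row[slot] = (key, output_address)

    def lookup(self, initial_address, key_bits=16):
        key = (initial_address >> 12) & ((1 << key_bits) - 1)  # hypothetical key derivation
        for entry in self.row:
            if entry is not None and entry[0] == key:
                return entry[1]           # matching key entry: output its value
        return None                       # no matching key entry

tlb = RowBufferTLB(entries_per_row=8)
tlb.fill(0, key=0x1A2, output_address=0x0004_5000)
assert tlb.lookup(0x001A_2ABC) == 0x0004_5000
```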
-
Publication No.: US10761988B2
Publication Date: 2020-09-01
Application No.: US16142330
Filing Date: 2018-09-26
Applicant: Arm Limited
IPC: G06F12/0864 , G06F12/1045
Abstract: Aspects of the present disclosure relate to an apparatus comprising a data array having locality-dependent latency characteristics such that an access to an open unit of the data array has a lower latency than an access to a closed unit of the data array. Set associative cache indexing circuitry determines, in response to a request for data associated with a target address, a cache set index. Mapping circuitry identifies, in response to the index, a set of data array locations corresponding to the index, according to a mapping in which a given unit of the data array comprises locations corresponding to a plurality of consecutive indices, and at least two locations of the set of locations corresponding to the same index are in different units of the data array. Cache access circuitry accesses said data from one of the set of data array locations.
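A small Python sketch, purely illustrative, of the mapping the abstract describes: a run of consecutive set indices is packed into one unit of the data array (for example a DRAM row), while the ways of a single set are spread across different units. The unit size and the arithmetic are assumptions.

```python
# Illustrative mapping: N_WAYS locations per set, SETS_PER_UNIT
# consecutive indices packed into each unit (e.g. a DRAM row).
# Ways of the same set land in different units, while a walk over
# consecutive indices for a given way stays within one open unit.

N_WAYS = 4
SETS_PER_UNIT = 8          # consecutive indices per unit (assumed)

def locations_for_index(index):
    """Return (unit, offset) pairs for every way of a set index."""
    locations = []
    for way in range(N_WAYS):
        unit = index // SETS_PER_UNIT * N_WAYS + way   # different unit per way
        offset = index % SETS_PER_UNIT                 # position inside the unit
        locations.append((unit, offset))
    return locations

# Ways of one set are in different units...
assert len({u for u, _ in locations_for_index(5)}) == N_WAYS
# ...while way 0 of consecutive indices shares an (open) unit.
assert locations_for_index(5)[0][0] == locations_for_index(6)[0][0]
```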
-
Publication No.: US10712965B2
Publication Date: 2020-07-14
Application No.: US15806580
Filing Date: 2017-11-08
Applicant: ARM LIMITED
Inventor: Andreas Lars Sandberg , Nikos Nikoleris , David Hennah Mansell
IPC: G06F3/06 , G06F12/10 , G06F12/1027 , G06F12/1009 , G06F12/0831
Abstract: An apparatus and method are provided for transferring data between address ranges in memory. The apparatus comprises a data transfer controller, that is responsive to a data transfer request received by the apparatus from a processing element, to perform a transfer operation to transfer data from at least one source address range in memory to at least one destination address range in the memory. A redirect controller is then arranged, whilst the transfer operation is being performed, to intercept an access request that specifies a target address within a target address range, and to perform a memory redirection operation so as to cause the access request to be processed without awaiting completion of the transfer operation. Via such an approach, the apparatus can effectively hide from the source of the access request the fact that the transfer operation is in progress, and hence the transfer operation can be arranged to occur in the background, and in a manner that is transparent to the software executing on the source that has issued the access request.
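A toy Python model, under assumed semantics, of the redirection idea: while a background copy from a source range to a destination range is in flight, an access to a not-yet-copied destination address is redirected to the corresponding source address, so the requester never waits for the whole transfer. The class, the byte-granular copy and the exact redirection policy are illustrative.

```python
# Illustrative redirect during a background copy: addresses in the
# destination range that the copy has not reached yet are redirected
# back to the corresponding source address, so reads return correct
# data without waiting for the transfer to complete.

class BackgroundCopy:
    def __init__(self, memory, src, dst, length):
        self.memory, self.src, self.dst, self.length = memory, src, dst, length
        self.copied = 0                     # bytes already transferred

    def step(self, n=1):
        """Copy the next n bytes in the background."""
        for _ in range(min(n, self.length - self.copied)):
            self.memory[self.dst + self.copied] = self.memory[self.src + self.copied]
            self.copied += 1

    def read(self, addr):
        """Intercept reads; redirect not-yet-copied destination addresses."""
        if self.dst <= addr < self.dst + self.length and addr - self.dst >= self.copied:
            addr = self.src + (addr - self.dst)   # redirect to the source copy
        return self.memory[addr]

mem = {a: a & 0xFF for a in range(0x100, 0x110)}   # source data
copy = BackgroundCopy(mem, src=0x100, dst=0x200, length=0x10)
copy.step(4)                                        # only 4 bytes copied so far
assert copy.read(0x208) == mem[0x108]               # redirected, no waiting
```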
-
Publication No.: US10860215B2
Publication Date: 2020-12-08
Application No.: US16152485
Filing Date: 2018-10-05
Applicant: Arm Limited
Inventor: Radhika Sanjeev Jagtap , Nikos Nikoleris , Andreas Lars Sandberg
IPC: G06F3/06
Abstract: An apparatus comprises control circuitry to control access to a memory implemented using a memory technology providing variable access latency. The control circuitry has request handling circuitry to identify an execution context switch comprising a transition from servicing memory access requests associated with a first execution context to servicing memory access requests associated with a second execution context. At least when the execution context switch meets a predetermined condition, a delay masking action is triggered to control subsequent memory access requests associated with the second execution context, for which the required data is already stored in the memory, to be serviced with a response delay which is independent of which addresses were accessed by the memory access requests associated with the first execution context. This can help guard against attacks which aim to exploit variation in response latency to gain insight into the addresses accessed by a victim execution context.
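A hedged Python sketch of the masking idea: after a context switch that meets some condition, responses for data already in the memory are padded to a fixed latency (here the closed-row latency) so their timing no longer reveals which rows the previous context left open. The latency values, the mask duration and the padding policy are assumptions for illustration.

```python
# Illustrative delay masking: normally an access to an open row is
# faster than one to a closed row; for a while after a qualifying
# context switch every access is padded to the closed-row latency so
# the response delay is independent of the first context's accesses.

OPEN_ROW_LATENCY = 20      # assumed cycles
CLOSED_ROW_LATENCY = 45    # assumed cycles

class MaskingController:
    def __init__(self):
        self.mask_remaining = 0            # accesses still to be masked

    def context_switch(self, meets_condition, mask_count=64):
        if meets_condition:
            self.mask_remaining = mask_count

    def access_latency(self, row_is_open):
        natural = OPEN_ROW_LATENCY if row_is_open else CLOSED_ROW_LATENCY
        if self.mask_remaining > 0:
            self.mask_remaining -= 1
            return CLOSED_ROW_LATENCY      # padded: independent of row state
        return natural

ctrl = MaskingController()
ctrl.context_switch(meets_condition=True)
assert ctrl.access_latency(row_is_open=True) == ctrl.access_latency(row_is_open=False)
```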
-
Publication No.: US10831673B2
Publication Date: 2020-11-10
Application No.: US16181474
Filing Date: 2018-11-06
Applicant: Arm Limited
Inventor: Andreas Lars Sandberg , Nikos Nikoleris , Prakash S. Ramrakhyani
IPC: G06F12/1027
Abstract: Memory address translation apparatus comprises page table access circuitry to access page table data to retrieve translation data defining an address translation between an initial memory address in an initial memory address space, and a corresponding output memory address in an output address space; a translation data buffer to store, for a subset of the virtual address space, one or more instances of the translation data; and control circuitry, responsive to an input initial memory address to be translated, to request retrieval of translation data for the input initial memory address from the translation data buffer and, before completion of processing of the request for retrieval from the translation data buffer, to initiate retrieval of translation data for the input initial memory address by the page table access circuitry.
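A minimal sketch, in Python with asyncio standing in for hardware concurrency, of the overlap the abstract describes: the page table walk for the input address is started before the translation data buffer lookup has completed, so a miss does not pay the full walk latency on top of the buffer access. All timings, tables and helper names are hypothetical.

```python
# Illustrative overlap of translation data buffer lookup and page
# table walk: the walk is started immediately rather than only after
# a buffer miss, hiding walk latency behind the buffer access.
import asyncio

PAGE_TABLE = {0x1234: 0xABCD}        # toy input-page -> output-page mapping
TLB = {}                             # empty: the lookup below will miss

async def tlb_lookup(vpn):
    await asyncio.sleep(0.02)        # assumed slow buffer access
    return TLB.get(vpn)

async def page_walk(vpn):
    await asyncio.sleep(0.03)        # assumed walk latency
    return PAGE_TABLE[vpn]

async def translate(vpn):
    walk = asyncio.create_task(page_walk(vpn))   # started before the lookup completes
    hit = await tlb_lookup(vpn)
    if hit is not None:
        walk.cancel()                # buffer hit: the early walk is abandoned
        return hit
    return await walk                # miss: the walk is already partly done

print(hex(asyncio.run(translate(0x1234))))       # 0xabcd
```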
-
Publication No.: US11263133B2
Publication Date: 2022-03-01
Application No.: US16979624
Filing Date: 2019-03-12
Applicant: Arm Limited
Inventor: Andreas Lars Sandberg , Stephan Diestelhorst , Nikos Nikoleris , Ian Michael Caulfield , Peter Richard Greenhalgh , Frederic Claude Marie Piry , Albin Pierrick Tonnerre
IPC: G06F12/0802
Abstract: Coherency control circuitry (10) supports processing of a safe-speculative-read transaction received from a requesting master device (4). The safe-speculative-read transaction is of a type requesting that target data is returned to a requesting cache (11) of the requesting master device (4) while prohibiting any change in coherency state associated with the target data in other caches (12) in response to the safe-speculative-read transaction. In response, at least when the target data is cached in a second cache associated with a second master device, at least one of the coherency control circuitry (10) and the second cache (12) is configured to return a safe-speculative-read response while maintaining the target data in the same coherency state within the second cache. This helps to mitigate against speculative side-channel attacks.
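A heavily simplified Python sketch of the observable difference the abstract relies on: a safe-speculative-read returns the target data without touching the coherency state held in a second cache, whereas an ordinary read would change it. The state names and the two-entry protocol are illustrative, not the protocol defined by the patent.

```python
# Illustrative contrast: a safe-speculative-read forwards the data but
# leaves the line's coherency state in the second cache untouched, so
# a misspeculated access cannot be observed through that cache's state.

second_cache = {0x80: {"data": 0x42, "state": "Modified"}}

def safe_speculative_read(addr):
    line = second_cache[addr]
    return line["data"]                      # data forwarded, state untouched

def ordinary_read(addr):
    line = second_cache[addr]
    line["state"] = "Shared"                 # a normal read downgrades the line
    return line["data"]

assert safe_speculative_read(0x80) == 0x42
assert second_cache[0x80]["state"] == "Modified"   # no observable side effect
assert ordinary_read(0x80) == 0x42
assert second_cache[0x80]["state"] == "Shared"     # contrast: state now changed
```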
-
Publication No.: US10860495B2
Publication Date: 2020-12-08
Application No.: US16464019
Filing Date: 2017-09-15
Applicant: ARM LIMITED
Inventor: Andreas Hansson , Nikos Nikoleris , Wendy Arnott Elsasser
IPC: G06F12/08 , G06F12/10 , G06F12/0895 , G06F12/0864 , G11C11/16 , G11C11/406
Abstract: Storage circuitry comprises an array of storage locations arranged in rows and columns, a row buffer comprising a plurality of entries each to store information from a storage location at a corresponding column of an active row of the array, and comparison circuitry responsive to a tag-matching command specifying a tag value to compare the tag value with information stored in each of a subset of two or more entries of the row buffer. The comparison circuitry identifies which of the subset of entries, if any, is a matching entry storing information matching the tag value. This allows memory technologies such as DRAM to be used more efficiently as a set-associative cache.
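A short Python sketch of the tag-matching command, under the assumption that the active row's entries each carry a tag alongside their data and that a subset of entry indices forms one cache set; names and layout are illustrative.

```python
# Illustrative tag-matching command: one active row of a DRAM-like
# array is treated as a cache set; the command compares a tag value
# against the tag stored in each of a subset of row-buffer entries
# and reports which entry (way), if any, matched.

def tag_match(row_buffer, tag, subset):
    """row_buffer: list of (tag, data); subset: entry indices forming the set."""
    for way, entry_index in enumerate(subset):
        stored_tag, data = row_buffer[entry_index]
        if stored_tag == tag:
            return way, data            # matching entry found in the active row
    return None                         # miss: no entry in the subset matches

row = [(0x0AA, b"line0"), (0x0BB, b"line1"), (0x0CC, b"line2"), (0x0DD, b"line3")]
assert tag_match(row, tag=0x0CC, subset=[0, 1, 2, 3]) == (2, b"line2")
assert tag_match(row, tag=0x0EE, subset=[0, 1, 2, 3]) is None
```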
-
Publication No.: US10705848B2
Publication Date: 2020-07-07
Application No.: US16018440
Filing Date: 2018-06-26
Applicant: Arm Limited
Inventor: Ilias Vougioukas , Stephan Diestelhorst , Andreas Lars Sandberg , Nikos Nikoleris
Abstract: A TAGE branch predictor has, as its fallback predictor, a perceptron predictor. This provides a branch predictor which reduces the penalty of context switches and branch prediction state flushes.
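A highly reduced Python sketch of the structure the abstract names: tagged TAGE tables are consulted first, and if none hits the prediction comes from a perceptron over recent branch outcomes rather than a simple bimodal base predictor. Table indexing, history hashing and training are omitted; all parameters are assumptions.

```python
# Simplified sketch: consult tagged tables (longest history first);
# on a miss in all of them, fall back to a perceptron predictor.

HISTORY_LEN = 8

def perceptron_predict(weights, history):
    # history: list of +1/-1 outcomes; weights[0] is the bias weight
    s = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
    return s >= 0

def tage_predict(pc, history, tagged_tables, weights):
    tag = pc & 0xFF                                    # hypothetical tag function
    for table in reversed(tagged_tables):              # longest history first
        entry = table.get(tag)
        if entry is not None:
            return entry                               # tagged component hit
    return perceptron_predict(weights, history)        # perceptron fallback

tables = [{}, {}, {}]                                  # all tagged tables miss
weights = [1] + [1] * HISTORY_LEN
history = [+1, +1, -1, +1, +1, +1, -1, +1]
print(tage_predict(pc=0x4000_1234, history=history, tagged_tables=tables, weights=weights))
```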
-
Publication No.: US11657003B2
Publication Date: 2023-05-23
Application No.: US16778040
Filing Date: 2020-01-31
Applicant: Arm Limited
Inventor: Ilias Vougioukas , Nikos Nikoleris , Andreas Lars Sandberg , Stephan Diestelhorst
IPC: G06F12/10 , G06F12/1036 , G06F12/1027 , G06N5/04 , G06F9/48
CPC classification number: G06F12/1036 , G06F9/4806 , G06F12/1027 , G06N5/04 , G06F2212/681
Abstract: Apparatus comprises two or more processing devices each having an associated translation lookaside buffer to store translation data defining address translations between virtual and physical memory addresses, each address translation being associated with a respective virtual address space; and control circuitry to control the transfer of at least a subset of the translation data from the translation lookaside buffer associated with a first processing device to the translation lookaside buffer associated with a second, different, processing device.
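An illustrative Python sketch of the transfer, assuming the subset chosen is every entry belonging to one virtual address space (for example when work in that space migrates between devices); the class, the ASID-keyed layout and the selection policy are assumptions.

```python
# Illustrative transfer of translation data between per-device TLBs:
# a subset of one address space's entries is pushed from the source
# device's TLB into the destination device's TLB so it does not start
# cold after the work moves.

class TLB:
    def __init__(self):
        self.entries = {}            # (asid, virtual_page) -> physical_page

    def insert(self, asid, vpn, ppn):
        self.entries[(asid, vpn)] = ppn

def transfer_translations(src_tlb, dst_tlb, asid):
    """Copy the entries of one address space from src_tlb into dst_tlb."""
    for (space, vpn), ppn in src_tlb.entries.items():
        if space == asid:
            dst_tlb.insert(space, vpn, ppn)

cpu0, cpu1 = TLB(), TLB()
cpu0.insert(asid=7, vpn=0x10, ppn=0x99)
cpu0.insert(asid=3, vpn=0x20, ppn=0x55)
transfer_translations(cpu0, cpu1, asid=7)
assert (7, 0x10) in cpu1.entries and (3, 0x20) not in cpu1.entries
```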
-
Publication No.: US11036639B2
Publication Date: 2021-06-15
Application No.: US15864062
Filing Date: 2018-01-08
Applicant: ARM Limited
IPC: G06F12/00 , G06F12/0864 , G06F12/0862 , G06F12/0895 , G06F12/0897 , G06F12/0888
Abstract: A cache apparatus is provided comprising a data storage structure providing N cache ways that each store data as a plurality of cache blocks. The data storage structure is organised as a plurality of sets, where each set comprises a cache block from each way, and further the data storage structure comprises a first data array and a second data array, where at least the second data array is set associative. A set associative tag storage structure stores a tag value for each cache block, with that set associative tag storage structure being shared by the first and second data arrays. Control circuitry applies an access likelihood policy to determine, for each set, a subset of the cache blocks of that set to be stored within the first data array. Access circuitry is then responsive to an access request to perform a lookup operation within an identified set of the set associative tag storage structure overlapped with an access operation to access within the first data array the subset of the cache blocks for the identified set. In the event of a hit condition being detected that identifies a cache block present in the first data array, that access request is then processed using the cache block accessed within the first data array. If instead a hit condition is detected that identifies a cache block absent in the first data array, then a further access operation is performed to access the identified cache block within a selected way of the second data array. Such a cache structure provides a high performance and energy efficient mechanism for storing cached data.
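A compact Python sketch of the lookup flow, assuming the shared tag structure identifies the hitting way while a small first array holds only the ways an access-likelihood policy selected for each set; a hit on a non-resident way falls through to the larger second array. Sizes, the policy and the data layout are assumptions.

```python
# Illustrative two-array cache lookup: a tag hit on a way resident in
# the fast first array is served directly; a tag hit on any other way
# needs a further access to the set-associative second array.

N_WAYS = 4

def lookup(set_index, tag, tags, fast_resident, fast_array, slow_array):
    """tags[set][way] -> tag; fast_resident[set] -> set of ways in the first array."""
    for way in range(N_WAYS):
        if tags[set_index][way] == tag:                       # tag hit
            if way in fast_resident[set_index]:
                return fast_array[(set_index, way)]           # single fast access
            return slow_array[(set_index, way)]               # further access needed
    return None                                               # miss

tags = {0: {0: 0xAA, 1: 0xBB, 2: 0xCC, 3: 0xDD}}
fast_resident = {0: {1, 2}}                                   # policy chose ways 1 and 2
fast = {(0, 1): "B", (0, 2): "C"}
slow = {(0, 0): "A", (0, 1): "B", (0, 2): "C", (0, 3): "D"}
assert lookup(0, 0xBB, tags, fast_resident, fast, slow) == "B"   # served by first array
assert lookup(0, 0xDD, tags, fast_resident, fast, slow) == "D"   # needs second array
```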