-
1.
Publication No.: US10956206B2
Publication Date: 2021-03-23
Application No.: US16372690
Application Date: 2019-04-02
Applicant: Arm Limited
Inventor: Damien Guillaume Pierre Payet , Lucas Garcia , Natalya Bondarenko , Stefano Ghiggini
IPC: G06F9/46 , G06F12/0817 , G06F12/0893 , G06F11/14
Abstract: A technique is described for managing a cache structure in a system employing transactional memory. The apparatus comprises processing circuitry to perform data processing operations in response to instructions, the processing circuitry comprising transactional memory support circuitry to support execution of a transaction, and a cache structure comprising a plurality of cache entries for storing data for access by the processing circuitry. Each cache entry has an associated allocation tag, and allocation tag control circuitry is provided to control use of a plurality of allocation tags and to maintain an indication of the current state of each of those allocation tags. The transactional memory support circuitry is arranged, when initial data in a chosen cache entry is to be written to during the transaction, to cause a backup copy of the initial data to be stored in a further cache entry and to cause the allocation tag control circuitry to associate with that further cache entry an allocation tag selected for the transaction. The current state of that selected allocation tag is updated to a first state which prevents the processing circuitry from accessing that further cache entry. If the transaction is aborted before reaching a transaction end point, the transactional memory support circuitry causes the chosen cache entry to be invalidated, and the allocation tag control circuitry changes the state of the selected allocation tag to a second state that allows the processing circuitry to access the further cache entry. As a result, a hit can subsequently be detected within the cache structure for the initial data without needing to refetch the initial data into the cache structure, which can give rise to significant performance enhancements.
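To illustrate the backup-entry mechanism, the following is a minimal behavioural sketch in Python. The class and method names (AllocationTagControl, TransactionalCache, transactional_write, abort) are illustrative assumptions, not ARM's hardware or terminology; the sketch only models the tag-state handling that lets the initial data hit after an abort without a refetch.

```python
class AllocationTagControl:
    """Tracks the current state of each allocation tag (assumed two-state model)."""
    HIDDEN = 'hidden'   # first state: entries with this tag cannot be hit
    OPEN = 'open'       # second state: entries with this tag are accessible

    def __init__(self):
        self.state = {}

    def set_state(self, tag, state):
        self.state[tag] = state

    def is_accessible(self, tag):
        return self.state.get(tag, self.OPEN) == self.OPEN


class TransactionalCache:
    def __init__(self, tag_ctrl):
        self.tag_ctrl = tag_ctrl
        self.entries = {}            # key -> (data, allocation_tag, valid)

    def lookup(self, addr):
        entry = self.entries.get(addr)
        if entry and entry[2] and self.tag_ctrl.is_accessible(entry[1]):
            return entry[0]          # hit
        return None                  # miss (entry absent, invalid or hidden)

    def transactional_write(self, addr, new_data, txn_tag):
        """First write to addr inside a transaction: back up the initial data
        into a further entry tagged with the transaction's allocation tag."""
        initial = self.entries.get(addr)
        if initial:
            self.entries[('backup', addr)] = (initial[0], txn_tag, True)
            self.tag_ctrl.set_state(txn_tag, AllocationTagControl.HIDDEN)
        self.entries[addr] = (new_data, 'default', True)

    def abort(self, addr, txn_tag):
        """On abort: invalidate the speculatively written entry and expose the
        backup, so the initial data can still hit without being refetched."""
        self.entries.pop(addr, None)                      # invalidate chosen entry
        backup = self.entries.pop(('backup', addr), None)
        if backup:
            self.entries[addr] = backup
            self.tag_ctrl.set_state(txn_tag, AllocationTagControl.OPEN)
```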
-
2.
Publication No.: US10783031B2
Publication Date: 2020-09-22
Application No.: US16105129
Application Date: 2018-08-20
Applicant: Arm Limited
Inventor: Damien Guillaume Pierre Payet , Lucas Garcia , Natalya Bondarenko , Stefano Ghiggini
Abstract: An apparatus comprises processing circuitry, transactional memory support circuitry and a cache. The processing circuitry processes threads of data processing, and the transactional memory support circuitry supports execution of a transaction within a thread, including tracking a read set of addresses comprising the addresses accessed by read instructions within the transaction. A transaction comprises instructions for which the processing circuitry is configured to prevent commitment of the results of speculatively executed instructions until the transaction has completed. The cache has a plurality of entries, each associated with an address and specifying a replaceable-information value for that address, comprising information for which, outside of a transaction, processing would remain functionally correct even if the information were incorrect. While the transaction is pending, the transactional memory support circuitry identifies, based on an encoding of the replaceable-information values, read-set information identifying addresses in the read set tracked for the transaction.
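The following Python sketch illustrates the idea of overloading the per-entry replaceable information to mark read-set membership. The names (Entry, repl_info, READ_SET_MARK) and the reserved encoding are assumptions for illustration, not the patented circuit.

```python
READ_SET_MARK = 0b111   # reserved encoding assumed for this sketch

class Entry:
    def __init__(self, addr, data=None):
        self.addr = addr
        self.data = data
        self.repl_info = 0          # replaceable information (normally a
                                    # correctness-neutral replacement hint)

class ReadSetTrackingCache:
    def __init__(self):
        self.entries = {}
        self.transaction_pending = False

    def read(self, addr):
        entry = self.entries.setdefault(addr, Entry(addr))
        if self.transaction_pending:
            entry.repl_info = READ_SET_MARK   # record addr in the read set
        return entry.data

    def read_set(self):
        """Addresses in the read set, identified purely from the encoding of
        the replaceable-information values."""
        return [e.addr for e in self.entries.values()
                if e.repl_info == READ_SET_MARK]
```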
-
3.
Publication No.: US12182427B2
Publication Date: 2024-12-31
Application No.: US17966071
Application Date: 2022-10-14
Applicant: Arm Limited
Inventor: Stefano Ghiggini , Natalya Bondarenko , Luca Nassi , Geoffray Matthieu Lacourba , Huzefa Moiz Sanjeliwala , Miles Robert Dooley , Abhishek Raja
IPC: G06F3/06
Abstract: An apparatus is provided for controlling the operating mode of control circuitry, such that the control circuitry may change between two operating modes. In an allocation mode, data that is loaded in response to an instruction is allocated into storage circuitry from an intermediate buffer, and the data is read from the storage circuitry. In a non-allocation mode, the data is not allocated into the storage circuitry and is instead read directly from the intermediate buffer. The operating mode may be controlled by mode control circuitry and may be changed in dependence on the type of instruction that requests the data and on whether the data is expected to be reused in the near future or used only once.
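A behavioural sketch of switching between the two modes follows. The names (ModeController, linefill_buffer) and the streaming-load heuristic are assumptions used for illustration; the patent does not prescribe this particular policy.

```python
class ModeController:
    ALLOCATE = 'allocate'
    NO_ALLOCATE = 'no_allocate'

    def __init__(self):
        self.storage = {}              # storage circuitry (e.g. the cache)
        self.linefill_buffer = {}      # intermediate buffer holding loaded lines

    def select_mode(self, is_streaming_load):
        # Assumed heuristic: data expected to be used only once (e.g. a
        # streaming / non-temporal access) is not worth allocating.
        return self.NO_ALLOCATE if is_streaming_load else self.ALLOCATE

    def load(self, addr, is_streaming_load):
        data = self.linefill_buffer.setdefault(addr, f"line@{addr:#x}")
        if self.select_mode(is_streaming_load) == self.ALLOCATE:
            self.storage[addr] = data          # allocate, then read from storage
            return self.storage[addr]
        return data                            # read directly from the buffer

ctrl = ModeController()
print(ctrl.load(0x1000, is_streaming_load=False))  # allocated into storage
print(ctrl.load(0x2000, is_streaming_load=True))   # served from the buffer only
```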
-
4.
Publication No.: US12045618B2
Publication Date: 2024-07-23
Application No.: US17209515
Application Date: 2021-03-23
Applicant: Arm Limited
Inventor: Natalya Bondarenko , Stefano Ghiggini , Geoffray Matthieu Lacourba , Cédric Denis Robert Airaud
CPC classification number: G06F9/3802 , G06F9/383 , G06N20/00
Abstract: The invention provides a data processing apparatus and a data processing method for generating prefetches of data for use during execution of instructions by processing circuitry. The prefetches that are generated are based on a nested prefetch pattern. The nested prefetch pattern comprises a first pattern and a second pattern. The first pattern is defined by a first address offset between sequentially accessed addresses and a first observed number of the sequentially accessed addresses separated by the first address offset. The second pattern is defined by a second address offset between sequential iterations of the first pattern and a second observed number of the sequential iterations of the first pattern separated by the second address offset.
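As a worked illustration, the sketch below enumerates the addresses covered by such a nested pattern: an inner stride repeated a fixed number of times, itself repeated at an outer stride. The function and parameter names are illustrative, not taken from the patent.

```python
def nested_prefetch_addresses(base, inner_stride, inner_count,
                              outer_stride, outer_count):
    """Yield the addresses covered by the nested pattern, e.g. rows of a
    2-D array accessed with a fixed element stride."""
    for outer in range(outer_count):
        row_base = base + outer * outer_stride        # second pattern: offset
        for inner in range(inner_count):              # between iterations
            yield row_base + inner * inner_stride     # first pattern: inner offset

# Example: 4 iterations of a first pattern of 8 accesses 16 bytes apart,
# with successive iterations 1024 bytes apart.
addrs = list(nested_prefetch_addresses(0x8000, 16, 8, 1024, 4))
print([hex(a) for a in addrs[:10]])
```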
-
5.
Publication No.: US11853220B2
Publication Date: 2023-12-26
Application No.: US17501272
Application Date: 2021-10-14
Applicant: Arm Limited
IPC: G06F12/0862 , G06F12/0891 , G06F13/16 , G06F12/02 , G06F18/214
CPC classification number: G06F12/0862 , G06F12/0238 , G06F12/0891 , G06F13/1621 , G06F13/1668 , G06F18/214
Abstract: An apparatus comprises a cache to store information, items of information in the cache being associated with addresses; cache lookup circuitry to perform lookups in the cache; and a prefetcher to prefetch items of information into the cache in advance of an access request being received for said items of information. The prefetcher selects addresses with which to train itself. In response to determining that a cache lookup specifying a given address has resulted in a hit, and that a cache lookup previously performed in response to a prefetch request issued by the prefetcher for the given address also resulted in a hit, the prefetcher selects the given address as an address to be used for training.
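The sketch below models this training filter in software. The structure (PrefetchTrainer, a per-address record of prefetch-lookup outcomes) is an assumption for illustration rather than the circuitry claimed in the patent.

```python
class PrefetchTrainer:
    def __init__(self):
        self.training_addresses = []
        self.prefetch_lookup_hit = {}   # addr -> did the prefetch-triggered lookup hit?

    def on_prefetch_lookup(self, addr, hit):
        """Record the outcome of the cache lookup performed for a prefetch request."""
        self.prefetch_lookup_hit[addr] = hit

    def on_demand_lookup(self, addr, hit):
        """A demand hit on an address whose earlier prefetch lookup also hit is
        selected as an address with which to train the prefetcher."""
        if hit and self.prefetch_lookup_hit.get(addr, False):
            self.training_addresses.append(addr)
```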
-
6.
Publication No.: US11163691B2
Publication Date: 2021-11-02
Application No.: US16451384
Application Date: 2019-06-25
Applicant: Arm Limited
Inventor: Stefano Ghiggini , Natalya Bondarenko , Damien Guillaume Pierre Payet , Lucas Garcia
IPC: G06F12/10 , G06F12/0862 , G06F12/1027
Abstract: Examples of the present disclosure relate to an apparatus comprising processing circuitry to perform data processing operations, storage circuitry to store data for access by the processing circuitry, address translation circuitry to maintain address translation data for translating virtual memory addresses into corresponding physical memory addresses, and prefetch circuitry. The prefetch circuitry is arranged to prefetch first data into the storage circuitry in anticipation of the first data being required for performing the data processing operations. The prefetching comprises, based on a prediction scheme, predicting a first virtual memory address associated with the first data, accessing the address translation circuitry to determine a first physical memory address corresponding to the first virtual memory address, and retrieving the first data based on the first physical memory address corresponding to the first virtual memory address. The prefetch circuitry is further arranged, based on the prediction scheme, to predict a second virtual memory address associated with second data in anticipation of the second data being prefetched, and to provide the predicted second virtual memory address to the address translation circuitry to enable the address translation circuitry to obtain the address translation data for the second virtual memory address.
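A small Python sketch of the two-step behaviour follows: prefetch the data for the next predicted virtual address, while also handing the following predicted address to the translation circuitry so its translation is ready before that data is prefetched. SimpleTLB, StridePrefetcher and the stride-based prediction scheme are assumptions for illustration only.

```python
class SimpleTLB:
    """Toy translation structure: an identity mapping stands in for a page walk,
    and translations are cached so a later prefetch finds them already present."""
    def __init__(self):
        self.cached = {}

    def translate(self, va):
        self.cached.setdefault(va, va)
        return self.cached[va]


class StridePrefetcher:
    def __init__(self, tlb, memory, storage, stride=64):
        self.tlb = tlb          # address translation circuitry
        self.memory = memory    # backing memory keyed by physical address
        self.storage = storage  # storage circuitry the data is prefetched into
        self.stride = stride    # assumed simple stride-based prediction scheme

    def predict_next(self, va):
        return va + self.stride

    def prefetch(self, current_va):
        first_va = self.predict_next(current_va)             # predict first address
        first_pa = self.tlb.translate(first_va)              # translate it
        self.storage[first_va] = self.memory.get(first_pa)   # retrieve first data

        second_va = self.predict_next(first_va)              # predict second address
        self.tlb.translate(second_va)                        # warm its translation only
```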