-
Publication number: US20240126458A1
Publication date: 2024-04-18
Application number: US17966071
Application date: 2022-10-14
Applicant: Arm Limited
Inventor: Stefano GHIGGINI , Natalya BONDARENKO , Luca NASSI , Geoffray Matthieu LACOURBA , Huzefa Moiz SANJELIWALA , Miles Robert DOOLEY , ABHISHEK RAJA
IPC: G06F3/06
CPC classification number: G06F3/0634 , G06F3/0604 , G06F3/0659 , G06F3/0673
Abstract: An apparatus is provided for controlling the operating mode of control circuitry, such that the control circuitry may change between two operating modes. In an allocation mode, data that is loaded in response to an instruction is allocated into storage circuitry from an intermediate buffer, and the data is read from the storage circuitry. In a non-allocation mode, the data is not allocated to the storage circuitry and is read directly from the intermediate buffer. The control of the operating mode may be performed by mode control circuitry, and the mode may be changed in dependence on the type of instruction that requests the data and on whether the data is likely to be used again in the near future or is expected to be used only once.
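The following is a minimal behavioural sketch, not the patented circuit, of how such a mode switch might work: in allocation mode a loaded line is installed into the storage circuitry from an intermediate buffer, while in non-allocation mode it is served from the buffer only. All class and parameter names (ModeController, reuse_expected, etc.) are illustrative assumptions.

```python
# Sketch of allocation vs. non-allocation operating modes (illustrative only).

class ModeController:
    def __init__(self):
        self.storage = {}            # storage circuitry (e.g. a cache array)
        self.intermediate = {}       # intermediate (linefill) buffer

    def load(self, address, data, reuse_expected):
        """Place freshly loaded data in the intermediate buffer, then either
        allocate it into storage (allocation mode) or leave it in the buffer
        only (non-allocation mode), depending on expected reuse."""
        self.intermediate[address] = data
        if reuse_expected:                       # allocation mode
            self.storage[address] = data
            return self.storage[address]
        return self.intermediate[address]        # non-allocation mode


ctrl = ModeController()
print(ctrl.load(0x100, "line A", reuse_expected=True))   # allocated, read from storage
print(ctrl.load(0x200, "line B", reuse_expected=False))  # read from buffer only
print(0x200 in ctrl.storage)                             # False: never allocated
```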
-
Publication number: US20250021480A1
Publication date: 2025-01-16
Application number: US18350217
Application date: 2023-07-11
Applicant: Arm Limited
Inventor: Natalya BONDARENKO , Stefano GHIGGINI , Kamil GARIFULLIN , Fabian GRUBER , ABHISHEK RAJA , Devin S. LAFFORD
IPC: G06F12/0811 , G06F12/0862 , G06F12/0871
Abstract: Apparatuses, methods, systems, and chip-containing products are disclosed, which relate to an arrangement comprising a level N cache level and a level M cache level, where M is greater than N. The level N cache level comprises a plurality of linefill slots and performs a slot allocation procedure in response to a lookup miss in dependence on a linefill slot occupancy criterion. The slot allocation procedure comprises allocation of an available slot of the plurality of slots to a pending linefill request generated in response to the lookup miss. The level N cache level effects a modification of the slot allocation procedure in dependence on the linefill slot occupancy criterion and is responsive to the linefill slot occupancy criterion being fulfilled to cause a linefill delegation action to be instructed to the level M cache level.
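A minimal sketch of the described arrangement, under assumed semantics: the level N cache holds a fixed pool of linefill slots and, when an occupancy criterion is met, instructs a delegation action to the level M cache instead of allocating a slot. Class names, the threshold parameter, and the return strings are all hypothetical.

```python
# Illustrative linefill-slot allocation with delegation to a lower cache level.

class LevelMCache:
    def delegated_linefill(self, address):
        return f"level-M handles delegated linefill for {hex(address)}"


class LevelNCache:
    def __init__(self, num_slots, occupancy_threshold, level_m):
        self.slots = [None] * num_slots          # pending linefill requests
        self.occupancy_threshold = occupancy_threshold
        self.level_m = level_m

    def lookup_miss(self, address):
        occupied = sum(slot is not None for slot in self.slots)
        if occupied >= self.occupancy_threshold:
            # Occupancy criterion fulfilled: delegate the linefill to level M
            # rather than allocating another slot at level N.
            return self.level_m.delegated_linefill(address)
        free_index = self.slots.index(None)
        self.slots[free_index] = address         # normal slot allocation
        return f"level-N linefill slot {free_index} allocated for {hex(address)}"


cache = LevelNCache(num_slots=4, occupancy_threshold=3, level_m=LevelMCache())
for addr in (0x1000, 0x1040, 0x1080, 0x10C0):
    print(cache.lookup_miss(addr))               # last miss is delegated
```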
-
Publication number: US20220308880A1
Publication date: 2022-09-29
Application number: US17209515
Application date: 2021-03-23
Applicant: Arm Limited
Inventor: Natalya BONDARENKO , Stefano GHIGGINI , Geoffray Matthieu LACOURBA , Cédric Denis Robert AIRAUD
Abstract: The invention provides a data processing apparatus and a data processing method for generating prefetches of data for use during execution of instructions by processing circuitry. The prefetches that are generated are based on a nested prefetch pattern. The nested prefetch pattern comprises a first pattern and a second pattern. The first pattern is defined by a first address offset between sequentially accessed addresses and a first observed number of the sequentially accessed addresses separated by the first address offset. The second pattern is defined by a second address offset between sequential iterations of the first pattern and a second observed number of the sequential iterations of the first pattern separated by the second address offset.
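The nested pattern can be pictured as a two-level stride. The sketch below generates the addresses implied by such a pattern under that assumption; the function name and parameters (first_offset, first_count, second_offset, second_count) mirror the abstract's wording but are otherwise illustrative.

```python
# Address generation from a nested (two-level stride) prefetch pattern.

def nested_prefetch_addresses(base, first_offset, first_count,
                              second_offset, second_count):
    """Yield addresses: the first pattern repeats first_count accesses spaced
    by first_offset; the second pattern repeats that iteration second_count
    times, spacing iterations by second_offset."""
    for outer in range(second_count):
        iteration_base = base + outer * second_offset
        for inner in range(first_count):
            yield iteration_base + inner * first_offset


# E.g. rows of a 2D array: 4 consecutive 64-byte lines per row, rows 1 KiB apart.
for addr in nested_prefetch_addresses(0x8000, first_offset=64, first_count=4,
                                      second_offset=1024, second_count=2):
    print(hex(addr))
```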
-
Publication number: US20200065257A1
Publication date: 2020-02-27
Application number: US16451384
Application date: 2019-06-25
Applicant: Arm Limited
Inventor: Stefano GHIGGINI , Natalya BONDARENKO , Damien Guillaume Pierre PAYET , Lucas GARCIA
IPC: G06F12/10 , G06F12/0862 , G06F12/1027
Abstract: Examples of the present disclosure relate to an apparatus comprising processing circuitry to perform data processing operations, storage circuitry to store data for access by the processing circuitry, address translation circuitry to maintain address translation data for translating virtual memory addresses into corresponding physical memory addresses, and prefetch circuitry. The prefetch circuitry is arranged to prefetch first data into the storage circuitry in anticipation of the first data being required for performing the data processing operations. The prefetching comprises, based on a prediction scheme, predicting a first virtual memory address associated with the first data, accessing the address translation circuitry to determine a first physical memory address corresponding to the first virtual memory address, and retrieving the first data based on the first physical memory address corresponding to the first virtual memory address. The prefetch circuitry is further arranged, based on the prediction scheme, to predict a second virtual memory address associated with second data in anticipation of the second data being prefetched, and to provide the predicted second virtual memory address to the address translation circuitry to enable the address translation circuitry to obtain the address translation data for the second virtual memory address.
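A behavioural sketch of the idea, under illustrative assumptions: the prefetcher translates and fetches a predicted first virtual address, and for a second address further ahead it only asks the translation circuitry to prepare the mapping, so the translation is ready by the time that data is prefetched. The TLB model, stride, and lookahead parameters are invented for the example.

```python
# Prefetcher that warms address translations ahead of the data prefetch stream.

class TLB:
    def __init__(self, page_size=4096):
        self.page_size = page_size
        self.entries = {}

    def translate(self, virtual_address):
        page = virtual_address // self.page_size
        if page not in self.entries:
            # Stand-in for a page-table walk populating the translation data.
            self.entries[page] = 0x100000 + page * self.page_size
        return self.entries[page] + virtual_address % self.page_size


class Prefetcher:
    def __init__(self, tlb, stride, lookahead):
        self.tlb = tlb
        self.stride = stride
        self.lookahead = lookahead   # how far ahead the second prediction runs

    def on_access(self, virtual_address):
        first_va = virtual_address + self.stride
        first_pa = self.tlb.translate(first_va)           # translate, then fetch data
        second_va = virtual_address + self.lookahead * self.stride
        self.tlb.translate(second_va)                     # warm the translation only
        return first_pa


prefetcher = Prefetcher(TLB(), stride=64, lookahead=32)
print(hex(prefetcher.on_access(0x7F0000)))
```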
-
Publication number: US20230244606A1
Publication date: 2023-08-03
Application number: US17592022
Application date: 2022-02-03
Applicant: Arm Limited
Inventor: Geoffray LACOURBA , Luca NASSI , Damien CATHRINE , Stefano GHIGGINI , Albin Pierrick TONNERRE
IPC: G06F12/0862
CPC classification number: G06F12/0862 , G06F2212/602
Abstract: Circuitry comprises a memory system to store data items; cache memory storage to store a copy of one or more data items, the cache memory storage comprising a hierarchy of two or more cache levels; detector circuitry to detect at least a property of data items for storage by the cache memory storage; and control circuitry to control eviction, from a given cache level, of a data item stored by the given cache level, the control circuitry being configured to select a destination to store a data item evicted from the given cache level in response to a detection by the detector circuitry.
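As a rough sketch of the eviction decision, assume the detector records a per-line property (here a hypothetical "reused" flag) and the control circuitry uses it to pick the destination for an evicted line. The data structures and the routing policy are assumptions, not the patented scheme.

```python
# Choosing an eviction destination based on a detected property of the line.

from dataclasses import dataclass

@dataclass
class CacheLine:
    address: int
    data: bytes
    reused: bool   # property recorded by the detector circuitry (assumption)

def select_eviction_destination(line, lower_cache, memory):
    """Send lines the detector marked as reused to a lower cache level,
    and single-use lines straight back to the memory system."""
    if line.reused:
        lower_cache[line.address] = line.data
        return "lower cache"
    memory[line.address] = line.data
    return "memory"


lower_cache, memory = {}, {}
print(select_eviction_destination(CacheLine(0x40, b"hot", True), lower_cache, memory))
print(select_eviction_destination(CacheLine(0x80, b"cold", False), lower_cache, memory))
```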
-
Publication number: US20230121686A1
Publication date: 2023-04-20
Application number: US17501272
Application date: 2021-10-14
Applicant: Arm Limited
IPC: G06F12/0862 , G06F12/0891 , G06F12/02 , G06F13/16 , G06K9/62
Abstract: An apparatus comprises a cache to store information, items of information in the cache being associated with addresses; cache lookup circuitry to perform lookups in the cache; and a prefetcher to prefetch items of information into the cache in advance of an access request being received for said items of information. The prefetcher selects addresses to train the prefetcher. In response to determining that a cache lookup specifying a given address has resulted in a hit and determining that a cache lookup previously performed in response to a prefetch request issued by the prefetcher for the given address resulted in a hit, the prefetcher selects the given address as an address to be used to train the prefetcher.
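A minimal sketch of the training-address selection, assuming a simple filter: an address is chosen for training when a demand lookup hits and an earlier lookup performed for a prefetch request to the same address also hit. The class and method names are illustrative.

```python
# Selecting addresses to train the prefetcher from matching lookup hits.

class TrainingFilter:
    def __init__(self):
        self.prefetch_hit_addresses = set()   # prefetch-initiated lookups that hit
        self.training_addresses = []

    def on_prefetch_lookup(self, address, hit):
        if hit:
            self.prefetch_hit_addresses.add(address)

    def on_demand_lookup(self, address, hit):
        # Demand hit plus an earlier prefetch-lookup hit for the same address
        # selects the address for training.
        if hit and address in self.prefetch_hit_addresses:
            self.training_addresses.append(address)


f = TrainingFilter()
f.on_prefetch_lookup(0x3000, hit=True)
f.on_demand_lookup(0x3000, hit=True)
f.on_demand_lookup(0x4000, hit=True)        # no matching prefetch hit: not selected
print([hex(a) for a in f.training_addresses])   # ['0x3000']
```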
-
Publication number: US20200057692A1
Publication date: 2020-02-20
Application number: US16105129
Application date: 2018-08-20
Applicant: Arm Limited
Inventor: Damien Guillaume Pierre PAYET , Lucas GARCIA , Natalya BONDARENKO , Stefano GHIGGINI
IPC: G06F11/10
Abstract: An apparatus comprises processing circuitry, transactional memory support circuitry and a cache. The processing circuitry processes threads of data processing, and the transactional memory support circuitry supports execution of a transaction within a thread, including tracking a read set of addresses, comprising addresses accessed by read instructions within the transaction. A transaction comprises instructions for which the processing circuitry is configured to prevent commitment of the results of speculatively executed instructions until the transaction has completed. The cache has a plurality of entries, each associated with an address and specifying a replaceable-information value for that address, comprising information for which, outside of a transaction, processing would be functionally correct even if the information were incorrect. While the transaction is pending, the transactional memory support circuitry identifies, based on an encoding of the replaceable-information values, read-set information identifying addresses in the read set tracked for the transaction.
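A very rough sketch of the read-set encoding idea, under an assumed scheme: while a transaction is pending, a sentinel value written into each entry's otherwise replaceable information field marks that entry's address as belonging to the transaction's read set. The sentinel, class names, and encoding are assumptions and not the actual Arm mechanism.

```python
# Encoding read-set membership in a cache entry's replaceable-information field.

IN_READ_SET = -1   # hypothetical sentinel encoding within the replaceable field

class Cache:
    def __init__(self):
        self.entries = {}   # address -> replaceable-information value

    def record_read(self, address, transaction_pending):
        if transaction_pending:
            self.entries[address] = IN_READ_SET     # encode read-set membership
        else:
            self.entries[address] = 0               # ordinary replacement info

    def read_set(self):
        return [addr for addr, value in self.entries.items()
                if value == IN_READ_SET]


cache = Cache()
cache.record_read(0x500, transaction_pending=True)
cache.record_read(0x600, transaction_pending=False)
print([hex(a) for a in cache.read_set()])   # ['0x500']
```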