    CACHE STORAGE TECHNIQUES
    21.
    Published Application

    Publication Number: US20200097410A1

    Publication Date: 2020-03-26

    Application Number: US16139517

    Filing Date: 2018-09-24

    Applicant: Arm Limited

    Abstract: The present disclosure is concerned with improvements to cache systems that can be used to improve the performance (e.g. hit performance) and/or bandwidth within a memory hierarchy. For instance, a data processing apparatus is provided that comprises a cache. Access circuitry receives one or more requests for data and, when the data is present in the cache, the data is returned. Retrieval circuitry retrieves the data and stores the data in the cache, either proactively or in response to the one or more requests for the data. Control circuitry evicts the data from the cache and, in dependence on at least one condition, stores the data in a further cache. The at least one condition comprises a requirement that the data was stored into the cache proactively and that a number of the one or more requests is above a threshold value.
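
    The eviction condition in this abstract can be modelled in software. Below is a minimal behavioural sketch (not the patented circuitry): each cache line records whether it was filled proactively and how many demand requests it has served, and on eviction it is written into a further cache only when both parts of the condition hold. The class names, the FIFO victim choice, and REQUEST_THRESHOLD are illustrative assumptions.

    REQUEST_THRESHOLD = 2  # assumed threshold for the illustration


    class CacheLine:
        def __init__(self, data, proactive):
            self.data = data
            self.proactive = proactive   # filled proactively (e.g. by a prefetcher)?
            self.requests = 0            # demand requests served by this line


    class Cache:
        def __init__(self, capacity, further_cache=None):
            self.capacity = capacity
            self.lines = {}              # address -> CacheLine (FIFO eviction for brevity)
            self.further_cache = further_cache

        def access(self, addr):
            """Access circuitry: return the data on a hit, None on a miss."""
            line = self.lines.get(addr)
            if line is not None:
                line.requests += 1
                return line.data
            return None

        def fill(self, addr, data, proactive):
            """Retrieval circuitry: install data, evicting a victim if full."""
            if addr in self.lines:
                return
            if len(self.lines) >= self.capacity:
                self.evict(next(iter(self.lines)))
            self.lines[addr] = CacheLine(data, proactive)

        def evict(self, addr):
            """Control circuitry: only proactively filled, well-used lines move down."""
            line = self.lines.pop(addr)
            if (self.further_cache is not None
                    and line.proactive
                    and line.requests > REQUEST_THRESHOLD):
                self.further_cache.fill(addr, line.data, proactive=False)


    if __name__ == "__main__":
        further = Cache(capacity=8)
        cache = Cache(capacity=2, further_cache=further)
        cache.fill(0x100, "A", proactive=True)      # proactively filled line
        for _ in range(3):                          # requested more than the threshold
            cache.access(0x100)
        cache.fill(0x200, "B", proactive=False)
        cache.fill(0x300, "C", proactive=True)      # forces eviction of 0x100
        print(0x100 in further.lines)               # True: both conditions were met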

    Predicting a load value for a subsequent load operation

    Publication Number: US12229556B2

    Publication Date: 2025-02-18

    Application Number: US18353345

    Filing Date: 2023-07-17

    Applicant: Arm Limited

    Abstract: Processing circuitry executes load operations, each associated with an identifier. Prediction circuitry receives a given load value associated with a given identifier and makes, in dependence on the given load value, a prediction indicating a predicted load value for a subsequent load operation to be executed by the processing circuitry, together with an ID-delta value indicating a difference between the given identifier and an identifier of the subsequent load operation. The predicted load value is predicted in dependence on at least one occurrence of each of the given load value and the predicted load value during execution of a previously executed sequence of load operations. The prediction circuitry determines the ID-delta value in dependence on a difference between the identifiers associated with those occurrences in the previously executed sequence of load operations.
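
    A rough software analogue of this value-and-delta prediction is sketched below: from a previously executed sequence of (identifier, load value) pairs, a table learns which value followed which and at what identifier distance, then predicts both for a new occurrence of a known value. The table layout and the simple one-pass training are assumptions made for the example, not the claimed structure.

    def train(sequence):
        """Learn successor value and identifier delta from a prior sequence.

        sequence: list of (identifier, load_value) pairs in execution order.
        """
        table = {}
        for (id_a, val_a), (id_b, val_b) in zip(sequence, sequence[1:]):
            # Record that val_a was followed by val_b, id_b - id_a later.
            table[val_a] = (val_b, id_b - id_a)
        return table


    def predict(table, given_id, given_value):
        """Return (predicted_value, predicted_identifier), or None if unknown."""
        entry = table.get(given_value)
        if entry is None:
            return None
        predicted_value, id_delta = entry
        return predicted_value, given_id + id_delta


    if __name__ == "__main__":
        # Previously executed sequence of loads: (identifier, value).
        history = [(10, 0x40), (12, 0x80), (14, 0xC0)]
        table = train(history)
        # A later load with identifier 20 returns 0x40: predict the next load.
        print(predict(table, 20, 0x40))   # (128, 22): value 0x80 expected at id 22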

    Faulting address prediction for prefetch target address

    Publication Number: US11782845B2

    Publication Date: 2023-10-10

    Application Number: US17541007

    Filing Date: 2021-12-02

    Applicant: Arm Limited

    CPC classification number: G06F12/1027

    Abstract: An apparatus comprises memory management circuitry to perform a translation table walk for a target address of a memory access request and to signal a fault in response to the translation table walk identifying a fault condition for the target address; prefetch circuitry to generate a prefetch request to request prefetching of information associated with a prefetch target address to a cache; and faulting address prediction circuitry to predict whether the memory management circuitry would identify the fault condition for the prefetch target address if the translation table walk were performed by the memory management circuitry for that address. In response to a prediction that the fault condition would be identified for the prefetch target address, the prefetch circuitry suppresses the prefetch request and the memory management circuitry prevents the translation table walk from being performed for the prefetch target address of the prefetch request.
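
    The gating described here can be illustrated with a small model: before a prefetch is issued, a faulting-address predictor is consulted, and a predicted fault suppresses both the prefetch request and the translation table walk it would otherwise trigger. The page-granular predictor and its interfaces are assumptions for the sketch rather than the structures claimed in the patent.

    PAGE_SHIFT = 12  # 4 KiB pages assumed


    class FaultingAddressPredictor:
        def __init__(self):
            self.faulting_pages = set()

        def record_fault(self, addr):
            self.faulting_pages.add(addr >> PAGE_SHIFT)

        def predicts_fault(self, addr):
            return (addr >> PAGE_SHIFT) in self.faulting_pages


    class Prefetcher:
        def __init__(self, predictor, walk_page_table):
            self.predictor = predictor
            self.walk_page_table = walk_page_table   # callable: addr -> True if it faults

        def issue(self, prefetch_addr):
            if self.predictor.predicts_fault(prefetch_addr):
                return "suppressed"                  # no prefetch, no table walk
            if self.walk_page_table(prefetch_addr):
                self.predictor.record_fault(prefetch_addr)
                return "walk faulted"
            return "prefetched"


    if __name__ == "__main__":
        def fake_walk(addr):
            return addr >= 0x8000_0000               # pretend high addresses are unmapped

        pf = Prefetcher(FaultingAddressPredictor(), fake_walk)
        print(pf.issue(0x8000_1000))                 # walk faulted (and remembered)
        print(pf.issue(0x8000_1040))                 # suppressed: same page predicted to fault
        print(pf.issue(0x0000_2000))                 # prefetched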

    Producer prefetch filter
    24.
    Granted Patent

    Publication Number: US11775440B2

    Publication Date: 2023-10-03

    Application Number: US17579842

    Filing Date: 2022-01-20

    Applicant: Arm Limited

    CPC classification number: G06F12/0862 G06F2212/1024 G06F2212/602

    Abstract: Indirect prefetch circuitry initiates a producer prefetch requesting return of producer data having a producer address and at least one consumer prefetch to request prefetching of consumer data having a consumer address derived from the producer data. A producer prefetch filter table stores producer filter entries indicative of previous producer addresses of previous producer prefetches. Initiation of a requested producer prefetch for producer data having a requested producer address is suppressed when a lookup of the producer prefetch filter table determines that the requested producer address hits against a producer filter entry of the table. The lookup of the producer prefetch filter table for the requested producer address depends on a subset of bits of the requested producer address including at least one bit which distinguishes different chunks of data within a same cache line.
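
    The filtering idea can be sketched as follows: a small table remembers which producer addresses have already had a producer prefetch issued, and the lookup tag keeps at least one bit below the cache-line offset so that two different chunks in the same cache line are tracked separately. The 8-byte chunk size, direct-mapped layout, and table size are assumptions made for the example.

    CHUNK_SHIFT = 3      # 8-byte producer-data chunks within a 64-byte line (assumed)
    TABLE_ENTRIES = 64   # assumed filter size


    class ProducerPrefetchFilter:
        def __init__(self):
            self.entries = [None] * TABLE_ENTRIES

        @staticmethod
        def _tag(addr):
            # Keep the chunk-within-line bits: addresses in the same cache line
            # but in different chunks produce different tags.
            return addr >> CHUNK_SHIFT

        def hits(self, addr):
            tag = self._tag(addr)
            return self.entries[tag % TABLE_ENTRIES] == tag

        def insert(self, addr):
            tag = self._tag(addr)
            self.entries[tag % TABLE_ENTRIES] = tag


    def maybe_issue_producer_prefetch(filter_table, producer_addr, issue):
        """Suppress the producer prefetch when the filter already holds its address."""
        if filter_table.hits(producer_addr):
            return False
        filter_table.insert(producer_addr)
        issue(producer_addr)
        return True


    if __name__ == "__main__":
        filt = ProducerPrefetchFilter()
        issued = []
        maybe_issue_producer_prefetch(filt, 0x1000, issued.append)   # issued
        maybe_issue_producer_prefetch(filt, 0x1000, issued.append)   # filtered out (repeat)
        maybe_issue_producer_prefetch(filt, 0x1008, issued.append)   # same line, other chunk
        print([hex(a) for a in issued])                              # ['0x1000', '0x1008']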

    Prefetching at dynamically determined offsets

    Publication Number: US11416404B2

    Publication Date: 2022-08-16

    Application Number: US16743399

    Filing Date: 2020-01-15

    Applicant: Arm Limited

    Abstract: There is provided a data processing apparatus comprising table circuitry to store a table that indicates, for a program counter value of an instruction that performs a memory access operation at a memory address, one or more offsets of the memory address and an associated confidence for each of the one or more offsets. Prefetch circuitry prefetches data based on each of the offsets in dependence on the associated confidence. Each of the offsets of the memory address is dynamically determined.
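
    The per-program-counter offset table can be sketched in a few lines: for each program counter, observed address deltas train (offset, confidence) entries, and a prefetch is issued for every offset whose confidence has reached a threshold. The training policy, the confidence threshold, and the per-entry limit are assumptions chosen for the illustration.

    from collections import defaultdict

    CONFIDENCE_THRESHOLD = 2     # assumed
    MAX_OFFSETS_PER_PC = 4       # assumed


    class OffsetTable:
        def __init__(self):
            self.table = defaultdict(dict)   # pc -> {offset: confidence}
            self.last_addr = {}              # pc -> last accessed address

        def train_and_prefetch(self, pc, addr, issue_prefetch):
            offsets = self.table[pc]

            # Dynamically determine an offset from this PC's previous access.
            prev = self.last_addr.get(pc)
            if prev is not None:
                offset = addr - prev
                if offset != 0 and (offset in offsets or len(offsets) < MAX_OFFSETS_PER_PC):
                    offsets[offset] = offsets.get(offset, 0) + 1
            self.last_addr[pc] = addr

            # Prefetch at every offset whose confidence is high enough.
            for offset, confidence in offsets.items():
                if confidence >= CONFIDENCE_THRESHOLD:
                    issue_prefetch(addr + offset)


    if __name__ == "__main__":
        prefetches = []
        table = OffsetTable()
        for addr in (0x100, 0x140, 0x180, 0x1C0):    # one PC striding by 0x40
            table.train_and_prefetch(pc=0x401000, addr=addr,
                                     issue_prefetch=prefetches.append)
        print([hex(a) for a in prefetches])          # ['0x1c0', '0x200']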

    PREFETCHING AT DYNAMICALLY DETERMINED OFFSETS

    Publication Number: US20210216461A1

    Publication Date: 2021-07-15

    Application Number: US16743399

    Filing Date: 2020-01-15

    Applicant: Arm Limited

    Abstract: There is provided a data processing apparatus comprising table circuitry to store a table that indicates, for a program counter value of an instruction that performs a memory access operation at a memory address, one or more offsets of the memory address and an associated confidence for each of the one or more offsets. Prefetch circuitry prefetches data based on each of the offsets in dependence on the associated confidence. Each of the offsets of the memory address is dynamically determined.

    Cache storage techniques
    27.
    Granted Patent

    Publication Number: US10810126B2

    Publication Date: 2020-10-20

    Application Number: US16139517

    Filing Date: 2018-09-24

    Applicant: Arm Limited

    Abstract: The present disclosure is concerned with improvements to cache systems that can be used to improve the performance (e.g. hit performance) and/or bandwidth within a memory hierarchy. For instance, a data processing apparatus is provided that comprises a cache. Access circuitry receives one or more requests for data and, when the data is present in the cache, the data is returned. Retrieval circuitry retrieves the data and stores the data in the cache, either proactively or in response to the one or more requests for the data. Control circuitry evicts the data from the cache and, in dependence on at least one condition, stores the data in a further cache. The at least one condition comprises a requirement that the data was stored into the cache proactively and that a number of the one or more requests is above a threshold value.

    Storage circuitry request tracking
    28.
    Granted Patent

    Publication Number: US10776043B2

    Publication Date: 2020-09-15

    Application Number: US16118610

    Filing Date: 2018-08-31

    Applicant: Arm Limited

    Abstract: Storage circuitry designed to form part of a memory hierarchy is provided. The storage circuitry comprises receiver circuitry for receiving a request to obtain data from the memory hierarchy. Transfer circuitry causes the data to be stored at a selected destination in response to the request, the destination being selected in dependence on at least one selection condition. Tracker circuitry tracks the request while the request is unresolved. If the at least one selection condition is met, the destination is the storage circuitry itself; otherwise, the destination is other storage circuitry in the memory hierarchy.
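
    The request-tracking flow can be modelled briefly: a request is tracked while it is unresolved, and the destination for the returned data is chosen when the request is received, according to a selection condition. The specific condition used below (spare tracker capacity) is an assumption for the illustration, not the condition defined in the patent.

    class StorageCircuitry:
        def __init__(self, name, next_level=None, tracker_capacity=4):
            self.name = name
            self.next_level = next_level
            self.tracker_capacity = tracker_capacity
            self.trackers = {}    # addr -> chosen destination, while unresolved
            self.contents = {}

        def request(self, addr):
            """Receiver circuitry: accept a request and pick where the data will land."""
            destination = self if self._selection_condition() else self.next_level
            self.trackers[addr] = destination    # tracker circuitry: request in flight
            return destination

        def data_returned(self, addr, data):
            """Transfer circuitry: resolve the tracked request and fill the destination."""
            destination = self.trackers.pop(addr)
            destination.contents[addr] = data

        def _selection_condition(self):
            # Assumed condition: keep the data here only while tracker slots remain.
            return self.next_level is None or len(self.trackers) < self.tracker_capacity


    if __name__ == "__main__":
        lower = StorageCircuitry("L3")
        upper = StorageCircuitry("L2", next_level=lower, tracker_capacity=1)
        upper.request(0x10)               # condition met: data destined for L2
        upper.request(0x20)               # trackers full: data destined for L3 instead
        upper.data_returned(0x10, "A")
        upper.data_returned(0x20, "B")
        print(sorted(upper.contents), sorted(lower.contents))   # [16] [32]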
