    Apparatus and method for controlling allocation of information into a cache storage

    Publication Number: US11294828B2

    Publication Date: 2022-04-05

    Application Number: US16412674

    Application Date: 2019-05-15

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided for controlling allocation of information into a cache storage. The apparatus has processing circuitry for executing instructions, and for allowing speculative execution of one or more of those instructions. A cache storage is also provided having a plurality of entries to store information for reference by the processing circuitry, and cache control circuitry is used to control the cache storage, the cache control circuitry comprising a speculative allocation tracker having a plurality of tracking entries. The cache control circuitry is responsive to a speculative request associated with the speculative execution, requiring identified information to be allocated into a given entry of the cache storage, to allocate a tracking entry in the speculative allocation tracker for the speculative request before allowing the identified information to be allocated into the given entry of the cache storage. The allocated tracking entry is employed to maintain restore information sufficient to enable the given entry to be restored to an original state that existed prior to the identified information being allocated into the given entry. The cache control circuitry is further responsive to a mis-speculation condition being detected in respect of the speculative request, to employ the restore information maintained in the allocated tracking entry for that speculative request in order to restore the given entry in the cache storage to the original state. Such an approach can provide robust protection against speculation-based cache timing side-channel attacks whilst alleviating the performance and/or power consumption issues associated with known techniques.
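
    As a rough illustration of the mechanism this abstract describes, the C++ sketch below models a cache whose speculative fills are recorded in a tracker and undone on mis-speculation. It is a software sketch under assumed names (SpeculativeCache, Tracking, request_id), not Arm's hardware design.

        #include <cstddef>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        // Software model of the idea only; all names are invented for clarity.
        struct CacheEntry {
            bool     valid = false;
            uint64_t tag   = 0;
            uint64_t data  = 0;
        };

        class SpeculativeCache {
        public:
            explicit SpeculativeCache(std::size_t entries) : entries_(entries) {}

            // Speculative allocation: capture the victim entry's original
            // state in a tracking entry before overwriting it.
            void allocate_speculative(uint64_t request_id, std::size_t index,
                                      uint64_t tag, uint64_t data) {
                tracker_[request_id] = Tracking{index, entries_[index]};
                entries_[index] = CacheEntry{true, tag, data};
            }

            // Speculation resolved as correct: the restore information is
            // no longer needed, so the tracking entry is released.
            void commit(uint64_t request_id) { tracker_.erase(request_id); }

            // Mis-speculation detected: restore the entry to the state it
            // had before the speculative allocation.
            void squash(uint64_t request_id) {
                auto it = tracker_.find(request_id);
                if (it != tracker_.end()) {
                    entries_[it->second.index] = it->second.original;
                    tracker_.erase(it);
                }
            }

        private:
            struct Tracking {
                std::size_t index;
                CacheEntry  original;  // restore information
            };
            std::vector<CacheEntry>                entries_;
            std::unordered_map<uint64_t, Tracking> tracker_;
        };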

    Prefetching techniques
    Invention application

    Publication Number: US20200097409A1

    Publication Date: 2020-03-26

    Application Number: US16139160

    Application Date: 2018-09-24

    Applicant: Arm Limited

    Abstract: A variety of data processing apparatuses are provided in which stride determination circuitry determines a stride value as the difference between a current address and a previously received address. Stride storage circuitry stores an association between the stride values determined by the stride determination circuitry and the frequency with which each occurs during a training period. Prefetch circuitry causes a further data value to be proactively retrieved from a further address, which is the current address modified by the stride value in the stride storage circuitry having the highest frequency during the training period. The apparatuses are directed towards improving efficiency by variously disregarding certain candidate stride values, considering additional further addresses for prefetching by using multiple stride values, using feedback to adjust the training process, and compensating for page table boundaries.
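
    The sketch below illustrates the training idea in this abstract in C++: count how often each stride is observed during a training period, then prefetch from the current address plus the most frequent stride. The class name, training length, and interface are assumptions for the example, not the patented circuitry.

        #include <cstdint>
        #include <map>

        class FrequencyStridePrefetcher {
        public:
            // Returns an address to prefetch, or 0 while still training.
            uint64_t observe(uint64_t address) {
                if (have_previous_) {
                    int64_t stride = static_cast<int64_t>(address - previous_);
                    if (training_left_ > 0) {
                        ++frequency_[stride];   // association of stride and frequency
                        --training_left_;
                    }
                }
                previous_ = address;
                have_previous_ = true;
                if (training_left_ > 0) return 0;

                // Pick the stride with the highest frequency seen in training.
                int64_t best = 0;
                unsigned best_count = 0;
                for (const auto& [stride, count] : frequency_) {
                    if (count > best_count) { best = stride; best_count = count; }
                }
                return best_count ? address + best : 0;
            }

        private:
            std::map<int64_t, unsigned> frequency_;
            uint64_t previous_ = 0;
            bool     have_previous_ = false;
            int      training_left_ = 16;  // length of the training period (assumed)
        };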

    Hardware resource configuration for processing system

    Publication Number: US11966785B2

    Publication Date: 2024-04-23

    Application Number: US16943117

    Application Date: 2020-07-30

    Applicant: Arm Limited

    CPC classification number: G06F9/5044; G06F9/5038; G06F9/505; G06N5/04; G06N20/00

    Abstract: A method for controlling hardware resource configuration for a processing system comprises obtaining performance monitoring data indicative of processing performance associated with workloads to be executed on the processing system, providing a trained machine learning model with input data depending on the performance monitoring data, and, based on an inference made from the input data by the trained machine learning model, setting control information for configuring the processing system to control an amount of hardware resource allocated for use by at least one processor core. A corresponding method of training the model is provided. This is particularly useful for controlling inter-core borrowing of resource in a multi-core processing system, e.g. between cores on different layers of a 3D integrated circuit.
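
    A hedged C++ sketch of the control loop this abstract describes follows: performance monitoring data is fed to a trained model, and the inference sets how much hardware resource (here, cache ways) each core receives. The counter names, the TrainedModel interface, and its placeholder rule are all invented for illustration.

        #include <cstddef>
        #include <vector>

        struct PerformanceSample {
            double instructions_per_cycle;
            double cache_miss_rate;
            double memory_bandwidth_utilisation;
        };

        // Stand-in for a trained model mapping performance data to a resource
        // share; the rule inside is a placeholder, not a real inference.
        class TrainedModel {
        public:
            std::size_t infer(const PerformanceSample& s) const {
                return s.cache_miss_rate > 0.10 ? 8 : 2;
            }
        };

        // Apply the inferred configuration to each core's hardware resources.
        void configure_cores(const std::vector<PerformanceSample>& samples,
                             const TrainedModel& model,
                             std::vector<std::size_t>& ways_per_core) {
            ways_per_core.resize(samples.size());
            for (std::size_t core = 0; core < samples.size(); ++core) {
                ways_per_core[core] = model.infer(samples[core]);
            }
        }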

    Multiple stride prefetching
    Invention grant

    Publication Number: US10769070B2

    Publication Date: 2020-09-08

    Application Number: US16140625

    Application Date: 2018-09-25

    Applicant: Arm Limited

    Abstract: Apparatuses and methods for prefetch generation are disclosed. Prefetching circuitry receives addresses specified by load instructions and can cause retrieval of a data value from an address before that address is received. Stride determination circuitry determines stride values as a difference between a current address and a previously received address. Plural stride values corresponding to a sequence of received addresses are determined. Multiple stride storage circuitry stores the plurality of stride values determined by the stride determination circuitry. New address comparison circuitry determines whether a current address corresponds to a matching stride value based on the plurality of stride values stored in the multiple stride storage circuitry. Prefetch initiation circuitry can cause a data value to be retrieved from a further address, wherein the further address is the current address modified by the matching stride value of the plurality of stride values. By the use of multiple stride values, more complex load address patterns can be prefetched.
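
    The C++ sketch below is one way to picture the mechanism in this abstract: several recently observed strides are stored, and a prefetch is issued when the newest stride matches one of them. The storage capacity and interface are assumptions rather than the patented design.

        #include <cstddef>
        #include <cstdint>
        #include <deque>

        class MultiStridePrefetcher {
        public:
            // Returns the prefetch address, or 0 if no stored stride matched.
            uint64_t observe(uint64_t address) {
                uint64_t prefetch = 0;
                if (have_previous_) {
                    int64_t stride = static_cast<int64_t>(address - previous_);
                    for (int64_t stored : strides_) {
                        if (stored == stride) {       // matching stride value
                            prefetch = address + stride;
                            break;
                        }
                    }
                    strides_.push_back(stride);       // multiple stride storage
                    if (strides_.size() > kMaxStrides) strides_.pop_front();
                }
                previous_ = address;
                have_previous_ = true;
                return prefetch;
            }

        private:
            static constexpr std::size_t kMaxStrides = 4;  // assumed capacity
            std::deque<int64_t> strides_;
            uint64_t previous_ = 0;
            bool have_previous_ = false;
        };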

    Technique for predicting behaviour of control flow instructions

    Publication Number: US12182574B2

    Publication Date: 2024-12-31

    Application Number: US18312052

    Application Date: 2023-05-04

    Applicant: Arm Limited

    Abstract: An apparatus is provided having pointer storage to store pointer values for a plurality of pointers, with the pointer values of the pointers being differentially incremented in response to a series of increment events. Tracker circuitry maintains a plurality of tracker entries, each tracker entry identifying a control flow instruction and a current active pointer (from amongst the pointers) to be associated with that control flow instruction. Cache circuitry maintains a plurality of cache entries, each cache entry storing a resolved behaviour of an instance of a control flow instruction identified by a tracker entry along with an associated tag value generated when the resolved behaviour was allocated into that cache entry. For a given entry the associated tag value may be generated in dependence on an address indication of the control flow instruction whose resolved behaviour is being stored in that entry and the current active pointer associated with that control flow instruction. Prediction circuitry is responsive to a prediction trigger associated with a replay of a given instance of a given control flow instruction identified by a tracker entry, to cause a lookup operation to be performed by the cache circuitry using a comparison tag value generated in dependence on the address indication of the given control flow instruction and the current active pointer. In the event of a hit being detected in a given cache entry, the resolved behaviour stored in the given cache entry is used as the predicted behaviour of the given instance of the given control flow instruction, provided a prediction confidence metric is met.
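
    As an illustrative C++ sketch of the tagging idea in this abstract, the tag of a cached resolved behaviour combines the branch address with the currently active pointer value, so a replayed instance only hits on a prediction recorded under the same pointer value. The hash, table size, and confidence thresholds are assumptions for the example.

        #include <cstddef>
        #include <cstdint>
        #include <unordered_map>

        struct PredictionEntry {
            uint64_t tag;
            bool     taken;       // resolved behaviour
            unsigned confidence;  // prediction confidence metric
        };

        // Tag generated from the branch address and the active pointer.
        static uint64_t make_tag(uint64_t branch_pc, uint64_t active_pointer) {
            return branch_pc ^ (active_pointer * 0x9E3779B97F4A7C15ULL);
        }

        class ReplayPredictor {
        public:
            void record(uint64_t branch_pc, uint64_t active_pointer, bool taken) {
                uint64_t tag = make_tag(branch_pc, active_pointer);
                auto& e = cache_[tag % kEntries];
                if (e.tag == tag) {
                    e.taken = taken;
                    if (e.confidence < 3) ++e.confidence;
                } else {
                    e = PredictionEntry{tag, taken, 1};
                }
            }

            // Returns true and sets `taken` only on a sufficiently confident hit.
            bool predict(uint64_t branch_pc, uint64_t active_pointer, bool& taken) const {
                uint64_t tag = make_tag(branch_pc, active_pointer);
                auto it = cache_.find(tag % kEntries);
                if (it == cache_.end() || it->second.tag != tag) return false;
                if (it->second.confidence < 2) return false;  // confidence not met
                taken = it->second.taken;
                return true;
            }

        private:
            static constexpr std::size_t kEntries = 256;
            std::unordered_map<uint64_t, PredictionEntry> cache_;
        };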

    System, device and/or process for hashing

    Publication Number: US11640381B2

    Publication Date: 2023-05-02

    Application Number: US16519498

    Application Date: 2019-07-23

    Applicant: Arm Limited

    Abstract: Briefly, example methods, apparatuses, devices, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more processing devices to facilitate and/or support one or more operations and/or techniques to access entries in a hash table. In a particular implementation, a hash operation may be selected from between or among multiple hash operations to map key values to entries in a hash table.
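
    A minimal C++ sketch of the idea in this abstract follows: a table keeps several candidate hash operations and maps keys with whichever one is currently selected. The two hash functions and the selection interface are invented for illustration, not the disclosed implementation.

        #include <cstddef>
        #include <cstdint>
        #include <functional>
        #include <vector>

        class SelectableHashTable {
        public:
            explicit SelectableHashTable(std::size_t buckets)
                : buckets_(buckets),
                  hashes_{ [](uint64_t k) { return k * 0x9E3779B97F4A7C15ULL; },
                           [](uint64_t k) { return (k >> 7) ^ (k << 11) ^ k; } } {}

            // Switch to a different hash operation, e.g. if the current one
            // produces too many collisions for the observed key distribution.
            void select_hash(std::size_t which) { selected_ = which % hashes_.size(); }

            // Map a key value to a hash-table entry with the selected operation.
            std::size_t bucket_for(uint64_t key) const {
                return hashes_[selected_](key) % buckets_;
            }

        private:
            std::size_t buckets_;
            std::vector<std::function<uint64_t(uint64_t)>> hashes_;
            std::size_t selected_ = 0;
        };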

    Flushing a fetch queue using predecode circuitry and prediction information

    Publication Number: US11599361B2

    Publication Date: 2023-03-07

    Application Number: US17315737

    Application Date: 2021-05-10

    Applicant: Arm Limited

    Abstract: A data processing apparatus is provided. It includes control flow detection prediction circuitry that performs a presence prediction of whether a block of instructions contains a control flow instruction. A fetch queue stores, in association with prediction information, a queue of indications of the instructions, and the prediction information comprises the presence prediction. An instruction cache stores fetched instructions that have been fetched according to the fetch queue. Post-fetch correction circuitry receives the fetched instructions prior to the fetched instructions being received by decode circuitry. The post-fetch correction circuitry includes analysis circuitry that causes the fetch queue to be at least partly flushed in dependence on a type of a given fetched instruction and the prediction information associated with the given fetched instruction.
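
    The C++ sketch below illustrates the correction step this abstract describes: each fetch-queue entry carries its presence prediction, and a mismatch against the predecoded instruction type flushes the younger part of the queue. The types and the exact flush policy shown are assumptions for the example.

        #include <cstddef>
        #include <cstdint>
        #include <deque>

        enum class InstrType { Other, ControlFlow };

        struct FetchEntry {
            uint64_t address;
            bool     predicted_control_flow;  // presence prediction
        };

        // Returns true if the queue was flushed behind the checked entry.
        bool post_fetch_check(std::deque<FetchEntry>& fetch_queue,
                              std::size_t position, InstrType predecoded_type) {
            const FetchEntry& e = fetch_queue[position];
            bool is_control_flow = (predecoded_type == InstrType::ControlFlow);
            if (is_control_flow == e.predicted_control_flow) {
                return false;  // prediction consistent with the fetched instruction
            }
            // Mismatch: everything fetched after this entry may be on the wrong
            // path, so discard it and let fetch resume from the corrected point.
            fetch_queue.erase(fetch_queue.begin() + position + 1, fetch_queue.end());
            return true;
        }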

    Updating keys used for encryption of storage circuitry

    Publication Number: US12244709B2

    Publication Date: 2025-03-04

    Application Number: US16550598

    Application Date: 2019-08-26

    Applicant: Arm Limited

    Abstract: A data processing apparatus is provided that includes storage circuitry. Communication circuitry responds to an access request comprising a requested index with an access response comprising requested data. Coding circuitry performs a coding operation using a current key either to translate the requested index to an encoded index of the storage circuitry at which the requested data is stored, or to translate encoded data stored at the requested index of the storage circuitry to the requested data. The current key is based on an execution environment. In response to the current key being changed, update circuitry updates either the encoded index of the storage circuitry at which the requested data is stored or the encoded data.
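
    As a software sketch of the index-coding variant in this abstract, the example below permutes storage indices with a key-dependent coding and, when the key changes, moves every entry to its position under the new key. The additive permutation stands in for the real coding operation and is an assumption for illustration.

        #include <cstddef>
        #include <cstdint>
        #include <utility>
        #include <vector>

        class KeyedStorage {
        public:
            explicit KeyedStorage(std::size_t size, uint64_t key)
                : data_(size, 0), key_(key) {}

            // Key-dependent index coding (a simple permutation for the sketch).
            std::size_t encode(std::size_t index) const {
                return static_cast<std::size_t>((index + key_) % data_.size());
            }

            uint64_t read(std::size_t index) const { return data_[encode(index)]; }
            void write(std::size_t index, uint64_t value) { data_[encode(index)] = value; }

            // Update step: when the key changes, move each entry from its
            // position under the old key to its position under the new key.
            void update_key(uint64_t new_key) {
                std::vector<uint64_t> remapped(data_.size(), 0);
                for (std::size_t index = 0; index < data_.size(); ++index) {
                    std::size_t dest =
                        static_cast<std::size_t>((index + new_key) % data_.size());
                    remapped[dest] = data_[encode(index)];
                }
                data_ = std::move(remapped);
                key_ = new_key;
            }

        private:
            std::vector<uint64_t> data_;
            uint64_t key_;
        };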

    Prefetch mechanism for a cache structure

    Publication Number: US11526356B2

    Publication Date: 2022-12-13

    Application Number: US16887442

    Application Date: 2020-05-29

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided, the apparatus comprising a processor pipeline to execute instructions, a cache structure to store information for reference by the processor pipeline when executing those instructions, and prefetch circuitry to issue prefetch requests to the cache structure to cause the cache structure to prefetch information into the cache structure in anticipation of a demand request for that information being issued to the cache structure by the processor pipeline. The processor pipeline is arranged to issue a trigger to the prefetch circuitry on detection of a given event that will result in a reduced level of demand requests being issued by the processor pipeline, and the prefetch circuitry is configured to control issuing of prefetch requests in dependence on reception of the trigger.
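
    The C++ sketch below gives one possible reading of this abstract: the pipeline signals the prefetcher when demand requests are about to drop, and the prefetcher adjusts how many prefetch requests it issues per cycle. The direction of the adjustment and the budgets are assumptions; the abstract only states that issuing is controlled in dependence on the trigger.

        #include <cstdint>
        #include <queue>

        class TriggeredPrefetcher {
        public:
            void enqueue(uint64_t prefetch_address) { pending_.push(prefetch_address); }

            // Called by the pipeline when it detects an event (e.g. a stall)
            // that will reduce the rate of demand requests it issues.
            void on_reduced_demand_trigger() { reduced_demand_ = true; }
            void on_demand_resumed() { reduced_demand_ = false; }

            // Issue up to a per-cycle budget of prefetch requests to the cache;
            // the budget depends on whether the trigger has been received.
            template <typename IssueFn>
            void tick(IssueFn issue_to_cache) {
                unsigned budget = reduced_demand_ ? 4 : 1;  // assumed budgets
                for (unsigned i = 0; i < budget && !pending_.empty(); ++i) {
                    issue_to_cache(pending_.front());
                    pending_.pop();
                }
            }

        private:
            std::queue<uint64_t> pending_;
            bool reduced_demand_ = false;
        };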
