APPARATUS AND METHOD FOR SPECULATIVE EXECUTION OF INSTRUCTIONS

    Publication number: US20210019150A1

    Publication date: 2021-01-21

    Application number: US16514124

    Application date: 2019-07-17

    Applicant: Arm Limited

    Abstract: Apparatuses for data processing and methods of data processing are provided. A data processing apparatus performs data processing operations in response to a sequence of instructions, including performing speculative execution of at least some of the sequence of instructions. In response to a branch instruction, the data processing apparatus predicts whether the branch is taken or not taken, and further speculative instruction execution is based on that prediction. A path speculation cost is calculated in dependence on a number of recently flushed instructions, and a rate at which speculatively executed instructions are issued may be modified based on the path speculation cost.
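
    The following Python sketch is purely illustrative of the idea summarised in the abstract and is not the claimed hardware: it accumulates a path speculation cost from instructions flushed by recent mispredictions and halves the speculative issue rate when that cost exceeds a threshold. The class name, window size, and threshold are assumptions made for the example.

```python
from collections import deque

# Minimal model (assumed policy, not the patented circuit) of throttling
# speculative issue based on a path speculation cost derived from the
# number of instructions flushed by recent branch mispredictions.
class SpeculationThrottle:
    def __init__(self, window=8, cost_threshold=32, max_issue_rate=4):
        self.recent_flushes = deque(maxlen=window)  # flush counts of recent mispredictions
        self.cost_threshold = cost_threshold
        self.max_issue_rate = max_issue_rate

    def record_flush(self, flushed_instruction_count):
        # Called whenever a misprediction flushes speculatively executed instructions.
        self.recent_flushes.append(flushed_instruction_count)

    def path_speculation_cost(self):
        # The cost grows with the number of recently flushed instructions.
        return sum(self.recent_flushes)

    def issue_rate(self):
        # Reduce the rate at which speculative instructions are issued when the cost is high.
        if self.path_speculation_cost() > self.cost_threshold:
            return max(1, self.max_issue_rate // 2)
        return self.max_issue_rate

throttle = SpeculationThrottle()
throttle.record_flush(20)
throttle.record_flush(25)
print(throttle.issue_rate())  # cost 45 exceeds 32, so the issue rate drops from 4 to 2
```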

    TECHNIQUE FOR DETERMINING ADDRESS TRANSLATION DATA TO BE STORED WITHIN AN ADDRESS TRANSLATION CACHE

    Publication number: US20190188149A1

    Publication date: 2019-06-20

    Application number: US15848397

    Application date: 2017-12-20

    Applicant: Arm Limited

    Abstract: An apparatus and method are provided for determining address translation data to be stored within an address translation cache. The apparatus comprises an address translation cache having a plurality of entries, where each entry stores address translation data used when converting a virtual address into a corresponding physical address of a memory system. Control circuitry is used to perform an allocation process to determine the address translation data to be stored in each entry. Via an interface of the apparatus, access requests are received from a request source, where each access request identifies a virtual address. Prefetch circuitry is responsive to a contiguous access condition being detected from the access requests received by the interface, to retrieve one or more descriptors from a page table, where each descriptor is associated with a virtual page, in order to produce candidate coalesced address translation data relating to multiple contiguous virtual pages. At an appropriate point, the prefetch circuitry triggers the control circuitry to allocate, into a selected entry of the address translation cache, coalesced address translation data that is derived from the candidate coalesced address translation data. Such an approach has been found to provide a particularly efficient mechanism for creating coalesced address translation data for allocating into the address translation cache, without impacting the latency of the servicing of ongoing requests from the request source.
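
    As a rough illustration of the coalescing step described above, the sketch below builds one candidate coalesced translation covering several contiguous virtual pages once their descriptors are found to map to contiguous physical pages. The dictionary page-table representation, the 4-page coalescing span, and the function names are assumptions for the example, not the patented mechanism.

```python
# Illustrative sketch (assumed behaviour) of producing candidate coalesced
# address translation data from descriptors of contiguous virtual pages.
COALESCE_SPAN = 4  # number of contiguous virtual pages merged into one entry (assumed)

def descriptors_are_contiguous(descriptors):
    # Real coalescing would also require matching attributes; this sketch
    # only checks that the physical pages are contiguous.
    base = descriptors[0]["phys_page"]
    return all(d["phys_page"] == base + i for i, d in enumerate(descriptors))

def build_coalesced_entry(page_table, first_virtual_page):
    # Retrieve one descriptor per virtual page in the candidate span.
    descriptors = [page_table[first_virtual_page + i] for i in range(COALESCE_SPAN)]
    if descriptors_are_contiguous(descriptors):
        # One coalesced entry now describes the whole span of pages.
        return {
            "virt_page": first_virtual_page,
            "phys_page": descriptors[0]["phys_page"],
            "span": COALESCE_SPAN,
        }
    return None  # no coalesced entry can be formed

page_table = {v: {"phys_page": 100 + v} for v in range(16)}
print(build_coalesced_entry(page_table, 8))  # contiguous run starting at virtual page 8
```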

    CACHE HIERARCHY MANAGEMENT

    Publication number: US20180293166A1

    Publication date: 2018-10-11

    Application number: US15479348

    Application date: 2017-04-05

    Applicant: ARM Limited

    Abstract: A cache hierarchy and a method of operating the cache hierarchy are disclosed. The cache hierarchy comprises a first cache level comprising an instruction cache, and predecoding circuitry to perform a predecoding operation on instructions having a first encoding format retrieved from memory to generate predecoded instructions having a second encoding format for storage in the instruction cache. The cache hierarchy further comprises a second cache level comprising a cache and the first cache level instruction cache comprises cache control circuitry to control an eviction procedure for the instruction cache in which a predecoded instruction having the second encoding format which is evicted from the instruction cache is stored at the second cache level in the second encoding format. This enables the latency and power cost of the predecoding operation to be avoided when the predecoded instruction is then retrieved from the second cache level for storage in the first level instruction cache again.
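
    A simplified Python model of the eviction behaviour described in the abstract is given below: instructions are predecoded when filled from memory, and an evicted line keeps the predecoded (second) encoding in the second cache level so a later refill skips the predecode step. The data structures and the tuple-based encoding tag are assumptions for the example.

```python
def predecode(raw_instruction):
    # Stand-in for the predecoding operation: convert the first encoding
    # format into a tagged "second" encoding format.
    return ("predecoded", raw_instruction)

class CacheHierarchy:
    def __init__(self):
        self.l1_icache = {}  # address -> instruction in the second encoding format
        self.l2_cache = {}   # address -> instruction in either encoding format

    def fetch(self, address, memory):
        if address in self.l1_icache:
            return self.l1_icache[address]
        if address in self.l2_cache:
            line = self.l2_cache[address]
            # A line evicted in the second encoding format needs no predecoding.
            if not (isinstance(line, tuple) and line[0] == "predecoded"):
                line = predecode(line)
        else:
            line = predecode(memory[address])  # fill from memory, predecode once
        self.l1_icache[address] = line
        return line

    def evict_from_l1(self, address):
        # Eviction stores the line at the second cache level in the
        # second (predecoded) encoding format.
        self.l2_cache[address] = self.l1_icache.pop(address)

memory = {0x100: "raw_add"}
hierarchy = CacheHierarchy()
hierarchy.fetch(0x100, memory)        # predecoded on first fill from memory
hierarchy.evict_from_l1(0x100)        # predecoded form retained in the L2
print(hierarchy.fetch(0x100, memory)) # refilled from L2 without predecoding again
```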

    TECHNIQUE FOR EFFICIENT UTILISATION OF AN ADDRESS TRANSLATION CACHE

    Publication number: US20180239714A1

    Publication date: 2018-08-23

    Application number: US15437581

    Application date: 2017-02-21

    Applicant: ARM Limited

    Abstract: An apparatus and method are provided for making efficient use of address translation cache resources. The apparatus has an address translation cache having a plurality of entries, where each entry is used to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. Each item of address translation data has a page size indication for a page within the memory system that is associated with that address translation data. Allocation circuitry performs an allocation process to determine the address translation data to be stored in each entry. Further, mode control circuitry is used to switch a mode of operation of the apparatus between a non-skewed mode and at least one skewed mode, dependent on a page size analysis operation. The address translation cache is organised as a plurality of portions, and in the non-skewed mode the allocation circuitry is arranged, when performing the allocation process, to permit the address translation data to be allocated to any of the plurality of portions. In contrast, when in the at least one skewed mode, the allocation circuitry is arranged to reserve at least one portion for allocation of address translation data associated with pages of a first page size and at least one other portion for allocation of address translation data associated with pages of a second page size different to the first page size.
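
    To make the skewed/non-skewed distinction concrete, the sketch below models a translation cache whose mode control skews allocation once large-page translations make up a noticeable share of lookups; in the skewed mode one portion is reserved for large pages and the rest for small pages. The portion count, the 25% switching threshold, and the index function are assumptions for illustration only.

```python
SMALL_PAGE = 4 * 1024
LARGE_PAGE = 2 * 1024 * 1024

class TranslationCache:
    def __init__(self, num_portions=4):
        self.portions = [dict() for _ in range(num_portions)]
        self.skewed = False
        self.large_page_hits = 0
        self.total_lookups = 0

    def lookup(self, virt_page):
        # Track the mix of page sizes seen by lookups for the analysis step.
        self.total_lookups += 1
        for portion in self.portions:
            if virt_page in portion:
                phys_page, page_size = portion[virt_page]
                if page_size == LARGE_PAGE:
                    self.large_page_hits += 1
                return phys_page
        return None

    def analyse_page_sizes(self):
        # Mode control: switch to the skewed mode once large pages form a
        # noticeable share of recent traffic (threshold is an assumption).
        if self.total_lookups >= 100:
            self.skewed = (self.large_page_hits / self.total_lookups) > 0.25
            self.large_page_hits = self.total_lookups = 0

    def allocate(self, virt_page, phys_page, page_size):
        if self.skewed:
            # Portion 0 reserved for large pages, the remainder for small pages.
            index = 0 if page_size == LARGE_PAGE else 1 + (virt_page % (len(self.portions) - 1))
        else:
            index = virt_page % len(self.portions)  # any portion may hold any translation
        self.portions[index][virt_page] = (phys_page, page_size)
```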

    SYSTEM, METHOD AND APPARATUS FOR EXECUTING INSTRUCTIONS

    Publication number: US20200057640A1

    Publication date: 2020-02-20

    Application number: US16103995

    Application date: 2018-08-16

    Applicant: Arm Limited

    Abstract: A system, apparatus and method for ordering a sequence of processing transactions. The method includes accessing, from a memory, a program sequence of operations that are to be executed. Instructions are received, some of which carry an identifier, or mnemonic, that distinguishes those identified operations from operations that do not have one. The mnemonic indicates a distribution of the execution of the program sequence of operations. The program sequence of operations is grouped based on the mnemonic such that certain operations are separated from other operations.
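
    A minimal sketch of the grouping step is shown below: operations carrying a distinguishing identifier (mnemonic) are separated from those without one. The dictionary representation of operations and the "DIST" mnemonic are assumptions chosen for illustration; the patent does not prescribe this form.

```python
def group_by_identifier(program_sequence):
    # Separate operations that carry a mnemonic from those that do not,
    # so their execution can be distributed differently.
    identified, unidentified = [], []
    for op in program_sequence:
        if op.get("mnemonic") is not None:
            identified.append(op)
        else:
            unidentified.append(op)
    return identified, unidentified

sequence = [
    {"op": "add"},
    {"op": "load", "mnemonic": "DIST"},
    {"op": "mul"},
    {"op": "store", "mnemonic": "DIST"},
]
print(group_by_identifier(sequence))  # identified ops separated from the rest
```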
