INSTRUCTION ORDERING
    Invention Application

    Publication No.: US20220004390A1

    Publication Date: 2022-01-06

    Application No.: US17593018

    Filing Date: 2019-11-26

    Abstract: A data processing apparatus includes obtain circuitry that obtains a stream of instructions. The stream of instructions includes a barrier creation instruction and a barrier inhibition instruction. Track circuitry orders sending each instruction in the stream of instructions to processing circuitry based on one or more dependencies. The track circuitry is responsive to the barrier creation instruction to cause the one or more dependencies to include one or more barrier dependencies in which pre-barrier instructions, occurring before the barrier creation instruction in the stream, are sent before post-barrier instructions, occurring after the barrier creation instruction in the stream, are sent. The track circuitry is also responsive to the barrier inhibition instruction to relax the barrier dependencies to permit post-inhibition instructions, occurring after the barrier inhibition instruction in the stream, to be sent before the pre-barrier instructions.
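
    The ordering mechanism above (described verbatim again in the following entry) can be illustrated with a small scheduling model. The Python sketch below is an illustrative assumption only: the names Instr and TrackCircuitry and the mnemonics BARRIER/BARINH are invented for the example, not taken from the application.

```python
# Hypothetical model of barrier creation / barrier inhibition ordering.
# An instruction becomes ready after `delay` cycles (a stand-in for operand
# dependencies); the tracker sends ready instructions, holding post-barrier
# ops back until all pre-barrier ops are sent -- unless the op follows a
# barrier inhibition instruction, which relaxes that dependency.
from dataclasses import dataclass

@dataclass
class Instr:
    name: str
    kind: str = "op"          # "op", "barrier" (creation) or "inhibit"
    delay: int = 0            # cycles until the op is ready to send
    pre_barrier: bool = False
    post_inhibit: bool = False

class TrackCircuitry:
    def __init__(self, stream):
        barrier_seen = inhibit_seen = False
        self.ops = []
        for ins in stream:
            if ins.kind == "barrier":
                barrier_seen = True
            elif ins.kind == "inhibit":
                inhibit_seen = True
            else:
                ins.pre_barrier = not barrier_seen
                ins.post_inhibit = inhibit_seen
                self.ops.append(ins)

    def run(self):
        """Send ops cycle by cycle and return the order in which they go."""
        sent, cycle, remaining = [], 0, list(self.ops)
        while remaining:
            for ins in list(remaining):
                if cycle < ins.delay:
                    continue                      # operands not yet ready
                pre_pending = any(o.pre_barrier for o in remaining)
                if not ins.pre_barrier and pre_pending and not ins.post_inhibit:
                    continue                      # barrier dependency holds
                sent.append(ins.name)
                remaining.remove(ins)
            cycle += 1
        return sent

stream = [
    Instr("load_a", delay=3), Instr("load_b", delay=3),  # slow pre-barrier ops
    Instr("BARRIER", kind="barrier"),                    # barrier creation
    Instr("store_c"),                                    # post-barrier: waits
    Instr("BARINH", kind="inhibit"),                     # barrier inhibition
    Instr("add_d"),                                      # post-inhibition: may bypass
]
print(TrackCircuitry(stream).run())
# -> ['add_d', 'load_a', 'load_b', 'store_c']
```

    Note how add_d, which follows the inhibition instruction, is sent before the slow pre-barrier loads, while store_c still waits for them.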

    INSTRUCTION ORDERING
    Invention Application

    Publication No.: US20200285479A1

    Publication Date: 2020-09-10

    Application No.: US16296507

    Filing Date: 2019-03-08

    Abstract: A data processing apparatus includes obtain circuitry that obtains a stream of instructions. The stream of instructions includes a barrier creation instruction and a barrier inhibition instruction. Track circuitry orders sending each instruction in the stream of instructions to processing circuitry based on one or more dependencies. The track circuitry is responsive to the barrier creation instruction to cause the one or more dependencies to include one or more barrier dependencies in which pre-barrier instructions, occurring before the barrier creation instruction in the stream, are sent before post-barrier instructions, occurring after the barrier creation instruction in the stream, are sent. The track circuitry is also responsive to the barrier inhibition instruction to relax the barrier dependencies to permit post-inhibition instructions, occurring after the barrier inhibition instruction in the stream, to be sent before the pre-barrier instructions.

    EXCEPTION HANDLING IN TRANSACTIONS

    Publication No.: US20210103503A1

    Publication Date: 2021-04-08

    Application No.: US17046396

    Filing Date: 2019-04-08

    Applicant: Arm Limited

    Abstract: An apparatus and a method of operating a data processing apparatus, and simulators thereof, are disclosed. Data processing circuitry performs data processing operations in response to instructions, where some sets of instructions may be defined as a transaction which is to be performed atomically with respect to other operations performed by the data processing circuitry. When a synchronous exception occurs during a transaction, the transaction is aborted and an exception counter is incremented. When the counter reaches a threshold value, a transaction failure signal is generated, allowing, if appropriate, a response to this number of exception-caused transaction aborts to be carried out.
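
    A minimal Python sketch of the counting behaviour described in the abstract follows. The names TransactionalCore and ABORT_THRESHOLD, and the threshold value itself, are assumptions for illustration only.

```python
# Hypothetical model: a synchronous exception during a transaction aborts
# the transaction and increments a counter; when the counter reaches a
# threshold, a transaction-failure signal is raised instead of retrying.

ABORT_THRESHOLD = 4   # assumed value, not taken from the application

class TransactionFailure(Exception):
    """Signals that repeated exception-caused aborts reached the threshold."""

class TransactionalCore:
    def __init__(self):
        self.exception_aborts = 0

    def run_transaction(self, body):
        """Run body atomically; retry on exception-caused aborts."""
        while True:
            try:
                result = body()
            except Exception:
                # Synchronous exception: abort the transaction and count it.
                self.exception_aborts += 1
                if self.exception_aborts >= ABORT_THRESHOLD:
                    raise TransactionFailure(
                        f"{self.exception_aborts} exception-caused aborts")
                continue                      # retry the transaction
            self.exception_aborts = 0         # committed: reset the counter
            return result

# Usage: a body that faults a couple of times before succeeding.
attempts = {"n": 0}
def flaky_body():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise MemoryError("fault inside transaction")
    return "committed"

print(TransactionalCore().run_transaction(flaky_body))   # "committed"
```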

    APPARATUS AND METHOD OF HANDLING CACHING OF PERSISTENT DATA

    Publication No.: US20200264980A1

    Publication Date: 2020-08-20

    Application No.: US16865642

    Filing Date: 2020-05-04

    Applicant: ARM Limited

    Abstract: An apparatus and method are provided for handling caching of persistent data. The apparatus comprises cache storage having a plurality of entries to cache data items associated with memory addresses in a non-volatile memory. The data items may comprise persistent data items and non-persistent data items. Write back control circuitry is used to control write back of the data items from the cache storage to the non-volatile memory. In addition, cache usage determination circuitry is used to determine, in dependence on information indicative of the capacity of a backup energy source, a subset of the plurality of entries to be used to store persistent data items. In response to an event causing the backup energy source to be used, the write back control circuitry is then arranged to initiate write back to the non-volatile memory of the persistent data items cached in the subset of the plurality of entries. By constraining the extent to which the cache storage is allowed to store persistent data items, taking into account the capacity of the backup energy source, the persistence of those data items can then be guaranteed in the event of the backup energy source being triggered, for example due to removal of the primary energy source for the apparatus.
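
    A rough sketch of the idea in Python follows: the number of cache entries allowed to hold persistent data is derived from the backup energy capacity, so every persistent line can still be written back if main power is lost. The class name PersistCache, the energy figure, and the write-through fallback are all assumptions made for the example.

```python
# Sketch (assumed names and values): the persistent-entry budget is sized
# from the backup energy capacity; on loss of primary power, only that
# bounded set of lines has to be flushed to non-volatile memory.

ENERGY_PER_WRITEBACK = 2.0e-6    # assumed joules to write back one line

def persistent_entry_budget(backup_capacity_joules, reserve=0.2):
    """Entries reservable for persistent data, keeping some energy in reserve."""
    usable = backup_capacity_joules * (1.0 - reserve)
    return int(usable // ENERGY_PER_WRITEBACK)

class PersistCache:
    def __init__(self, num_entries, backup_capacity_joules):
        self.entries = {}                          # addr -> (data, persistent)
        self.num_entries = num_entries
        self.persistent_budget = min(
            num_entries, persistent_entry_budget(backup_capacity_joules))
        self.nvm = {}                              # models non-volatile memory

    def write(self, addr, data, persistent=False):
        in_use = sum(1 for _, p in self.entries.values() if p)
        if persistent and in_use >= self.persistent_budget:
            # No persistent entry available: write straight through so the
            # persistence guarantee still holds.
            self.nvm[addr] = data
            return
        if len(self.entries) >= self.num_entries:
            # Naive eviction (a real design would write back dirty data first).
            self.entries.pop(next(iter(self.entries)))
        self.entries[addr] = (data, persistent)

    def on_backup_power(self):
        """Primary power lost: flush every cached persistent line to NVM."""
        for addr, (data, persistent) in self.entries.items():
            if persistent:
                self.nvm[addr] = data

cache = PersistCache(num_entries=8, backup_capacity_joules=1e-5)
cache.write(0x100, "log-record", persistent=True)
cache.write(0x200, "scratch")
cache.on_backup_power()
print(cache.nvm)    # {256: 'log-record'} -- the persistent line reached NVM
```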

    TRANSACTION NESTING DEPTH TESTING INSTRUCTION

    Publication No.: US20200257551A1

    Publication Date: 2020-08-13

    Application No.: US16651045

    Filing Date: 2018-08-21

    Applicant: Arm Limited

    Abstract: In a system providing transactional memory support, a transaction nesting depth testing instruction is provided for triggering processing circuitry to set at least one status value to one of a plurality of states depending on a transaction nesting depth indicative of a number of executed transaction start instructions of a given thread for which the corresponding transaction remains unaborted and uncommitted, the plurality of states including a first state selected when the transaction nesting depth is a given value and at least one further state selected when the transaction nesting depth is greater than or less than that value. The supported ISA enables the setting of the at least one status value, and a conditional branch conditional on the at least one status value being in the first state, to be performed in response to a single transaction nesting depth testing instruction and a single conditional branch instruction.
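
    The following Python sketch shows the kind of two-instruction sequence the abstract describes: one test operation that sets a condition flag from the nesting depth, followed by one conditional branch on that flag. The names Core, tstart, tcommit, tdepthtest and the default test value of zero are illustrative assumptions.

```python
# Hypothetical model: a "test nesting depth" operation sets a condition flag,
# so a single test plus a single conditional branch decides whether code is
# currently running inside a transaction.

class Core:
    def __init__(self):
        self.nesting_depth = 0       # TSTARTs not yet committed or aborted
        self.flag_first_state = False

    def tstart(self):
        self.nesting_depth += 1

    def tcommit(self):
        if self.nesting_depth == 0:
            raise RuntimeError("commit outside a transaction")
        self.nesting_depth -= 1

    def tdepthtest(self, test_value=0):
        """Single instruction: compare the nesting depth against a test value
        (assumed here to default to zero) and set the status flag."""
        self.flag_first_state = (self.nesting_depth == test_value)

    def branch_if_first_state(self, taken, not_taken):
        """Single conditional branch on the flag set by tdepthtest()."""
        return taken() if self.flag_first_state else not_taken()

core = Core()
core.tstart()                        # now inside a transaction (depth 1)
core.tdepthtest()                    # depth != 0, so the flag stays clear
print(core.branch_if_first_state(
    taken=lambda: "outside any transaction",
    not_taken=lambda: "inside a transaction"))   # "inside a transaction"
```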

    PERFORMING MAINTENANCE OPERATIONS
    Invention Application

    Publication No.: US20190155747A1

    Publication Date: 2019-05-23

    Application No.: US16169206

    Filing Date: 2018-10-24

    Applicant: Arm Limited

    Abstract: There is provided an apparatus that includes an input port to receive, from a requester, any one of: a lookup operation comprising an input address, and a maintenance operation. Maintenance queue circuitry stores a maintenance queue of at least one maintenance operation and address storage stores a translation between the input address and an output address in an output address space. In response to receiving the input address, the output address is provided in dependence on the maintenance queue. In response to storing the maintenance operation, the maintenance queue circuitry causes an acknowledgement to be sent to the requester. By providing a separate maintenance queue for performing the maintenance operation, there is no need for a requester to be blocked while maintenance is performed.
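
    A small Python sketch of that behaviour follows: maintenance operations are acknowledged as soon as they are queued, and lookups consult the queue so a pending invalidation is never ignored. TranslationUnit, maintain and the walk callback are assumed names for the example.

```python
# Sketch (assumed names): the unit acknowledges maintenance operations by
# queueing them, and makes translations depend on that queue so stale
# entries are not returned while an invalidation is outstanding.

from collections import deque

class TranslationUnit:
    def __init__(self):
        self.tlb = {}                     # input address -> output address
        self.maintenance_queue = deque()  # pending invalidation requests

    def maintain(self, invalidate_addr):
        """Queue the maintenance operation and acknowledge at once, so the
        requester is not blocked while the maintenance is performed."""
        self.maintenance_queue.append(invalidate_addr)
        return "ACK"

    def drain_one(self):
        """Carry out one queued maintenance operation in the background."""
        if self.maintenance_queue:
            self.tlb.pop(self.maintenance_queue.popleft(), None)

    def lookup(self, input_addr, walk):
        """Translate input_addr; the output depends on the maintenance queue:
        a pending invalidation for the address is applied before the lookup."""
        if input_addr in self.maintenance_queue:
            self.maintenance_queue.remove(input_addr)
            self.tlb.pop(input_addr, None)
        if input_addr in self.tlb:
            return self.tlb[input_addr]
        out = walk(input_addr)            # page-table walk on a miss
        self.tlb[input_addr] = out
        return out

mmu = TranslationUnit()
walk = lambda va: va + 0x1000             # stand-in page-table walk
print(hex(mmu.lookup(0x4000, walk)))      # 0x5000, now cached
print(mmu.maintain(0x4000))               # "ACK" returned immediately
print(hex(mmu.lookup(0x4000, walk)))      # re-walked, never served stale
```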

    AN APPARATUS AND METHOD TO GENERATE TRACE DATA IN RESPONSE TO TRANSACTIONAL EXECUTION

    Publication No.: US20180260227A1

    Publication Date: 2018-09-13

    Application No.: US15555239

    Filing Date: 2016-02-11

    Applicant: ARM LIMITED

    Abstract: There is provided an apparatus comprising processing circuitry to execute a transaction comprising a number of program instructions that execute to generate updates to state data, to commit the updates if the transaction completes without a conflict, and to generate trace control signals during execution of the number of program instructions. The processing circuitry uses at least one resource during execution of the program instructions. Transaction trace circuitry generates trace items in response to the trace control signals. In response to the trace control signals indicating that a change in a usage level of the at least one resource has occurred during execution of the program instructions, the transaction trace circuitry generates at least one trace item that indicates the usage level of the at least one resource.
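
    The filtering behaviour can be sketched in a few lines of Python: a trace item is emitted only when the reported usage level of a tracked resource changes. The class name TransactionTrace and the choice of a store buffer as the tracked resource are assumptions for the example.

```python
# Sketch (assumed names): while a transaction executes, trace control signals
# report the usage level of a resource (here, entries in a hypothetical
# transactional store buffer); the trace unit emits a trace item only when
# that level changes.

class TransactionTrace:
    def __init__(self):
        self.items = []
        self._last_level = None

    def on_signal(self, resource, level):
        """Generate a trace item only when the usage level has changed."""
        if level != self._last_level:
            self.items.append({"resource": resource, "level": level})
            self._last_level = level

trace = TransactionTrace()
store_buffer = []

# A toy transaction body that buffers speculative stores.
for value in ["a", "b", "b", "c"]:
    if value not in store_buffer:
        store_buffer.append(value)
    trace.on_signal("store-buffer-entries", len(store_buffer))

print(trace.items)
# Three items (levels 1, 2, 3); no item when the level stayed at 2.
```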

    A DATA PROCESSING APPARATUS, AND A METHOD OF HANDLING ADDRESS TRANSLATION WITHIN A DATA PROCESSING APPARATUS

    Publication No.: US20170185528A1

    Publication Date: 2017-06-29

    Application No.: US15325250

    Filing Date: 2015-06-22

    Applicant: ARM LIMITED

    Abstract: A data processing apparatus and method are provided for performing address translation in response to a memory access request issued by processing circuitry of the data processing apparatus and specifying a virtual address for a data item. Address translation circuitry performs an address translation process with reference to at least one descriptor provided by at least one page table, in order to produce a modified memory access request specifying a physical address for the data item. The address translation circuitry includes page table walk circuitry configured to generate at least one page table walk request in order to retrieve the at least one descriptor required for the address translation process. In addition, walk ahead circuitry is located in a path between the address translation circuitry and a memory device containing the at least one page table. The walk ahead circuitry comprises detection circuitry used to detect a memory page table walk request generated by the page table walk circuitry of the address translation circuitry for a descriptor in a page table. In addition, the walk ahead circuitry has further request generation circuitry which is used to generate a prefetch memory request in order to prefetch data from the memory device at a physical address determined with reference to the descriptor requested by the detected memory page table walk request. This prefetched data may be another descriptor required as part of the address translation process, or may be the actual data item being requested by the processing circuitry. Such an approach can significantly reduce latency associated with the address translation process.
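
    A compact Python sketch of the walk-ahead idea follows: logic sitting between the page table walker and memory observes a walk request for a descriptor, reads it, and prefetches whatever that descriptor points to (the next-level table or the final data). The memory layout, the WalkAhead name and the two-level table are assumptions for the example.

```python
# Sketch (assumed structures): walk-ahead logic detects a page-table walk
# read, services it, and issues a prefetch for the address the returned
# descriptor points to, hiding part of the translation latency.

memory = {
    0x1000: 0x2000,        # level-1 descriptor -> level-2 table at 0x2000
    0x2000: 0x9000,        # level-2 descriptor -> physical page at 0x9000
    0x9000: "payload",     # the data item itself
}

class WalkAhead:
    def __init__(self, memory):
        self.memory = memory
        self.prefetched = {}          # address -> value: a small prefetch buffer

    def on_walk_request(self, descriptor_addr):
        """Detection circuitry saw a page-table walk read: service it, then
        prefetch whatever the returned descriptor points to."""
        descriptor = self.memory[descriptor_addr]
        if isinstance(descriptor, int):           # it points somewhere: prefetch
            self.prefetched[descriptor] = self.memory[descriptor]
        return descriptor

wa = WalkAhead(memory)
level2_table = wa.on_walk_request(0x1000)   # walker asks for the level-1 descriptor
# By the time the walker asks for the level-2 descriptor, it is already local.
print(hex(level2_table), wa.prefetched)
```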

    CACHE CONTROL IN PRESENCE OF SPECULATIVE READ OPERATIONS

    Publication No.: US20210042227A1

    Publication Date: 2021-02-11

    Application No.: US16979624

    Filing Date: 2019-03-12

    Applicant: Arm Limited

    Abstract: Coherency control circuitry (10) supports processing of a safe-speculative-read transaction received from a requesting master device (4). The safe-speculative-read transaction is of a type requesting that target data is returned to a requesting cache (11) of the requesting master device (4) while prohibiting any change in coherency state associated with the target data in other caches (12) in response to the safe-speculative-read transaction. In response, at least when the target data is cached in a second cache associated with a second master device, at least one of the coherency control circuitry (10) and the second cache (12) is configured to return a safe-speculative-read response while maintaining the target data in the same coherency state within the second cache. This helps to mitigate against speculative side-channel attacks.
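
    A deliberately simplified Python sketch of the contrast follows: an ordinary read may change the coherency state held in another master's cache, whereas a safe-speculative-read returns the data while leaving that state untouched, so a mis-speculated access leaves no state change for a side channel to observe. The two-state model and the names PeerCache and CoherencyControl are assumptions; a real protocol has more states and responses.

```python
# Sketch (assumed names/states): a safe-speculative-read returns target data
# without altering the coherency state in the peer cache, unlike an ordinary
# read, which downgrades the peer's copy.

class PeerCache:
    def __init__(self):
        self.lines = {}                     # addr -> (data, state)

    def fill(self, addr, data, state="Modified"):
        self.lines[addr] = (data, state)

class CoherencyControl:
    def __init__(self, peer):
        self.peer = peer

    def read(self, addr, safe_speculative=False):
        data, state = self.peer.lines[addr]
        if not safe_speculative:
            # An ordinary shared read downgrades the peer's copy.
            self.peer.lines[addr] = (data, "Shared")
        # A safe-speculative-read leaves the peer's state untouched, so a
        # mis-speculated access is not observable through state changes.
        return data

peer = PeerCache()
peer.fill(0x80, "target data")
hub = CoherencyControl(peer)

hub.read(0x80, safe_speculative=True)
print(peer.lines[0x80][1])    # still "Modified": no visible change
hub.read(0x80)
print(peer.lines[0x80][1])    # "Shared" after a normal read
```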
