Abstract:
A technique is provided for training a prediction apparatus. The apparatus has an input interface for receiving a sequence of training events indicative of program instructions, and identifier value generation circuitry for performing an identifier value generation function to generate, for a given training event received at the input interface, an identifier value for that given training event. The identifier value generation function is arranged such that the generated identifier value is dependent on at least one register referenced by a program instruction indicated by that given training event. Prediction storage is provided with a plurality of training entries, where each training entry is allocated an identifier value as generated by the identifier value generation function, and is used to maintain training data derived from training events having that allocated identifier value. Matching circuitry is then responsive to the given training event to detect whether the prediction storage has a matching training entry (i.e. an entry whose allocated identifier value matches the identifier value for the given training event). If so, it causes the training data in the matching training entry to be updated in dependence on the given training event.
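A minimal software sketch of the training flow described above is given below; the class and function names (TrainingEvent, id_from_registers, PredictionStorage) and the hash-based identifier value generation function are illustrative assumptions rather than details taken from the abstract.

    from dataclasses import dataclass

    @dataclass
    class TrainingEvent:
        pc: int               # address of the program instruction
        registers: tuple      # registers referenced by the instruction
        observed_value: int   # outcome used to train the predictor

    def id_from_registers(event: TrainingEvent) -> int:
        # Identifier value generation function: the identifier depends on at
        # least one register referenced by the indicated program instruction.
        return hash(event.registers) & 0xFF

    class PredictionStorage:
        def __init__(self):
            self.entries = {}   # allocated identifier value -> training data

        def train(self, event: TrainingEvent):
            ident = id_from_registers(event)
            if ident in self.entries:               # matching circuitry: hit
                self.entries[ident].append(event.observed_value)
            else:                                   # allocate a new training entry
                self.entries[ident] = [event.observed_value]

    storage = PredictionStorage()
    storage.train(TrainingEvent(pc=0x1000, registers=(3, 5), observed_value=42))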
Abstract:
A data processing apparatus comprises branch prediction circuitry adapted to store at least one branch prediction state entry in relation to a stream of instructions; input circuitry to receive at least one input to generate a new branch prediction state entry, wherein the at least one input comprises a plurality of bits; and coding circuitry adapted to perform an encoding operation to encode at least some of the plurality of bits based on a value associated with a current execution environment in which the stream of instructions is being executed. This guards against potential attacks which exploit the ability of branch prediction entries trained by one execution environment to be used by another execution environment as a basis for branch predictions.
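A hedged sketch of the coding operation follows, assuming a simple XOR-based encoding keyed by execution-environment identifiers (the abstract fixes neither the exact encoding nor the exact environment value):

    def environment_key(asid: int, exception_level: int) -> int:
        # Value associated with the current execution environment.
        return (asid << 2) ^ exception_level

    def encode_entry(target_bits: int, key: int) -> int:
        return target_bits ^ key    # coding circuitry: encode when the entry is written

    def decode_entry(stored_bits: int, key: int) -> int:
        return stored_bits ^ key    # reverse the encoding on prediction lookup

    # An entry trained in one environment decodes to a garbled value in another,
    # so it cannot reliably steer that other environment's speculation.
    entry = encode_entry(0x40001234, environment_key(asid=7, exception_level=0))
    assert decode_entry(entry, environment_key(asid=7, exception_level=0)) == 0x40001234
    assert decode_entry(entry, environment_key(asid=9, exception_level=1)) != 0x40001234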
Abstract:
A data processing apparatus is provided in which a processor unit accesses data values stored in a memory and a cache stores local copies of a subset of the data values. The cache maintains a status value for each local copy stored in the cache. When the processor unit executes a load-exclusive operation, a first data value is loaded from a specified memory location and an exclusive use monitor begins monitoring the specified memory location for accesses. When the processor unit executes a store-exclusive operation, a second data value is stored to the specified memory location if the exclusive use monitor indicates that the first data value has not been modified since the load-exclusive operation was executed. When a local copy of the first data value is stored in the cache and the status value for the local copy of the first data value indicates that the processor unit has exclusive usage of the first data value, the data processing apparatus is configured to prevent modification of the status value for a predetermined time period after the processor unit has executed the load-exclusive operation.
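The sequence can be modelled in software as follows; the field names, the hold-cycle value and the dictionary-based cache are assumptions made for illustration only.

    class ExclusiveMonitor:
        def __init__(self):
            self.address, self.valid = None, False

        def mark(self, address):
            self.address, self.valid = address, True

        def clear(self):
            self.valid = False

    HOLD_CYCLES = 8      # predetermined period during which the cached line's
                         # exclusive status value may not be modified
    memory = {0x80: 1}
    cache = {}           # address -> {"data": ..., "status": ..., "hold": ...}
    monitor = ExclusiveMonitor()

    def load_exclusive(addr):
        value = memory[addr]
        cache[addr] = {"data": value, "status": "exclusive", "hold": HOLD_CYCLES}
        monitor.mark(addr)    # begin monitoring the location for other accesses
        return value

    def store_exclusive(addr, value):
        if monitor.valid and monitor.address == addr:
            memory[addr] = value    # no intervening modification: store succeeds
            monitor.clear()
            return True
        return False                # store fails and the sequence is retried

    v = load_exclusive(0x80)
    assert store_exclusive(0x80, v + 1)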
Abstract:
Data processing apparatuses, methods of data processing, and non-transitory computer-readable media on which computer-readable code is stored defining logical configurations of processing devices are disclosed. In an apparatus, fetch circuitry retrieves a sequence of instructions and execution circuitry performs data processing operations with respect to data values in a set of registers. An auxiliary execution circuitry interface is provided, as is a coprocessor interface that provides a connection to a coprocessor outside the apparatus. Decoding circuitry generates control signals in dependence on the instructions of the sequence of instructions, wherein the decoding circuitry is responsive to instructions in a first subset of an instruction encoding space to generate the control signals to control the execution circuitry to perform first data processing operations, and is responsive to instructions in a second subset of the instruction encoding space to generate the control signals in dependence on a configuration condition: to generate the control signals for the auxiliary execution circuitry interface when the configuration condition has a first state; and to generate the control signals for the coprocessor interface when the configuration condition has a second state.
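One way to picture the decode routing is sketched below; the opcode split, the configuration flag and the interface labels are placeholders, not encodings from the abstract.

    AUX_INTERFACE = "auxiliary execution circuitry interface"
    COPROC_INTERFACE = "coprocessor interface"

    def decode(opcode: int, coprocessor_configured: bool) -> str:
        if opcode < 0x800:                 # assumed first subset of the encoding space
            return "execution circuitry"   # first data processing operations
        # Second subset: routing depends on the configuration condition.
        return COPROC_INTERFACE if coprocessor_configured else AUX_INTERFACE

    assert decode(0x123, coprocessor_configured=False) == "execution circuitry"
    assert decode(0x900, coprocessor_configured=False) == AUX_INTERFACE
    assert decode(0x900, coprocessor_configured=True) == COPROC_INTERFACE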
Abstract:
An apparatus and method are provided for performing branch prediction. The apparatus has processing circuitry for executing instructions, and branch prediction circuitry for making branch outcome predictions in respect of branch instructions. The branch prediction circuitry includes loop prediction circuitry having a plurality of entries, where each entry is used to maintain branch outcome prediction information for a loop controlling branch instruction that controls repeated execution of a loop comprising a number of instructions. The branch prediction circuitry is arranged to analyse blocks of instructions and to produce a prediction result for each block that is dependent on branch outcome predictions made for any branch instructions appearing in the associated block. A prediction queue then stores the prediction results produced by the branch prediction circuitry, and those results are used to determine the instructions to be executed by the processing circuitry. When the block of instructions being analysed comprises a loop controlling branch instruction that has an active entry in the loop prediction circuitry, and a determined condition is detected in respect of the associated loop, the loop prediction circuitry is arranged to produce a prediction result that identifies multiple iterations of the loop. This can significantly boost prediction bandwidth for certain types of loop.
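A simple model of folding several loop iterations into one prediction result is shown below; the trip-count field, the per-result iteration limit and the test used for the determined condition are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class LoopEntry:
        branch_pc: int
        trip_count: int          # learned number of iterations of the loop
        max_per_result: int = 4  # iterations one prediction result may cover

    def predict_block(entry: LoopEntry, remaining: int) -> dict:
        # Determined condition: the entry is active and more than one iteration
        # remains, so one prediction result can identify multiple iterations.
        if remaining > 1:
            return {"taken": True, "iterations": min(remaining, entry.max_per_result)}
        return {"taken": False, "iterations": 1}   # final iteration: exit the loop

    entry = LoopEntry(branch_pc=0x2000, trip_count=10)
    print(predict_block(entry, remaining=10))   # {'taken': True, 'iterations': 4}
    print(predict_block(entry, remaining=1))    # {'taken': False, 'iterations': 1}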
Abstract:
An apparatus and method are provided for performing register renaming. Available register identifying circuitry is provided to identify which physical registers form a pool of physical registers available to be mapped by register renaming circuitry to an architectural register specified by an instruction to be executed. Configuration data whose value is modified during operation of the processing circuitry is stored such that, when the configuration data has a first value, the configuration data identifies at least one architectural register of the architectural register set which does not require mapping to a physical register by the register renaming circuitry. The available register identifying circuitry is arranged to reference the configuration data, such that when the configuration data has the first value, the number of physical registers in the pool is increased due to the reduction in the number of architectural registers which require mapping to physical registers.
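The effect on the pool can be illustrated with a small calculation; the register counts below are example figures, not values from the abstract.

    NUM_PHYSICAL = 64
    NUM_ARCHITECTURAL = 32

    def free_pool_size(unmapped_arch_regs: set) -> int:
        # Architectural registers identified by the configuration data need no
        # physical register, so their mappings are returned to the pool.
        mapped = NUM_ARCHITECTURAL - len(unmapped_arch_regs)
        return NUM_PHYSICAL - mapped

    print(free_pool_size(set()))          # 32 free physical registers, all 32 mapped
    print(free_pool_size({28, 29, 30}))   # 35 free once 3 registers need no mapping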
Abstract:
A data processing apparatus is provided which controls the use of data in respect of a further operation. The data processing apparatus identifies whether data is trusted or untrusted by identifying whether or not the data was determined by a speculatively executed resolve-pending operation. A permission control unit is also provided to control how the data can be used in respect of a further operation according to a security policy while the speculatively executed operation is still resolve-pending.
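A hedged sketch of one such security policy follows; tagging values with an untrusted bit and blocking their use as load addresses is only one possible policy, offered for illustration.

    from dataclasses import dataclass

    @dataclass
    class Value:
        data: int
        untrusted: bool   # produced by a speculatively executed, resolve-pending operation

    def permit_use(value: Value, further_op: str) -> bool:
        # Permission control: while the producing operation is still resolve-pending,
        # untrusted data may not be used to form a memory address.
        if value.untrusted and further_op == "load_address":
            return False
        return True

    speculative_index = Value(data=0x1F, untrusted=True)
    assert not permit_use(speculative_index, "load_address")
    assert permit_use(Value(data=0x1F, untrusted=False), "load_address")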
Abstract:
A data processing apparatus comprises receiver circuitry for receiving instructions from each of a plurality of requester devices. Processing circuitry executes the instructions associated with one subset of the requester devices at a time, and arbitration circuitry determines that subset and causes the instructions associated with the requester devices in the subset to be executed next. In response to the receiver circuitry receiving an instruction of a predetermined type from a requester device outside the subset, the arbitration circuitry causes the instruction of the predetermined type to be executed next.
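The arbitration behaviour can be modelled as below; the queue structure and the use of an urgent flag to stand in for the predetermined instruction type are assumptions for the sketch.

    from collections import deque

    class Arbiter:
        def __init__(self, active_subset):
            self.active_subset = set(active_subset)   # requesters currently being served
            self.queue = deque()

        def receive(self, requester, instruction, predetermined_type=False):
            if predetermined_type and requester not in self.active_subset:
                self.queue.appendleft((requester, instruction))   # executed next
            else:
                self.queue.append((requester, instruction))

        def next_instruction(self):
            return self.queue.popleft()

    arb = Arbiter(active_subset={"A", "B"})
    arb.receive("A", "add")
    arb.receive("C", "barrier", predetermined_type=True)   # from outside the subset
    assert arb.next_instruction() == ("C", "barrier")      # jumps ahead of "A"'s work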
Abstract:
Apparatuses and methods are disclosed for performing data processing operations in main processing circuitry and delegating certain tasks to auxiliary processing circuitry. User-specified instructions executed by the main processing circuitry comprise a task dispatch specification specifying an indication of the auxiliary processing circuitry and multiple data words defining a delegated task comprising at least one virtual address indicator. In response to the task dispatch specification, the main processing circuitry performs virtual-to-physical address translation with respect to the at least one virtual address indicator to derive at least one physical address indicator, and issues to the auxiliary processing circuitry a task dispatch memory write transaction comprising the indication of the auxiliary processing circuitry and the multiple data words, wherein the at least one virtual address indicator in the multiple data words is substituted by the at least one physical address indicator.
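A minimal model of the dispatch flow appears below; the single-entry page table, the way virtual address indicators are located in the data words, and the transaction layout are all assumptions made for illustration.

    PAGE_TABLE = {0x1000: 0x80001000}   # virtual page -> physical page

    def translate(vaddr: int) -> int:
        page, offset = vaddr & ~0xFFF, vaddr & 0xFFF
        return PAGE_TABLE[page] | offset

    def dispatch_task(accelerator_id: int, data_words: list, vaddr_positions: list) -> dict:
        # Substitute each virtual address indicator with its physical counterpart
        # before the task dispatch memory write transaction is issued.
        words = list(data_words)
        for pos in vaddr_positions:
            words[pos] = translate(words[pos])
        return {"target": accelerator_id, "payload": words}

    txn = dispatch_task(accelerator_id=2,
                        data_words=[0xCAFE, 0x1040, 0x0004],
                        vaddr_positions=[1])
    print(hex(txn["payload"][1]))   # 0x80001040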
Abstract:
An apparatus and method are provided for controlling allocation of data into cache storage. The apparatus comprises processing circuitry for executing instructions, and a cache storage for storing data accessed when executing the instructions. Cache control circuitry is arranged, while a sensitive allocation condition is determined to exist, to be responsive to the processing circuitry speculatively executing a memory access instruction that identifies data to be allocated into the cache storage, to allocate the data into the cache storage and to set a conditional allocation flag in association with the data allocated into the cache storage. The cache control circuitry is then responsive to detecting an allocation resolution event, to determine based on the type of the allocation resolution event whether to clear the conditional allocation flag such that the data is thereafter treated as unconditionally allocated, or to cause invalidation of the data in the cache storage. Such an approach can reduce the vulnerability of a cache to speculation-based cache timing side-channel attacks.
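A software sketch of the conditional allocation behaviour is given below; the names used for the allocation resolution events are illustrative labels rather than terms from the abstract.

    class Cache:
        def __init__(self):
            self.lines = {}   # address -> {"data": ..., "conditional": bool}

        def speculative_allocate(self, addr, data, sensitive_condition: bool):
            self.lines[addr] = {"data": data, "conditional": sensitive_condition}

        def resolve(self, addr, event: str):
            line = self.lines.get(addr)
            if line is None or not line["conditional"]:
                return
            if event == "resolved_correct":
                line["conditional"] = False    # thereafter treated as unconditionally allocated
            elif event == "resolved_mispredicted":
                del self.lines[addr]           # invalidate: leave no cache footprint

    cache = Cache()
    cache.speculative_allocate(0x40, data=123, sensitive_condition=True)
    cache.resolve(0x40, "resolved_mispredicted")
    assert 0x40 not in cache.lines    # no timing side-channel evidence remains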