Abstract:
An apparatus and method are described, the apparatus comprising processing circuitry to perform data processing operations, microarchitecture circuitry used by the processing circuitry during performance of the data processing operations, and an interface to receive interrupt requests. The processing circuitry is responsive to a received interrupt request to perform an interrupt service routine, and the apparatus comprises prediction circuitry to determine a predicted time of reception of a next interrupt of at least one given type. The apparatus also comprises microarchitecture control circuitry arranged to vary a configuration of the microarchitecture circuitry between a performance based configuration and a responsiveness based configuration in dependence on the predicted time, so as to seek to increase the responsiveness of the apparatus to interrupts as the predicted time is approached.
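To illustrate the mechanism, here is a minimal Python sketch. The abstract describes hardware circuitry, modelled here in software; all names, the moving-average predictor, and the guard window are illustrative assumptions, not the claimed design.

```python
from collections import deque

class InterruptPredictor:
    """Predicts the next interrupt arrival from a moving average of
    past inter-arrival times (assumed predictor; names hypothetical)."""
    def __init__(self, history_len=8):
        self.intervals = deque(maxlen=history_len)
        self.last_arrival = None

    def record_interrupt(self, now):
        if self.last_arrival is not None:
            self.intervals.append(now - self.last_arrival)
        self.last_arrival = now

    def predicted_next(self):
        if not self.intervals:
            return None
        return self.last_arrival + sum(self.intervals) / len(self.intervals)

class MicroarchController:
    GUARD_WINDOW = 100  # cycles before the predicted interrupt (assumed)

    def __init__(self, predictor):
        self.predictor = predictor
        self.config = "performance"

    def tick(self, now):
        predicted = self.predictor.predicted_next()
        if predicted is not None and predicted - now <= self.GUARD_WINDOW:
            # Predicted interrupt is near: trade throughput for
            # responsiveness, e.g. limit speculation so the pipeline
            # can be flushed quickly when the interrupt arrives.
            self.config = "responsiveness"
        else:
            self.config = "performance"
```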
Abstract:
Instruction queue circuitry maintains an instruction queue to store fetched instructions. Instruction decode circuitry decodes instructions dispatched from the queue, allocating processor resource(s) for use in execution of each decoded instruction. Detection circuitry detects, for an instruction to be dispatched from a given instruction queue, a prediction indicating whether sufficient processor resources are predicted to be available for allocation to that instruction by the instruction decode circuitry. Dispatch circuitry dispatches an instruction from the queue to the instruction decode circuitry, and allows deletion of the dispatched instruction from that instruction queue, when the prediction indicates that sufficient processor resources are predicted to be available for allocation to that instruction by the instruction decode circuitry.
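A minimal Python sketch of this dispatch gating follows; the resource cost model and all identifiers are hypothetical assumptions used only to show the prediction-before-dispatch behaviour.

```python
from collections import deque

class DispatchStage:
    """Dispatches (and deletes) a queued instruction only when decode is
    predicted to have enough free resources to allocate to it."""
    def __init__(self, total_resources=8):
        self.queue = deque()
        self.free_resources = total_resources

    def fetch(self, insn):
        self.queue.append(insn)

    def resources_needed(self, insn):
        # Assumed cost model: one resource per destination register.
        return insn.get("dest_regs", 1)

    def try_dispatch(self):
        if not self.queue:
            return None
        insn = self.queue[0]
        # Prediction: will decode have sufficient resources for this
        # instruction? Only then is it safe to remove it from the queue.
        if self.resources_needed(insn) <= self.free_resources:
            self.free_resources -= self.resources_needed(insn)
            return self.queue.popleft()
        return None  # hold the instruction; do not dispatch yet
```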
Abstract:
A sequence of buffered instructions includes branch instructions. Branch prediction circuitry predicts whether each branch instruction will result in a taken branch when executed. Normally, the fetch circuitry retrieves speculative instructions between the time that a source branch instruction is retrieved and the time that a prediction is made as to whether that source branch instruction will result in a taken branch. If the source branch instruction is predicted as taken, then the speculative instructions are discarded, and a count value indicates the number of instructions in the sequence between that source branch instruction and a subsequent branch instruction in the sequence that is also predicted as taken. Responsive to a subsequent occurrence of the source branch instruction being predicted as taken, a throttled mode limits the number of instructions subsequently retrieved in dependence on the count value, after which any further instructions are not retrieved for a number of clock cycles.
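A minimal Python sketch of the throttled fetch mode follows; the stall length and all names are illustrative assumptions, modelling in software what the abstract describes as fetch circuitry behaviour.

```python
class ThrottledFetcher:
    STALL_CYCLES = 4  # assumed length of the fetch pause

    def __init__(self):
        self.counts = {}    # PC of taken branch -> instructions to next taken branch
        self.budget = None  # remaining fetch budget while in throttled mode
        self.stall = 0

    def train(self, branch_pc, count):
        # Record how many instructions separate this taken branch from the
        # next branch in the sequence that is also predicted taken.
        self.counts[branch_pc] = count

    def on_predicted_taken(self, branch_pc):
        # Subsequent occurrence of a known taken branch: enter throttled mode.
        if branch_pc in self.counts:
            self.budget = self.counts[branch_pc]

    def can_fetch(self):
        if self.stall > 0:
            self.stall -= 1
            return False
        if self.budget is not None:
            if self.budget == 0:
                # Budget spent: stop fetching for a number of cycles.
                self.stall = self.STALL_CYCLES
                self.budget = None
                return False
            self.budget -= 1
        return True
```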
Abstract:
A processor has a processing pipeline with first, second and third stages. An instruction at the first stage takes fewer cycles to reach the second stage than the third stage. The second and third stages each have a duplicated processing resource. For a pending instruction which requires the duplicated resource and can be processed using the duplicated resource at either of the second and third stages, the first stage determines whether a required operand would be available when the pending instruction would reach the second stage. If the operand would be available, then the pending instruction is processed using the duplicated resource at the second stage, while if the operand would not be available in time then the instruction is processed using the duplicated resource in the third pipeline stage. This technique helps to reduce delays caused by data dependency hazards.
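A minimal Python sketch of the stage-selection decision follows; the pipeline depths and names are assumed for illustration only.

```python
CYCLES_TO_STAGE2 = 1  # assumed: cycles for an instruction to reach stage 2
CYCLES_TO_STAGE3 = 3  # assumed: cycles to reach stage 3 (the later copy)

def choose_stage(operand_ready_in_cycles):
    """Pick which copy of the duplicated resource processes the instruction.

    Prefer the earlier stage when the required operand will already be
    available by the time the instruction arrives there; otherwise use
    the later stage's copy, avoiding a stall on the data dependency.
    """
    if operand_ready_in_cycles <= CYCLES_TO_STAGE2:
        return "stage2"
    return "stage3"

# Example: an operand ready in 1 cycle allows use of the second stage,
# while an operand ready in 2 cycles forces use of the third stage.
assert choose_stage(1) == "stage2"
assert choose_stage(2) == "stage3"
```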
Abstract:
A data processing apparatus is provided which controls the use of data in respect of a further operation. The data processing apparatus identifies whether data is trusted or untrusted by identifying whether or not the data was determined by a speculatively executed resolve-pending operation. A permission control unit is also provided to control how the data can be used in respect of a further operation according to a security policy while the speculatively executed operation is still resolve-pending.
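A minimal Python sketch of the trust tagging and permission check follows; the policy contents and all identifiers are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class Value:
    payload: int
    speculative: bool  # True while the producing operation is resolve-pending

def permit_use(value, operation, policy):
    """Apply the security policy: e.g. forbid using data from an
    unresolved speculative operation as a load address, since that
    could leak the data through the cache."""
    if value.speculative and operation in policy["blocked_ops"]:
        return False
    return True

policy = {"blocked_ops": {"load_address"}}  # assumed example policy
untrusted = Value(payload=0xDEAD, speculative=True)
assert not permit_use(untrusted, "load_address", policy)
```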
Abstract:
An apparatus 2 has a processing pipeline 4 supporting at least a first processing mode and a second processing mode with different energy consumption or performance characteristics. A storage structure 22, 30, 36, 50, 40, 64, 44 is accessible in both the first and second processing modes. When the second processing mode is selected, control circuitry 70 triggers a subset 102 of the entries of the storage structure to be placed in a power saving state.
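A minimal Python sketch of the entry power-gating follows; the structure sizes, mode names, and the choice of which subset stays powered are illustrative assumptions.

```python
class StorageStructure:
    """Storage shared by both processing modes (e.g. a predictor table);
    in the low-energy mode only a subset of entries stays powered."""
    def __init__(self, num_entries=64, low_power_entries=16):
        self.entries = [None] * num_entries
        self.low_power_entries = low_power_entries
        self.powered = [True] * num_entries

    def set_mode(self, mode):
        for i in range(len(self.entries)):
            # Second mode selected: place all but the first few entries
            # in a power saving state; first mode powers everything.
            self.powered[i] = (mode != "low_power") or (i < self.low_power_entries)
```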
Abstract:
An apparatus and method are provided for controlling allocation of data into cache storage. The apparatus comprises processing circuitry for executing instructions, and a cache storage for storing data accessed when executing the instructions. Cache control circuitry is arranged, while a sensitive allocation condition is determined to exist, to be responsive to the processing circuitry speculatively executing a memory access instruction that identifies data to be allocated into the cache storage, to allocate the data into the cache storage and to set a conditional allocation flag in association with the data allocated into the cache storage. The cache control circuitry is then responsive to detecting an allocation resolution event, to determine based on the type of the allocation resolution event whether to clear the conditional allocation flag such that the data is thereafter treated as unconditionally allocated, or to cause invalidation of the data in the cache storage. Such an approach can reduce the vulnerability of a cache to speculation-based cache timing side-channel attacks.
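A minimal Python sketch of the conditional-allocation scheme follows; the cache model and all names are assumptions used only to show the flag lifecycle the abstract describes.

```python
class Cache:
    """Lines filled by speculative accesses carry a conditional flag; on
    resolution the flag is cleared (speculation confirmed) or the line
    is invalidated (misspeculation), narrowing the timing side channel."""
    def __init__(self):
        self.lines = {}  # address -> {"data": ..., "conditional": bool}

    def speculative_fill(self, addr, data, sensitive_condition):
        # While the sensitive allocation condition exists, mark the
        # allocation as conditional rather than unconditional.
        self.lines[addr] = {"data": data, "conditional": sensitive_condition}

    def resolve(self, addr, speculation_confirmed):
        # Allocation resolution event: keep or discard the line.
        line = self.lines.get(addr)
        if line is None or not line["conditional"]:
            return
        if speculation_confirmed:
            line["conditional"] = False  # now unconditionally allocated
        else:
            del self.lines[addr]         # invalidate the misspeculated line
```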
Abstract:
A data processing apparatus comprises branch prediction circuitry adapted to store at least one branch prediction state entry in relation to a stream of instructions, input circuitry to receive at least one input to generate a new branch prediction state entry, wherein the at least one input comprises a plurality of bits; and coding circuitry adapted to perform an encoding operation to encode at least some of the plurality of bits based on a value associated with a current execution environment in which the stream of instructions is being executed. This guards against potential attacks which exploit the ability for branch prediction entries trained by one execution environment to be used by another execution environment as a basis for branch predictions.
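A minimal Python sketch of the per-context encoding follows; the key derivation is a deliberately weak stand-in (real designs would use a stronger mix), and all names are hypothetical.

```python
def context_key(asid, exception_level):
    # Assumed key derivation from values identifying the current
    # execution environment; illustrative only.
    return (asid * 0x9E3779B1 ^ exception_level) & 0xFFFF

def encode_entry(target_bits, asid, el):
    # XOR new branch-prediction state with the context's key before storing.
    return target_bits ^ context_key(asid, el)

def decode_entry(stored_bits, asid, el):
    # Only the context that trained the entry recovers the real bits;
    # any other context decodes it to garbage, defeating cross-context
    # predictor-training attacks.
    return stored_bits ^ context_key(asid, el)

bits = encode_entry(0x1234, asid=7, el=0)
assert decode_entry(bits, asid=7, el=0) == 0x1234
assert decode_entry(bits, asid=9, el=0) != 0x1234
```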
Abstract:
Apparatus for processing data is provided with fetch circuitry for fetching program instructions for execution from one or more active threads of instructions having respective program counter values. Pipeline circuitry has a first operating mode and a second operating mode. Mode switching circuitry switches the pipeline circuitry between the first operating mode and the second operating mode in dependence upon a number of active threads of program instructions having program instructions available to be executed. The first operating mode has a lower average energy consumption per instruction executed than the second operating mode, and the second operating mode has a higher average rate of instruction execution for a single thread than the first operating mode. The first operating mode may utilise a barrel processing pipeline to perform interleaved multiple thread processing. The second operating mode may utilise an out-of-order processing pipeline for performing out-of-order processing.
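A minimal Python sketch of the mode-switching policy follows; the thread-count threshold and mode names are illustrative assumptions.

```python
def select_mode(num_active_threads, threshold=2):
    """Few runnable threads: favour single-thread speed (out-of-order).
    Many runnable threads: interleave them in the barrel pipeline,
    which has lower average energy per instruction executed."""
    if num_active_threads >= threshold:
        return "barrel"
    return "out_of_order"

assert select_mode(1) == "out_of_order"  # one thread: maximise its rate
assert select_mode(4) == "barrel"        # many threads: save energy
```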