Abstract:
Processing circuitry 4 has a plurality of exception states EL0-EL3 for handling exception events, the exception states including a base level exception state EL0 and at least one further level exception state EL1-EL3. Each exception state has a corresponding stack pointer indicating the location within the memory of a corresponding stack data store 35. When the processing circuitry is in the base level exception state EL0, stack pointer selection circuitry 40 selects the base level stack pointer as a current stack pointer indicating a current stack data store for use by the processing circuitry 4. When the processing circuitry 4 is in a further level exception state, the stack pointer selection circuitry 40 selects either the base level stack pointer or the further level stack pointer corresponding to the current further level exception state as the current stack pointer.
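As a rough illustration of the selection behaviour described in this abstract, the following Python sketch models the choice of current stack pointer; the class name, the per-level select flags and the example addresses are assumptions made for illustration only, not part of the patented circuitry.

```python
# Illustrative model (not the patented hardware) of stack pointer selection:
# the base level EL0 always uses the base-level stack pointer, while a
# further level ELx may use either the base-level stack pointer or its own,
# here chosen by a hypothetical per-level select flag.

class StackPointerSelector:
    def __init__(self):
        # One stack pointer per exception level EL0-EL3 (example addresses).
        self.sp = {0: 0x1000, 1: 0x2000, 2: 0x3000, 3: 0x4000}
        # Per-level flag: True -> use the level's own stack pointer,
        # False -> fall back to the base-level (EL0) stack pointer.
        self.use_own_sp = {1: True, 2: True, 3: True}

    def current_sp(self, exception_level):
        if exception_level == 0:
            return self.sp[0]                  # base level: always its own SP
        if self.use_own_sp[exception_level]:
            return self.sp[exception_level]    # further level's own SP
        return self.sp[0]                      # shared base-level SP

sel = StackPointerSelector()
print(hex(sel.current_sp(0)))   # 0x1000
sel.use_own_sp[2] = False
print(hex(sel.current_sp(2)))   # 0x1000: EL2 redirected to the base-level SP
```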
Abstract:
A processor 4 is provided which supports a first instruction set specifying 32-bit architectural registers and a second instruction set specifying 64-bit architectural registers. Each of these instruction sets is presented with its own set of architectural registers for use. The first set of registers presented to the first instruction set has a one-to-one mapping to the second set of registers presented to the second instruction set. The registers which are provided in hardware are 64-bit registers. In some embodiments, when executing program instructions of the first instruction set, only the least significant portion of these 64-bit registers is accessed and manipulated, with the remaining most significant portion of the registers being left unaltered. Register-specifying fields within instructions of the first instruction set are decoded together with a current exception mode to determine which architectural register to use, whereas the second instruction set uses register-specifying fields without a dependence upon exception mode to determine which architectural registers are to be used.
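The register mapping can be pictured with a small Python sketch; the banking table, mode names and register numbers below are hypothetical, chosen only to show how one file of 64-bit hardware registers can back both architectural views.

```python
# Illustrative sketch (not the actual decode logic) of mapping two
# architectural register views onto one file of 64-bit hardware registers.

REGS = [0] * 32                       # 64-bit hardware registers

# Hypothetical banking table: (32-bit register field, exception mode)
# -> hardware register index.  Only a couple of banked entries are shown.
BANK_MAP = {
    (13, 'usr'): 13,   # e.g. user-mode stack pointer
    (13, 'irq'): 17,   # e.g. IRQ-mode banked stack pointer
}

def read_32bit_view(reg_field, mode):
    """First instruction set: decode uses the exception mode and returns
    only the least significant 32 bits; the upper bits are untouched."""
    hw = BANK_MAP.get((reg_field, mode), reg_field)
    return REGS[hw] & 0xFFFF_FFFF

def write_32bit_view(reg_field, mode, value):
    hw = BANK_MAP.get((reg_field, mode), reg_field)
    REGS[hw] = (REGS[hw] & ~0xFFFF_FFFF) | (value & 0xFFFF_FFFF)

def read_64bit_view(reg_field):
    """Second instruction set: the register field alone selects the register."""
    return REGS[reg_field]

REGS[13] = 0xDEAD_BEEF_0000_0000
write_32bit_view(13, 'usr', 0x1234)
print(hex(read_64bit_view(13)))         # 0xdeadbeef00001234: top half preserved
print(hex(read_32bit_view(13, 'irq')))  # 0x0: IRQ mode selects a different, banked register
```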
Abstract:
A data processing apparatus and method are provided for switching performance of a workload between two processing circuits. The data processing apparatus has first processing circuitry which is architecturally compatible with second processing circuitry, but with the first processing circuitry being micro-architecturally different from the second processing circuitry. At any point in time, a workload consisting of at least one application and at least one operating system for running that application is performed by one of the first processing circuitry and the second processing circuitry. A switch controller is responsive to a transfer stimulus to perform a handover operation to transfer performance of the workload from source processing circuitry to destination processing circuitry, with the source processing circuitry being one of the first and second processing circuitry and the destination processing circuitry being the other of the first and second processing circuitry. The switch controller is arranged, during the handover operation, to cause the source processing circuitry to make its current architectural state available to the destination processing circuitry, the current architectural state being that state which is not available from shared memory shared between the first and second processing circuitry at the time the handover operation is initiated, and which is necessary for the destination processing circuitry to successfully take over performance of the workload from the source processing circuitry. Further, the source processing circuitry and destination processing circuitry implement an accelerated mechanism to make the current architectural state available to the destination processing circuitry without routing the current architectural state via the shared memory. Since the accelerated mechanism is quick and energy efficient, it increases the number of situations in which it is energy efficient to make the switch from one processing circuitry to the other.
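A conceptual sketch of the handover follows, under a deliberately simplified state model; the Core class and its field names are invented for illustration and say nothing about how the accelerated hardware mechanism is actually built.

```python
# Conceptual sketch (my own simplification, not the patented design) of a
# handover: the source core snapshots the architectural state that is not
# already in shared memory and hands it directly to the destination core.

from dataclasses import dataclass, field

@dataclass
class Core:
    name: str
    regs: dict = field(default_factory=dict)   # architectural state
    running: bool = False

def handover(source: Core, destination: Core):
    """Transfer the workload: the state moves core-to-core (modelling the
    accelerated mechanism) rather than through any shared memory object."""
    snapshot = dict(source.regs)      # state not visible in shared memory
    source.running = False
    destination.regs.update(snapshot)
    destination.running = True

big = Core('big', regs={'pc': 0x8000, 'sp': 0x1_0000}, running=True)
little = Core('LITTLE')
handover(big, little)
print(little.regs, little.running)   # workload continues on the other core
```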
Abstract:
Apparatus for data processing 2 is provided with processing circuitry 8 which operates in one or more secure modes 40 and one or more non-secure modes 42. When operating in a non-secure mode, one or more regions of the memory are inaccessible. A memory management unit 24 is responsive to page table data to manage accesses to the memory, which includes a secure memory 22 and a non-secure memory 6. Secure mode page table data 36, 38 is used when operating in one of the secure modes. A page table entry within the hierarchy of page tables of the secure mode page table data includes a table security field 68, 72 indicating whether a further page table pointed to by that page table entry is stored within the secure memory 22 or the non-secure memory 6. If any of the page tables associated with a memory access are stored within the non-secure memory 6, then the memory access is marked with a table attribute bit NST indicating that the memory access should be treated as non-secure.
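The marking of a memory access as non-secure when any table on the walk is held in non-secure memory can be sketched as follows; the per-level flag name and the walk structure are assumptions, not the MMU's real page table format.

```python
# Simplified walk (assumed structure, not the patented MMU) showing how a
# per-entry table security field can force the final access to be tagged
# non-secure (NST) if any table on the walk lives in non-secure memory.

def walk(levels):
    """levels: list of dicts, one per page table level, each carrying a
    hypothetical 'table_in_secure_memory' flag for the next-level table."""
    nst = False
    for entry in levels:
        if not entry['table_in_secure_memory']:
            nst = True            # any non-secure table taints the access
    return {'treat_as_non_secure': nst}

secure_walk = [{'table_in_secure_memory': True},
               {'table_in_secure_memory': True}]
mixed_walk  = [{'table_in_secure_memory': True},
               {'table_in_secure_memory': False}]
print(walk(secure_walk))   # {'treat_as_non_secure': False}
print(walk(mixed_walk))    # {'treat_as_non_secure': True}
```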
Abstract:
An apparatus for processing data 2 includes a processor 8, a memory 6 and memory control circuitry 12. The processor 8 operates in a plurality of hardware modes including a privileged mode and a user mode. When operating in the privileged mode, the processor 8 is blocked by the memory control circuitry 12 from fetching instructions from memory address regions 34, 38, 42 within the memory 6 which are writeable within the user mode if a security flag within register 46 is set to indicate that this blocking mechanism is active.
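A toy version of the blocking rule, assuming the check reduces to three inputs (current mode, whether the region is writeable in the user mode, and the security flag); the function and argument names are illustrative, not taken from the abstract.

```python
# Toy permission check (my reading of the abstract, not the real circuitry):
# an instruction fetch in the privileged mode is refused if the region is
# writeable in the user mode and the security flag is set.

def fetch_allowed(mode, region_user_writeable, security_flag_set):
    if mode == 'privileged' and region_user_writeable and security_flag_set:
        return False          # block fetch from a user-writeable region
    return True

print(fetch_allowed('privileged', region_user_writeable=True,
                    security_flag_set=True))    # False: fetch blocked
print(fetch_allowed('privileged', region_user_writeable=True,
                    security_flag_set=False))   # True: mechanism inactive
print(fetch_allowed('user', region_user_writeable=True,
                    security_flag_set=True))    # True: rule targets privileged mode
```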
Abstract:
Access to memory address space is controlled by memory access control circuitry using access control data. The ability to change the access control data is controlled by domain control circuitry. Whether an instruction stored within a particular domain, a domain being a set of memory addresses, is able to modify the access control data depends upon the domain concerned. Thus, the ability to change access control data can be restricted to instructions stored within particular defined locations within the memory address space, thereby enhancing security. This capability allows systems to be provided in which call forwarding to an operating system can be enforced via call forwarding code, and in which trusted regions of the memory address space can be established into which a secure operating system may write data with increased confidence that the data will only be accessible by trusted software executing under control of a non-secure operating system.
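The domain check might be pictured like this; the address ranges, the domain names and the AUTHORISED_DOMAINS set are invented for the example and are not taken from the abstract.

```python
# Sketch (hypothetical layout, not the patented circuitry) of domain
# control: only instructions stored in an authorised domain of the memory
# address space may rewrite the access control data.

AUTHORISED_DOMAINS = {'call_forwarding_code'}   # assumed trusted domain

def domain_of(address):
    # Hypothetical address map: one trusted region, everything else untrusted.
    return 'call_forwarding_code' if 0x4000 <= address < 0x5000 else 'other'

def try_update_access_control(instruction_address, access_control, new_value):
    if domain_of(instruction_address) in AUTHORISED_DOMAINS:
        access_control.update(new_value)
        return True
    return False                                 # modification refused

acl = {'region0': 'read-only'}
print(try_update_access_control(0x4100, acl, {'region0': 'read-write'}))  # True
print(try_update_access_control(0x9000, acl, {'region0': 'read-only'}))   # False
print(acl)
```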
Abstract:
A data processor core 10 comprises a memory access interface portion 30 operable to perform data transfer operations between an external data source and at least one memory associated with the data processor core, and a data processing portion 12 operable to perform further data processing operations in response to receipt of a processor clock signal CLK. The two portions of the core can be independently enabled such that one portion may be active while the other is inactive.
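A minimal sketch of the independent enables, assuming each portion simply checks its own enable flag on every step; the class and method names are illustrative only.

```python
# Minimal sketch (illustrative only) of two core portions gated by
# independent enables, so the memory access interface can keep moving data
# while the data processing portion is held inactive.

class ProcessorCore:
    def __init__(self):
        self.memory_interface_enabled = False
        self.processing_enabled = False

    def tick(self):
        actions = []
        if self.memory_interface_enabled:
            actions.append('transfer data with external source')
        if self.processing_enabled:
            actions.append('process data on CLK')
        return actions or ['idle']

core = ProcessorCore()
core.memory_interface_enabled = True     # transfers only
print(core.tick())                       # the processing portion stays inactive
```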
Abstract:
A data processing apparatus 2 supports multiple memory access program instructions LDM, STM which serve to load data values from multiple memory locations into respective program registers 16 or to store data values from multiple program registers to respective memory locations. A memory management unit 8 within the system stores device or strongly ordered memory attribute values which control whether or not a multiple memory access instruction involving such a memory location may be terminated early when an interrupt is received during its operation. Early termination is permitted in those circumstances where the multiple memory access instruction may be safely restarted and rerun in its entirety, whereas early termination is not permitted, and the operation completes before the interrupt is taken, in those circumstances where the memory locations are subject to a guaranteed number of memory accesses as specified by the controlling program instructions.
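The early-termination decision can be summarised in a few lines, assuming the relevant memory attribute collapses to a single device-or-strongly-ordered flag; the function and flag names are hypothetical.

```python
# Decision sketch (assumed flags, not the MMU's real attribute encoding):
# an interrupt may terminate a multiple-register memory access early only
# when the instruction can safely be restarted and rerun in its entirety.

def interrupt_response(interrupt_pending, device_or_strongly_ordered):
    if not interrupt_pending:
        return 'continue'
    if device_or_strongly_ordered:
        # Accesses are guaranteed a fixed number of times: finish first.
        return 'complete instruction, then take interrupt'
    # Normal memory: abandon now and restart the instruction afterwards.
    return 'terminate early and restart later'

print(interrupt_response(True, device_or_strongly_ordered=False))
print(interrupt_response(True, device_or_strongly_ordered=True))
```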
Abstract:
The present invention provides an apparatus and method for synchronizing a first pipeline and a second pipeline of a processor arranged to execute a sequence of instructions. The processor is arranged to route an instruction in the sequence through either the first or the second pipeline dependent on predetermined criteria, each pipeline having a plurality of pipeline stages including a retirement stage. Counter logic is provided for maintaining a first counter relating to the first pipeline and a second counter relating to the second pipeline. For each instruction in the first pipeline a determination is made as to when that instruction reaches a point within the first pipeline where an exception status of that instruction is resolved, and the counter logic is arranged to increment the first counter responsive to such determination. The processor is arranged to generate an indication within the second pipeline each time an instruction is routed to the first pipeline, and the counter logic is further arranged to increment the second counter responsive to that indication. Synchronization logic is then provided which is arranged, when an instruction is in the retirement stage of the second pipeline, to determine with reference to the values of the first and second counters whether that instruction can be retired. If so, the retirement stage is arranged to cause an update of a state of the data processing apparatus dependent on the result of execution of that instruction.
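The counter comparison might look like the following sketch, which assumes retirement is safe exactly when the first counter has caught up with the second; the names are illustrative and the real logic would involve more pipeline detail.

```python
# Counter-based synchronization sketch (my simplification of the abstract):
# the second pipeline retires an instruction only once the first pipeline
# has resolved the exception status of every instruction routed ahead of it.

class SyncCounters:
    def __init__(self):
        self.resolved_in_first = 0    # first counter
        self.routed_to_first = 0      # second counter

    def instruction_routed_to_first_pipeline(self):
        self.routed_to_first += 1     # indication seen by the second pipeline

    def exception_status_resolved(self):
        self.resolved_in_first += 1

    def can_retire_in_second_pipeline(self):
        # Safe to retire when nothing routed to the first pipeline is still
        # awaiting exception resolution.
        return self.resolved_in_first >= self.routed_to_first

sync = SyncCounters()
sync.instruction_routed_to_first_pipeline()
print(sync.can_retire_in_second_pipeline())   # False: still unresolved
sync.exception_status_resolved()
print(sync.can_retire_in_second_pipeline())   # True: state may be updated
```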
Abstract:
The present invention provides an apparatus and method for accessing data values in a cache, and in particular for accessing data values in an ‘n’ way set-associative cache. A data processing apparatus is provided comprising an ‘n’ way set-associative cache, each cache way having a plurality of entries for storing a corresponding plurality of data values. A cache controller is provided which is operable on receipt of an access request for a data value to determine whether that data value is accessible within the cache, the cache comprising cache access logic operable under the control of the cache controller to determine whether a data value that is the subject of an access request is accessible in one of the cache ways. Also provided is a way lookup cache arranged to store an indication of the cache way in which a number of the plurality of data values stored in the cache are accessible. When an access request for a data value specifies a non-sequential access, the cache controller is operable to reference the way lookup cache to determine whether that data value is identified in the way lookup cache and, if so, to suppress the operation of the cache access logic and to cause that data value to be accessed. The provision of a way lookup cache enables the power consumption of the cache to be reduced by enabling the operation of the cache access logic to be suppressed.
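A sketch of how a way lookup cache can short-circuit the full n-way lookup follows; the capacity, eviction policy and function names below are assumptions made for the example rather than features claimed in the abstract.

```python
# Sketch (invented structure, not the patented logic) of a way lookup
# cache: a hit lets the controller read just one way and skip the full
# n-way tag comparison that would otherwise consume power.

class WayLookupCache:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}                 # address -> cache way number

    def lookup(self, address):
        return self.entries.get(address)  # None on a miss

    def record(self, address, way):
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))   # evict the oldest entry
        self.entries[address] = way

def non_sequential_access(address, way_lookup, full_lookup):
    way = way_lookup.lookup(address)
    if way is not None:
        return f'read way {way} directly (cache access logic suppressed)'
    way = full_lookup(address)            # full n-way compare, then remember it
    way_lookup.record(address, way)
    return f'full lookup found way {way}'

wlc = WayLookupCache()
print(non_sequential_access(0x80, wlc, full_lookup=lambda a: 2))  # full lookup
print(non_sequential_access(0x80, wlc, full_lookup=lambda a: 2))  # suppressed
```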