Abstract:
A data processing apparatus and method are provided for switching performance of a workload between two processing circuits. The data processing apparatus has first processing circuitry which is architecturally compatible with second processing circuitry, but with the first processing circuitry being micro-architecturally different from the second processing circuitry. At any point in time, a workload consisting of at least one application and at least one operating system for running that application is performed by one of the first processing circuitry and the second processing circuitry. A switch controller is responsive to a transfer stimulus to perform a handover operation to transfer performance of the workload from source processing circuitry to destination processing circuitry, with the source processing circuitry being one of the first and second processing circuitry and the destination processing circuitry being the other of the first and second processing circuitry. The switch controller is arranged, during the handover operation, to cause the source processing circuitry to make its current architectural state available to the destination processing circuitry, the current architectural state being that state not available from shared memory shared between the first and second processing circuitry at a time the handover operation is initiated, and that is necessary for the destination processing circuitry to successfully take over performance of the workload from the source processing circuitry. Further, the source processing circuitry and the destination processing circuitry implement an accelerated mechanism to make the current architectural state available to the destination processing circuitry without routing of the current architectural state via the shared memory. Since the accelerated mechanism is quick and energy efficient, it increases the number of situations in which it is energy efficient to make the switch from one processing circuitry to the other.
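As an illustration of the handover described above, the following C sketch models a source core handing its architectural state directly to a destination core over a dedicated path, bypassing shared memory. The structure names and register counts are hypothetical, not the patented implementation.

```c
/* Minimal sketch (not the patented implementation): a source core hands its
 * architectural state to a destination core over a dedicated path, bypassing
 * shared memory.  All structure names and register counts are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define NUM_GP_REGS 16

typedef struct {
    uint32_t gp_regs[NUM_GP_REGS];  /* general-purpose registers */
    uint32_t pc;                    /* program counter */
    uint32_t status;                /* processor status flags */
} arch_state_t;

typedef struct {
    arch_state_t state;
    int owns_workload;              /* 1 if this core currently performs the workload */
} core_t;

/* Accelerated handover: copy state core-to-core without touching shared memory. */
static void handover(core_t *source, core_t *dest)
{
    dest->state = source->state;    /* direct transfer over the dedicated path */
    dest->owns_workload = 1;
    source->owns_workload = 0;      /* source can now enter a power-saving condition */
}

int main(void)
{
    core_t big    = { .state = { .pc = 0x8000, .status = 0x1F }, .owns_workload = 1 };
    core_t little = { 0 };

    handover(&big, &little);        /* triggered by a transfer stimulus, e.g. low load */
    printf("destination core resumes at pc=0x%x\n", (unsigned)little.state.pc);
    return 0;
}
```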
Abstract:
A data processing apparatus is provided comprising a processing unit for executing instructions, a cache structure for storing instructions retrieved from memory for access by the processing unit, and profiling logic for identifying a sequence of instructions that is functionally equivalent to an accelerator instruction. When such a sequence of instructions is identified, the equivalent accelerator instruction is stored in the cache structure as a replacement for the first instruction of the sequence, with the remaining instructions in the sequence of instructions being stored unchanged. The accelerator instruction includes an indication to cause the processing unit to skip the remainder of the sequence when executing the accelerator instruction.
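The replacement scheme can be pictured with the following C sketch, in which a made-up MUL/ADD pair is fused into a hypothetical accelerator instruction carrying a skip count; the opcodes and cache model are invented for illustration only.

```c
/* Sketch under assumptions: a tiny modelled instruction cache where profiling
 * replaces the first instruction of a recognised two-instruction sequence
 * (MUL followed by ADD) with a fused accelerator instruction carrying a skip
 * count.  Opcodes and encodings are invented for illustration. */
#include <stdio.h>

enum opcode { OP_MUL, OP_ADD, OP_MLA_ACCEL, OP_NOP };

typedef struct {
    enum opcode op;
    int skip;            /* extra instructions to skip after executing this one */
} cached_insn_t;

static void profile_and_replace(cached_insn_t *cache, int len)
{
    for (int i = 0; i + 1 < len; i++) {
        if (cache[i].op == OP_MUL && cache[i + 1].op == OP_ADD) {
            cache[i].op = OP_MLA_ACCEL;  /* replace first instruction of the sequence */
            cache[i].skip = 1;           /* tell the core to skip the following ADD */
            /* cache[i + 1] is left unchanged, as in the abstract */
        }
    }
}

static void execute(const cached_insn_t *cache, int len)
{
    for (int pc = 0; pc < len; pc++) {
        printf("executing insn %d (op=%d)\n", pc, cache[pc].op);
        pc += cache[pc].skip;            /* skip the remainder of a replaced sequence */
    }
}

int main(void)
{
    cached_insn_t cache[] = { { OP_MUL, 0 }, { OP_ADD, 0 }, { OP_NOP, 0 } };
    profile_and_replace(cache, 3);
    execute(cache, 3);                   /* the ADD is skipped at run time */
    return 0;
}
```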
Abstract:
A data processing apparatus and method are provided for pre-decoding instructions. The data processing apparatus has pre-decoding circuitry for receiving instructions fetched from a memory and for performing a pre-decoding operation to generate corresponding pre-decoded instructions, which are then stored in a cache for access by the processing circuitry. If a pre-decoded instruction crosses a cache line boundary, then, for selected types of pre-decoded instruction, checking circuitry checks for consistency between the first portion of the pre-decoded instruction stored within a first cache line and a contiguous second portion of the pre-decoded instruction stored within a second cache line. If this consistency check is passed such that the two portions are self-consistent, then the pre-decoded instruction can be further decoded and issued. If the consistency check is failed, or the pre-decoded instruction is not of a type for which consistency checking is supported, then re-generation of the pre-decoded instruction is triggered.
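A rough C model of the cross-cache-line consistency check follows. The sideband fields (continuation flags, sequence tag) and the rule for which types support checking are assumptions for illustration, not the actual pre-decode encoding.

```c
/* Minimal sketch (hypothetical encodings): a pre-decoded instruction that
 * straddles a cache-line boundary is only issued if its two portions are
 * self-consistent; otherwise pre-decoding is redone. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned type;      /* pre-decoded instruction type */
    bool continues;     /* set in the first portion: instruction spans two lines */
    bool continuation;  /* set in the second portion: this is a trailing half */
    unsigned seq_tag;   /* sideband tag expected to match across both portions */
} predecoded_portion_t;

/* Assumption for the sketch: only "low" type codes support consistency checking. */
static bool consistency_supported(unsigned type) { return type < 8; }

static bool portions_consistent(const predecoded_portion_t *first,
                                const predecoded_portion_t *second)
{
    return first->continues && second->continuation &&
           first->seq_tag == second->seq_tag;
}

/* Returns true if the pre-decoded instruction can be issued, false if it must
 * be re-generated from the instruction fetched from memory. */
static bool check_cross_line(const predecoded_portion_t *first,
                             const predecoded_portion_t *second)
{
    if (!consistency_supported(first->type))
        return false;                       /* unsupported type: always re-generate */
    return portions_consistent(first, second);
}

int main(void)
{
    predecoded_portion_t a = { .type = 2, .continues = true, .seq_tag = 7 };
    predecoded_portion_t b = { .type = 2, .continuation = true, .seq_tag = 7 };
    printf(check_cross_line(&a, &b) ? "issue\n" : "re-generate\n");
    return 0;
}
```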
Abstract:
A data processing apparatus and method are provided for handling instructions to be executed by processing circuitry. The processing circuitry has a plurality of processor states, each processor state having a different instruction set associated therewith. Pre-decoding circuitry receives the instructions fetched from the memory and performs a pre-decoding operation to generate corresponding pre-decoded instructions, with those pre-decoded instructions then being stored in a cache for access by the processing circuitry. The pre-decoding circuitry performs the pre-decoding operation assuming a speculative processor state, and the cache is arranged to store an indication of the speculative processor state in association with the pre-decoded instructions. The processing circuitry is then arranged only to execute an instruction in the sequence using the corresponding pre-decoded instruction from the cache if a current processor state of the processing circuitry matches the indication of the speculative processor state stored in the cache for that instruction. This provides a simple and effective mechanism for detecting instructions that have been corrupted by the pre-decoding operation due to an incorrect assumption of processor state.
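The state-matching rule can be sketched as below, assuming a toy cache entry that records the instruction-set state assumed at pre-decode time; the states and fields are hypothetical.

```c
/* Sketch under assumptions: each cached pre-decoded instruction records the
 * processor state (instruction set) assumed during pre-decoding, and the
 * entry is only used when the core's current state matches. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { STATE_A = 0, STATE_B = 1 } proc_state_t;   /* two instruction sets */

typedef struct {
    unsigned predecoded;        /* pre-decoded form of the instruction */
    proc_state_t assumed_state; /* speculative state used by the pre-decoder */
    bool valid;
} icache_entry_t;

/* Returns true if the entry may be used; false forces re-fetch and re-pre-decode. */
static bool usable(const icache_entry_t *e, proc_state_t current_state)
{
    return e->valid && e->assumed_state == current_state;
}

int main(void)
{
    icache_entry_t e = { .predecoded = 0xCAFE, .assumed_state = STATE_A, .valid = true };

    printf("in STATE_A: %s\n", usable(&e, STATE_A) ? "use cached" : "re-pre-decode");
    printf("in STATE_B: %s\n", usable(&e, STATE_B) ? "use cached" : "re-pre-decode");
    return 0;
}
```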
Abstract:
In response to a transfer stimulus, performance of a processing workload is transferred from a source processing circuitry to a destination processing circuitry, in preparation for the source processing circuitry to be placed in a power saving condition following the transfer. To reduce the number of memory fetches required by the destination processing circuitry following the transfer, a cache of the source processing circuitry is maintained in a powered state for a snooping period. During the snooping period, cache snooping circuitry snoops data values in the source cache and retrieves the snooped data values for the destination processing circuitry.
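The snooping window can be modelled roughly as follows; the cache layout and the memory fall-back are simplified assumptions rather than the described circuitry.

```c
/* Minimal sketch (illustrative only): after a workload transfer the source
 * core's cache stays powered for a snooping window, so the destination core's
 * reads can be satisfied by snooping the source cache instead of fetching
 * from memory. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINES 4

typedef struct {
    bool     powered;                        /* false once the snooping period ends */
    bool     valid[CACHE_LINES];
    uint32_t addr[CACHE_LINES];
    uint32_t data[CACHE_LINES];
} src_cache_t;

static bool snoop(const src_cache_t *c, uint32_t addr, uint32_t *out)
{
    if (!c->powered)
        return false;                        /* snooping period over */
    for (int i = 0; i < CACHE_LINES; i++)
        if (c->valid[i] && c->addr[i] == addr) { *out = c->data[i]; return true; }
    return false;
}

static uint32_t dest_read(const src_cache_t *src, uint32_t addr)
{
    uint32_t v;
    if (snoop(src, addr, &v)) {
        printf("0x%x: snooped from source cache\n", (unsigned)addr);
        return v;
    }
    printf("0x%x: fetched from memory\n", (unsigned)addr);
    return 0;                                /* placeholder memory fetch */
}

int main(void)
{
    src_cache_t src = { .powered = true, .valid = { true }, .addr = { 0x100 }, .data = { 42 } };
    dest_read(&src, 0x100);   /* hit in the still-powered source cache */
    src.powered = false;      /* end of snooping period: source cache powered down */
    dest_read(&src, 0x100);   /* now goes to memory */
    return 0;
}
```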
Abstract:
A data processing apparatus is provided with pre-decoding circuitry 10 serving to generate pre-decoded instructions which are stored within an instruction cache 20. The pre-decoded instructions from the instruction cache 20 are read by decoding circuitry 45, 50, 46 and used to form control signals for controlling processing operations corresponding to the pre-decoded instructions. The program instructions originally fetched can belong to respective ones of a plurality of instruction sets. Instructions from one instruction set are pre-decoded by the pre-decoding circuitry 10 into pre-decoded instructions having a shared format to represent shared functionality with corresponding instructions taken from another of the instruction sets. In this way, a shared portion of the decoding circuitry can generate control signals with respect to the shared functionality of instructions from both of these different instruction sets.
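A toy C sketch of the shared-format idea follows, using two invented encodings of an ADD instruction that are pre-decoded into one common format handled by a single shared decoder routine; the encodings and bit positions are illustrative only.

```c
/* Sketch under assumptions: invented encodings for two instruction sets whose
 * ADD instructions are pre-decoded into one shared format, so a single shared
 * decoder portion can produce the control signals for both. */
#include <stdint.h>
#include <stdio.h>

typedef enum { SET_X, SET_Y } insn_set_t;          /* two source instruction sets */

typedef struct {                                   /* shared pre-decoded format */
    unsigned op;                                   /* 0 = ADD in this sketch */
    unsigned rd, rn, rm;
} predecoded_t;

/* Pre-decoder: maps either native encoding onto the shared format. */
static predecoded_t predecode(insn_set_t set, uint32_t raw)
{
    predecoded_t p = { .op = 0 };
    if (set == SET_X) {            /* set X: rd in bits [11:8], rn [7:4], rm [3:0] */
        p.rd = (raw >> 8) & 0xF; p.rn = (raw >> 4) & 0xF; p.rm = raw & 0xF;
    } else {                       /* set Y: rd in bits [3:0], rn [7:4], rm [11:8] */
        p.rd = raw & 0xF; p.rn = (raw >> 4) & 0xF; p.rm = (raw >> 8) & 0xF;
    }
    return p;
}

/* Shared decoder portion: one routine serves instructions from either set. */
static void decode_shared(const predecoded_t *p)
{
    printf("ALU op %u: r%u <- r%u + r%u\n", p->op, p->rd, p->rn, p->rm);
}

int main(void)
{
    predecoded_t a = predecode(SET_X, 0x0123);
    predecoded_t b = predecode(SET_Y, 0x0321);
    decode_shared(&a);   /* both print identical control information */
    decode_shared(&b);
    return 0;
}
```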
Abstract:
A data processing apparatus and method are provided for pre-decoding instructions. The data processing apparatus has pre-decoding circuitry for receiving instructions fetched from memory and for performing a pre-decoding operation to generate corresponding pre-decoded instructions, which are then stored in a cache for access by processing circuitry. For a first set of instructions, each instruction comprises a plurality of instruction portions, and the pre-decoding circuitry generates a corresponding pre-decoded instruction comprising a plurality of pre-decoded instruction portions. If, when applying the pre-decoding operation to an instruction in the first set, the pre-decoding circuitry does not have access to all of the plurality of instruction portions of that instruction, the pre-decoding circuitry is arranged to provide, in association with at least one pre-decoded instruction portion that it does generate, an indication that the pre-decoded instruction portion relates to an incomplete pre-decoding operation. This provides a simple and effective mechanism for detecting situations where a pre-decoded instruction as later read from the cache may have become corrupted by the pre-decoding operation, and accordingly should not be relied upon.
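The incomplete-pre-decode marker can be illustrated with the small C sketch below, in which a first portion pre-decoded without sight of its second portion is tagged so that it will later be re-generated rather than trusted; the fields and the toy transformation are assumptions.

```c
/* Minimal sketch (hypothetical fields): when the pre-decoder cannot see every
 * portion of a multi-portion instruction, the portion it does emit is tagged
 * as resulting from an incomplete pre-decoding operation, so it will not be
 * relied upon when later read from the cache. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned bits;       /* pre-decoded payload for this portion */
    bool incomplete;     /* set if the pre-decode lacked access to all portions */
} predec_portion_t;

/* Pre-decode the first portion of a two-portion instruction; 'have_second'
 * indicates whether the second portion was available in the fetched block. */
static predec_portion_t predecode_first(unsigned raw_first, bool have_second)
{
    predec_portion_t p = { .bits = raw_first ^ 0xA5 };  /* toy transformation */
    p.incomplete = !have_second;                         /* flag incomplete pre-decode */
    return p;
}

int main(void)
{
    predec_portion_t ok  = predecode_first(0x10, true);
    predec_portion_t bad = predecode_first(0x10, false);

    printf("complete:   %s\n", ok.incomplete  ? "re-pre-decode" : "use");
    printf("incomplete: %s\n", bad.incomplete ? "re-pre-decode" : "use");
    return 0;
}
```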