Abstract:
A data processing apparatus has at least one memory and processing circuitry. A memory built-in self-test (MBIST) interface receives an MBIST request indicating that a test procedure is to be performed for testing at least one target memory location. Control circuitry detects the MBIST request and reserves for testing at least one reserved memory location including the target memory location. During the test procedure, the memory continues servicing memory transactions issued by the processing circuitry that target a memory location other than the at least one reserved memory location. The processing circuitry is stalled if it attempts to access a reserved memory location. Testing consists of short bursts of transactions which occur infrequently. In this way, MBIST testing may continue while the processor is operating in the field, with reduced performance impact.
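To make the reservation behaviour concrete, here is a minimal C sketch of the check described above. It is an illustration only: the names (mbist_reserve, mbist_release, transaction_may_proceed, reserved_base, reserved_size) and the idea of a single contiguous reserved range are assumptions for the sketch, not details taken from the abstract.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative state: a single contiguous address range currently reserved
 * for MBIST. These names and the single-range model are assumptions. */
static uint32_t reserved_base;
static uint32_t reserved_size;
static bool     mbist_active;

/* Called when the control circuitry detects an MBIST request: reserve the
 * memory locations that include the target location under test. */
void mbist_reserve(uint32_t target_base, uint32_t size)
{
    reserved_base = target_base;
    reserved_size = size;
    mbist_active  = true;
}

/* Called when the test burst completes: release the reservation. */
void mbist_release(void)
{
    mbist_active = false;
}

/* Decide how a transaction issued by the processing circuitry is handled:
 * transactions outside the reserved range proceed normally during the test;
 * a transaction that hits the reserved range stalls the processor. */
bool transaction_may_proceed(uint32_t addr)
{
    if (!mbist_active)
        return true;                              /* no test in progress */
    bool hits_reserved = (addr >= reserved_base) &&
                         (addr - reserved_base < reserved_size);
    return !hits_reserved;                        /* stall only on reserved hits */
}
```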
Abstract:
A target apparatus 2 for debug includes a processing pipeline 18 for executing a sequence of program instructions. A debug interface 26 receives debug command signals corresponding directly or indirectly to debug program instructions to be executed. An instruction buffer 24 stores both the debug program instructions and non-debug program instructions. An arbiter 30 selects between the debug program instructions and the non-debug program instructions stored within the instruction buffer to form the sequence of program instructions to be executed by the processing pipeline. A complex coherent memory system 4, 6, 8, 10, 12, 14, 32 is shared by the debug program instructions and the non-debug program instructions such that both obtain the same coherent view of memory.
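The arbitration between the two instruction sources can be sketched in C as below. The single-entry slots and the rule that a pending debug instruction wins arbitration are assumptions made for illustration; the abstract only states that the arbiter 30 selects between debug and non-debug instructions held in the instruction buffer 24.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t insn_t;

/* Single-entry slots standing in for the two kinds of entry held in the
 * instruction buffer 24; the slot structure is an assumption of the sketch. */
typedef struct {
    insn_t insn;
    bool   valid;
} slot_t;

static slot_t debug_slot;      /* filled via the debug interface 26 */
static slot_t non_debug_slot;  /* filled by normal instruction fetch */

/* One arbitration step: prefer a pending debug instruction, otherwise issue
 * the non-debug instruction. This priority rule is an assumption; the
 * abstract only states that the arbiter selects between the two sources
 * to form the sequence executed by the processing pipeline 18. */
bool arbiter_select(insn_t *out)
{
    if (debug_slot.valid) {
        *out = debug_slot.insn;
        debug_slot.valid = false;
        return true;
    }
    if (non_debug_slot.valid) {
        *out = non_debug_slot.insn;
        non_debug_slot.valid = false;
        return true;
    }
    return false;   /* nothing to issue this cycle */
}
```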
Abstract:
A data processing apparatus 2 includes error detection and correction circuitry 8 with an associated hard-error memory buffer 10. When a correctable hard-error associated with a memory access to a memory 6 is detected and the hard-error memory buffer 10 is already full, the correctable hard-error is escalated to be handled as an uncorrectable hard-error. The escalated uncorrectable hard-error is then handled by uncorrectable error handling circuitry 14 (fatal error circuitry), which may trigger an abort of corresponding processing operations by a processor core 4 and force the relinquishing of resources within other circuit elements such as a store buffer 16.
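A minimal C sketch of the escalation rule follows. The buffer capacity, the handler names (handle_correctable, handle_uncorrectable) and the use of addresses as buffer entries are assumptions for illustration; the point shown is only the decision to escalate a correctable hard-error once the hard-error memory buffer is full.

```c
#include <stdint.h>

#define HARD_ERROR_BUFFER_ENTRIES 4u   /* illustrative capacity */

static uint32_t hard_error_buffer[HARD_ERROR_BUFFER_ENTRIES];
static unsigned hard_error_count;

/* Hypothetical handlers standing in for the correction path and for the
 * uncorrectable (fatal) error circuitry 14. */
static void handle_correctable(uint32_t addr)   { (void)addr; /* record and repair */ }
static void handle_uncorrectable(uint32_t addr) { (void)addr; /* abort, free resources */ }

/* On a detected correctable hard-error: use a buffer entry if one is free,
 * otherwise escalate and treat the error as uncorrectable. */
void on_correctable_hard_error(uint32_t addr)
{
    if (hard_error_count < HARD_ERROR_BUFFER_ENTRIES) {
        hard_error_buffer[hard_error_count++] = addr;
        handle_correctable(addr);
    } else {
        handle_uncorrectable(addr);   /* buffer full: escalate */
    }
}
```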
Abstract:
An apparatus includes a processing pipeline comprising a plurality of stages, the plurality of stages including at least one instruction fusing stage to detect whether a block of instructions to be processed comprises a fusible group of instructions, and to generate a fused instruction to be processed by a subsequent stage of the processing pipeline when said block of instructions comprises said fusible group. However, when said block of instructions comprises a partial subset of said fusible group of instructions, the instruction fusing stage is configured to delay handling of said partial subset of said fusible group of instructions until the instruction fusing stage has determined whether at least one subsequent block of instructions to be processed comprises a remaining subset of instructions of said fusible group.
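The deferral of a partial fusible group at a block boundary can be modelled in C roughly as follows. The pair-based fusion, the stub helpers and the choice to hold back the final instruction of every block are simplifying assumptions for the sketch; the abstract does not specify how fusible groups are recognised.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t insn_t;

/* Stub helpers, purely illustrative: a real fusing stage would decode
 * opcodes to recognise a fusible group and encode the fused operation. */
static bool   is_fusible_pair(insn_t a, insn_t b) { return b == a + 1; }
static insn_t make_fused(insn_t a, insn_t b)      { return (a << 16) | (b & 0xffffu); }
static void   issue(insn_t i)                     { printf("issue %08x\n", (unsigned)i); }

/* The first half of a possibly fusible pair, held back at a block boundary
 * rather than issued immediately. */
static insn_t pending;
static bool   have_pending;

/* Process one block of fetched instructions. If the block ends with the
 * first half of a fusible pair, its handling is delayed; when the next block
 * begins with the remaining half, a single fused instruction is issued. */
void fuse_stage(const insn_t *block, size_t n)
{
    size_t i = 0;
    if (have_pending && n > 0) {
        if (is_fusible_pair(pending, block[0])) {
            issue(make_fused(pending, block[0]));
            i = 1;                    /* block[0] consumed by the fusion */
        } else {
            issue(pending);           /* group not completed: issue unfused */
        }
        have_pending = false;
    }
    for (; i < n; i++) {
        if (i + 1 < n && is_fusible_pair(block[i], block[i + 1])) {
            issue(make_fused(block[i], block[i + 1]));
            i++;                      /* both halves consumed */
        } else if (i + 1 == n) {
            pending = block[i];       /* possible partial group: defer */
            have_pending = true;
        } else {
            issue(block[i]);
        }
    }
}
```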
Abstract:
A data processing apparatus 2 includes a cache memory supporting parallel data loads involving both a first address and a second address. The first address is compared with TAG values stored within a first value store 10 and the second address is compared in parallel with TAG values stored within a second value store 14. The second value store 14 contains a proper subset of the TAG values stored within the first value store 10.
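A software model of the dual-address lookup is sketched below in C. The store sizes, the tag extraction and the maintenance of the subset relationship are assumptions for illustration; in hardware the two comparisons occur in the same cycle rather than sequentially as in the model.

```c
#include <stdbool.h>
#include <stdint.h>

#define FIRST_STORE_ENTRIES   4u   /* illustrative sizes, not from the abstract */
#define SECOND_STORE_ENTRIES  2u   /* holds a proper subset of the first store */

static uint32_t first_tags[FIRST_STORE_ENTRIES];
static uint32_t second_tags[SECOND_STORE_ENTRIES];

static bool match(const uint32_t *tags, unsigned n, uint32_t tag)
{
    for (unsigned i = 0; i < n; i++)
        if (tags[i] == tag)
            return true;
    return false;
}

/* Model of the dual-address lookup: the first address is compared against the
 * full TAG store 10 while the second address is compared against the smaller
 * TAG store 14. In hardware the two comparisons happen in parallel within the
 * same cycle; here they are simply written one after the other. */
void dual_lookup(uint32_t addr1, uint32_t addr2, bool *hit1, bool *hit2)
{
    uint32_t tag1 = addr1 >> 12;   /* assumed tag extraction for the sketch */
    uint32_t tag2 = addr2 >> 12;
    *hit1 = match(first_tags,  FIRST_STORE_ENTRIES,  tag1);
    *hit2 = match(second_tags, SECOND_STORE_ENTRIES, tag2);
}
```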
Abstract:
An integrated circuit 2 incorporates prefetch circuitry 12 for prefetching program instructions from a memory 6. The prefetch circuitry 12 includes a branch target address cache 28. The branch target address cache 28 stores data indicative of branch target addresses of previously encountered branch instructions fetched from the memory 6. For each previously encountered branch instruction, the branch target address cache stores a tag value indicative of a fetch address of that previously encountered branch instruction. The tag values stored are generated by tag value generating circuitry 32, which performs a hashing function upon a portion of the fetch address such that the tag value has a bit length less than the bit length of the portion of the fetch address concerned.
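As an illustration of the tag compression, here is a small C sketch of a hash that folds a portion of the fetch address down to a shorter tag. The field widths and the XOR-fold hash are assumptions; the abstract requires only that the stored tag be shorter than the address portion it represents.

```c
#include <stdint.h>

#define TAG_BITS 8   /* assumed tag width; shorter than the hashed address portion */

/* Fold a portion of the fetch address down to a TAG_BITS-bit tag by XOR-ing
 * its halves together. Both the portion selected and the hash itself are
 * assumptions made for illustration. */
uint8_t btac_tag(uint32_t fetch_addr)
{
    uint32_t portion = fetch_addr >> 2;   /* drop instruction-alignment bits */
    uint32_t h = portion;
    h ^= h >> 16;                         /* XOR-fold 32 -> 16 bits */
    h ^= h >> 8;                          /* XOR-fold 16 -> 8 bits  */
    return (uint8_t)(h & ((1u << TAG_BITS) - 1u));
}
```

Because the hashed tag is shorter than the portion it summarises, distinct fetch addresses can alias to the same tag value; in a prediction structure such as a branch target address cache this costs at most an occasional misprediction rather than incorrect execution.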