Abstract:
An integrated circuit (2) is provided with error detection circuitry (10,12) and error repair circuitry (14). Error tolerance circuitry (16) is responsive to a control parameter to selectively disable the error repair circuitry (14). The control parameter is dependent on the processing performed within the circuit. For example, the control parameter may be generated in dependence upon the program instruction being executed, the output signal value which is in error, the previous behaviour of the circuit or in other ways.
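A minimal behavioural sketch in Python of how the selective-repair decision described above might be modelled; the class, method and parameter names (ErrorTolerantStage, control_parameter, error_tolerant) and the "low-order bits only" criterion are illustrative assumptions, not taken from the circuit description.

```python
# Behavioural model (assumed names) of error detection with error repair
# that can be selectively disabled by a control parameter derived from the
# processing being performed.

from dataclasses import dataclass

@dataclass
class Instruction:
    opcode: str
    error_tolerant: bool  # e.g. media/pixel data where occasional bit errors are acceptable

class ErrorTolerantStage:
    def __init__(self):
        self.repairs = 0
        self.tolerated = 0

    def control_parameter(self, insn: Instruction, bad_bits: int) -> bool:
        # The control parameter depends on the processing performed: here, on
        # the instruction class and on which output signal bits are in error
        # (assumed criterion: only low-order bits affected).
        low_order_only = (bad_bits & 0xFFFFFF00) == 0
        return insn.error_tolerant and low_order_only

    def on_error(self, insn: Instruction, bad_bits: int) -> str:
        if self.control_parameter(insn, bad_bits):
            self.tolerated += 1   # error repair circuitry disabled for this error
            return "tolerate"
        self.repairs += 1         # normal repair path (e.g. replay/flush)
        return "repair"

stage = ErrorTolerantStage()
print(stage.on_error(Instruction("MUL.pixel", True), bad_bits=0x3))   # tolerate
print(stage.on_error(Instruction("ADD.addr", False), bad_bits=0x3))   # repair
```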
Abstract:
A processor 2 is responsive to a stream of program instructions to issue program instructions under control of scheduling circuitry 6 to respective execution units 24 for execution. The execution units 24 can include error detecting circuitry 32 for detecting a change in an output signal which occurs after the output signal has been latched and during an error detecting period following the latching of the output signal. The scheduling circuitry 6 is arranged so as to suppress the issue of program instructions to an execution unit 24 having such error detecting circuitry 32 on consecutive processing cycles.
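A sketch, under assumed names (ExecutionUnit, Scheduler, issued_last_cycle), of issue scheduling that never sends back-to-back instructions to an execution unit relying on a post-latch error detecting period; this is a software illustration of the scheduling constraint, not the circuit itself.

```python
# Issue scheduling sketch: an execution unit with error detecting circuitry
# is skipped on the cycle immediately after it was issued to, leaving its
# error detecting period undisturbed.

from typing import Optional

class ExecutionUnit:
    def __init__(self, name: str, has_error_detection: bool):
        self.name = name
        self.has_error_detection = has_error_detection
        self.issued_last_cycle = False

class Scheduler:
    def __init__(self, units):
        self.units = units

    def issue(self, insn: str) -> Optional[str]:
        chosen = None
        for unit in self.units:
            # Suppress issue to a unit used on the previous processing cycle
            # if it depends on an error detecting period after latching.
            if unit.has_error_detection and unit.issued_last_cycle:
                continue
            chosen = unit
            break
        # Advance one processing cycle for every unit.
        for unit in self.units:
            unit.issued_last_cycle = unit is chosen
        return f"{insn} -> {chosen.name}" if chosen else None  # None == stall this cycle

sched = Scheduler([ExecutionUnit("ALU0", True), ExecutionUnit("ALU1", True)])
for insn in ["add", "sub", "mul", "orr"]:
    print(sched.issue(insn))  # alternates ALU0 / ALU1, never the same unit twice in a row
```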
Abstract:
An instruction processing pipeline 6 is provided. This has error detection and error recovery circuitry 20 associated with one or more of the pipeline stages. If an error is detected within a signal value within that pipeline stage, then it can be repaired. Part of the error recovery may be to flush upstream program instructions from the instruction pipeline 6. When multi-threading, only those instructions from a thread including an instruction which has been lost as a consequence of the error recovery need be flushed from the instruction pipeline 6. Instructions can also be selected for flushing in dependence upon characteristics such as privilege level, number of dependent instructions, etc. The instruction pipeline may additionally or alternatively be provided with more than one main storage element 26, 28 associated with each signal value, with these main storage elements 26, 28 being used in an alternating fashion such that, if a signal value has been erroneously captured and needs to be repaired, a main storage element 26, 28 remains available to properly capture the signal value corresponding to the following program instruction. In this way flushes can be avoided.
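A sketch of the alternating main storage elements, using assumed names (AlternatingStorage, capture, repair); it shows only the double-buffering idea by which repairing the value captured for one instruction leaves the value for the following instruction intact.

```python
# Two main storage elements for one pipeline signal value, written in
# alternation: if the value captured for instruction N is erroneous and is
# repaired, instruction N+1's value already sits safely in the other element,
# so no flush of the following instruction is needed.

class AlternatingStorage:
    def __init__(self):
        self.elements = [None, None]   # the two main storage elements
        self.current = 0               # which element captures next

    def capture(self, value):
        self.elements[self.current] = value
        captured_in = self.current
        self.current ^= 1              # alternate for the following instruction
        return captured_in

    def repair(self, element_index, corrected_value):
        # Repair only the erroneous element; its partner is untouched.
        self.elements[element_index] = corrected_value

store = AlternatingStorage()
slot_n  = store.capture(0x1234)   # instruction N   -> element 0
slot_n1 = store.capture(0x5678)   # instruction N+1 -> element 1
store.repair(slot_n, 0x1235)      # fix N's value; N+1's value survives
print(store.elements)
```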
Abstract:
An integrated circuit 2 is provided with a plurality of pipeline stages 10. These pipeline stages 10 have speculative processing control circuitry 12 which permits speculative processing in downstream pipeline stages and triggers a first error recovery operation (partial pipeline flushing) if such speculative processing is determined to be based upon an error. The pipeline stage 10 further includes speculative error detecting circuitry 14 which generates a prediction nc regarding whether or not the processing circuitry 18 will produce an error. This prediction is used to trigger a second error recovery operation (partial pipeline stall). This second error recovery operation has a lower performance penalty than the first error recovery operation.
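The sketch below illustrates, in Python, the choice between the two recovery mechanisms; the penalty values and function name (recovery_cycles) are assumptions used only to show that the predicted-error path (partial stall) is cheaper than the detected-error path (partial flush).

```python
# Choosing between the two recovery mechanisms: a predicted error triggers a
# partial pipeline stall (cheap, second mechanism); an error detected after
# speculative forwarding triggers a partial pipeline flush (expensive, first
# mechanism). Cycle counts below are assumed for illustration only.

FLUSH_PENALTY = 8   # assumed cycles lost re-filling downstream pipeline stages
STALL_PENALTY = 1   # assumed single-cycle bubble

def recovery_cycles(error_predicted: bool, error_occurred: bool) -> int:
    if error_predicted:
        # Speculative error detecting circuitry predicted trouble: hold the
        # stage for a cycle instead of forwarding a possibly-bad result.
        return STALL_PENALTY
    if error_occurred:
        # Downstream stages already consumed the bad result speculatively:
        # flush and replay from the erroneous instruction.
        return FLUSH_PENALTY
    return 0

print(recovery_cycles(error_predicted=True,  error_occurred=True))   # 1
print(recovery_cycles(error_predicted=False, error_occurred=True))   # 8
print(recovery_cycles(error_predicted=False, error_occurred=False))  # 0
```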
Abstract:
A data processing apparatus is provided comprising processing circuitry for executing multiple program threads. At least one storage unit is shared between the multiple program threads and comprises multiple entries, each entry for storing a storage item associated with either a high priority program thread or a lower priority program thread. A history storage for retaining a history field for each of a plurality of blocks of the storage unit is also provided. On detection of a high priority storage item being evicted from an entry of the storage unit as a result of allocation of a lower priority storage item to that entry, the history field for the block containing that entry is populated with an indication of the evicted high priority storage item. When a high priority storage item is later allocated to a selected entry of the storage unit, a comparison operation is carried out between the allocated high priority storage item and the indication in the history field for the block containing the selected entry, and on detection of a match condition a lock indication associated with that entry is set to inhibit further eviction of that high priority storage item.
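A sketch of the eviction-history and lock mechanism, with assumed data structures (Entry, SharedStorageUnit, one history field per block of four entries); it models only the record-victim-then-lock-on-return behaviour described above.

```python
# When a lower priority allocation evicts a high priority item, the victim's
# tag is recorded in the history field of the containing block; if that same
# high priority item is later re-allocated into the block, the match sets a
# lock bit intended to inhibit further eviction by low priority traffic.

ENTRIES_PER_BLOCK = 4   # assumed block size

class Entry:
    def __init__(self):
        self.tag = None
        self.high_priority = False
        self.locked = False

class SharedStorageUnit:
    def __init__(self, num_blocks):
        self.blocks = [[Entry() for _ in range(ENTRIES_PER_BLOCK)]
                       for _ in range(num_blocks)]
        self.history = [None] * num_blocks   # one history field per block

    def allocate(self, block, index, tag, high_priority):
        entry = self.blocks[block][index]
        if entry.high_priority and not high_priority:
            # Lower priority item evicts a high priority item:
            # remember the victim in the block's history field.
            self.history[block] = entry.tag
        if high_priority and self.history[block] == tag:
            # The evicted high priority item has come back: lock it.
            entry.locked = True
        entry.tag, entry.high_priority = tag, high_priority

unit = SharedStorageUnit(num_blocks=2)
unit.allocate(0, 1, tag=0xA, high_priority=True)    # HP item installed
unit.allocate(0, 1, tag=0xB, high_priority=False)   # LP item evicts it
unit.allocate(0, 1, tag=0xA, high_priority=True)    # HP item returns...
print(unit.blocks[0][1].locked)                     # ...and is now locked
```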
Abstract:
A data processing apparatus and method are provided for managing cache coherency. The data processing apparatus comprises a plurality of processing units, each having a cache associated therewith, and each cache having indication circuitry containing segment filtering data. The indication circuitry is responsive to an address portion of an address specified by an access request from an associated processing unit to reference the segment filtering data in order to provide, for each of at least a subset of the segments of the associated cache, an indication as to whether the data is either definitely not stored in that segment or is potentially stored in that segment. Further, in accordance with the present invention, cache coherency circuitry is provided which employs a cache coherency protocol to ensure that data accessed by each processing unit is up-to-date. The cache coherency circuitry has snoop indication circuitry associated therewith whose content is derived from the segment filtering data of each indication circuitry. For certain access requests, the cache coherency circuitry initiates a coherency operation during which the snoop indication circuitry is referenced to determine whether any of the caches need to be subjected to a snoop operation. For each cache for which it is determined that a snoop operation should be performed, the cache coherency circuitry is arranged to issue a notification to that cache identifying the snoop operation to be performed. By taking advantage of information already provided in association with each cache in order to form the content of the snoop indication circuitry, significant hardware cost savings are achieved when compared with prior art techniques. Further, through use of such an approach, it is possible in embodiments of the present invention not only to identify the snoop operation on a cache-by-cache basis, but also, for a particular cache, to identify which segments of that cache should be subjected to the snoop operation.
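A sketch of snoop filtering driven by per-segment indications, with assumed names (SegmentFilter, segment_of, snoop_targets) and an exact per-segment set standing in for the compact, possibly lossy, indication circuitry; it shows only how "definitely not stored / potentially stored" indications can restrict which caches are snooped.

```python
# Each cache's segment filtering data answers, per segment, "definitely not
# present" or "potentially present" for an address; the coherency logic only
# issues snoops to caches whose indication reports "potentially present".

SEGMENTS = 4

def segment_of(address: int) -> int:
    return (address >> 6) % SEGMENTS          # assumed simple line-to-segment mapping

class SegmentFilter:
    def __init__(self):
        # One set per segment stands in for the indication circuitry; real
        # hardware would use a compact summary that may give false positives
        # ("potentially stored") but never false negatives.
        self.present = [set() for _ in range(SEGMENTS)]

    def note_fill(self, address: int):
        self.present[segment_of(address)].add(address >> 6)

    def potentially_stored(self, address: int) -> bool:
        return (address >> 6) in self.present[segment_of(address)]

def snoop_targets(address: int, filters: dict) -> list:
    # Reference the snoop indication (derived from each cache's segment
    # filtering data) and return only the caches needing a snoop operation.
    return [name for name, f in filters.items() if f.potentially_stored(address)]

filters = {"cpu0": SegmentFilter(), "cpu1": SegmentFilter()}
filters["cpu1"].note_fill(0x1040)
print(snoop_targets(0x1040, filters))   # ['cpu1'] - cpu0 is not snooped
print(snoop_targets(0x2000, filters))   # []       - no snoop needed at all
```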
Abstract:
A hardware transactional memory 12, 14, 16, 18, 20 is provided within a multiprocessor 4, 6, 8, 10 system with coherency control and hardware transactional memory control circuitry 22 that serves to at least partially manage the scheduling of processing transactions in dependence upon conflict data 26, 28, 30. The conflict data characterises previously encountered conflicts between processing transactions. The scheduling is performed such that a candidate processing transaction will not be scheduled if the conflict data indicates that one of the already running processing transactions has previously conflicted with the candidate processing transaction.
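A sketch of conflict-aware transaction scheduling, using assumed names (TransactionScheduler, record_conflict, try_start); the conflict data is modelled as a set of transaction-id pairs, and a candidate is held back while any running transaction is recorded as having previously conflicted with it.

```python
# Conflict data records pairs of transactions that have previously conflicted;
# a candidate transaction is not started while any currently running
# transaction appears in a recorded conflict pair with it.

class TransactionScheduler:
    def __init__(self):
        self.conflict_pairs = set()   # conflict data: unordered pairs of transaction ids
        self.running = set()

    def record_conflict(self, a, b):
        self.conflict_pairs.add(frozenset((a, b)))

    def can_start(self, candidate) -> bool:
        return all(frozenset((candidate, other)) not in self.conflict_pairs
                   for other in self.running)

    def try_start(self, candidate) -> bool:
        if self.can_start(candidate):
            self.running.add(candidate)
            return True
        return False                  # defer until the conflicting transaction commits

    def commit(self, txn):
        self.running.discard(txn)

sched = TransactionScheduler()
sched.record_conflict("T1", "T2")     # learned from an earlier abort
print(sched.try_start("T1"))          # True  - nothing running conflicts
print(sched.try_start("T2"))          # False - T1 has previously conflicted with T2
sched.commit("T1")
print(sched.try_start("T2"))          # True  - safe to schedule now
```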