Abstract:
In a processing pipeline having a plurality of units, an interface unit is provided between a first, upstream pipeline unit that needs to be drained prior to a context switch and a second, downstream pipeline unit that might halt prior to a context switch. The interface unit redirects the data drained from the first pipeline unit, which would otherwise be received by the second pipeline unit, to a buffer memory provided in the front end of the processing pipeline. The contents of the buffer memory are subsequently dumped into memory reserved for the context being stored. When the processing pipeline is restored with this context, the data that were dumped into memory are retrieved back into the buffer memory and provided to the interface unit, which directs them to the second pipeline unit.
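
A minimal host-side C++ sketch of the drain/dump/restore flow described above; BufferMemory, ContextStore, and the three function names are invented for illustration and are not the patent's terminology:

    #include <cstdint>
    #include <deque>
    #include <vector>

    struct BufferMemory { std::deque<uint32_t> words; };   // front-end buffer
    struct ContextStore { std::vector<uint32_t> saved; };  // memory reserved for a context

    // Drain: data leaving the upstream unit is redirected into the buffer
    // instead of being delivered to the downstream unit.
    void drain_to_buffer(std::deque<uint32_t>& upstream_out, BufferMemory& buf) {
        while (!upstream_out.empty()) {
            buf.words.push_back(upstream_out.front());
            upstream_out.pop_front();
        }
    }

    // Dump: the buffer contents are written to the memory reserved for the
    // context being stored, emptying the buffer.
    void dump_buffer(BufferMemory& buf, ContextStore& ctx) {
        ctx.saved.assign(buf.words.begin(), buf.words.end());
        buf.words.clear();
    }

    // Restore: the saved data returns to the buffer, and the interface unit
    // replays it into the downstream unit.
    void restore_and_replay(const ContextStore& ctx, BufferMemory& buf,
                            std::deque<uint32_t>& downstream_in) {
        buf.words.assign(ctx.saved.begin(), ctx.saved.end());
        for (uint32_t w : buf.words) downstream_in.push_back(w);
        buf.words.clear();
    }
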
Abstract:
A halt sequencing protocol permits a context switch to occur in a processing pipeline even before all units of the processing pipeline are idle. The context switch method based on the halt sequencing protocol includes the steps of issuing a halt request signal to the units of the processing pipeline, monitoring the status of each unit, and freezing the states of all of the units once every unit is either idle or halted. The unit states pertaining to the halted thread are then dumped into memory, and the units are restored with states corresponding to a different thread that is to be executed after the context switch.
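
A minimal sketch of the halt sequencing steps as host-side C++, assuming a Unit record with a status flag and a single integer standing in for its architectural state; the halt request is modeled as taking effect immediately:

    #include <cstddef>
    #include <vector>

    enum class UnitStatus { Busy, Idle, Halted };

    struct Unit {
        UnitStatus status = UnitStatus::Busy;
        int state = 0;                 // stand-in for the unit's architectural state
    };

    bool all_quiesced(const std::vector<Unit>& units) {
        for (const Unit& u : units)
            if (u.status == UnitStatus::Busy) return false;  // still draining work
        return true;                                         // every unit idle or halted
    }

    void context_switch(std::vector<Unit>& units,
                        std::vector<int>& saved_ctx,         // halted thread's states
                        const std::vector<int>& next_ctx) {  // incoming thread's states
        // 1. Issue the halt request; in hardware, busy units would halt at
        //    the next safe point (modeled here as immediate).
        for (Unit& u : units)
            if (u.status == UnitStatus::Busy) u.status = UnitStatus::Halted;
        while (!all_quiesced(units)) { /* poll status signals */ }

        // 2. Freeze and dump the per-unit states of the halted thread.
        saved_ctx.clear();
        for (const Unit& u : units) saved_ctx.push_back(u.state);

        // 3. Restore each unit with the state of the next thread.
        for (std::size_t i = 0; i < units.size(); ++i) {
            units[i].state  = next_ctx[i];
            units[i].status = UnitStatus::Idle;
        }
    }
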
Abstract:
Processing units are configured to capture unit state in unit-level error status registers when a runtime error event is detected, in order to facilitate debugging of runtime errors. The reporting of warnings may be enabled or disabled to selectively monitor each processing unit. Warnings for each processing unit are propagated to an exception register in a front end monitoring unit. The warnings are then aggregated and propagated to an interrupt register in the front end monitoring unit in order to selectively generate an interrupt and facilitate debugging. A debugging application may query the interrupt, exception, and unit-level error status registers to determine the cause of the error. A default error handling behavior that overrides error conditions may be used in conjunction with the hardware warning protocol to allow the processing units to continue operating and to facilitate debugging of runtime errors.
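
A rough C++ model of the register hierarchy just described; the register names, bit layouts, and the eight-unit configuration are assumptions for illustration:

    #include <cstdint>
    #include <cstdio>

    struct ErrorRegs {
        uint32_t unit_status[8] = {};   // per-unit error status (captured unit state)
        uint32_t warn_enable = 0xFF;    // per-unit enable mask for warning reporting
        uint32_t exception_reg = 0;     // front end: one bit per warning source
        uint32_t interrupt_reg = 0;     // front end: aggregated interrupt bit
    };

    // A unit detects a runtime error: capture state, then propagate if enabled.
    void report_warning(ErrorRegs& r, int unit, uint32_t captured_state) {
        r.unit_status[unit] = captured_state;   // capture for later debugging
        if (r.warn_enable & (1u << unit)) {     // reporting selectively enabled?
            r.exception_reg |= (1u << unit);    // propagate to exception register
            r.interrupt_reg |= 1u;              // aggregate into the interrupt
        }
    }

    // A debugging application walks the hierarchy top-down to find the cause.
    void debug_query(const ErrorRegs& r) {
        if (!(r.interrupt_reg & 1u)) return;    // no pending interrupt
        for (int unit = 0; unit < 8; ++unit)
            if (r.exception_reg & (1u << unit))
                std::printf("unit %d error status: 0x%08x\n",
                            unit, r.unit_status[unit]);
    }
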
Abstract:
A unit status reporting protocol may also be used for context switching, debugging, and removing deadlock conditions in a processing unit. A processing unit is in one of five states: empty, active, stalled, quiescent, or halted. The current state of each processing unit is reported to a front end monitoring unit, enabling the front end monitoring unit to determine when a context switch may be performed or when a deadlock condition exists. The front end monitoring unit can issue a halt command to perform a context switch, or take action to remove a deadlock condition and allow processing to resume.
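
The five reported states and the front end's two decisions can be sketched as follows; the deadlock heuristic shown (a stalled unit with no active unit left to unblock it) is an illustrative assumption, not the patented detection logic:

    #include <vector>

    enum class UnitState { Empty, Active, Stalled, Quiescent, Halted };

    // A context switch is safe once no unit is actively making progress.
    bool context_switch_ok(const std::vector<UnitState>& units) {
        for (UnitState s : units)
            if (s == UnitState::Active) return false;
        return true;
    }

    // One simple deadlock signal: some unit is stalled while no unit is
    // active, so nothing upstream can ever unblock it.
    bool deadlock_suspected(const std::vector<UnitState>& units) {
        bool any_stalled = false, any_active = false;
        for (UnitState s : units) {
            any_stalled |= (s == UnitState::Stalled);
            any_active  |= (s == UnitState::Active);
        }
        return any_stalled && !any_active;
    }
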
Abstract:
One embodiment of the present invention sets forth a technique for coalescing memory barrier operations across multiple parallel threads. Memory barrier requests from a given parallel thread processing unit are coalesced to reduce the impact on the rest of the system. Additionally, memory barrier requests may specify the set of threads with respect to which the memory transactions are committed. For example, a first type of memory barrier instruction may commit the memory transactions at the level of a set of cooperating threads that share an L1 (level one) cache. A second type of memory barrier instruction may commit the memory transactions at the level of a set of threads sharing a global memory. Finally, a third type of memory barrier instruction may commit the memory transactions at the system level of all threads sharing all system memories. The latency required to execute a memory barrier instruction varies with its type.
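
The three barrier scopes correspond closely to CUDA's documented fence intrinsics: __threadfence_block (threads sharing an L1/thread block), __threadfence (all threads on the device), and __threadfence_system (all threads plus the host). The sketch below shows the scopes in a standard publication pattern; the producer/consumer variables are illustrative, and any mapping to the patented coalescing hardware is an assumption:

    __device__ int payload;        // data being published
    __device__ volatile int flag;  // publication flag

    __global__ void producer(int value)  // schematic: one producer thread
    {
        payload = value;

        // Block scope: the write is ordered for threads in the same thread
        // block (the threads sharing one L1) -- the cheapest fence.
        __threadfence_block();

        // Device scope: ordered for all threads sharing global memory.
        __threadfence();

        // System scope: ordered for the host and peer devices sharing
        // system memory -- the highest latency. (All three appear here only
        // to show the widening scopes; one fence of the needed scope would
        // suffice.)
        __threadfence_system();

        flag = 1;                  // publish: observers of flag==1 see payload
    }

    __global__ void consumer(int *out)   // schematic: runs concurrently
    {
        while (flag == 0) { }      // spin until the producer publishes
        __threadfence();           // order the flag read before the payload read
        *out = payload;
    }

As the abstract notes, the wider the scope, the higher the latency of the fence, which is why only the narrowest sufficient scope should be requested.
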
Abstract:
A technique for performing stream output operations in a parallel processing system is disclosed. A stream synchronization unit enables the parallel processing unit to track batches of vertices being processed in a graphics processing pipeline. A plurality of stream output units is also provided, where each stream output unit writes vertex attribute data to one or more stream output buffers for a portion of the batches of vertices. A messaging protocol implemented between the stream synchronization unit and the stream output units ensures that each stream output unit writes the vertex attribute data for its batches to the stream output buffers in the same order in which the batches of vertices were received by the parallel processing unit from the device driver.
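
One simple way to picture the ordering guarantee is a ticket scheme: the synchronization unit releases commits in the order batches arrived from the driver, and each stream output unit waits for its batch's turn. This is an illustrative model, not the patented messaging protocol:

    #include <atomic>
    #include <cstdint>

    struct StreamSync {
        std::atomic<uint64_t> next_commit{0};   // next batch allowed to commit

        // A stream output unit asks permission to write batch 'seq' (its
        // API-order sequence number) to the stream output buffers.
        void commit_in_order(uint64_t seq, void (*write_batch)(uint64_t)) {
            while (next_commit.load(std::memory_order_acquire) != seq) { }
            write_batch(seq);                   // write this batch's vertex attributes
            next_commit.store(seq + 1, std::memory_order_release);
        }
    };
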
Abstract:
The grid walk sampling technique is an efficient sampling algorithm aimed at reducing the cost of triangle rasterization for modern graphics workloads. Grid walk sampling is an iterative rasterization algorithm that intelligently tests the intersection of triangle edges with multi-cell grids, determining coverage for a grid cell while identifying other cells in the grid that are either fully covered or fully uncovered by the triangle. Grid walk sampling rasterizes triangles using fewer and simpler computations than conventional highly parallel rasterizers. A rasterizer employing grid walk sampling may therefore compute sample coverage of triangles more efficiently, in terms of both power and die area, than conventional highly parallel rasterizers.
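
The core cell-classification step can be sketched with standard edge functions: evaluating each edge at a cell's four corners tells us whether the cell is fully outside one edge (fully uncovered), inside all edges (fully covered), or crossed by an edge (needs per-sample tests). This is a generic edge-function classifier in the spirit of the abstract, not the patented walk order:

    #include <algorithm>

    struct Vec2 { float x, y; };

    enum class Coverage { Full, None, Partial };

    // Signed edge function: non-negative on the interior side for a
    // counter-clockwise triangle.
    static float edge(Vec2 a, Vec2 b, float x, float y) {
        return (b.x - a.x) * (y - a.y) - (b.y - a.y) * (x - a.x);
    }

    // Classify an axis-aligned cell [x0,x1]x[y0,y1] against CCW triangle abc.
    Coverage classify_cell(Vec2 a, Vec2 b, Vec2 c,
                           float x0, float y0, float x1, float y1) {
        Vec2 v[3] = {a, b, c};
        bool fully_inside = true;
        for (int e = 0; e < 3; ++e) {
            Vec2 p = v[e], q = v[(e + 1) % 3];
            float e00 = edge(p, q, x0, y0), e10 = edge(p, q, x1, y0);
            float e01 = edge(p, q, x0, y1), e11 = edge(p, q, x1, y1);
            float mn = std::min(std::min(e00, e10), std::min(e01, e11));
            float mx = std::max(std::max(e00, e10), std::max(e01, e11));
            if (mx < 0) return Coverage::None;  // all corners outside this edge
            if (mn < 0) fully_inside = false;   // this edge crosses the cell
        }
        return fully_inside ? Coverage::Full : Coverage::Partial;
    }

A grid walker would then descend only into Partial cells; skipping per-sample tests in Full and None cells is where the reduction in computation over a fully parallel rasterizer comes from.
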
Abstract:
A superscalar processor capable of executing multiple instructions concurrently. The processor includes a program counter which identifies instructions for execution by multiple execution units. Further included is a register file made up of multiple register windows; a current window pointer selects one of the multiple register windows. In response to the value of the current window pointer, a return prediction table provides a speculative program counter value, indicative of the return address of a subroutine, corresponding to the selected register window. A watchpoint register stores the speculative program counter value. A fetch program counter, in response to the speculative program counter value, stores the instructions for execution after they have been identified by the program counter.
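
A rough behavioral model of the return prediction mechanism, assuming eight register windows and invented field names; the call/return window arithmetic is simplified for illustration:

    #include <cstdint>

    constexpr int NWINDOWS = 8;

    struct ReturnPredictor {
        uint64_t table[NWINDOWS] = {};   // speculative return PC per register window
        uint64_t watchpoint = 0;         // holds the speculative PC being verified
        int cwp = 0;                     // current window pointer

        void on_call(uint64_t return_pc) {
            cwp = (cwp + 1) % NWINDOWS;  // a call allocates the next window
            table[cwp] = return_pc;      // remember where this subroutine returns
        }

        uint64_t on_return() {           // predict the fetch target for a return
            watchpoint = table[cwp];     // latch for later mispredict checking
            cwp = (cwp + NWINDOWS - 1) % NWINDOWS;
            return watchpoint;           // fetch resumes here speculatively
        }
    };
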
Abstract:
A system and method providing a programmable hardware device within a CPU. The programmable hardware device permits a plurality of instructions to be trapped before they are executed. The instructions to be trapped are programmable, providing flexibility during CPU debugging and ensuring that a variety of application programs can be properly executed by the CPU. The system also provides a means for permitting a trapped instruction to be emulated and/or executed serially.
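
A minimal model of such a programmable trap unit, assuming match/mask registers checked at decode time; the entry count, register widths, and names are assumptions:

    #include <cstdint>

    constexpr int NTRAPS = 4;

    struct TrapUnit {
        uint32_t match[NTRAPS] = {};    // opcode bit patterns to trap on
        uint32_t mask[NTRAPS] = {};     // which bits of each pattern are significant
        bool enable[NTRAPS] = {};

        // Program one entry, e.g. to trap every instruction of an opcode class.
        void set_trap(int i, uint32_t m, uint32_t msk) {
            match[i] = m; mask[i] = msk; enable[i] = true;
        }

        // Checked at decode, before the instruction executes.
        bool should_trap(uint32_t insn) const {
            for (int i = 0; i < NTRAPS; ++i)
                if (enable[i] && (insn & mask[i]) == match[i]) return true;
            return false;
        }
    };

    // In the pipeline model: trapped instructions go to an emulation handler
    // (or are executed serially) rather than being issued normally.
    void decode(const TrapUnit& t, uint32_t insn) {
        if (t.should_trap(insn)) { /* emulate_or_serialize(insn) */ }
        else                     { /* issue(insn) */ }
    }
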
Abstract:
Time-out checkpoints are formed based on a predetermined time-out condition, or interval since the last checkpoint was formed, rather than forming a checkpoint to store current processor state based merely on decoded instruction attributes. Such time-out conditions may include the number of instructions issued or the number of clock cycles elapsed, for example. Time-out checkpointing limits the maximum number of instructions within a checkpoint boundary and bounds the time period for recovery from an exception condition. In the event of an exception, the processor can restore time-out based checkpointed state faster than an instruction decode based checkpoint technique, so long as the instruction window size is greater than the maximum number of instructions within a checkpoint boundary; the method also eliminates the dependency of processor state restoration on instruction window size. Time-out checkpoints may be implemented with conventional checkpoints, or in a novel logical and physical register rename map checkpointing technique. Time-out checkpoint formation may be used with conventional processor backup techniques as well as with a novel backtracking technique including processor backup and backstepping.
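
A small model of time-out checkpoint formation, assuming two counters that reset at each checkpoint; the limit values and names are illustrative:

    #include <cstdint>

    struct TimeoutCheckpointer {
        uint64_t insns_since = 0, cycles_since = 0;
        uint64_t max_insns = 64;     // bounds instructions per checkpoint region
        uint64_t max_cycles = 256;   // bounds recovery latency in time

        // Called every cycle with the number of instructions issued that cycle.
        // Returns true when a new checkpoint should be formed.
        bool tick(uint64_t issued) {
            insns_since += issued;
            cycles_since += 1;
            if (insns_since >= max_insns || cycles_since >= max_cycles) {
                insns_since = cycles_since = 0;  // checkpoint formed; reset counters
                return true;                     // caller snapshots rename maps, etc.
            }
            return false;
        }
    };

Because at most max_insns instructions can sit inside a checkpoint region, exception recovery never has to undo more than that bound, independent of the instruction window size, which matches the restoration property claimed above.
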