Abstract:
A method of organizing memory for storage of texture data, in accordance with one embodiment of the invention, includes accessing a size of a mipmap level of a texture map. A block dimension may be determined based on the size of the mipmap level. A memory space (e.g., a computer-readable medium) may be logically divided into a whole number of blocks of variable dimension. The dimension of the blocks is measured in units of gobs, and each gob is of a fixed dimension in bytes. A mipmap level of a texture map may be stored in the memory space. A texel coordinate of said mipmap level may be converted into a byte address of the memory space by determining the gob address of the gob in which the texel coordinate resides and then determining a byte address within that gob.
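As a rough illustration of the two-step address computation the abstract describes, the following C sketch converts a texel coordinate into a byte address by first locating the containing gob and then the byte within it. The gob dimensions (64 bytes by 4 rows), the 4-byte texel size, and the row-major gob ordering are assumptions for illustration only; the abstract leaves these implementation-specific.

    /* Minimal sketch of texel-to-byte address conversion, assuming a
     * hypothetical gob of 64 bytes x 4 rows and 4-byte texels; real
     * gob dimensions and block layouts are implementation-specific. */
    #include <stdio.h>
    #include <stdint.h>

    #define GOB_WIDTH_BYTES 64   /* fixed gob width in bytes (assumed)  */
    #define GOB_HEIGHT_ROWS 4    /* fixed gob height in rows (assumed)  */
    #define GOB_SIZE_BYTES  (GOB_WIDTH_BYTES * GOB_HEIGHT_ROWS)
    #define BYTES_PER_TEXEL 4    /* e.g., RGBA8 */

    /* Convert a texel coordinate (x, y) of a mipmap level into a byte
     * address: first locate the gob, then the byte within that gob. */
    uint64_t texel_to_byte_address(uint32_t x, uint32_t y,
                                   uint32_t level_width_texels)
    {
        uint32_t xb = x * BYTES_PER_TEXEL;          /* x in bytes */
        uint32_t pitch_bytes = level_width_texels * BYTES_PER_TEXEL;
        uint32_t gobs_per_row = (pitch_bytes + GOB_WIDTH_BYTES - 1)
                                / GOB_WIDTH_BYTES;

        /* Gob address: which gob the texel falls in (row-major order). */
        uint32_t gob_x = xb / GOB_WIDTH_BYTES;
        uint32_t gob_y = y / GOB_HEIGHT_ROWS;
        uint64_t gob_addr = (uint64_t)(gob_y * gobs_per_row + gob_x)
                            * GOB_SIZE_BYTES;

        /* Byte address within the gob. */
        uint32_t in_gob = (y % GOB_HEIGHT_ROWS) * GOB_WIDTH_BYTES
                          + (xb % GOB_WIDTH_BYTES);

        return gob_addr + in_gob;
    }

    int main(void)
    {
        printf("%llu\n",
               (unsigned long long)texel_to_byte_address(17, 9, 256));
        return 0;
    }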
Abstract:
A new, useful, and non-obvious shader processor architecture is disclosed, having a shader register file that acts both as an internal storage register file for temporarily storing data within the shader processor and as a First-In First-Out (FIFO) buffer for a subsequent module. Some embodiments include automatic, programmable hardware conversion between numeric formats, for example, between floating point data and fixed point data.
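The format conversion mentioned above can be illustrated with a small sketch. The following C code converts between IEEE floating point and a signed 16.16 fixed-point format; the 16.16 layout and the clamping behavior are assumptions for illustration, since the abstract only says the conversion is automatic and programmable.

    /* Minimal sketch of float-to-fixed conversion of the kind the
     * abstract describes; the 16.16 format and saturation behavior
     * here are assumptions, and the fraction width is programmable
     * in the described hardware. */
    #include <stdio.h>
    #include <stdint.h>

    #define FIXED_FRAC_BITS 16   /* programmable in hardware; fixed here */

    int32_t float_to_fixed(float f)
    {
        double scaled = (double)f * (double)(1 << FIXED_FRAC_BITS);
        /* Clamp to the representable range of a signed 16.16 value. */
        if (scaled >  2147483647.0) return  2147483647;
        if (scaled < -2147483648.0) return -2147483647 - 1;
        return (int32_t)scaled;
    }

    float fixed_to_float(int32_t q)
    {
        return (float)q / (float)(1 << FIXED_FRAC_BITS);
    }

    int main(void)
    {
        int32_t q = float_to_fixed(1.5f);
        printf("1.5 -> 0x%08X -> %f\n", (unsigned)q, fixed_to_float(q));
        return 0;
    }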
Abstract:
A work distribution unit distributes work batches to general processing clusters (GPCs) based on the number of streaming multiprocessors included in each GPC. Advantageously, each GPC receives an amount of work that is proportional to the amount of processing power afforded by the GPC. Embodiments include a method for distributing batches of processing tasks to two or more general processing clusters (GPCs), including the steps of updating a counter value for each of the two or more GPCs based on the number of enabled parallel processing units within each of the two or more GPCs, and distributing a batch of processing tasks to a first GPC of the two or more GPCs based on a counter value associated with the first GPC and based on a load signal received from the first GPC.
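To make the proportional scheme concrete, here is a C sketch of credit-based distribution: each GPC holds a counter replenished in proportion to its enabled streaming multiprocessors, and each batch goes to the eligible GPC with the most credits. The replenish-on-exhaustion rule and the field names are one plausible reading of the abstract, not the patented circuit.

    /* Minimal sketch of SM-count-weighted batch distribution, assuming
     * a credit counter per GPC replenished in proportion to enabled
     * SMs; an illustrative reading of the abstract. */
    #include <stdio.h>

    #define NUM_GPCS 4

    typedef struct {
        int enabled_sms;  /* streaming multiprocessors in this GPC   */
        int counter;      /* work credits, proportional to SM count  */
        int loaded;       /* load signal from the GPC: 1 = busy      */
    } Gpc;

    /* Pick the eligible GPC with the most credits and charge one;
     * when all idle GPCs are out of credits, replenish by SM count. */
    int distribute_batch(Gpc gpcs[NUM_GPCS])
    {
        for (;;) {
            int best = -1, any_idle = 0;
            for (int i = 0; i < NUM_GPCS; i++) {
                if (gpcs[i].loaded) continue;
                any_idle = 1;
                if (gpcs[i].counter > 0 &&
                    (best < 0 || gpcs[i].counter > gpcs[best].counter))
                    best = i;
            }
            if (!any_idle) return -1;        /* every GPC reports full */
            if (best >= 0) { gpcs[best].counter--; return best; }
            for (int i = 0; i < NUM_GPCS; i++)   /* replenish credits  */
                gpcs[i].counter += gpcs[i].enabled_sms;
        }
    }

    int main(void)
    {
        Gpc gpcs[NUM_GPCS] = {{4,4,0},{2,2,0},{4,4,0},{3,3,0}};
        for (int b = 0; b < 13; b++)
            printf("batch %2d -> GPC %d\n", b, distribute_batch(gpcs));
        return 0;
    }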
Abstract:
A technique for controlling the distribution of compute task processing in a multi-threaded system encodes each processing task as task metadata (TMD) stored in memory. The TMD includes work distribution parameters specifying how the processing task should be distributed for processing. Scheduling circuitry selects a task for execution when entries of a work queue for the task have been written. The work distribution parameters may define a number of work queue entries needed before a cooperative thread array ("CTA") may be launched to process the work queue entries according to the compute task. The work distribution parameters may define a number of CTAs that are launched to process the same work queue entries. Finally, the work distribution parameters may define a step size that is used to update pointers to the work queue entries.
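A small C sketch of how the three work distribution parameters might interact is given below; the struct fields and the launch loop are illustrative readings of the abstract, not the hardware TMD encoding.

    /* Minimal sketch of TMD-driven work-queue scheduling; field names
     * and the launch check are hypothetical. */
    #include <stdio.h>

    typedef struct {
        int entries_needed;   /* queue entries required per CTA launch  */
        int ctas_per_launch;  /* CTAs launched over the same entries    */
        int step_size;        /* read-pointer advance after each launch */
    } Tmd;

    typedef struct {
        int write_ptr;        /* entries written so far  */
        int read_ptr;         /* entries consumed so far */
    } WorkQueue;

    /* Launch CTAs whenever enough entries have been written; advance
     * the read pointer by the TMD's step size after each launch. */
    void try_launch(const Tmd *tmd, WorkQueue *q)
    {
        while (q->write_ptr - q->read_ptr >= tmd->entries_needed) {
            for (int c = 0; c < tmd->ctas_per_launch; c++)
                printf("launch CTA %d on entries [%d, %d)\n",
                       c, q->read_ptr, q->read_ptr + tmd->entries_needed);
            q->read_ptr += tmd->step_size;
        }
    }

    int main(void)
    {
        Tmd tmd = { .entries_needed = 4, .ctas_per_launch = 2,
                    .step_size = 4 };
        WorkQueue q = { .write_ptr = 0, .read_ptr = 0 };
        q.write_ptr += 9;     /* producer writes nine entries */
        try_launch(&tmd, &q); /* two launches; one entry stays pending */
        return 0;
    }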
Abstract:
One embodiment of the present invention sets forth a technique for managing the allocation and release of resources during multi-threaded program execution. Programmable reference counters are initialized to values that limit the amount of resources for allocation to tasks that share the same reference counter. Resource parameters are specified for each task to define the amount of resources allocated for consumption by each array of execution threads that is launched to execute the task. The resource parameters also specify the behavior of the array for acquiring and releasing resources. Finally, during execution of each thread in the array, an exit instruction may be configured to override the release of the resources that were allocated to the array. The resources may then be retained for use by a child task that is generated during execution of a thread.
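The following C sketch models the reference-counter behavior described above: thread arrays acquire resources from a shared counter at launch, and the exit step either releases them or retains them for a child task. The function names and API shape are hypothetical.

    /* Minimal sketch of reference-counted resource allocation across
     * task launches; counter semantics follow the abstract, but the
     * API names are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        int available;        /* remaining units of the shared resource */
    } RefCounter;

    typedef struct {
        RefCounter *counter;  /* counter shared by tasks in this group  */
        int per_cta_amount;   /* resources each launched array consumes */
    } TaskParams;

    /* Acquire resources for one array; fail if the counter is exhausted. */
    bool cta_acquire(const TaskParams *t)
    {
        if (t->counter->available < t->per_cta_amount) return false;
        t->counter->available -= t->per_cta_amount;
        return true;
    }

    /* On exit, release unless the exit instruction is configured to
     * keep the allocation alive for a child task. */
    void cta_exit(const TaskParams *t, bool retain_for_child)
    {
        if (!retain_for_child)
            t->counter->available += t->per_cta_amount;
    }

    int main(void)
    {
        RefCounter rc = { .available = 8 };
        TaskParams task = { .counter = &rc, .per_cta_amount = 4 };

        cta_acquire(&task);
        cta_exit(&task, true);   /* child task inherits the allocation */
        printf("available after retained exit: %d\n", rc.available);

        cta_exit(&task, false);  /* child finishes and releases        */
        printf("available after release: %d\n", rc.available);
        return 0;
    }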
Abstract:
An architecture for a compact multi-ported register file is disclosed. In an embodiment, a register file comprises a single-port random access memory (RAM). The single-port RAM comprises a single port for read operations and for write operations. Either a single read or a single write operation is performed in a given clock cycle via the single port. Moreover, the single-port RAM serially performs N read operations and M write operations associated with a data group using a clock phase of (N+M) clock phases generated from a clock. In another embodiment, a semiconductor device includes the architecture for the compact multi-ported register file. The semiconductor device comprises a plurality of register files. Each register file comprises a RAM comprising a port for read operations and for write operations. Moreover, each RAM serially performs N read operations and M write operations associated with one of a plurality of data groups using a corresponding clock phase of (N+M) clock phases generated from a clock. Further, the semiconductor device comprises an input staging unit for staging write data of one or more of the write operations and an output staging unit for staging read data of one or more of the read operations. The semiconductor device can be a graphics processing unit (GPU).
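A behavioral C model of the time-multiplexed port is sketched below: one external clock is split into (N+M) internal phases, and the single physical port services the N reads and M writes serially. The phase ordering (reads before writes) is an assumption.

    /* Minimal sketch of a single-port RAM emulating N read ports and
     * M write ports by time-multiplexing (N+M) phases within one
     * external clock; phase ordering is assumed. */
    #include <stdio.h>
    #include <stdint.h>

    #define N_READS  2
    #define M_WRITES 1
    #define DEPTH    16

    typedef struct {
        uint32_t mem[DEPTH];  /* the single-ported storage array */
    } SinglePortRam;

    /* One external clock: serially perform N reads then M writes,
     * each in its own internal phase, through the one physical port. */
    void ram_cycle(SinglePortRam *ram,
                   const int raddr[N_READS], uint32_t rdata[N_READS],
                   const int waddr[M_WRITES],
                   const uint32_t wdata[M_WRITES])
    {
        for (int p = 0; p < N_READS; p++)        /* phases 0..N-1   */
            rdata[p] = ram->mem[raddr[p]];
        for (int p = 0; p < M_WRITES; p++)       /* phases N..N+M-1 */
            ram->mem[waddr[p]] = wdata[p];
    }

    int main(void)
    {
        SinglePortRam ram = { .mem = { [3] = 7, [5] = 11 } };
        int raddr[N_READS] = { 3, 5 };
        uint32_t rdata[N_READS];
        int waddr[M_WRITES] = { 3 };
        uint32_t wdata[M_WRITES] = { 42 };

        ram_cycle(&ram, raddr, rdata, waddr, wdata);
        printf("reads: %u %u, mem[3] now %u\n",
               rdata[0], rdata[1], ram.mem[3]);
        return 0;
    }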
Abstract:
One embodiment of the present invention sets forth a technique for selecting a first processor included in a plurality of processors to receive work related to a compute task. The technique involves analyzing state data of each processor in the plurality of processors to identify one or more processors that have already been assigned one compute task and are eligible to receive work related to the one compute task, receiving, from each of the one or more processors identified as eligible, an availability value that indicates the capacity of the processor to receive new work, selecting a first processor to receive work related to the one compute task based on the availability values received from the one or more processors, and issuing, to the first processor via a cooperative thread array (CTA), the work related to the one compute task.
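The selection step can be sketched in C as a filter over processors already assigned the task, followed by a maximum over their reported availability values; the state fields and tie-breaking here are illustrative.

    /* Minimal sketch of selecting a processor for compute-task work
     * by availability; fields and tie-breaking are illustrative. */
    #include <stdio.h>

    #define NUM_PROCS 4

    typedef struct {
        int assigned_task;  /* task id already assigned, or -1        */
        int availability;   /* capacity for new work (higher = more)  */
    } Proc;

    /* Among processors already assigned `task`, pick the one reporting
     * the greatest availability; return -1 if none is eligible. */
    int select_processor(const Proc procs[NUM_PROCS], int task)
    {
        int best = -1;
        for (int i = 0; i < NUM_PROCS; i++) {
            if (procs[i].assigned_task != task) continue;
            if (best < 0 ||
                procs[i].availability > procs[best].availability)
                best = i;
        }
        return best;
    }

    int main(void)
    {
        Proc procs[NUM_PROCS] = {
            { .assigned_task =  7, .availability = 2 },
            { .assigned_task = -1, .availability = 9 },  /* ineligible */
            { .assigned_task =  7, .availability = 5 },
            { .assigned_task =  3, .availability = 8 },
        };
        printf("issue CTA for task 7 to processor %d\n",
               select_processor(procs, 7));              /* processor 2 */
        return 0;
    }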
Abstract:
A pixel center position that is not covered by a primitive covering a portion of the pixel is displaced to lie within the fragment formed by the intersection of the primitive and the pixel. The x,y coordinates of the pixel center are adjusted to displace the pixel center position so that it lies within the fragment, thereby affecting the actual texture map coordinates or barycentric weights. Alternatively, a centroid sub-pixel sample position is determined based on coverage data for the pixel and a multisample mode. The centroid sub-pixel sample position is used to compute pixel or sub-pixel parameters for the fragment.
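The alternative centroid path can be sketched as a lookup from the pixel's coverage mask into a table of sub-pixel sample positions; the 4x rotated-grid positions and the pick-first-covered-sample policy below are assumptions for illustration.

    /* Minimal sketch of choosing a centroid sub-pixel sample from a
     * 4x multisample coverage mask; the sample positions are a
     * hypothetical rotated-grid pattern, not a hardware layout. */
    #include <stdio.h>

    typedef struct { float x, y; } Pos;

    /* Hypothetical 4x sample offsets within the pixel (0..1 range). */
    static const Pos kSamples[4] = {
        { 0.375f, 0.125f }, { 0.875f, 0.375f },
        { 0.125f, 0.625f }, { 0.625f, 0.875f },
    };

    /* Return a covered sample position to stand in for the pixel
     * center when the center itself lies outside the fragment: here,
     * the first covered sample; averaging covered samples is another
     * option. */
    Pos centroid_position(unsigned coverage_mask)
    {
        for (int s = 0; s < 4; s++)
            if (coverage_mask & (1u << s))
                return kSamples[s];
        return (Pos){ 0.5f, 0.5f };  /* no coverage: keep the center */
    }

    int main(void)
    {
        Pos p = centroid_position(0x4);  /* only sample 2 covered */
        printf("centroid at (%.3f, %.3f)\n", p.x, p.y);
        return 0;
    }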
Abstract:
A method and apparatus for controlling the flow of information (e.g., graphics primitives, display data, etc.) to an input/output unit within a computer-controlled graphics system. The system includes a processor having a first-in first-out (FIFO) buffer, a separate input/output unit with its own FIFO buffer, and a number of intermediate devices (with FIFO buffers) coupled between the input/output unit and the processor for moving input/output data from the processor to the input/output unit. Mechanisms are placed within an intermediate device, very close to the processor, which maintain an accounting of the amount of input/output data sent to the input/output unit but not yet cleared from the input/output unit's buffer. These mechanisms regulate data flow to the input/output unit. By placing these mechanisms close to the processor, rather than within the input/output unit, the system allows a larger portion of the input/output unit's buffer to be utilized for storing input/output data before a processor suspend or interrupt is required. This leads to increased input/output data throughput between the processor and the input/output unit by reducing processor interrupts. The system also includes an efficiently invoked timer mechanism for temporarily suspending the processor from transmitting stores to the input/output unit when the input/output unit and/or the intermediate devices are congested. The processor is not interrupted by an interrupt request until after the timer mechanism times out, giving the system an opportunity to clear its congestion before a lengthy interrupt routine must be invoked.
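The accounting mechanism described above amounts to a credit counter kept near the processor. The C sketch below tracks writes sent to the input/output unit but not yet cleared from its FIFO and refuses new sends when the count reaches the FIFO depth; the names and the stall policy in the demo are assumed.

    /* Minimal sketch of the credit accounting the abstract places near
     * the processor: a counter of writes sent to the I/O unit but not
     * yet cleared from its FIFO; names and policy are assumed. */
    #include <stdio.h>
    #include <stdbool.h>

    #define IO_FIFO_DEPTH 32

    typedef struct {
        int outstanding;   /* writes sent but not yet drained */
    } FlowControl;

    /* Before forwarding a store, check that the I/O FIFO has room;
     * when it does not, the caller starts a timer and interrupts the
     * processor only if the timer expires before credits return. */
    bool can_send(const FlowControl *fc)
    {
        return fc->outstanding < IO_FIFO_DEPTH;
    }

    void on_send(FlowControl *fc)  { fc->outstanding++; }  /* store issued */
    void on_drain(FlowControl *fc) { fc->outstanding--; }  /* entry cleared */

    int main(void)
    {
        FlowControl fc = { .outstanding = 0 };
        for (int i = 0; i < 40; i++) {
            if (!can_send(&fc)) {
                printf("store %d stalled: FIFO full\n", i);
                on_drain(&fc);   /* downstream clears an entry */
            }
            on_send(&fc);
        }
        return 0;
    }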