Abstract:
A fully mirrored memory system includes at least one split memory bus, with each portion of the split memory bus having active memory and mirror memory. Each portion of the memory bus transfers a portion of the data for a memory transaction. If a memory unit is determined to be defective, one portion of the memory bus may be inactivated for hot swapping of memory, and the system can continue to operate using an active portion of the memory bus.
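As a purely illustrative reading of the abstract above, the C sketch below models a split memory bus whose two halves each carry part of every transaction to active and mirror memory; when one half is marked defective it is deactivated for hot swapping and writes continue on the remaining half. All type and function names (split_bus_t, bus_write, bus_mark_defective) are invented for the example and are not taken from the patent.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical model of one half of a split, fully mirrored memory bus. */
    typedef struct {
        unsigned char active[256];   /* active memory behind this bus half */
        unsigned char mirror[256];   /* mirror memory behind this bus half */
        int           online;        /* 0 = deactivated for hot swap       */
    } bus_half_t;

    typedef struct {
        bus_half_t half[2];          /* each half carries part of every transaction */
    } split_bus_t;

    /* Write one byte per bus half; each half receives a portion of the data. */
    static int bus_write(split_bus_t *bus, int addr, unsigned char lo, unsigned char hi)
    {
        unsigned char part[2] = { lo, hi };
        for (int i = 0; i < 2; i++) {
            if (!bus->half[i].online)
                continue;            /* degraded mode: skip the deactivated half */
            bus->half[i].active[addr] = part[i];
            bus->half[i].mirror[addr] = part[i];   /* full mirroring */
        }
        return bus->half[0].online || bus->half[1].online;
    }

    /* A defect on one half deactivates it so its memory can be hot swapped. */
    static void bus_mark_defective(split_bus_t *bus, int half_index)
    {
        bus->half[half_index].online = 0;
    }

    int main(void)
    {
        split_bus_t bus;
        memset(&bus, 0, sizeof bus);
        bus.half[0].online = bus.half[1].online = 1;

        bus_write(&bus, 0x10, 0xAB, 0xCD);   /* normal, fully mirrored write   */
        bus_mark_defective(&bus, 1);         /* fault detected on one half     */
        bus_write(&bus, 0x11, 0x12, 0x34);   /* system keeps running on half 0 */

        printf("half 0 online=%d, half 1 online=%d\n",
               bus.half[0].online, bus.half[1].online);
        return 0;
    }

A real system would also resynchronize the mirror once the swapped memory returns; the sketch shows only the degraded-mode write path.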
Abstract:
A method and apparatus for managing texture mapping data in a computer graphics system, the system including a host computer, primitive rendering hardware and a textured primitive data path extending between the host computer and the primitive rendering hardware. The host computer passes textured primitives to be rendered by the system using corresponding texture mapping data to the primitive rendering hardware over the textured primitive data path. The host computer has a main memory that stores texture mapping data corresponding to the textured primitives to be rendered. The primitive rendering hardware includes a local texture memory that locally stores the texture mapping data corresponding to at least one of the textured primitives to be rendered. When texture mapping data corresponding to one of the textured primitives to be rendered is stored in the host computer main memory but not within the local texture mapping memory, the texture mapping data corresponding to the one of the textured primitives is downloaded from the host computer main memory to the local texture mapping memory through a texture mapping data path that is separate from the textured primitive data path.
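The control flow described above is, in effect, demand downloading: primitives arrive over one path, and texture data is fetched from host main memory over a separate texture mapping data path only when the local texture memory does not already hold it. The C sketch below illustrates that decision under invented names (texture_is_local, download_texture) and a deliberately trivial replacement policy; it is not the patented interface.

    #include <stdio.h>

    #define LOCAL_SLOTS 4

    /* Hypothetical local texture memory on the rendering hardware. */
    static int local_texture_ids[LOCAL_SLOTS];
    static int local_count;

    static int texture_is_local(int tex_id)
    {
        for (int i = 0; i < local_count; i++)
            if (local_texture_ids[i] == tex_id)
                return 1;
        return 0;
    }

    /* Stand-in for a transfer over the separate texture mapping data path. */
    static void download_texture(int tex_id)
    {
        printf("downloading texture %d from host main memory\n", tex_id);
        if (local_count < LOCAL_SLOTS)
            local_texture_ids[local_count++] = tex_id;
        else
            local_texture_ids[0] = tex_id;   /* trivial replacement policy */
    }

    /* Primitives arrive over the textured primitive data path with a texture id. */
    static void render_textured_primitive(int tex_id)
    {
        if (!texture_is_local(tex_id))
            download_texture(tex_id);        /* miss: pull data over the texture path */
        printf("rendering primitive with texture %d\n", tex_id);
    }

    int main(void)
    {
        render_textured_primitive(7);   /* miss, then render */
        render_textured_primitive(7);   /* hit, no download  */
        return 0;
    }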
Abstract:
Graphics window systems which utilize graphics pipelines and graphics pipeline bypass buses. Hardware solutions for window relative rendering of graphics primitives, block moving of graphics primitives, transfer of large data blocks, and elimination of pipeline flushing are disclosed. The hardware implementations provided in accordance with the invention are interfaced along the pipeline bypass bus, thereby eliminating gross processor overhead for the graphics pipeline and reducing pipeline latency. Methods and apparatus provided in accordance with the invention exhibit significant gains in pipeline efficiency and reductions in the time required to render graphics primitives to the screen.
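One way to picture the bypass arrangement, purely as a sketch and not the disclosed hardware, is a dispatcher that sends rendering work down the pipeline while routing window operations such as block moves over the bypass bus, so the pipeline never has to be flushed for them. The operation names below are invented for the example.

    #include <stdio.h>

    /* Hypothetical operation types in a windowed graphics system. */
    typedef enum { OP_RENDER_PRIMITIVE, OP_BLOCK_MOVE, OP_LARGE_TRANSFER } op_type_t;

    static void send_down_pipeline(op_type_t op)
    {
        printf("pipeline: op %d queued behind earlier rendering work\n", op);
    }

    static void send_over_bypass(op_type_t op)
    {
        printf("bypass bus: op %d handled without flushing the pipeline\n", op);
    }

    /* Route each operation: rendering goes through the pipeline, window
     * management style operations go over the bypass bus. */
    static void dispatch(op_type_t op)
    {
        if (op == OP_RENDER_PRIMITIVE)
            send_down_pipeline(op);
        else
            send_over_bypass(op);
    }

    int main(void)
    {
        dispatch(OP_RENDER_PRIMITIVE);
        dispatch(OP_BLOCK_MOVE);       /* no pipeline flush needed for this */
        dispatch(OP_LARGE_TRANSFER);
        return 0;
    }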
Abstract:
The present invention is broadly directed to a system of components defining a plurality of nodes and a random access memory (RAM) connected to each node. The system comprises at least one producer functional unit configured to perform a predetermined processing function resulting in the creation of at least one producer message, a communication mechanism configured to manage and control communication of messages with other nodes, at least one pointer that is configurable to point to a storage location within the RAM, and message logic configured to interpret the content of the at least one producer message. The message logic is further configured to associate the producer message with a subset of the at least one pointer based upon the content of the producer message, and to store the producer message within the RAM at the locations indicated by that subset of pointers.
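Interpreted loosely, the message logic inspects a producer message, selects a subset of the configurable pointers based on that content, and stores the message in RAM at the locations those pointers indicate. The C sketch below is one such interpretation, with invented names and a deliberately simple selection rule.

    #include <stdio.h>
    #include <string.h>

    #define RAM_SIZE   256
    #define N_POINTERS 4

    static unsigned char ram[RAM_SIZE];                        /* RAM connected to the node   */
    static size_t pointers[N_POINTERS] = { 0, 64, 128, 192 };  /* configurable pointers       */

    typedef struct {
        int  kind;                   /* interpreted content of the producer message */
        char payload[16];
    } producer_msg_t;

    /* Message logic: pick a subset of pointers from the message content and
     * store the message at each location that subset indicates. */
    static void store_message(const producer_msg_t *msg)
    {
        for (int i = 0; i < N_POINTERS; i++) {
            /* Illustrative rule: even kinds use even-numbered pointers,
             * odd kinds use odd-numbered pointers. */
            if ((i % 2) != (msg->kind % 2))
                continue;
            memcpy(&ram[pointers[i]], msg->payload, sizeof msg->payload);
            printf("stored message kind %d at RAM offset %zu\n", msg->kind, pointers[i]);
        }
    }

    int main(void)
    {
        producer_msg_t a = { 0, "status update" };
        producer_msg_t b = { 1, "work request"  };
        store_message(&a);   /* lands at offsets 0 and 128  */
        store_message(&b);   /* lands at offsets 64 and 192 */
        return 0;
    }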
Abstract:
A method and apparatus for managing blocks of data in a data processing system, the data processing system including a host computer and data processing hardware, the host computer having a main memory that stores blocks of data to be processed by the data processing hardware, the data processing hardware including a local memory that locally stores a subset of the blocks of data to be processed by the data processing hardware. When a portion of one of the blocks of data is to be processed by the data processing hardware, a determination is made as to whether the block of data is in the local memory. When the block of data is in the local memory, the portion of the block of data to be processed is read from the local memory. When the block of data is not in the local memory, it is downloaded from the host computer main memory to the data processing hardware. The data processing hardware may generate an interrupt to the host computer with a request to download data.
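The flow described above behaves like a software-managed cache with a fault path: check the local memory, read the portion locally on a hit, and on a miss interrupt the host with a download request. A compact C sketch of that decision follows; the names and the placement policy are assumptions made for the example.

    #include <stdio.h>
    #include <stdbool.h>

    #define LOCAL_BLOCKS 8

    /* Hypothetical directory of which blocks currently sit in local memory. */
    static int  local_block_ids[LOCAL_BLOCKS];
    static bool local_block_valid[LOCAL_BLOCKS];

    static int find_local_slot(int block_id)
    {
        for (int i = 0; i < LOCAL_BLOCKS; i++)
            if (local_block_valid[i] && local_block_ids[i] == block_id)
                return i;
        return -1;
    }

    /* Stand-in for interrupting the host with a download request. */
    static int request_download(int block_id)
    {
        printf("interrupt host: please download block %d\n", block_id);
        int slot = block_id % LOCAL_BLOCKS;   /* trivial placement policy */
        local_block_ids[slot]   = block_id;
        local_block_valid[slot] = true;
        return slot;
    }

    /* Process a portion of a block: read locally if present, else fetch first. */
    static void process_portion(int block_id, int offset)
    {
        int slot = find_local_slot(block_id);
        if (slot < 0)
            slot = request_download(block_id);    /* miss path */
        printf("processing block %d offset %d from local slot %d\n",
               block_id, offset, slot);
    }

    int main(void)
    {
        process_portion(42, 0);    /* miss: host downloads, then we process */
        process_portion(42, 16);   /* hit: read directly from local memory  */
        return 0;
    }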
Abstract:
A texture mapping computer graphics system includes a host computer with a main memory that stores texture data including a plurality of texels. A local memory stores at least a portion of the texture data. A local memory access unit, coupled to the local memory, accesses texels from the local memory. A texel data buffer, coupled to the local memory access unit, stores a limited number of texels most recently accessed by the access unit from the local memory. A texel interpolator, coupled to the texel data buffer, reads texels from predefined locations of the texel data buffer. The access unit accesses texels from the local memory only when such texels are unavailable to be re-read from the texel data buffer. A circuit, coupled to the interpolator and the local memory access unit, determines whether a current texel is available to be re-read from the texel data buffer. The circuit includes a comparator that compares an address of the current texel with an address of a texel most recently accessed from the local memory.
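The re-read optimization amounts to a very small texel cache keyed by address: before accessing local memory, the comparator checks the current texel address against the address of the texel most recently fetched. The C sketch below reduces this to a one-entry buffer with invented names; the texel data buffer described above holds a limited number of recently accessed texels.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical one-entry texel data buffer keyed by texel address. */
    static uint32_t last_addr  = UINT32_MAX;   /* address of most recent access  */
    static uint32_t last_texel;                /* texel value held in the buffer */

    /* Stand-in for an access to local texture memory. */
    static uint32_t local_memory_read(uint32_t addr)
    {
        printf("local memory access at 0x%08x\n", (unsigned)addr);
        return addr ^ 0xDEADBEEF;   /* fake texel data */
    }

    /* Fetch a texel, re-reading from the buffer when the comparator matches. */
    static uint32_t fetch_texel(uint32_t addr)
    {
        if (addr == last_addr)                 /* comparator: buffer hit   */
            return last_texel;
        last_texel = local_memory_read(addr);  /* miss: go to local memory */
        last_addr  = addr;
        return last_texel;
    }

    int main(void)
    {
        fetch_texel(0x1000);   /* local memory access */
        fetch_texel(0x1000);   /* served from buffer  */
        fetch_texel(0x1004);   /* local memory access */
        return 0;
    }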
Abstract:
Methods and apparatus for maximizing column address coherency for serial and parallel port accesses to a dual port frame buffer. Performance of the serial port of the frame buffer is greatly improved by separating the page boundaries in the horizontal direction (i.e., scan line organized), while performance of the parallel port of the frame buffer is enhanced by organizing the page boundaries for rectangular areas of the display. Performance at both ports may be maximized at the same time by organizing the video random access memory (VRAM) into tiles and vertically barrel shifting the scan line data at a fixed interval across the video display. During operation, the serial port output appears to be an entire row of data, although it has actually output parts of N rows of data from two separate rows of memory chips, which are switched at the fixed interval. This approach allows the parallel port to organize columns N times higher in the vertical direction. As a result, the page boundaries are N times as far apart in the vertical direction, thereby improving output performance.
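A worked way to see the tiling and barrel-shift idea is as an address mapping: within each fixed-width horizontal interval the scan lines are rotated vertically by the interval index, so one apparent serial-port row is assembled from parts of N memory rows while parallel-port pages span N times more rows vertically. The C sketch below computes such a mapping with invented parameters (N, interval width); it illustrates the principle rather than the patented memory organization.

    #include <stdio.h>

    #define N              4     /* rows blended into one apparent serial-port row */
    #define INTERVAL_WIDTH 16    /* pixels per fixed horizontal interval           */
    #define SCREEN_WIDTH   64

    /* Map a screen (x, y) to the memory row actually holding that pixel:
     * within each horizontal interval the scan lines are barrel shifted
     * vertically by the interval index, modulo N. */
    static int memory_row(int x, int y)
    {
        int interval = x / INTERVAL_WIDTH;
        return (y + interval) % N + (y / N) * N;
    }

    int main(void)
    {
        int y = 5;   /* one screen scan line */
        printf("screen row %d is assembled from memory rows:", y);
        for (int x = 0; x < SCREEN_WIDTH; x += INTERVAL_WIDTH)
            printf(" %d", memory_row(x, y));   /* prints: 5 6 7 4 -> parts of N rows */
        printf("\n");
        return 0;
    }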
Abstract:
A system for pushing data includes a source node that stores a coherent copy of a block of data. The system also includes a push engine configured to determine a next consumer of the block of data. The determination is made without the push engine detecting a request for the block of data from the next consumer. The push engine causes the source node to push the block of data to a memory associated with the next consumer to reduce the latency of the next consumer accessing the block of data.
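Read informally, the push engine speculatively identifies the next consumer and pushes the coherent block into that consumer's memory before any request arrives, so the later access hits local memory. The C sketch below is a toy rendering of that idea; the rotating prediction rule and all names are assumptions for the example.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 8
    #define N_NODES    3

    /* Hypothetical per-node memory holding (possibly pushed) copies of the block. */
    static unsigned char node_memory[N_NODES][BLOCK_SIZE];
    static int           node_has_block[N_NODES];

    /* Push engine: guess the next consumer without seeing a request from it.
     * A real engine might use access history; here we simply rotate. */
    static int predict_next_consumer(int current_owner)
    {
        return (current_owner + 1) % N_NODES;
    }

    static void push_block(int source, const unsigned char *block)
    {
        int next = predict_next_consumer(source);
        memcpy(node_memory[next], block, BLOCK_SIZE);
        node_has_block[next] = 1;
        printf("pushed block from node %d to node %d before any request\n", source, next);
    }

    int main(void)
    {
        unsigned char block[BLOCK_SIZE] = "payload";
        int source = 0;
        memcpy(node_memory[source], block, BLOCK_SIZE);
        node_has_block[source] = 1;

        push_block(source, block);            /* speculative push               */
        int consumer = 1;
        if (node_has_block[consumer])         /* later access hits local memory */
            printf("node %d reads the block with no remote fetch\n", consumer);
        return 0;
    }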
Abstract:
A system and method communicate information from a single-threaded application over multiple I/O busses to a computing subsystem for processing. In accordance with one embodiment, a method is provided that partitions state-sequenced information for communication to a computing subsystem, communicates the partitioned information to the subsystem over a plurality of input/output busses, and separately processes the information received over each of the plurality of input/output busses, without first re-sequencing the information.
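A minimal sketch of the partitioning step, assuming a round-robin split of the single-threaded command stream across two busses, with each bus's portion processed on its own and no re-sequencing of the combined stream. The bus count, the round-robin rule, and all names are assumptions made for the example.

    #include <stdio.h>

    #define N_BUSSES 2
    #define N_CMDS   8

    static int bus_queue[N_BUSSES][N_CMDS];
    static int bus_len[N_BUSSES];

    /* Partition the state-sequenced command stream across the busses. */
    static void partition_stream(void)
    {
        for (int cmd = 0; cmd < N_CMDS; cmd++) {
            int bus = cmd % N_BUSSES;               /* illustrative round-robin split */
            bus_queue[bus][bus_len[bus]++] = cmd;
        }
    }

    /* Each bus's portion is processed on its own, without re-sequencing
     * the combined stream first. */
    static void process_bus(int bus)
    {
        for (int i = 0; i < bus_len[bus]; i++)
            printf("bus %d processed command %d\n", bus, bus_queue[bus][i]);
    }

    int main(void)
    {
        partition_stream();
        for (int bus = 0; bus < N_BUSSES; bus++)
            process_bus(bus);
        return 0;
    }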