Abstract:
This disclosure describes tools for generating messages used in deblock filtering of a video stream, the messages being based on prediction parameters extracted from the video stream.
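For illustration, the following is a minimal C sketch of how such a message might be derived: a filter strength is computed from prediction parameters (intra coding, residual coefficients, motion-vector difference, reference frames), roughly in the spirit of an H.264-style boundary-strength rule, and packed into a message. The struct names, fields, and thresholds are illustrative assumptions, not the disclosed design.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical prediction parameters for one block edge. */
struct pred_params {
    int intra;          /* either adjacent block is intra-coded      */
    int nonzero_coeffs; /* either adjacent block has residual coeffs */
    int mv_dx, mv_dy;   /* motion-vector difference across the edge  */
    int same_ref;       /* both blocks use the same reference frame  */
};

/* Hypothetical deblock message handed to the filter hardware. */
struct deblock_msg {
    int edge_x, edge_y; /* position of the edge in the picture  */
    int strength;       /* 0 = no filtering ... 4 = strongest   */
};

/* Derive a filter strength from prediction parameters (illustrative rule). */
static struct deblock_msg make_deblock_msg(int x, int y,
                                           const struct pred_params *p)
{
    struct deblock_msg m = { x, y, 0 };
    if (p->intra)
        m.strength = 4;
    else if (p->nonzero_coeffs)
        m.strength = 2;
    else if (!p->same_ref || abs(p->mv_dx) >= 4 || abs(p->mv_dy) >= 4)
        m.strength = 1;
    return m;
}

int main(void)
{
    struct pred_params p = { 0, 0, 6, 0, 1 };
    struct deblock_msg m = make_deblock_msg(16, 0, &p);
    printf("edge (%d,%d) strength %d\n", m.edge_x, m.edge_y, m.strength);
    return 0;
}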
Abstract:
A forced lock-step operation between a CPU (software) and the hardware is eliminated by relieving the CPU of the need to monitor the hardware until the hardware has finished its task. This is done by providing a data/control message queue into which the CPU writes combined data/control messages, placing an End tag into the queue when finished. The hardware checks the content of the message queue and starts decoding the incoming data. The hardware processes the data read from the message queue, and the processed data is then written back into the message queue for use by the software. When the hardware reaches the End tag, it raises an interrupt signal to the CPU. Speed differences between hardware and software can be compensated for by changing the depth of the queue.
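A minimal C sketch of this queue protocol, with the hardware side simulated in software: the CPU fills the queue and appends an End tag, the "hardware" loop processes entries in place and raises an interrupt at the End tag. All names (msg_queue, MSG_END, QUEUE_DEPTH) are illustrative assumptions, not taken from the disclosure.

#include <stdio.h>

#define QUEUE_DEPTH 8   /* queue depth can be tuned to absorb speed differences */

enum msg_type { MSG_DATA, MSG_END };

struct message {
    enum msg_type type;
    int payload;        /* combined data/control word (illustrative) */
};

/* A fixed-depth message queue shared by software and hardware. */
struct msg_queue {
    struct message slot[QUEUE_DEPTH];
    int count;
};

/* Software side: enqueue work, then an End tag, without waiting. */
static void cpu_fill_queue(struct msg_queue *q)
{
    for (q->count = 0; q->count < 3; q->count++) {
        q->slot[q->count].type = MSG_DATA;
        q->slot[q->count].payload = 10 * (q->count + 1);
    }
    q->slot[q->count++].type = MSG_END;   /* End tag closes the batch */
}

/* Hardware side (simulated): process each message in place and raise an
 * interrupt when the End tag is reached. */
static void hw_process_queue(struct msg_queue *q, void (*raise_irq)(void))
{
    for (int i = 0; i < q->count; i++) {
        if (q->slot[i].type == MSG_END) {
            raise_irq();              /* tell the CPU the batch is done */
            break;
        }
        q->slot[i].payload *= 2;      /* processed result written back */
    }
}

static void irq_handler(void) { printf("interrupt: hardware finished\n"); }

int main(void)
{
    struct msg_queue q;
    cpu_fill_queue(&q);               /* CPU is free to do other work now */
    hw_process_queue(&q, irq_handler);
    for (int i = 0; i + 1 < q.count; i++)
        printf("result[%d] = %d\n", i, q.slot[i].payload);
    return 0;
}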
Abstract:
In one embodiment the present invention includes an apparatus having a random access memory, a first interface, and a second interface. The first interface is coupled between the random access memory and a plurality of storage devices, and operates in a first-in, first-out (FIFO) manner. The second interface is coupled between the random access memory and a processor, and operates in a random access manner. As a result, the processor is not required to be in the loop when data is being transferred between the random access memory and the storage devices.
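A minimal C sketch of the two views onto one memory, assuming a simple ring buffer for the FIFO side; the names, sizes, and functions below are illustrative assumptions, not the disclosed interfaces.

#include <stdio.h>

#define BUF_SIZE 16

/* One shared RAM with two views: a FIFO view used on the storage-device
 * side and a random-access view used on the processor side. */
struct dual_ram {
    int mem[BUF_SIZE];
    int head, tail;   /* FIFO read/write indices */
};

/* FIFO interface: storage devices stream data in and out in order. */
static int fifo_push(struct dual_ram *r, int v)
{
    if ((r->tail + 1) % BUF_SIZE == r->head) return -1;  /* full */
    r->mem[r->tail] = v;
    r->tail = (r->tail + 1) % BUF_SIZE;
    return 0;
}

static int fifo_pop(struct dual_ram *r, int *v)
{
    if (r->head == r->tail) return -1;                   /* empty */
    *v = r->mem[r->head];
    r->head = (r->head + 1) % BUF_SIZE;
    return 0;
}

/* Random-access interface: the processor reads or patches any word
 * directly, without taking part in the streaming transfer. */
static int  ram_read (const struct dual_ram *r, int addr) { return r->mem[addr % BUF_SIZE]; }
static void ram_write(struct dual_ram *r, int addr, int v) { r->mem[addr % BUF_SIZE] = v; }

int main(void)
{
    struct dual_ram r = { {0}, 0, 0 };
    for (int i = 0; i < 4; i++)
        fifo_push(&r, i * 100);          /* storage device streams data in */
    ram_write(&r, 2, -1);                /* processor patches one word      */
    printf("word 2 is now %d\n", ram_read(&r, 2));
    int v;
    while (fifo_pop(&r, &v) == 0)        /* storage device streams data out */
        printf("%d\n", v);
    return 0;
}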
Abstract:
In one embodiment the present invention includes a microprocessor that has a pipeline circuit, a branch circuit, and a control circuit. The pipeline circuit pipelines instructions for the microprocessor. The branch circuit is coupled to the pipeline circuit and operates to store branch information. The control circuit is coupled to the pipeline circuit and the branch circuit. The control circuit stores first branch information from the pipeline circuit into the branch circuit when a first condition is met. The control circuit retrieves second branch information from the branch circuit into the pipeline circuit when a second condition is met. In this manner, the need for dedicated pipeline flush circuitry is avoided.
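A minimal C sketch of the idea, under the assumption that the stored branch information is the alternate fetch path of a speculated branch: the first condition pushes a record into the branch store, the second condition pops it to redirect fetch instead of relying on dedicated flush circuitry. The record contents, conditions, and names are illustrative assumptions, not the disclosed circuit.

#include <stdio.h>

#define STACK_DEPTH 4

/* Hypothetical branch record saved from the pipeline. */
struct branch_info {
    unsigned pc;        /* address of the branch instruction */
    unsigned alt_path;  /* the path not taken speculatively  */
};

/* Branch store holding saved records. */
struct branch_stack {
    struct branch_info entry[STACK_DEPTH];
    int top;
};

/* Condition 1 (e.g. a branch enters the pipeline speculatively):
 * store its alternate path in the branch store. */
static void push_branch(struct branch_stack *s, unsigned pc, unsigned alt)
{
    if (s->top < STACK_DEPTH) {
        s->entry[s->top].pc = pc;
        s->entry[s->top].alt_path = alt;
        s->top++;
    }
}

/* Condition 2 (e.g. the branch resolves as mispredicted): retrieve the
 * saved record and steer fetch to the alternate path. */
static unsigned pop_branch(struct branch_stack *s, unsigned fetch_pc)
{
    if (s->top > 0) {
        s->top--;
        return s->entry[s->top].alt_path;   /* redirect fetch */
    }
    return fetch_pc;                        /* nothing to undo */
}

int main(void)
{
    struct branch_stack s = { {{0}}, 0 };
    unsigned fetch = 0x100;
    push_branch(&s, 0x100, 0x104);   /* speculate taken, remember fall-through */
    fetch = 0x200;                   /* fetch continues down the taken path    */
    fetch = pop_branch(&s, fetch);   /* misprediction: restore from the store  */
    printf("fetch redirected to 0x%x\n", fetch);
    return 0;
}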
Abstract:
A system includes a binarization module, an encoding module, and a prediction module. The binarization module is configured to binarize a syntax element and to generate symbols. The encoding module is configured to encode the symbols using context-adaptive binary arithmetic coding (CABAC). The prediction module is configured to generate a prediction of the number of renormalizations to be performed to renormalize an interval range when encoding one of the symbols. The encoding module encodes the next symbol following the one of the symbols based on the prediction, before renormalization of the interval range has actually been completed.
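For the renormalization prediction, a minimal C sketch: in CABAC the 9-bit interval range is renormalized by left shifts until it is at least 256, so the number of shifts can be determined in one step from the position of the range's most significant bit rather than by waiting for the shift loop to finish. The function names are illustrative assumptions; the check below compares the predicted count against the iterative count for every possible range value.

#include <stdio.h>
#include <assert.h>

/* Count renormalization shifts the slow way: shift until the 9-bit
 * range is back at or above 256, as the CABAC renormalization loop would. */
static int renorm_count_iterative(unsigned range)
{
    int n = 0;
    while (range < 256) {
        range <<= 1;
        n++;
    }
    return n;
}

/* Predict the same count in one step from the position of the most
 * significant set bit, so encoding of the next symbol can start before
 * the shifting has actually completed. */
static int renorm_count_predicted(unsigned range)
{
    int msb = 0;
    for (unsigned r = range; r > 1; r >>= 1)
        msb++;              /* msb = floor(log2(range)) */
    return 8 - msb;         /* shifts needed to reach bit 8 (value 256) */
}

int main(void)
{
    /* After encoding a symbol the range can be anywhere in [1, 510]. */
    for (unsigned range = 1; range <= 510; range++)
        assert(renorm_count_iterative(range) == renorm_count_predicted(range));
    printf("prediction matches the iterative count for all ranges\n");
    return 0;
}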
Abstract:
Devices, methods, and other embodiments associated with processing rasterized data are described. In one embodiment, an apparatus includes translation logic for converting lines of rasterized pixel data of a compressed image to a plurality of two-dimensional data blocks. The lines of rasterized pixel data are stored in consecutive memory locations. Each data block is stored in consecutive memory locations. The apparatus includes decompression logic for at least partially decompressing the compressed image based, at least in part, on the two-dimensional data blocks.
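A minimal C sketch of the translation step, assuming 8x8 blocks: row-major (rasterized) pixel lines are rearranged so that each block occupies one contiguous region, which block-based decompression can then consume directly. The dimensions and names are illustrative assumptions, not the disclosed design.

#include <stdio.h>

#define WIDTH   16
#define HEIGHT   8
#define BLOCK    8   /* 8x8 blocks; the real block size is a design choice */

/* Rearrange rasterized (row-major) pixel lines into consecutive
 * BLOCK x BLOCK tiles, each stored contiguously. */
static void raster_to_blocks(const unsigned char *raster, unsigned char *blocks)
{
    int blocks_per_row = WIDTH / BLOCK;
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            int bx = x / BLOCK, by = y / BLOCK;          /* which tile  */
            int block_index = by * blocks_per_row + bx;  /* tile number */
            int offset = (y % BLOCK) * BLOCK + (x % BLOCK);
            blocks[block_index * BLOCK * BLOCK + offset] = raster[y * WIDTH + x];
        }
    }
}

int main(void)
{
    unsigned char raster[WIDTH * HEIGHT], blocks[WIDTH * HEIGHT];
    for (int i = 0; i < WIDTH * HEIGHT; i++)
        raster[i] = (unsigned char)i;       /* stand-in rasterized image data */
    raster_to_blocks(raster, blocks);
    /* First pixel of the second tile comes from raster position (0, 8). */
    printf("blocks[64] = %d (expected %d)\n", blocks[BLOCK * BLOCK], raster[8]);
    return 0;
}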