Abstract:
Innovations are provided for encoding and/or decoding video and/or image content using reduced size inverse transforms. For example, a reduced size inverse transform can be performed during encoding or decoding of video or image content using a subset of coefficients (e.g., primarily non-zero coefficients) of a given block. Specifically, a bounding area that encompasses the non-zero coefficients of a block can be determined. Meta-data for the block can then be generated, including a shortcut code that indicates whether a reduced size inverse transform will be performed. The inverse transform can then be performed using a subset of coefficients for the block (e.g., identified by the bounding area) and the meta-data, which decreases utilization of computing resources. The subset of coefficients and the meta-data can be transferred to a graphics processing unit (GPU), which also reduces the amount of data transferred.
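A minimal sketch of the coefficient-subset idea, assuming a numpy array of transform coefficients; the bounding-area rule, the shortcut-code threshold, and the use of SciPy's `idctn` as the inverse transform are illustrative assumptions, not the patented implementation:

```python
import numpy as np
from scipy.fft import idctn

def bounding_area(block):
    """Smallest top-left region encompassing all non-zero coefficients."""
    nz = np.argwhere(block != 0)
    if nz.size == 0:
        return 0, 0
    return int(nz[:, 0].max()) + 1, int(nz[:, 1].max()) + 1

def make_metadata(block):
    """Meta-data including a shortcut code signaling whether a reduced
    size inverse transform applies (the threshold is an assumption)."""
    rows, cols = bounding_area(block)
    shortcut = rows <= block.shape[0] // 2 and cols <= block.shape[1] // 2
    return {"bounding": (rows, cols), "shortcut": shortcut}

def inverse_transform(block, meta):
    """Inverse transform using only the coefficient subset identified by
    the bounding area; everything outside it is zero, so the result
    matches a full-size inverse transform. (A production kernel would
    run a genuinely smaller transform; idctn here is for reference.)"""
    rows, cols = meta["bounding"]
    if rows == 0:
        return np.zeros(block.shape)            # all-zero block: shortcut out
    subset = np.zeros(block.shape)
    subset[:rows, :cols] = block[:rows, :cols]  # the data actually transferred
    return idctn(subset, norm="ortho")
```

In this reading, only `block[:rows, :cols]` and the meta-data would need to cross to the GPU, which is where the data-transfer savings come from.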
Abstract:
In response to a scene change being detected in screen content, a rate controller instructs a video encoder to generate an intraframe compressed image. The rate controller computes a target size for compressed image data using a function based on a maximum compressed size for a single image, i.e., without buffers for additional image data. For a number of images processed after detection of the scene change, this target size is computed and used to control the video encoder. After this number of images has been processed, the rate controller can return to a prior mode of operation. Such rate control reduces latency in encoding and transmission of screen content, which improves user perception of the responsiveness of a host computer, such as for interactive video applications.
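A sketch of such a rate controller; the class and method names, the default recovery window of 8 images, and the simple min() against the single-image cap are all hypothetical stand-ins for the described function:

```python
class ScreenContentRateController:
    """Minimal sketch of the scene-change rate control described above."""

    def __init__(self, max_single_image_size, recovery_frames=8):
        self.max_size = max_single_image_size   # cap for one compressed image
        self.recovery_frames = recovery_frames  # images controlled after a scene change
        self.remaining = 0

    def on_scene_change(self):
        """Scene change detected: the next image should be an intraframe
        compressed image, and the recovery window starts."""
        self.remaining = self.recovery_frames
        return "intra"                          # instruction forwarded to the encoder

    def target_size(self, prior_target):
        """Target size for the next image's compressed data."""
        if self.remaining > 0:
            self.remaining -= 1
            # Function based on the maximum compressed size for a single
            # image, i.e., no buffers reserved for additional image data.
            return min(prior_target, self.max_size)
        return prior_target                     # prior mode of operation resumes

rc = ScreenContentRateController(max_single_image_size=64_000)
rc.on_scene_change()
print([rc.target_size(100_000) for _ in range(10)])
# first 8 targets are capped at 64000; afterwards the prior mode's 100000 resumes
```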
Abstract:
Techniques are described for split processing of streaming segments, in which processing operations are split between a source component and a decoder component. For example, the source component can perform operations for receiving a streaming segment, demultiplexing the streaming segment to separate a video content bit stream, scanning the video content bit stream to find a location at which decoding can begin (e.g., scanning up to a first decodable I-picture, for which header parameter sets are available for decoding), and sending the video content bit stream to the decoder component beginning at that location (e.g., the first decodable I-picture). The decoder component can begin decoding at the identified location (e.g., the first decodable I-picture). The decoder component can also discard subsequent pictures that reference a reference picture not present in the video content bit stream (e.g., when decoding starts with a new streaming segment).
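The split can be sketched as two functions over a toy picture representation (dicts with assumed `type`, `refs`, and `has_parameter_sets` fields); real demultiplexing and parameter-set tracking are elided:

```python
def find_decode_start(pictures):
    """Source component: scan the video content bit stream for the first
    decodable I-picture, i.e., one whose header parameter sets are
    available, and return its index."""
    for i, pic in enumerate(pictures):
        if pic["type"] == "I" and pic.get("has_parameter_sets", False):
            return i
    return None

def decode_from(pictures, start):
    """Decoder component: decode beginning at the identified location,
    discarding pictures that reference a reference picture not present
    in the bit stream (e.g., at the start of a new streaming segment)."""
    available = set()
    decoded = []
    for pic in pictures[start:]:
        if all(ref in available for ref in pic.get("refs", [])):
            available.add(pic["id"])
            decoded.append(pic["id"])
        # else: missing reference picture; discard this picture
    return decoded

segment = [
    {"id": 0, "type": "P", "refs": [99]},                # skipped by the scan
    {"id": 1, "type": "I", "has_parameter_sets": True},  # decoding starts here
    {"id": 2, "type": "P", "refs": [1]},
    {"id": 3, "type": "P", "refs": [99]},                # missing ref: discarded
]
print(decode_from(segment, find_decode_start(segment)))  # [1, 2]
```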
Abstract:
A facility for completing a set of operations is described. Under the control of an application, the facility registers a background task to perform the set of operations. In response to the registration of the background task, the facility repeatedly invokes the background task to perform the set of operations.
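As a minimal sketch (all names hypothetical), the facility boils down to a registration step followed by repeated invocation:

```python
class BackgroundTaskFacility:
    """Registers a background task and repeatedly invokes it."""

    def __init__(self):
        self._task = None

    def register(self, task):
        # Under the control of an application: register the background
        # task that performs the set of operations.
        self._task = task

    def run(self, invocations):
        # In response to the registration, repeatedly invoke the task.
        for _ in range(invocations):
            self._task()

facility = BackgroundTaskFacility()
facility.register(lambda: print("performing the set of operations"))
facility.run(3)
```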
Abstract:
Innovations in encoding and decoding of video pictures in a high-resolution chroma sampling format (such as YUV 4:4:4) using a video encoder and decoder operating on coded pictures in a low-resolution chroma sampling format (such as YUV 4:2:0) are presented. For example, high chroma resolution details are selectively encoded on a region-by-region basis. Or, as another example, coded pictures that contain sample values for low chroma resolution versions of input pictures and coded pictures that contain sample values for high chroma resolution details of the input pictures are encoded as separate sub-sequences of a single sequence of coded pictures, which can facilitate effective motion compensation. In this way, available encoders and decoders operating on coded pictures in the low-resolution chroma sampling format can be effectively used to provide high chroma resolution details.
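A sketch of the basic split, assuming a numpy YUV 4:4:4 input and a 2x2 box-filter downsample; how the full-resolution chroma is packed into coded pictures of the low-resolution format, and the region-by-region selection, are only gestured at here:

```python
import numpy as np

def split_for_coding(y, u, v):
    """Split a YUV 4:4:4 input picture into (a) a low chroma resolution
    (YUV 4:2:0-style) version and (b) the high chroma resolution details.
    Each part would be encoded as its own sub-sequence within a single
    sequence of coded pictures. The 2x2 averaging is an assumed downsample;
    other filters (and region-by-region selection of details) can be used."""
    def down2(c):
        h, w = c.shape
        return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    low_res = (y, down2(u), down2(v))   # drives a normal 4:2:0 encoder
    details = (u, v)                    # full-resolution chroma samples
    return low_res, details

y, u, v = (np.random.rand(8, 8) for _ in range(3))
low, details = split_for_coding(y, u, v)
print(low[1].shape, details[0].shape)   # (4, 4) (8, 8)
```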
Abstract:
In one example, a quality management controller of a video processing system may optimize a video recovery action through the selective dropping of video frames. The video processing system may store a compressed video data set in memory. The video processing system may receive a recovery quality indication describing a recovery priority of a user. The video processing system may apply a quality management controller in a video pipeline to execute a video recovery action that retrieves an output data set from the compressed video data set using a video decoder. The quality management controller may select a recovery initiation frame from the compressed video data set to be the initial frame to decompress, based upon the recovery quality indication.
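One way to read the frame-selection step, as a sketch: map the recovery quality indication to a choice among keyframe candidates. The [0, 1] indication scale, the frame-record schema, and the interpolation policy below are assumptions:

```python
def select_recovery_initiation_frame(frames, recovery_quality):
    """Pick the initial frame to decompress for a video recovery action.
    frames: list of dicts with 'index' and 'is_keyframe' (assumed schema).
    recovery_quality: 0.0 favors fast recovery (latest keyframe; frames
    before it are selectively dropped); 1.0 favors completeness
    (earliest keyframe; fewer frames dropped)."""
    keyframes = [f for f in frames if f["is_keyframe"]]
    if not keyframes:
        return None
    pos = round((1.0 - recovery_quality) * (len(keyframes) - 1))
    return keyframes[pos]

frames = [{"index": i, "is_keyframe": i % 30 == 0} for i in range(120)]
print(select_recovery_initiation_frame(frames, 0.0))  # latest keyframe (index 90)
print(select_recovery_initiation_frame(frames, 1.0))  # earliest keyframe (index 0)
```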
Abstract:
Buffer optimization techniques are described herein in which a graphics processing system is configured to implement and select between a plurality of buffer schemes for processing of an encoded data stream, in dependence upon the formats used for decoding and rendering (e.g., video format, bit depth, resolution, content type) and device capabilities such as available memory and/or processing power. Processing of an encoded data stream for display and rendering via the graphics processing system then occurs using a selected one of the buffer schemes, which defines the buffers employed for the decoding and rendering, including at least the sizes of those buffers. The plurality of buffer schemes may include at least one buffer scheme for processing the encoded content when the input format and the output format are the same, and a different buffer scheme for processing the encoded content when the input format and the output format are different.
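A sketch of scheme selection under the stated criteria; the buffer counts, the memory threshold, and the dict layout are illustrative assumptions:

```python
MiB = 2**20

def select_buffer_scheme(input_format, output_format, available_memory):
    """Choose between buffer schemes based on whether input and output
    formats match and on device capability, returning buffer counts."""
    if input_format == output_format:
        # Same input and output format: decoded surfaces can be handed
        # straight to rendering, so no separate conversion buffers.
        return {"decode_buffers": 4, "render_buffers": 0, "shared": True}
    # Different formats: separate buffers for decode and for the
    # converted/rendered output are needed.
    scheme = {"decode_buffers": 4, "render_buffers": 3, "shared": False}
    if available_memory < 512 * MiB:
        # Constrained device: configure smaller buffer pools.
        scheme["decode_buffers"] = 2
        scheme["render_buffers"] = 2
    return scheme

print(select_buffer_scheme("yuv420", "yuv420", 1024 * MiB))
print(select_buffer_scheme("yuv420p10", "rgba8", 256 * MiB))
```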
Abstract:
Multi-threaded implementations of deblock filtering improve encoding and/or decoding efficiency. For example, a video encoder or decoder partitions a video picture into multiple segments. The encoder/decoder selects between multiple patterns for splitting the operations of deblock filtering into multiple passes. The encoder/decoder organizes the deblock filtering as multiple tasks, where a given task includes the operations of one of the passes for one of the segments. The encoder/decoder then performs the tasks with multiple threads. The performance of the tasks is constrained by task dependencies, which, in general, are based at least in part on which lines of the picture are in the respective segments and which deblock filtering operations are in the respective passes. The task dependencies can include a cross-pass, cross-segment dependency between a given pass of a given segment and an adjacent pass of an adjacent segment.
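A sketch of the task organization with a thread pool. The specific dependency pattern below, where task (segment s, pass p) follows (s, p-1) within its segment and the adjacent pass (s-1, p+1) of the adjacent segment, and the wavefront grouping w = 2*s + p that satisfies it, are assumptions consistent with the description rather than the patented pattern:

```python
from concurrent.futures import ThreadPoolExecutor

def run_deblock_tasks(num_segments, num_passes, do_pass, workers=4):
    """One task = the operations of one deblock-filtering pass on one
    picture segment. Tasks are grouped into waves (w = 2*s + p): every
    dependency of a task lands in an earlier wave, so all tasks within
    a wave are mutually independent and can run on multiple threads."""
    waves = {}
    for s in range(num_segments):
        for p in range(num_passes):
            waves.setdefault(2 * s + p, []).append((s, p))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for w in sorted(waves):
            # list() forces the whole wave to finish before the next starts
            list(pool.map(lambda task: do_pass(*task), waves[w]))

def do_pass(segment, pass_index):
    print(f"deblock pass {pass_index} on segment {segment}")

run_deblock_tasks(num_segments=4, num_passes=2, do_pass=do_pass)
```

Grouping by wave trades a little parallelism for simplicity; a production scheduler would instead release each task as soon as its individual dependencies complete.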