Abstract:
Techniques and tools are described for compensating for rounding when estimating sample-domain distortion in the transform domain. For example, a video encoder estimates pixel-domain distortion in the transform domain for a block of transform coefficients after compensating for rounding in the DC coefficient of the block. In this way, the video encoder improves the accuracy of pixel-domain distortion estimation but retains the computational advantages of performing the estimation in the transform domain. Rounding compensation includes, for example, looking up an index (from a de-quantized transform coefficient) in a rounding offset table to determine a rounding offset, then adjusting the coefficient by the offset. Other techniques and tools described herein are directed to creating rounding offset tables and encoders that make encoding decisions after considering rounding effects that occur after an inverse frequency transform on de-quantized transform coefficient values.
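A minimal sketch of the rounding compensation step described above, assuming a small orthonormal DCT for the transform and a table keyed directly by the de-quantized DC value (the function names and table construction are illustrative, not taken from the source):

```python
import numpy as np

N = 4  # block size for this sketch

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    m = np.array([[np.cos(np.pi * (2 * i + 1) * k / (2 * n)) for i in range(n)]
                  for k in range(n)])
    m[0, :] *= np.sqrt(1.0 / n)
    m[1:, :] *= np.sqrt(2.0 / n)
    return m

D = dct_matrix(N)

def build_dc_offset_table(dc_values):
    """For each candidate de-quantized DC value, record how much the DC changes
    after the inverse transform and integer rounding in the pixel domain."""
    table = {}
    for dc in dc_values:
        coeffs = np.zeros((N, N))
        coeffs[0, 0] = dc
        pixels = D.T @ coeffs @ D             # inverse 2D transform
        rounded = np.rint(pixels)             # rounding to integer sample values
        dc_after = (D @ rounded @ D.T)[0, 0]  # DC after the round trip
        table[dc] = dc_after - dc             # rounding offset for this DC value
    return table

def compensate_dc(dequant_dc, offset_table):
    """Adjust the de-quantized DC coefficient by its looked-up rounding offset,
    so pixel-domain distortion can be estimated in the transform domain."""
    return dequant_dc + offset_table.get(dequant_dc, 0.0)
```

In use, an encoder would build the table once offline (e.g., `build_dc_offset_table(range(-256, 257))`) and then call `compensate_dc` on each block's de-quantized DC coefficient before computing transform-domain distortion.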
Abstract:
Techniques and tools for switching distortion metrics during motion estimation are described. For example, a video encoder determines a distortion metric selection criterion for motion estimation. The criterion can be based on initial results of the motion estimation. To evaluate the criterion, the encoder can compare the criterion to a threshold that depends on a current quantization parameter. The encoder selects between multiple available distortion metrics, which can include a sample-domain distortion metric (e.g., SAD) and a transform-domain distortion metric (e.g., SAHD). The encoder uses the selected distortion metric in the motion estimation. Selectively switching between SAD and SAHD provides rate-distortion performance superior to using only SAD or only SAHD. Moreover, due to the lower complexity of SAD, the computational complexity of motion estimation with SAD-SAHD switching is typically less than motion estimation that always uses SAHD.
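A minimal sketch of the switching logic, assuming 4x4 blocks, the best SAD from an initial search as the selection criterion, and a hypothetical threshold of the form scale * QP (the criterion, the threshold form, and the comparison direction are assumptions, not details from the source):

```python
import numpy as np

# 4x4 Hadamard matrix used by the transform-domain metric (SAHD)
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def sad(block, pred):
    """Sample-domain distortion metric: sum of absolute differences."""
    return int(np.abs(block.astype(np.int64) - pred).sum())

def sahd(block, pred):
    """Transform-domain distortion metric: sum of absolute Hadamard-transformed
    differences of the residual."""
    diff = block.astype(np.int64) - pred
    return int(np.abs(H4 @ diff @ H4.T).sum())

def select_metric(initial_best_sad, qp, scale=16):
    """Pick the distortion metric for the remainder of motion estimation by
    comparing the criterion from the initial search to a QP-dependent threshold."""
    threshold = scale * qp  # hypothetical threshold form
    return sahd if initial_best_sad > threshold else sad
```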
Abstract:
Techniques and tools for conversion operations between modules in a scalable video encoding tool or scalable video decoding tool are described. For example, given reconstructed base layer video in a low resolution format (e.g., 4:2:0 video with 8 bits per sample), an encoding tool and a decoding tool adaptively filter the reconstructed base layer video and upsample its sample values to a higher sample depth (e.g., 10 bits per sample). The tools also adaptively scale chroma samples to a higher chroma sampling rate (e.g., 4:2:2). The adaptive filtering and chroma scaling help reduce energy in the inter-layer residual video by making the reconstructed base layer video closer to the input video, which typically makes compression of the inter-layer residual video more efficient. The encoding tool also remaps sample values of the inter-layer residual video to adjust dynamic range before encoding, and the decoding tool performs inverse remapping after decoding.
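A rough sketch of the conversion steps named above (sample depth upsampling from 8 to 10 bits and chroma scaling from 4:2:0 to 4:2:2); the plain shift and vertical-average filters here stand in for the adaptive filtering the abstract describes, and the function names are illustrative:

```python
import numpy as np

def upsample_sample_depth(plane_8bit, target_bits=10):
    """Scale 8-bit reconstructed base layer samples to a higher sample depth
    (here 10 bits) with a plain left shift; an encoder may instead filter adaptively."""
    return plane_8bit.astype(np.int32) << (target_bits - 8)

def chroma_420_to_422(chroma_plane):
    """Double the vertical chroma resolution (4:2:0 -> 4:2:2). Missing rows are
    filled by averaging vertical neighbors, a stand-in for adaptive filters."""
    src = chroma_plane.astype(np.int32)
    below = np.vstack([src[1:], src[-1:]])   # repeat the last row at the edge
    up = np.empty((2 * src.shape[0], src.shape[1]), dtype=np.int32)
    up[0::2] = src
    up[1::2] = (src + below + 1) >> 1
    return up

def inter_layer_residual(input_plane, converted_base_plane):
    """Residual between the input video and the converted base layer; the closer
    the conversion brings the base layer to the input, the lower the residual energy."""
    return input_plane.astype(np.int32) - converted_base_plane
```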
Abstract:
Architecture for enhancing the compression (e.g., luma, chroma) of a video signal and improving the perceptual quality of video compression schemes. The architecture reshapes the normal multimodal energy distribution of the input video signal into a new energy distribution. In the context of luma, the algorithm maps the black-and-white (or contrast) information of a picture to a new energy distribution. For example, the contrast can be enhanced in the middle range of the luma spectrum, thereby improving the contrast between a light foreground object and a dark background. At the same time, the algorithm reduces the bit-rate requirements at a particular quantization step size. The algorithm can also be utilized in post-processing to improve the quality of decoded video.
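As an illustration only (the actual remapping in the source is not specified here), a lookup-table sketch that enhances contrast in the middle of the luma range, which is one way to reshape the luma energy distribution as described:

```python
import numpy as np

def build_luma_lut(strength=6.0):
    """Hypothetical S-curve lookup table: expands contrast in the mid-range of the
    luma spectrum and compresses the extremes, reshaping the energy distribution."""
    x = np.linspace(0.0, 1.0, 256)
    s = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))  # logistic S-curve
    s = (s - s[0]) / (s[-1] - s[0])                  # renormalize to span [0, 1]
    return np.rint(255.0 * s).astype(np.uint8)

def remap_luma(luma_plane, lut):
    """Apply the remapping per sample, either before encoding or as a decoder-side
    post-process."""
    return lut[luma_plane]
```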
Abstract:
A method is described for efficiently determining the total end-to-end distortion of a pre-compressed data stream, such as a video stream or other media stream, at the time of delivery over a lossy network, and for providing adaptive error-resilient delivery schemes based on distortion estimates. The method can be utilized with single-layer or multilayer packet streams and is particularly well suited for video streams. By way of example, distortion estimates are produced by generating side information at the time of data stream compression; the side information is then used in conjunction with information about the network status to determine an estimated distortion for a group of packets when the data stream is transported over the network to a destination end. This estimate may be utilized within the described resiliency techniques, in which the error correction mechanism is selected in response to the estimated distortion, which may be further refined with reference to cost factors.
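A toy sketch of the estimation and selection steps, assuming the side information is a precomputed per-packet distortion contribution and that forward error correction reduces the effective loss rate by an assumed factor (all names, fields, and numbers below are illustrative):

```python
def estimate_end_to_end_distortion(packets, loss_prob):
    """Combine per-packet side information generated at compression time with the
    current network loss probability to estimate expected end-to-end distortion."""
    # pkt["loss_distortion"]: distortion added if this packet is lost (including
    # error propagation), precomputed as side information during compression.
    return sum(loss_prob * pkt["loss_distortion"] for pkt in packets)

def choose_protection(packets, loss_prob, fec_cost, budget):
    """Select an error-resilience scheme by weighing estimated distortion against
    cost; the residual loss rate under FEC (10% of the raw rate) is an assumption."""
    options = [
        ("no_fec", 0.0, estimate_end_to_end_distortion(packets, loss_prob)),
        ("fec", fec_cost * len(packets),
         estimate_end_to_end_distortion(packets, 0.1 * loss_prob)),
    ]
    affordable = [o for o in options if o[1] <= budget] or options
    return min(affordable, key=lambda o: o[2])  # lowest estimated distortion
```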
Abstract:
Various new and non-obvious apparatus and methods for using frame caching to improve packet loss recovery are disclosed. One of the disclosed embodiments is a method for using periodic, synchronized frame caching within an encoder and its corresponding decoder. When the decoder discovers packet loss, it informs the encoder, which then generates a frame based on one of the shared frames stored at both the encoder and the decoder. When the decoder receives this generated frame, it can decode it using its locally cached frame.
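A schematic sketch of the shared frame cache, assuming a fixed caching interval and a simple loss-report handler (class and function names are illustrative):

```python
class FrameCache:
    """Periodic frame cache kept in sync at both the encoder and the decoder."""

    def __init__(self, interval=30):
        self.interval = interval
        self.frames = {}  # frame number -> reconstructed frame

    def maybe_cache(self, frame_number, reconstructed_frame):
        """Encoder and decoder cache frames on the same schedule, so both sides
        hold the same set of shared frames."""
        if frame_number % self.interval == 0:
            self.frames[frame_number] = reconstructed_frame

def pick_recovery_reference(cache, last_good_frame):
    """When the decoder reports packet loss, the encoder picks a cached frame the
    decoder is known to have and codes the next frame predicted from it; the
    decoder can then decode that frame using its locally cached copy."""
    usable = [n for n in cache.frames if n <= last_good_frame]
    return max(usable) if usable else None
```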
Abstract:
Techniques and tools for encoding and decoding a block of frequency coefficients are presented. An encoder selects a scan order from multiple available scan orders and then applies the selected scan order to a two-dimensional matrix of transform coefficients, grouping non-zero values of the frequency coefficients together in a one-dimensional string. The encoder entropy encodes the one-dimensional string of coefficient values according to a multi-level nested set representation. In decoding, a decoder entropy decodes the one-dimensional string of coefficient values from the multi-level nested set representation. The decoder selects the scan order from among multiple available scan orders and then reorders the coefficients back into a two-dimensional matrix using the selected scan order.
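A small sketch of the scan-order step, with two illustrative scan orders for a 4x4 block (a real codec defines its own set of available scans, and the multi-level nested set entropy coding is not shown):

```python
import numpy as np

# Illustrative scan orders over row-major indices of a 4x4 block
ZIGZAG_4x4 = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
ROW_4x4 = list(range(16))

def scan(coeff_block, order):
    """Encoder side: reorder the 2D coefficient matrix into a 1D string using the
    selected scan order, which tends to group the non-zero values together."""
    return coeff_block.reshape(-1)[order]

def inverse_scan(coeff_string, order, size=4):
    """Decoder side: place the entropy-decoded 1D string back into a 2D matrix
    using the same selected scan order."""
    block = np.zeros(size * size, dtype=coeff_string.dtype)
    block[np.asarray(order)] = coeff_string
    return block.reshape(size, size)
```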
Abstract:
A video codec efficiently signals that a frame is identical to its reference frame, so that separate coding of its picture content can be skipped. The information that a frame is skipped is represented jointly with the frame coding type element in a coding table, for bit-rate efficiency in signaling. Further, the video codec signals the picture type (e.g., progressive or interlaced) of skipped frames, which permits different repeat-padding methods to be applied according to the picture type.
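A hypothetical sketch of the joint signaling: the skipped state is just one more code in the frame coding type table, and a skipped frame still carries its picture type so the decoder can pick the matching repeat-padding method (the codes below are made up, not taken from the source):

```python
# Hypothetical frame-coding-type code table in which "skipped" is one of the codes,
# so no separate per-frame flag is needed to signal a skipped frame.
FRAME_TYPE_CODES = {"P": "0", "I": "10", "B": "110", "BI": "1110", "SKIPPED": "1111"}

def signal_frame_header(frame_type, picture_type):
    """Emit the frame coding type; for a skipped frame, also emit the picture type
    (progressive or interlaced) and skip coding the picture content entirely."""
    bits = FRAME_TYPE_CODES[frame_type]
    if frame_type == "SKIPPED":
        bits += "0" if picture_type == "progressive" else "1"  # illustrative code
    return bits
```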