Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores difference video information associated with a difference video layer of pixel information derived from a difference between an enhancement layer and a corresponding base layer of the video information. The processor determines a DC prediction value for a video unit associated with the difference video layer while refraining from using pixel information from a neighboring area of the video unit, wherein the DC prediction value is equal to zero or is offset by an offset value. The DC prediction value is a prediction value used in intra prediction based at least on an average of neighboring video units of the video unit. The processor further determines a value of the video unit based at least in part on the DC prediction value.
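The two DC-prediction paths described above can be sketched as follows. This is an illustrative sketch only; the function and parameter names are hypothetical, as the abstract specifies no API:

```python
def dc_prediction(neighbors, is_difference_layer, offset=0):
    """Return a DC prediction value for a video unit.

    For a difference video layer, neighboring pixel information is
    deliberately not consulted: the DC prediction value is zero,
    optionally shifted by an offset value. Otherwise, conventional
    intra DC prediction averages the neighboring samples.
    """
    if is_difference_layer:
        # Refrain from using neighboring pixel information.
        return offset
    # Conventional intra DC prediction: average of neighboring samples.
    return sum(neighbors) // len(neighbors)
```

The value of the video unit would then be determined based at least in part on this prediction value, e.g. by adding a coded residual to it.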
Abstract:
A video encoder generates a sequence of sample adaptive offset (SAO) syntax elements for a coding tree block. The SAO syntax elements include regular context-adaptive binary arithmetic coding (CABAC) coded bins for a color component and bypass-coded bins for the color component. None of the bypass-coded bins is between two of the regular CABAC-coded bins in the sequence. The video encoder uses regular CABAC to encode the regular CABAC-coded bins and uses bypass coding to encode the bypass-coded bins. The video encoder outputs the SAO syntax elements in a bitstream. A video decoder receives the bitstream, uses regular CABAC to decode the regular CABAC-coded bins, uses bypass coding to decode the bypass-coded bins, and modifies a reconstructed picture based on the SAO syntax elements.
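The ordering property above — no bypass-coded bin between two regular CABAC-coded bins — can be illustrated with a stable partition of the bins for one color component. The representation of a bin as a `(value, is_bypass)` pair is an assumption for illustration, not the actual bitstream format:

```python
def group_sao_bins(bins):
    """Order SAO syntax-element bins so that no bypass-coded bin falls
    between two regular CABAC-coded bins: all regular bins first, then
    all bypass bins, preserving relative order within each group."""
    regular = [b for b in bins if not b[1]]
    bypass = [b for b in bins if b[1]]
    return regular + bypass

def no_bypass_between_regular(bins):
    """Check the ordering property stated in the abstract."""
    seen_bypass = False
    for _, is_bypass in bins:
        if is_bypass:
            seen_bypass = True
        elif seen_bypass:
            # A regular bin after a bypass bin violates the property.
            return False
    return True
```

Grouping the bypass-coded bins together means the arithmetic coder switches between regular and bypass mode at most once per component, rather than toggling bin by bin.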
Abstract:
Techniques are described for encoding and decoding digital video data using macroblocks that are larger than the macroblocks prescribed by conventional video encoding and decoding standards. For example, the techniques include encoding and decoding a video stream using macroblocks comprising greater than 16×16 pixels. In one example, an apparatus includes a video encoder configured to encode a coded unit comprising a plurality of video blocks, wherein at least one of the plurality of video blocks comprises a size of more than 16×16 pixels, and to generate syntax information for the coded unit that includes a maximum size value, wherein the maximum size value indicates a size of a largest one of the plurality of video blocks in the coded unit. The syntax information may also include a minimum size value. In this manner, the encoder may indicate to a decoder the proper syntax to apply to the coded unit.
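The size-related syntax described above can be sketched as a small helper that derives the maximum and minimum size values from a coded unit's blocks. The function name and the dict-based output are hypothetical, chosen only for illustration:

```python
def coded_unit_size_syntax(block_sizes):
    """Generate maximum/minimum size syntax values for a coded unit.

    `block_sizes` is a list of (width, height) pairs; sizes larger than
    the conventional 16x16 macroblock (e.g. 32x32 or 64x64) are
    permitted. The returned values tell a decoder which block sizes,
    and hence which syntax, to expect in the coded unit.
    """
    dims = [max(w, h) for w, h in block_sizes]
    return {"max_size": max(dims), "min_size": min(dims)}
```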
Abstract:
The techniques of this disclosure are directed toward the use of modified quantization parameter (QP) values to calculate quantized and dequantized transform coefficients of a video block with uniform QP granularity. Conventionally, when a quantization matrix is used during quantization and dequantization of transform coefficients, the quantization matrix entries act as scale factors of a quantizer step size corresponding to a base QP value, which results in non-uniform QP granularity. To provide uniform QP granularity across all quantization matrix entries, the techniques include calculating modified QP values for transform coefficients based on associated quantization matrix entries used as offsets to a base QP value. At a video decoder, the techniques include calculating dequantized transform coefficients from quantized transform coefficients based on the modified QP values. At a video encoder, the techniques include calculating quantized transform coefficients from transform coefficients based on the modified QP values.
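The offset-based scheme above can be sketched as follows. The step-size formula (step roughly doubling every 6 QP values, with a 0.625 base constant) follows the conventional H.264/HEVC-style design and is an assumption here, not taken from the abstract:

```python
def qstep(qp):
    """Quantizer step size for a given QP; in H.264/HEVC-style designs
    the step size roughly doubles for every increase of 6 in QP."""
    return 0.625 * 2 ** (qp / 6.0)

def quantize(coeff, base_qp, matrix_offset):
    """Encoder side: quantization matrix entry used as an offset to the
    base QP (uniform QP granularity), not as a scale factor."""
    modified_qp = base_qp + matrix_offset
    return round(coeff / qstep(modified_qp))

def dequantize(level, base_qp, matrix_offset):
    """Decoder side: reconstruct the coefficient from the quantized
    level using the same modified QP value."""
    modified_qp = base_qp + matrix_offset
    return level * qstep(modified_qp)
```

Because each matrix entry shifts the QP rather than scaling the step directly, every entry moves the effective step size along the same logarithmic QP scale, which is what yields the uniform granularity.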
Abstract:
Techniques are described for a video coder (e.g., video encoder or video decoder) that is configured to select a context pattern from a plurality of context patterns that are the same for a plurality of scan types. Techniques are also described for a video coder that is configured to select a context pattern that is stored as a one-dimensional context pattern and identifies contexts for two or more scan types.
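One way to picture a single one-dimensional context pattern serving multiple scan types is the sketch below: the context for a coefficient depends only on its scan position, so the same stored 1-D pattern applies whichever scan order is in use. The scan orders and pattern values are hypothetical, for illustration only:

```python
def contexts_in_raster_order(scan_order, pattern_1d):
    """Expand a 1-D context pattern over a 4x4 sub-block.

    `scan_order` lists raster positions in scan order; the coefficient
    at scan position p takes context `pattern_1d[p]`. A single stored
    1-D pattern therefore identifies contexts for any scan type.
    """
    ctx = [0] * 16
    for scan_pos, raster_pos in enumerate(scan_order):
        ctx[raster_pos] = pattern_1d[scan_pos]
    return ctx
```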
Abstract:
Techniques are described herein for processing video data using enhanced interpolation filters for intra-prediction. For instance, a device can determine an intra-prediction mode for predicting a block of video data. The device can determine a type of smoothing filter to use for the block of video data, wherein the type of the smoothing filter is determined based at least in part on comparing at least one of a width of the block of video data and a height of the block of video data to a first threshold. The device can further perform intra-prediction for the block of video data using the determined type of smoothing filter and the intra-prediction mode.
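The threshold comparison described above can be sketched as a simple selector. The filter-type names, the threshold value, and the direction of the comparison are illustrative assumptions, not taken from the abstract:

```python
def select_smoothing_filter(width, height, threshold=8):
    """Choose a smoothing-filter type for intra prediction of a block,
    based at least in part on comparing the block's width and height
    against a threshold."""
    if width <= threshold or height <= threshold:
        return "weak_smoothing"
    return "strong_smoothing"
```

Intra prediction for the block would then be performed using the selected filter type together with the determined intra-prediction mode.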
Abstract:
Systems and methods of filtering video data using a plurality of filters are disclosed. In an embodiment, a method includes receiving and decoding a plurality of filters embedded in a video data bitstream at a video decoder. The method includes selecting, based on information included in the video data bitstream, a particular filter of the plurality of filters. The method further includes applying the particular filter to at least a portion of decoded video data of the video data bitstream to produce filtered decoded video data.
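The select-and-apply step can be sketched as below, with a 1-D convolution standing in for the real filtering; the function name, the tap-list representation of a decoded filter, and the edge-clamping behavior are assumptions for illustration:

```python
def apply_bitstream_filter(filters, selector, samples):
    """Pick one of the filters carried in the bitstream and apply it.

    `filters` is the decoded list of filter kernels (tap lists),
    `selector` is the index signaled in the bitstream, and `samples`
    is a row of decoded pixel values.
    """
    taps = filters[selector]
    half = len(taps) // 2
    out = []
    for i in range(len(samples)):
        acc = 0
        for j, t in enumerate(taps):
            # Clamp sample positions at the row edges.
            k = min(max(i + j - half, 0), len(samples) - 1)
            acc += t * samples[k]
        out.append(acc)
    return out
```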
Abstract:
The present disclosure relates to methods and devices for data or graphics processing including an apparatus, e.g., a GPU. The apparatus may receive at least one bitstream including a plurality of bits, each of the bits corresponding to a position in the at least one bitstream, and each of the bits being associated with color data. The apparatus may also arrange an order of the plurality of bits in the at least one bitstream, such that at least one of the bits corresponds to an updated position in the at least one bitstream. Additionally, the apparatus may convert, upon arranging the order of the bits, the color data associated with each of the plurality of bits in the at least one bitstream. The apparatus may also compress, upon converting the color data associated with each of the bits, the plurality of bits in the at least one bitstream.
Abstract:
Systems and techniques are described herein for processing video data. For example, a process can include obtaining a video bitstream, the video bitstream including adaptive loop filter (ALF) data. The process can further include determining a value of an ALF chroma filter signal flag from the ALF data, the value of the ALF chroma filter signal flag indicating whether chroma ALF filter data is signaled in the video bitstream. The process can further include processing at least a portion of a slice of video data based on the value of the ALF chroma filter signal flag.
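The flag-gated parsing described above can be sketched as follows. The dict stands in for parsed bitstream fields, and the field names are hypothetical (chosen to echo the flag named in the abstract):

```python
def parse_alf_chroma(alf_data):
    """Read the ALF chroma-filter signal flag and act on it.

    When the flag is set, chroma ALF filter data is signaled in the
    bitstream and is returned; otherwise no chroma filter data is read.
    """
    if alf_data["alf_chroma_filter_signal_flag"]:
        return alf_data["chroma_filter_coeffs"]
    return None
```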
Abstract:
Provided are systems and methods for processing 360-degree video data. In various implementations, a spherical representation of a 360-degree video frame can be segmented into a top region, a bottom region, and a middle region. The middle region can be mapped into one or more rectangular areas of an output video frame. The top region can be mapped into a first rectangular area of the output video frame using a mapping that converts a square to a circle, such that pixels in the circular top region are expanded to fill the first rectangular region. The bottom region can be mapped into a second rectangular area of the output video frame such that pixels in the circular bottom region are expanded to fill the second rectangular region.
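One well-known square-to-circle conversion that could realize the mapping described above is the elliptical grid mapping sketched below; the abstract does not prescribe a specific formula, so this particular mapping is an illustrative assumption. Sampling the circular top or bottom region through it expands the region's pixels to fill the rectangular output area:

```python
import math

def square_to_circle(x, y):
    """Map a point of the unit square ([-1, 1]^2) into the unit disc.

    Corners of the square land on the circle's boundary, so every
    pixel of the rectangular output area corresponds to a point of
    the circular region being expanded.
    """
    u = x * math.sqrt(1.0 - (y * y) / 2.0)
    v = y * math.sqrt(1.0 - (x * x) / 2.0)
    return u, v
```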