Abstract:
Context adaptive binary arithmetic coding (CABAC) techniques are generally described. Aspects of the techniques are generally directed to inheritance-based context initialization. An example video coding device includes a memory configured to store video data, and one or more processors. The processor(s) are configured to initialize context information for a current slice of a current picture by inheriting context information of a previously-coded block of a previously-coded picture of the stored video data as initialized context information for the current slice of the current picture. The processor(s) are further configured to code data of the current slice using the initialized context information.
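The inheritance-based initialization described above can be sketched as follows. This is a minimal illustration only: the dict-based context model, the function name, and the context name used in the example are assumptions, not taken from any codec specification.

```python
def initialize_contexts(previous_block_ctx=None, default_states=None):
    """Initialize CABAC context information for the current slice.

    If context information from a previously-coded block of a
    previously-coded picture is available, inherit it wholesale;
    otherwise fall back to default (table-driven) initialization.
    """
    if previous_block_ctx is not None:
        # Inherit: copy the stored probability states as the
        # initialized context information for the current slice.
        return dict(previous_block_ctx)
    return dict(default_states or {})


# Usage: inherit states from a previously coded picture when present.
inherited = initialize_contexts({"split_cu_flag": 42})
```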
Abstract:
An example method of entropy coding video data includes determining a window size of a plurality of window sizes for a context of a plurality of contexts used in a context-adaptive coding process to entropy code a value for a syntax element of the video data; entropy coding, based on a probability state of the context, a bin of the value for the syntax element; and updating the probability state of the context based on the window size and the coded bin. The example method also includes entropy coding a next bin with the same context based on the updated probability state of the context.
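The window-based update above can be sketched with the standard exponential-decay recursion `p += (target - p) >> log2(window)`: a larger window adapts the probability estimate more slowly, a smaller window faster. The 15-bit probability precision and all names are assumptions for illustration, not values from the text.

```python
PROB_BITS = 15  # probability-state precision (illustrative)

def update_probability(prob, bin_val, log2_window):
    """Update a context's probability estimate after coding one bin.

    `log2_window` selects the adaptation window: larger values mean a
    wider window and slower adaptation toward the just-coded bin.
    """
    target = bin_val << PROB_BITS   # 0 for a 0-bin, 2^15 for a 1-bin
    return prob + ((target - prob) >> log2_window)
```

For example, starting from the mid-point state 16384 (probability 0.5), a coded 1-bin with a window of 2^4 moves the state up by 1024, while a window of 2^6 moves it up by only 256.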
Abstract:
An example method of entropy coding video data includes obtaining a pre-defined initialization value for a context of a plurality of contexts used in a context-adaptive entropy coding process to entropy code a value for a syntax element in a slice of the video data, wherein the pre-defined initialization value is stored with N-bit precision; determining, using a look-up table and based on the pre-defined initialization value, an initial probability state of the context for the slice of the video data, wherein a number of possible probability states for the context is greater than two raised to the power of N; and entropy coding, based on the initial probability state of the context, a bin of the value for the syntax element.
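A sketch of the look-up-table initialization described above, assuming N = 8-bit stored initialization values mapped into a 15-bit probability-state space, so that the number of possible probability states (2^15) exceeds 2^N = 256. The linear mapping used to populate the table is purely illustrative.

```python
N = 8           # precision of the stored initialization values
PROB_BITS = 15  # probability-state precision; 2**PROB_BITS > 2**N states

# Hypothetical look-up table: map each N-bit init value into the larger
# probability-state space. The linear formula is illustrative only.
INIT_LUT = [((2 * v + 1) << PROB_BITS) >> (N + 1) for v in range(1 << N)]

def initial_probability_state(init_value):
    """Return the initial probability state for a context given its
    pre-defined N-bit initialization value."""
    return INIT_LUT[init_value]
```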
Abstract:
A prediction unit (PU) of a coding unit (CU) is split into two or more sub-PUs including a first sub-PU and a second sub-PU. A first motion vector of a first type is obtained for the first sub-PU and a second motion vector of the first type is obtained for the second sub-PU. A third motion vector of a second type is obtained for the first sub-PU and a fourth motion vector of the second type is obtained for the second sub-PU, such that the second type is different from the first type. A first portion of the CU corresponding to the first sub-PU is coded according to advanced residual prediction (ARP) using the first and third motion vectors. A second portion of the CU corresponding to the second sub-PU is coded according to ARP using the second and fourth motion vectors.
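The pairing described above, where each sub-PU carries one motion vector of each of two different types and each CU portion is coded with its own pair, can be sketched as below. The field names (and the reading of the two types as temporal and disparity motion vectors) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SubPU:
    """One sub-PU with one motion vector of each type (illustrative)."""
    first_type_mv: tuple   # e.g., a temporal motion vector
    second_type_mv: tuple  # e.g., a disparity motion vector

def code_cu_with_arp(sub_pus, code_portion):
    """Code each CU portion corresponding to a sub-PU using ARP with
    that sub-PU's own pair of motion vectors (one of each type)."""
    return [code_portion(s.first_type_mv, s.second_type_mv)
            for s in sub_pus]
```

A caller would supply `code_portion` as the actual ARP coding routine; here it is just a placeholder parameter.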
Abstract:
Techniques are described for encoding and decoding depth data for three-dimensional (3D) video data represented in a multiview plus depth format using depth coding modes that are different than high-efficiency video coding (HEVC) coding modes. Examples of additional depth intra coding modes available in a 3D-HEVC process include at least two of a Depth Modeling Mode (DMM), a Simplified Depth Coding (SDC) mode, and a Chain Coding Mode (CCM). In addition, an example of an additional depth inter coding mode includes an Inter SDC mode. In one example, the techniques include signaling depth intra coding modes used to code depth data for 3D video data in a depth modeling table that is separate from the HEVC syntax. In another example, the techniques of this disclosure include unifying signaling of residual information of depth data for 3D video data across two or more of the depth coding modes.
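The additional depth intra coding modes listed above, signaled in a depth modeling table separate from the HEVC syntax, can be sketched as a small mode enumeration plus a hypothetical table parse. The code-to-mode mapping is invented for illustration and is not the 3D-HEVC syntax.

```python
from enum import Enum, auto

class DepthIntraMode(Enum):
    """Additional depth intra modes named in the text (values illustrative)."""
    DMM = auto()  # Depth Modeling Mode
    SDC = auto()  # Simplified Depth Coding mode
    CCM = auto()  # Chain Coding Mode

def parse_depth_modeling_table(code):
    """Hypothetical parse of a depth modeling table entry signaled
    separately from the HEVC syntax: map a small code to a mode."""
    mapping = {0: DepthIntraMode.DMM,
               1: DepthIntraMode.SDC,
               2: DepthIntraMode.CCM}
    return mapping[code]
```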
Abstract:
A device performs a disparity vector derivation process to determine a disparity vector for a current block. As part of performing the disparity vector derivation process, when either a first or a second spatial neighboring block has a disparity motion vector or an implicit disparity vector, the device converts the disparity motion vector or the implicit disparity vector to the disparity vector for the current block. The number of neighboring blocks that are checked in the disparity vector derivation process is reduced, potentially resulting in decreased complexity and memory bandwidth requirements.
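The reduced derivation can be sketched as checking only the first and second spatial neighbors and converting the first disparity motion vector or implicit disparity vector found. The dict-based neighbor representation and key names are illustrative assumptions.

```python
def derive_disparity_vector(neighbors):
    """Derive the current block's disparity vector from a reduced set of
    spatial neighbors: only the first two are checked. Each neighbor is
    a dict that may hold a 'disparity_mv' or an 'implicit_dv'."""
    for nb in neighbors[:2]:  # reduced check: first and second only
        dv = nb.get("disparity_mv") or nb.get("implicit_dv")
        if dv is not None:
            # Convert the found vector to the block's disparity vector.
            return dv
    return None  # no disparity vector found among the checked neighbors
```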
Abstract:
A device for decoding video data is configured to determine, based on a chroma sampling format for the video data, that adaptive color transform is enabled for one or more blocks of the video data; determine a quantization parameter for the one or more blocks based on determining that the adaptive color transform is enabled; and dequantize transform coefficients based on the determined quantization parameter. A device for decoding video data is configured to determine for one or more blocks of the video data that adaptive color transform is enabled; receive, in a picture parameter set, one or more offset values in response to adaptive color transform being enabled; determine a quantization parameter for a first color component of a first color space based on a first of the one or more offset values; and dequantize transform coefficients based on the quantization parameter.
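The per-component QP derivation can be sketched as applying an offset received in the picture parameter set only when adaptive color transform is enabled. The component names and offset values in the example are illustrative; the abstract itself does not specify them.

```python
def derive_act_qp(base_qp, act_enabled, pps_act_offsets, component):
    """Derive the quantization parameter for one color component.

    When adaptive color transform (ACT) is enabled, apply the
    per-component offset signaled in the picture parameter set;
    otherwise leave the base QP unchanged.
    """
    if act_enabled:
        return base_qp + pps_act_offsets[component]
    return base_qp


# Illustrative PPS offsets, one per color component of the first space.
pps_offsets = {"c0": -5, "c1": -5, "c2": -3}
```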
Abstract:
A device for decoding video data is configured to determine for one or more blocks of the video data that adaptive color transform is enabled; determine a quantization parameter for the one or more blocks; in response to a value of the quantization parameter being below a threshold, modify the quantization parameter to determine a modified quantization parameter; and dequantize transform coefficients based on the modified quantization parameter.
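The low-QP handling above can be sketched as a threshold check followed by a modification before dequantization. Clamping to the threshold is an illustrative stand-in for whatever modification the decoder applies; the threshold value is also an assumption.

```python
QP_THRESHOLD = 0  # illustrative threshold for the ACT-adjusted QP

def modify_qp_for_act(qp):
    """If the quantization parameter falls below the threshold, modify
    it (here: clamp to the threshold) and use the modified QP to
    dequantize transform coefficients."""
    return QP_THRESHOLD if qp < QP_THRESHOLD else qp
```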
Abstract:
In an example, a process for coding video data includes coding, with a variable length code, a syntax element indicating depth modeling mode (DMM) information for coding a depth block of video data. The process also includes coding the depth block based on the DMM information.
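The abstract does not specify which variable length code carries the DMM information; a unary code is one simple possibility, shown below as an illustrative sketch (shorter codewords for more frequent symbols).

```python
def unary_encode(value):
    """Encode a small non-negative symbol with a unary variable length
    code: `value` ones followed by a terminating zero (illustrative)."""
    return "1" * value + "0"

def unary_decode(bits):
    """Decode one unary codeword from the front of a bit string;
    return (symbol, remaining bits)."""
    value = bits.index("0")
    return value, bits[value + 1:]
```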
Abstract:
A video encoder generates, based on a reference picture set of a current view component, a reference picture list for the current view component. The reference picture set includes an inter-view reference picture set. The video encoder encodes the current view component based at least in part on one or more reference pictures in the reference picture list. In addition, the video encoder generates a bitstream that includes syntax elements indicating the reference picture set of the current view component. A video decoder parses, from the bitstream, syntax elements indicating the reference picture set of the current view component. The video decoder generates, based on the reference picture set, the reference picture list for the current view component. In addition, the video decoder decodes at least a portion of the current view component based on one or more reference pictures in the reference picture list.
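List construction from a reference picture set that includes an inter-view subset can be sketched as below. The subset names and the ordering (temporal subsets first, then inter-view references) are illustrative assumptions, not the normative construction order.

```python
def build_reference_picture_list(rps, list_size):
    """Build a reference picture list for the current view component
    from a reference picture set whose subsets include an inter-view
    reference picture set. Pictures are identified by plain ints here."""
    candidates = rps["temporal_before"] + rps["temporal_after"] + rps["inter_view"]
    return candidates[:list_size]  # truncate to the signaled list size


# Usage: both encoder and decoder derive the same list from the RPS
# signaled in the bitstream.
rps = {"temporal_before": [3, 2], "temporal_after": [5], "inter_view": [100]}
ref_list = build_reference_picture_list(rps, 4)
```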