Abstract:
A system and method for encoding and decoding video data. A predicted residual signal of a target color component is determined as a function of one or more linear parameters of a linear model and of a residual signal of a source color component. A residual signal of the target color component is determined as a function of a remaining residual signal of the target color component and of the predicted residual signal of the target color component.
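For illustration only, a minimal Python sketch of this kind of linear-model residual prediction; the scaling parameter alpha and the function names are hypothetical and stand in for the linear parameters and derivation left unspecified in the abstract.

def predict_residual(source_residual, alpha):
    # Predict the target-component residual from the source-component
    # residual using a single linear-model scaling parameter (illustrative).
    return [alpha * r for r in source_residual]

def reconstruct_target_residual(remaining_residual, predicted_residual):
    # Target residual = remaining (coded) residual + predicted residual.
    return [rem + pred for rem, pred in zip(remaining_residual, predicted_residual)]

# Example: a luma (source) residual predicts a chroma (target) residual.
luma_residual = [4, -2, 0, 6]
alpha = 0.5                      # hypothetical linear-model parameter
remaining = [1, 0, -1, 2]        # residual actually signaled for the target component
predicted = predict_residual(luma_residual, alpha)
print(reconstruct_target_residual(remaining, predicted))   # [3.0, -1.0, -1.0, 5.0]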
Abstract:
In one example, a video coder, such as a video encoder or a video decoder, is configured to code a value for a layer identifier in a slice header for a current slice in a current layer of multi-layer video data, and, when the value for the layer identifier is not equal to zero, code a first set of syntax elements in accordance with a base video coding standard, and code a second set of one or more syntax elements in accordance with an extension to the base video coding standard. The second set of syntax elements may include a syntax element representative of a position for an identifier of an inter-layer reference picture of a reference layer in a reference picture list, and the video coder may construct the reference picture list such that the identifier of the inter-layer reference picture is located in the determined position.
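As a rough sketch under stated assumptions, the following Python shows the two behaviors described above: parsing extension syntax only for non-zero layer identifiers, and inserting an inter-layer reference picture at a signaled position in a reference picture list. The callables parse_base and parse_extension, and the picture labels, are hypothetical placeholders.

def parse_slice_header(layer_id, parse_base, parse_extension):
    # Base-standard syntax is always parsed; extension syntax only when
    # the layer identifier is not equal to zero (enhancement layer).
    syntax = parse_base()
    if layer_id != 0:
        syntax.update(parse_extension())
    return syntax

def build_reference_picture_list(temporal_ref_pics, inter_layer_ref_pic, signaled_position):
    # Place the inter-layer reference picture at the signaled position
    # within an otherwise temporal reference picture list (illustrative).
    ref_list = list(temporal_ref_pics)
    ref_list.insert(signaled_position, inter_layer_ref_pic)
    return ref_list

print(build_reference_picture_list(["T0", "T1", "T2"], "IL0", 1))
# ['T0', 'IL0', 'T1', 'T2']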
Abstract:
A video coder determines a candidate for inclusion in a candidate list for a current prediction unit (PU). The candidate is based on motion parameters of a plurality of sub-PUs of the current PU. If a reference block corresponding to a sub-PU is not coded using motion compensated prediction, the video coder sets the motion parameters of the sub-PU to default motion parameters. For each respective sub-PU of the plurality of sub-PUs, if a reference block for the respective sub-PU is not coded using motion compensated prediction, the motion parameters of the respective sub-PU are not set in response to a later determination that a reference block for a subsequent sub-PU in the order is coded using motion compensated prediction.
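A minimal Python sketch of the "no back-fill" behavior described above, assuming a fixed default motion vector and a simple per-sub-PU dictionary representation; both are hypothetical and only illustrate that defaults, once assigned, are not revisited.

DEFAULT_MV = (0, 0)   # hypothetical default motion parameters

def derive_sub_pu_motion(sub_pu_ref_blocks):
    # Visit sub-PUs in order: copy the reference block's motion parameters
    # if it is motion compensated, otherwise assign the defaults. A sub-PU
    # that received the defaults is never reset when a later sub-PU's
    # reference block turns out to be motion compensated.
    motion = []
    for ref in sub_pu_ref_blocks:
        if ref.get("is_motion_compensated"):
            motion.append(ref["mv"])
        else:
            motion.append(DEFAULT_MV)   # final; not revisited later
    return motion

blocks = [{"is_motion_compensated": False},
          {"is_motion_compensated": True, "mv": (3, -1)}]
print(derive_sub_pu_motion(blocks))   # [(0, 0), (3, -1)]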
Abstract:
A video coder determines a first picture order count (POC) value of a first reference picture associated with a first motion vector of a corresponding block that points in a first direction, and determines whether a first reference picture list for the current block includes a reference picture having the first POC value. In response to the first reference picture list not including a reference picture having the first POC value, the video coder determines a second POC value of a second reference picture associated with a second motion vector of the corresponding block that points in a second direction, and determines whether the first reference picture list includes a reference picture having the second POC value. In response to the first reference picture list including a reference picture having the second POC value, the video coder decodes the current motion vector using the second motion vector of the corresponding block.
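For illustration, a short Python sketch of the POC-based fallback described above; the dictionary keys (mv0, poc0, mv1, poc1) are hypothetical names for the corresponding block's two motion vectors and their reference POC values.

def select_mv_for_decoding(corresponding_block, ref_list0_pocs):
    # Prefer the first-direction motion vector if its reference POC is in
    # the current block's first reference picture list; otherwise fall back
    # to the second-direction motion vector when that POC is present.
    if corresponding_block["poc0"] in ref_list0_pocs:
        return corresponding_block["mv0"]
    if corresponding_block["poc1"] in ref_list0_pocs:
        return corresponding_block["mv1"]
    return None   # neither POC found; behavior in that case is not described above

corr = {"mv0": (2, 1), "poc0": 8, "mv1": (-1, 0), "poc1": 4}
print(select_mv_for_decoding(corr, ref_list0_pocs=[0, 4]))   # (-1, 0)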
Abstract:
Techniques for advanced residual prediction (ARP) for coding video data may include inter-view ARP. Inter-view ARP may include identifying a disparity motion vector (DMV) for a current video block. The DMV is used for inter-view prediction of the current video block based on an inter-view reference video block. The techniques for inter-view ARP may also include identifying temporal reference video blocks in the current and reference views based on a temporal motion vector (TMV) of the inter-view reference video block, and determining a residual predictor block based on a difference between the temporal reference video blocks.
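A minimal Python sketch of forming the residual predictor as a difference between the two temporal reference blocks, as described above; the weighting factor is an assumption standing in for an ARP weighting factor and the block values are arbitrary.

def residual_predictor(temporal_ref_current_view, temporal_ref_reference_view, weight=1.0):
    # Residual predictor = weighted sample-wise difference between the
    # temporal reference blocks of the current view and the reference view.
    return [weight * (c - r)
            for c, r in zip(temporal_ref_current_view, temporal_ref_reference_view)]

cur_view_block = [100, 102, 98, 101]
ref_view_block = [96, 100, 97, 100]
print(residual_predictor(cur_view_block, ref_view_block))   # [4.0, 2.0, 1.0, 1.0]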
Abstract:
In an example, a process for coding video data includes determining a partitioning pattern for a block of depth values comprising assigning one or more samples of the block to a first partition and assigning one or more other samples of the block to a second partition. The process also includes determining a predicted value for at least one of the first partition and the second partition based on the determined partitioning pattern. The process also includes coding the at least one of the first partition and the second partition based on the predicted value.
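A rough Python sketch of deriving one predicted value per partition from a partitioning pattern; averaging the samples assigned to each partition is an assumption of this sketch, since the abstract does not specify how the predicted value is derived.

def predict_partition_values(depth_block, pattern):
    # Average the depth samples assigned to each partition to obtain one
    # predicted value per partition (illustrative prediction rule).
    sums, counts = {0: 0, 1: 0}, {0: 0, 1: 0}
    for sample, part in zip(depth_block, pattern):
        sums[part] += sample
        counts[part] += 1
    return {p: sums[p] / counts[p] for p in (0, 1) if counts[p]}

block   = [10, 10, 50, 52]
pattern = [0,  0,  1,  1]     # 0 -> first partition, 1 -> second partition
print(predict_partition_values(block, pattern))   # {0: 10.0, 1: 51.0}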
Abstract:
A video coder is configured to apply a separable bilinear interpolation filter when determining reference blocks as part of advanced residual prediction. Particularly, the video coder may determine, based on a motion vector of a current block in a current picture of video data, a location of a first reference block in a first reference picture. The video coder may also determine a location of a second reference block in a second reference picture. The video coder may apply a separable bilinear interpolation filter to samples of the second reference picture to determine samples of the second reference block. The video coder may apply the separable bilinear interpolation filter to samples of a third reference picture to determine samples of a third reference block. Each respective sample of a predictive block may be equal to a respective sample of the first reference block plus a respective residual predictor sample.
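For illustration, a small Python sketch of a separable bilinear interpolation at a fractional sample position: a horizontal pass between the two nearest integer columns, then a vertical pass between the two intermediate results. The floating-point arithmetic and the picture values are illustrative only.

def bilinear_interpolate(picture, x, y):
    # Separable bilinear interpolation at fractional position (x, y):
    # horizontal filtering first, then vertical filtering of the two rows.
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top    = (1 - fx) * picture[y0][x0]     + fx * picture[y0][x0 + 1]
    bottom = (1 - fx) * picture[y0 + 1][x0] + fx * picture[y0 + 1][x0 + 1]
    return (1 - fy) * top + fy * bottom

pic = [[0, 4],
       [8, 12]]
print(bilinear_interpolate(pic, 0.5, 0.25))   # 4.0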
Abstract:
A device for decoding video data includes a memory configured to store video data and one or more processors configured to: receive a first block of the video data; determine a quantization parameter for the first block; in response to determining that the first block is coded using a color-space transform mode for residual data of the first block, modify the quantization parameter for the first block; perform a dequantization process for the first block based on the modified quantization parameter for the first block; receive a second block of the video data; receive a difference value indicating a difference between a quantization parameter for the second block and the quantization parameter for the first block; determine the quantization parameter for the second block based on the received difference value and the quantization parameter for the first block; and decode the second block based on the determined quantization parameter.
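A minimal Python sketch of the QP handling described above, under stated assumptions: the offset value is hypothetical, and it is an assumption of this sketch that the second block's QP is predicted from the first block's unmodified QP rather than from the modified one.

ACT_QP_OFFSET = -5   # hypothetical offset applied in colour-space transform mode

def derive_block_qps(qp_first, first_uses_color_transform, delta_qp_second):
    # Returns (QP used to dequantize the first block, QP of the second block).
    # The first block's QP is modified when it uses the colour-space
    # transform mode; the second block's QP is the signaled difference
    # added to the first block's QP.
    dequant_qp_first = qp_first + ACT_QP_OFFSET if first_uses_color_transform else qp_first
    qp_second = qp_first + delta_qp_second
    return dequant_qp_first, qp_second

print(derive_block_qps(qp_first=30, first_uses_color_transform=True, delta_qp_second=2))
# (25, 32)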
Abstract:
A video encoder generates, based on a reference picture set of a current view component, a reference picture list for the current view component. The reference picture set includes an inter-view reference picture set. The video encoder encodes the current view component based at least in part on one or more reference pictures in the reference picture list. In addition, the video encoder generates a bitstream that includes syntax elements indicating the reference picture set of the current view component. A video decoder parses, from the bitstream, syntax elements indicating the reference picture set of the current view component. The video decoder generates, based on the reference picture set, the reference picture list for the current view component. In addition, the video decoder decodes at least a portion of the current view component based on one or more reference pictures in the reference picture list.
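As a sketch only, the following Python forms a reference picture list from a reference picture set that includes an inter-view reference picture set; appending the inter-view pictures after the temporal pictures and truncating to the list size is an illustrative ordering, not the ordering or list modification process specified by any standard.

def build_reference_picture_list(temporal_rps, inter_view_rps, list_size):
    # Initial list: temporal reference picture set followed by the
    # inter-view reference picture set, truncated to the list size.
    initial = list(temporal_rps) + list(inter_view_rps)
    return initial[:list_size]

print(build_reference_picture_list(["T0", "T1"], ["V1"], list_size=3))
# ['T0', 'T1', 'V1']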
Abstract:
A video coder decodes a coding unit (CU) of video data. In decoding the video data, the video coder determines that the CU was encoded using color-space conversion. The video coder determines an initial quantization parameter (QP), determines a final QP equal to the sum of the initial QP and a QP offset, inverse quantizes a coefficient block based on the final QP, and then reconstructs the CU based on the inverse-quantized coefficient block.
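For illustration, a short Python sketch of applying the QP offset and inverse quantizing; the QP-to-step mapping (step size doubling every 6 QP) is an illustrative approximation, not the exact quantization formula of any standard.

def inverse_quantize(levels, initial_qp, qp_offset):
    # Final QP = initial QP + QP offset; each coefficient level is scaled
    # by an illustrative step size derived from the final QP.
    final_qp = initial_qp + qp_offset
    step = 2 ** (final_qp / 6.0)
    return [level * step for level in levels]

print(inverse_quantize([1, -2, 0], initial_qp=24, qp_offset=6))
# step = 2 ** 5 = 32 -> [32.0, -64.0, 0.0]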