Abstract:
Techniques are described for determining whether a block in a candidate reference picture is available. A video coder may determine a location of a co-located largest coding unit (CLCU) in the candidate reference picture, where the CLCU is co-located with an LCU in a current picture, and the LCU includes a current block that is to be inter-predicted. The video coder may determine whether a block in the candidate reference picture is available based on a location of the block in the candidate reference picture relative to the location of the CLCU. If the block in the candidate reference picture is unavailable, the video coder may derive a disparity vector for the current block from a block other than the block determined to be unavailable.
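A minimal sketch of this availability check, assuming (purely for illustration) that the rule restricts candidate blocks to the CLCU and the LCU immediately to its right; the actual restriction in the described techniques may differ:

```python
def block_available(block_x, block_y, clcu_x, clcu_y, lcu_size):
    """Return True if the candidate block's top-left sample lies within the
    CLCU row span (the CLCU plus one LCU to its right) -- an assumed rule
    that limits how far outside the CLCU reference data must be fetched."""
    within_rows = clcu_y <= block_y < clcu_y + lcu_size
    within_cols = clcu_x <= block_x < clcu_x + 2 * lcu_size
    return within_rows and within_cols
```

If the check returns False, the coder would fall back to deriving the disparity vector from a different (available) block, as the abstract describes.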
Abstract:
During a process to derive an inter-view predicted motion vector candidate (IPMVC) for an Advanced Motion Vector Prediction (AMVP) candidate list, a video coder determines, based on a disparity vector of a current prediction unit (PU), a reference PU for the current PU. Furthermore, when a first reference picture of the reference PU has the same picture order count (POC) value as a target reference picture of the current PU, the video coder determines an IPMVC based on a first motion vector of the reference PU. Otherwise, when a second reference picture of the reference PU has the same POC value as the target reference picture of the current PU, the video coder determines the IPMVC based on a second motion vector of the reference PU.
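The POC-matching order described above can be sketched as follows; the dictionary layout of the reference PU and the tuple motion vectors are illustrative assumptions, not the described syntax:

```python
def derive_ipmvc(ref_pu, target_poc):
    """Check the reference PU's first motion vector, then its second,
    returning the first one whose reference picture POC matches the
    target reference picture POC; None means no IPMVC is available."""
    for mv, ref_poc in ((ref_pu["mv0"], ref_pu["poc0"]),
                        (ref_pu["mv1"], ref_pu["poc1"])):
        if ref_poc == target_poc:
            return mv
    return None
```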
Abstract:
A device performs a disparity vector derivation process to determine a disparity vector for a current block. As part of performing the disparity vector derivation process, when either a first or a second spatial neighboring block has a disparity motion vector or an implicit disparity vector, the device converts the disparity motion vector or the implicit disparity vector to the disparity vector for the current block. The number of neighboring blocks checked in the disparity vector derivation process is thereby reduced, potentially decreasing complexity and memory bandwidth requirements.
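A rough sketch of the reduced derivation, with assumed details: neighbors are dictionaries with optional `dmv` (disparity motion vector) and `idv` (implicit disparity vector) entries, the disparity motion vector is checked before the implicit one, and a zero vector is the fallback:

```python
def derive_disparity_vector(neighbors):
    """Check only the first two spatial neighboring blocks, converting
    the first disparity motion vector or implicit disparity vector found
    into the disparity vector for the current block."""
    for nb in neighbors[:2]:
        if nb.get("dmv") is not None:
            return nb["dmv"]
        if nb.get("idv") is not None:
            return nb["idv"]
    return (0, 0)  # assumed fallback when neither neighbor has a vector
```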
Abstract:
In an example, a method of coding multi-layer video data includes determining, for a first block of video data at a first temporal location, whether one or more reference picture lists for coding the first block contain at least one reference picture at a second, different temporal location. The method also includes coding the first block of video data relative to at least one reference block of video data of a reference picture in the one or more reference picture lists, where coding includes disabling an inter-view residual prediction process when the one or more reference picture lists do not include at least one reference picture at the second temporal location.
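The enabling condition above can be expressed as a simple check over the POC values in the reference picture lists (representing each list as a list of POC values is an assumption for illustration):

```python
def temporal_reference_available(ref_lists, current_poc):
    """Inter-view residual prediction is disabled unless at least one
    reference picture sits at a different temporal location, i.e. has a
    POC different from the current picture's POC."""
    return any(poc != current_poc for lst in ref_lists for poc in lst)
```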
Abstract:
Techniques are described in which an inter-view predicted motion vector candidate (IPMVC) and an inter-view disparity motion vector candidate (IDMVC) are derived based on a shifted disparity vector, where the amount by which the disparity vector is shifted differs for the IPMVC and the IDMVC. The techniques also prioritize the inclusion of the IPMVC over the IDMVC in a candidate list, and prune the IPMVC or the IDMVC if it duplicates a candidate already in the candidate list.
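A sketch of the insertion order and pruning; the shift amounts and the derivation callbacks are placeholders, since the abstract specifies only that the two shifts differ:

```python
def add_shifted_candidates(cand_list, dv, derive_ipmvc, derive_idmvc,
                           ipmvc_shift=16, idmvc_shift=4):
    """Derive the IPMVC and IDMVC from differently shifted disparity
    vectors, insert the IPMVC first (higher priority), and prune any
    candidate that duplicates one already in the list."""
    ipmvc = derive_ipmvc((dv[0] + ipmvc_shift, dv[1]))
    idmvc = derive_idmvc((dv[0] + idmvc_shift, dv[1]))
    for cand in (ipmvc, idmvc):
        if cand is not None and cand not in cand_list:
            cand_list.append(cand)
    return cand_list
```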
Abstract:
In an example, a method of coding video data includes determining a partition mode for coding a block of video data, where the partition mode indicates a division of the block of video data for predictive coding. The method also includes determining whether to code a weighting factor for an inter-view residual prediction process based on the partition mode, where, when the weighting factor is not coded, the inter-view residual prediction process is not applied to predict a residual for the block. The method also includes coding the block of video data with the determined partition mode.
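One way the partition-mode condition might look in code; the specific rule shown (signaling the weighting factor only for 2Nx2N partitions) is an assumption for illustration, as the abstract does not state which modes qualify:

```python
def code_weighting_factor(part_mode, bitstream):
    """Signal the inter-view residual prediction weighting factor only
    for the assumed qualifying partition mode; when it is not coded,
    inter-view residual prediction is not applied to the block."""
    if part_mode == "PART_2Nx2N":
        bitstream.append("weighting_factor")
        return True
    return False
```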
Abstract:
A method of coding video data includes deriving prediction weights for illumination compensation of luma samples of a video block partition once for the video block partition, such that the video block partition has a common set of prediction weights for performing illumination compensation of the luma samples regardless of a transform size for the video block partition. The method also includes calculating a predicted block for the video block partition using the prediction weights to perform illumination compensation, and coding the video block partition using the predicted block.
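A simplified sketch of the once-per-partition weight derivation; the offset-only model (scale fixed at 1, offset from the difference of neighboring-sample means) is an illustrative simplification, not the described derivation:

```python
def derive_ic_weights(cur_neighbors, ref_neighbors):
    """Assumed offset-only illumination model: a = 1,
    b = mean(current neighbors) - mean(reference neighbors)."""
    b = (sum(cur_neighbors) / len(cur_neighbors)
         - sum(ref_neighbors) / len(ref_neighbors))
    return 1.0, b

def predict_partition(ref_block, cur_neighbors, ref_neighbors):
    """Derive (a, b) once for the whole partition and apply them to every
    sample, so the weights do not depend on the transform size."""
    a, b = derive_ic_weights(cur_neighbors, ref_neighbors)
    return [[a * s + b for s in row] for row in ref_block]
```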
Abstract:
For a depth block in a depth view component, a video coder derives a motion information candidate that comprises motion information of a corresponding texture block in a decoded texture view component, adds the motion information candidate to a candidate list for use in a motion vector prediction operation, and codes the depth block based on a candidate in the candidate list.
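A minimal sketch of inserting the texture block's motion information into the depth block's candidate list; placing it at the front of the list and pruning duplicates are assumptions for illustration:

```python
def add_texture_candidate(cand_list, texture_mv):
    """Insert the corresponding texture block's motion information at the
    (assumed) front of the depth block's candidate list, skipping it if
    it is unavailable or already present."""
    if texture_mv is not None and texture_mv not in cand_list:
        cand_list.insert(0, texture_mv)
    return cand_list
```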
Abstract:
A video encoder generates a bitstream that includes a reference picture list modification (RPLM) command. The RPLM command belongs to a type of RPLM commands for inserting short-term reference pictures into reference picture lists. The RPLM command instructs a video decoder to insert a synthetic reference picture into a reference picture list. The video decoder decodes, based at least in part on syntax elements parsed from the bitstream, one or more view components and generates, based at least in part on the one or more view components, the synthetic reference picture. The video decoder modifies, in response to the RPLM command, a reference picture list to include the synthetic reference picture. The video decoder may then use one or more pictures in the reference picture list as reference pictures to perform inter prediction on one or more video blocks of a picture.
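The decoder-side list modification can be sketched as follows; representing each RPLM command as a `(type, position)` pair and the `"insert_synthetic"` tag are illustrative assumptions, not the described syntax:

```python
def apply_rplm(ref_list, commands, synthetic_pic):
    """Process the RPLM commands in order; a command of the assumed
    'insert_synthetic' type places the synthesized reference picture at
    the signaled position in the reference picture list."""
    for cmd_type, pos in commands:
        if cmd_type == "insert_synthetic":
            ref_list.insert(pos, synthetic_pic)
    return ref_list
```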
Abstract:
Techniques are described for using an inter-intra-prediction block. A video coder may generate a first prediction block according to an intra-prediction mode and generate a second prediction block according to an inter-prediction mode. The video coder may combine the two prediction blocks in a weighted manner, such as with weights based on the intra-prediction mode, to generate an inter-intra-prediction block (e.g., a final prediction block). In some examples, an inter-intra candidate is identified in a list of candidate motion vector predictors, and an inter-intra-prediction block is used based on identification of the inter-intra candidate in the list of candidate motion vector predictors.
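A minimal sketch of the weighted combination; using a single scalar weight for the whole block (rather than, say, per-sample weights derived from the intra-prediction mode) is a simplifying assumption:

```python
def inter_intra_predict(intra_block, inter_block, w_intra):
    """Blend the intra- and inter-prediction blocks sample by sample to
    form the final inter-intra-prediction block; w_intra would be chosen
    based on the intra-prediction mode."""
    w_inter = 1.0 - w_intra
    return [[w_intra * a + w_inter * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(intra_block, inter_block)]
```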