Abstract:
A video coder searches a set of neighbor blocks to generate a plurality of disparity vector candidates. Each of the neighbor blocks is a spatial or temporal neighbor of a current block. The video coder determines, based at least in part on the plurality of disparity vector candidates, a final disparity vector for the current block.
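To make the flow concrete, here is a minimal C++ sketch of gathering candidates from neighbors and picking a final vector. The NeighborBlock type, the scan order, and the first-available selection rule are illustrative assumptions, not the coder's actual method.

```cpp
#include <vector>

struct DisparityVector { int x; int y; };

struct NeighborBlock {
    bool hasDisparityVector;  // true if this neighbor was coded with
                              // disparity-compensated prediction
    DisparityVector dv;
};

// Scan the neighbors in a fixed priority order and collect every usable
// disparity vector as a candidate.
std::vector<DisparityVector> gatherCandidates(const std::vector<NeighborBlock>& neighbors) {
    std::vector<DisparityVector> candidates;
    for (const NeighborBlock& nb : neighbors)
        if (nb.hasDisparityVector)
            candidates.push_back(nb.dv);
    return candidates;
}

// One possible selection rule: take the first candidate, falling back to the
// zero vector when no neighbor supplies one.
DisparityVector selectFinalDisparityVector(const std::vector<DisparityVector>& candidates) {
    return candidates.empty() ? DisparityVector{0, 0} : candidates.front();
}
```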
Abstract:
Techniques are described for encoding and decoding depth data for three-dimensional (3D) video data represented in a multiview plus depth format using depth coding modes that are different from High Efficiency Video Coding (HEVC) coding modes. Examples of additional depth intra coding modes available in a 3D-HEVC process include at least two of a Depth Modeling Mode (DMM), a Simplified Depth Coding (SDC) mode, and a Chain Coding Mode (CCM). In addition, an example of an additional depth inter coding mode includes an Inter SDC mode. In one example, the techniques include signaling depth intra coding modes used to code depth data for 3D video data in a depth modeling table that is separate from the HEVC syntax. In another example, the techniques of this disclosure include unifying signaling of residual information of depth data for 3D video data across two or more of the depth coding modes.
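As a rough illustration of mode signaling via a separate table, the sketch below parses one hypothetical depth-modeling-table entry. The BitReader, the flag names, and their order are assumptions for illustration and do not reproduce the actual 3D-HEVC syntax.

```cpp
#include <cstddef>
#include <vector>

enum class DepthIntraMode { HEVC_INTRA, DMM, SDC, CCM };

// Toy flag reader over a pre-parsed bit buffer; a real decoder would read
// from the entropy-coded bitstream.
struct BitReader {
    std::vector<bool> bits;
    std::size_t pos = 0;
    bool readFlag() { return bits.at(pos++); }
};

// Parse one depth-modeling-table entry: one flag selects between regular
// HEVC intra and the extra depth modes; further flags pick the specific mode.
DepthIntraMode parseDepthModelingTableEntry(BitReader& br) {
    if (!br.readFlag())          // assumed: "extra depth mode present" flag
        return DepthIntraMode::HEVC_INTRA;
    if (br.readFlag())           // assumed: DMM flag
        return DepthIntraMode::DMM;
    if (br.readFlag())           // assumed: SDC flag
        return DepthIntraMode::SDC;
    return DepthIntraMode::CCM;  // remaining case
}
```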
Abstract:
In one example, a device includes a video coder configured to: determine a first co-located reference picture for generating a first temporal motion vector predictor candidate for predicting a motion vector of a current block; determine a second co-located reference picture for generating a second temporal motion vector predictor candidate for predicting the motion vector of the current block; determine a motion vector predictor candidate list that includes at least one of the first temporal motion vector predictor candidate and the second temporal motion vector predictor candidate; select a motion vector predictor from the motion vector predictor candidate list; and code the motion vector of the current block relative to the selected motion vector predictor.
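A compact sketch of the two-candidate scheme, under assumed types: the real TMVP derivation (co-located PU lookup and POC-based scaling) is reduced to a stub, and duplicate-candidate pruning is omitted.

```cpp
#include <cstddef>
#include <vector>

struct MotionVector { int x; int y; };

// Stand-in for fetching the temporal motion vector predictor from a given
// co-located reference picture; the actual derivation is omitted.
MotionVector deriveTemporalCandidate(int /*colocatedPicIdx*/) {
    return MotionVector{0, 0};
}

struct MvPredictor {
    std::vector<MotionVector> candidateList;

    // Build the list from two co-located reference pictures, as the abstract
    // describes.
    void buildCandidateList(int firstColPic, int secondColPic) {
        candidateList.push_back(deriveTemporalCandidate(firstColPic));
        candidateList.push_back(deriveTemporalCandidate(secondColPic));
    }

    // Encoder side: the motion vector is coded relative to the selected
    // predictor, i.e., as a motion vector difference.
    MotionVector computeMvd(const MotionVector& mv, std::size_t predictorIdx) const {
        const MotionVector& p = candidateList.at(predictorIdx);
        return MotionVector{mv.x - p.x, mv.y - p.y};
    }
};
```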
Abstract:
When coding multiview video data, a video coder can code one or more pictures in one or more reference views, including a first reference view, and determine a disparity vector for a current block based on motion information of one or more neighboring blocks of the current block. The current block is in a second view, and the disparity vector points from the current block to a corresponding block in a picture of the same time instance in one of the one or more reference views.
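One way to picture the neighbor-based derivation is the sketch below, which scans assumed spatial and temporal neighbor lists and adopts the first disparity motion vector it finds; the structures and scan order are illustrative, not the disclosed method.

```cpp
#include <optional>
#include <vector>

struct MotionVector { int x; int y; };

struct NeighborMotion {
    bool usesInterViewReference;  // neighbor is predicted from another view,
                                  // so its motion vector is a disparity vector
    MotionVector mv;
};

// Scan spatial neighbors first, then temporal neighbors, and adopt the first
// disparity motion vector found; the caller substitutes a default (e.g., the
// zero vector) when the scan fails.
std::optional<MotionVector> deriveDisparityVector(
        const std::vector<NeighborMotion>& spatial,
        const std::vector<NeighborMotion>& temporal) {
    for (const auto* group : {&spatial, &temporal})
        for (const NeighborMotion& n : *group)
            if (n.usesInterViewReference)
                return n.mv;
    return std::nullopt;
}
```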
Abstract:
A video encoder generates, based on a reference picture set of a current view component, a reference picture list for the current view component. The reference picture set includes an inter-view reference picture set. The video encoder encodes the current view component based at least in part on one or more reference pictures in the reference picture list. In addition, the video encoder generates a bitstream that includes syntax elements indicating the reference picture set of the current view component. A video decoder parses, from the bitstream, syntax elements indicating the reference picture set of the current view component. The video decoder generates, based on the reference picture set, the reference picture list for the current view component. In addition, the video decoder decodes at least a portion of the current view component based on one or more reference pictures in the reference picture list.
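A minimal sketch, assuming simple picture identifiers, of building a reference picture list from a reference picture set that contains an inter-view subset; placing temporal references ahead of inter-view references is an illustrative choice, not a rule stated above.

```cpp
#include <vector>

struct PictureId { int poc; int viewId; };

struct ReferencePictureSet {
    std::vector<PictureId> temporalRefs;   // same-view reference pictures
    std::vector<PictureId> interViewRefs;  // inter-view reference picture set
};

// Concatenate the subsets into one reference picture list for the current
// view component; list modification/reordering syntax is omitted.
std::vector<PictureId> buildReferencePictureList(const ReferencePictureSet& rps) {
    std::vector<PictureId> list(rps.temporalRefs);
    list.insert(list.end(), rps.interViewRefs.begin(), rps.interViewRefs.end());
    return list;
}
```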
Abstract:
A first reference index value indicates a position, within a reference picture list associated with a current prediction unit (PU) of a current picture, of a first reference picture. A reference index of a co-located PU of a co-located picture indicates a position, within a reference picture list associated with the co-located PU of the co-located picture, of a second reference picture. When the first reference picture and the second reference picture belong to different reference picture types, a video coder sets a reference index of a temporal merging candidate to a second reference index value. The second reference index value is different from the first reference index value.
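The index rule can be sketched as below; findIndexOfType and the two-type enumeration are hypothetical simplifications (real reference picture types also distinguish, e.g., inter-view pictures).

```cpp
#include <vector>

enum class RefPicType { SHORT_TERM, LONG_TERM };

// Hypothetical helper: position of the first picture of the wanted type in
// the reference picture list, or 0 if none is found.
int findIndexOfType(const std::vector<RefPicType>& refList, RefPicType wanted) {
    for (int i = 0; i < static_cast<int>(refList.size()); ++i)
        if (refList[i] == wanted) return i;
    return 0;
}

// Keep the first index when the types match; otherwise substitute a second,
// different index whose picture type matches the co-located reference.
int temporalMergeCandidateRefIdx(const std::vector<RefPicType>& refList,
                                 int firstRefIdx, RefPicType colocatedRefType) {
    if (refList[firstRefIdx] == colocatedRefType)
        return firstRefIdx;
    return findIndexOfType(refList, colocatedRefType);
}
```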
Abstract:
Techniques are described for palette-based video coding. In palette-based coding, a video coder may form a “palette” as a table of colors for representing video data of a particular area (e.g., a given block). Rather than coding actual pixel values (or their residuals), the video coder may code palette index values for one or more of the pixels that correspond to entries in the palette representing the colors of the pixels. A palette may be explicitly encoded, predicted from previous palette entries, or a combination thereof. In this disclosure, techniques are described for coding a block of video data that has a single color value using a single color mode as a sub-mode of a palette coding mode. The disclosed techniques enable a block having a single color value to be coded with a reduced number of bits compared to a normal mode of the palette coding mode.
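To show the bit saving, here is a hedged sketch of a single color sub-mode: one flag plus one palette index replaces the per-pixel index map of the normal palette mode. The Writer interface and the flag semantics are assumptions.

```cpp
#include <cstdint>
#include <vector>

// Toy symbol sink; a real coder would entropy-code these symbols.
struct Writer {
    std::vector<uint32_t> symbols;
    void writeFlag(bool f)      { symbols.push_back(f ? 1u : 0u); }
    void writeIndex(uint32_t v) { symbols.push_back(v); }
};

// True when every pixel of the block maps to the same palette entry.
bool isSingleColor(const std::vector<uint32_t>& indices) {
    if (indices.empty()) return false;
    for (uint32_t idx : indices)
        if (idx != indices.front()) return false;
    return true;
}

void encodePaletteIndices(Writer& w, const std::vector<uint32_t>& indices) {
    if (isSingleColor(indices)) {
        w.writeFlag(true);              // assumed single-color-mode flag
        w.writeIndex(indices.front());  // one index covers the whole block
        return;
    }
    w.writeFlag(false);                 // normal palette mode
    for (uint32_t idx : indices)        // per-pixel index map
        w.writeIndex(idx);
}
```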
Abstract:
In general, this disclosure describes techniques for coding video blocks using a color-space conversion process. A video coder, such as a video encoder or a video decoder, may determine a coding mode used to encode the video data. The coding mode may be one of a lossy coding mode or a lossless coding mode. The video coder may determine a color-space transform process depending on the coding mode used to encode the video data. The video coder may apply the color-space transform process in encoding the video data. In decoding the video data, independent of whether the coding mode is the lossy coding mode or the lossless coding mode, the video coder may apply the same inverse color-space transform process in a decoding loop of the encoding process.
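As one concrete possibility, the sketch below uses the reversible YCoCg-R transform, whose lifting steps are exactly invertible in integer arithmetic and therefore suit lossless coding; assuming YCoCg-R here is illustrative, since the disclosure above does not name a specific transform.

```cpp
struct Rgb   { int r, g, b; };
struct YCoCg { int y, co, cg; };

// Reversible YCoCg-R forward transform (lifting steps): every step is an
// integer add/subtract/shift, so the inverse recovers RGB exactly.
YCoCg forwardYCoCgR(const Rgb& p) {
    int co = p.r - p.b;
    int t  = p.b + (co >> 1);
    int cg = p.g - t;
    int y  = t + (cg >> 1);
    return {y, co, cg};
}

// The matching inverse transform, as applied in the decoding loop.
Rgb inverseYCoCgR(const YCoCg& c) {
    int t = c.y - (c.cg >> 1);
    int g = c.cg + t;
    int b = t - (c.co >> 1);
    int r = b + c.co;
    return {r, g, b};
}
```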
Abstract:
A device for coding video data, the device comprising a memory configured to store video data and a video coder comprising one or more processors configured to: determine that a coding unit of a picture of the video data is coded using an intra block copy mode; determine a vector for a first chroma block of the coding unit; locate a first chroma reference block using the vector, wherein the first chroma reference block is in the picture; predict the first chroma block based on the first chroma reference block; locate a second chroma reference block using the vector, wherein the second chroma reference block is in the picture; and predict a second chroma block of the coding unit based on the second chroma reference block.
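A minimal sketch with hypothetical types of the behavior described: the single vector determined for the coding unit locates, inside the current picture, the reference block for each of the two chroma components.

```cpp
struct Vector { int x; int y; };
struct Block  { int x0; int y0; int width; int height; };

// Both chroma blocks reuse the same vector (scaled for chroma subsampling by
// the caller if needed), so locating either reference block is the same
// offset computation within the current picture.
Block locateChromaReference(const Block& chromaBlock, const Vector& v) {
    return Block{chromaBlock.x0 + v.x, chromaBlock.y0 + v.y,
                 chromaBlock.width, chromaBlock.height};
}
```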
Abstract:
Techniques are described for sub-prediction unit (PU) based motion prediction for video coding in HEVC and 3D-HEVC. In one example, the techniques include an advanced temporal motion vector prediction (TMVP) mode to predict sub-PUs of a PU in single layer coding for which motion vector refinement may be allowed. The advanced TMVP mode includes determining motion vectors for the PU in at least two stages to derive motion information for the PU that includes different motion vectors and reference indices for each of the sub-PUs of the PU. In another example, the techniques include storing separate motion information derived for each sub-PU of a current PU predicted using a sub-PU backward view synthesis prediction (BVSP) mode even after motion compensation is performed. The additional motion information stored for the current PU may be used to predict subsequent PUs for which the current PU is a neighboring block.
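The two-stage derivation can be sketched as follows, with both stages reduced to stubs: stage one yields a vector identifying the PU's corresponding area in a motion source picture, and stage two reads per-sub-PU motion at each shifted sub-PU position. All types and names are assumptions.

```cpp
#include <vector>

struct MotionVector { int x; int y; };
struct SubPuMotion  { MotionVector mv; int refIdx; };

// Stage-one stand-in: a temporal vector identifying the corresponding area
// of the PU in the motion source picture (derivation omitted).
MotionVector deriveStageOneVector() { return MotionVector{0, 0}; }

// Stage-two stand-in: the motion information stored in the motion source
// picture at a given sample position (lookup omitted).
SubPuMotion fetchMotionAt(int /*x*/, int /*y*/) { return SubPuMotion{{0, 0}, 0}; }

// Two-stage derivation: one vector for the whole PU, then per-sub-PU motion
// read at each sub-PU's shifted position, so every sub-PU can end up with
// its own motion vector and reference index.
std::vector<SubPuMotion> deriveSubPuMotion(int puX, int puY,
                                           int puW, int puH, int subSize) {
    const MotionVector tv = deriveStageOneVector();
    std::vector<SubPuMotion> motion;
    for (int y = 0; y < puH; y += subSize)
        for (int x = 0; x < puW; x += subSize)
            motion.push_back(fetchMotionAt(puX + x + tv.x, puY + y + tv.y));
    return motion;
}
```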