Abstract:
A device for encoding video data includes a memory configured to store video data and a video encoder comprising one or more processors. For a current layer being encoded, the one or more processors are configured to determine that the current layer has no direct reference layers and, based on that determination, set at least one of a first syntax element, a second syntax element, a third syntax element, or a fourth syntax element to a disabling value indicating that the coding tool corresponding to that syntax element is disabled for the current layer.
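As an illustration only (the function and flag names below are hypothetical, not part of the described device), the disabling logic might be sketched as:

```python
def set_inter_layer_tool_flags(has_direct_reference_layers, flags):
    """If the current layer has no direct reference layers, set each
    coding-tool syntax element to a disabling value (assumed to be 0)."""
    DISABLING_VALUE = 0
    if not has_direct_reference_layers:
        # All inter-layer coding tools are disabled for this layer.
        return {name: DISABLING_VALUE for name in flags}
    return dict(flags)
```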
Abstract:
Techniques are described for determining a partition pattern for intra-prediction encoding or decoding a depth block from a partition pattern of one or more partition patterns associated with smaller-sized blocks. A video encoder may intra-prediction encode the depth block based on the determined partition pattern, and a video decoder may intra-prediction decode the depth block based on the determined partition pattern.
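One plausible mapping from a smaller-block pattern to a larger-block pattern is 2×2 replication of each entry; this is a sketch under that assumption, with a hypothetical function name:

```python
def upsample_partition_pattern(pattern):
    """Derive a partition pattern for a 2Nx2N depth block from an NxN
    pattern by replicating each entry 2x2 (one plausible mapping).
    Entries are 0/1 labels assigning pixels to the two partitions."""
    out = []
    for row in pattern:
        # Double each entry horizontally, then emit the row twice.
        expanded = [v for v in row for _ in (0, 1)]
        out.append(expanded)
        out.append(list(expanded))
    return out
```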
Abstract:
A video coder generates a list of merging candidates for coding a video block of 3D video. A maximum number of merging candidates in the list of merging candidates may be equal to 6. As part of generating the list of merging candidates, the video coder determines whether a number of merging candidates in the list of merging candidates is less than 5. If so, the video coder derives one or more combined bi-predictive merging candidates. The video coder includes the one or more combined bi-predictive merging candidates in the list of merging candidates.
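The capping logic above might be sketched as follows (helper names are hypothetical; the actual HEVC/3D-HEVC derivation of the candidates themselves is far more involved):

```python
MAX_MERGE_CANDIDATES = 6  # maximum list size for the 3D video block

def build_merge_list(base_candidates, derive_combined_bi_predictive):
    """Fill the merging-candidate list; combined bi-predictive
    candidates are derived only if fewer than 5 candidates are
    already present, and the list never exceeds 6 entries."""
    merge_list = list(base_candidates[:MAX_MERGE_CANDIDATES])
    if len(merge_list) < 5:
        for cand in derive_combined_bi_predictive(merge_list):
            if len(merge_list) >= MAX_MERGE_CANDIDATES:
                break
            merge_list.append(cand)
    return merge_list
```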
Abstract:
Techniques are described for sub-prediction unit (PU) based motion prediction for video coding in HEVC and 3D-HEVC. In one example, the techniques include an advanced temporal motion vector prediction (TMVP) mode to predict sub-PUs of a PU in single layer coding for which motion vector refinement may be allowed. The advanced TMVP mode includes determining motion vectors for the PU in at least two stages to derive motion information for the PU that includes different motion vectors and reference indices for each of the sub-PUs of the PU. In another example, the techniques include storing separate motion information derived for each sub-PU of a current PU predicted using a sub-PU backward view synthesis prediction (BVSP) mode even after motion compensation is performed. The additional motion information stored for the current PU may be used to predict subsequent PUs for which the current PU is a neighboring block.
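The two-stage structure of the advanced TMVP mode can be sketched abstractly as below; the callbacks stand in for the actual derivation processes and all names are hypothetical:

```python
def advanced_tmvp(pu, find_temporal_vector, motion_from_reference):
    """Two-stage advanced TMVP sketch.
    Stage 1: determine a vector from the PU to a corresponding block
    in a reference picture.
    Stage 2: for each sub-PU, read motion information (motion vector
    and reference index) from the corresponding sub-block, so each
    sub-PU may end up with different motion information."""
    temporal_vector = find_temporal_vector(pu)            # stage 1
    return {sub_pu: motion_from_reference(sub_pu, temporal_vector)
            for sub_pu in pu["sub_pus"]}                  # stage 2
```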
Abstract:
Techniques for decoding video data include receiving residual data corresponding to a block of video data, wherein the block of video data is encoded using asymmetric motion partitioning, is uni-directionally predicted using backward view synthesis prediction (BVSP), and has a size of 16×12, 12×16, 16×4 or 4×16, partitioning the block of video data into sub-blocks, each sub-block having a size of 8×4 or 4×8, deriving a disparity motion vector for each of the sub-blocks from a corresponding depth block in a depth picture corresponding to a reference picture, synthesizing a respective reference block for each of the sub-blocks using the respective derived disparity motion vector, and decoding the block of video data by performing motion compensation on each of the sub-blocks using the residual data and the synthesized respective reference blocks.
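A minimal sketch of the sub-block partitioning step, assuming (as the listed sizes suggest) that a wider-than-tall AMP block is split into 8×4 sub-blocks and a taller-than-wide block into 4×8 sub-blocks; the function name is hypothetical:

```python
def split_into_subblocks(x0, y0, width, height):
    """Partition an AMP block (16x12, 12x16, 16x4, or 4x16) into
    8x4 or 4x8 sub-blocks. Returns an (x, y, w, h) tuple per sub-block,
    each of which then gets its own derived disparity motion vector."""
    # Assumption: sub-block shape follows the block's orientation.
    sw, sh = (8, 4) if width > height else (4, 8)
    return [(x, y, sw, sh)
            for y in range(y0, y0 + height, sh)
            for x in range(x0, x0 + width, sw)]
```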
Abstract:
This disclosure describes techniques for in-loop depth map filtering for 3D video coding processes. In one example, a method of decoding video data comprises decoding a depth block corresponding to a texture block, receiving a respective indication of one or more offset values for the decoded depth block, and performing a filtering process on edge pixels of the depth block using at least one of the one or more offset values to create a filtered depth block.
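The filtering step might be sketched as follows, assuming (hypothetically) that each edge pixel is tagged with the index of the offset value that applies to it:

```python
def filter_depth_edges(depth_block, edge_mask, offsets):
    """In-loop depth filtering sketch: add a signaled offset to each
    pixel identified as an edge pixel. edge_mask holds, per pixel,
    either None (not an edge pixel) or an index into offsets."""
    filtered = [row[:] for row in depth_block]
    for y, row in enumerate(edge_mask):
        for x, region in enumerate(row):
            if region is not None:
                filtered[y][x] += offsets[region]
    return filtered
```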
Abstract:
Techniques are described for determining a block in a reference picture in a reference view based on a disparity vector for a current block. The techniques start the disparity vector from a bottom-right pixel in a center 2×2 sub-block within the current block, and determine a location within the reference picture to which the disparity vector refers. The determined block covers the location referred to by the disparity vector based on the disparity vector starting from the bottom-right pixel in the center 2×2 sub-block within the current block.
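For a W×H block with top-left sample at (x, y), the bottom-right pixel of the center 2×2 sub-block sits at (x + W/2, y + H/2); the location the disparity vector refers to follows directly (function name hypothetical):

```python
def reference_location(x, y, width, height, dv_x, dv_y):
    """Locate the reference-view sample pointed to by disparity vector
    (dv_x, dv_y), starting from the bottom-right pixel of the center
    2x2 sub-block of the current block. The determined block in the
    reference picture is the one covering the returned location."""
    start_x = x + width // 2   # bottom-right pixel of center 2x2 sub-block
    start_y = y + height // 2
    return (start_x + dv_x, start_y + dv_y)
```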
Abstract:
Techniques are described for encoding and decoding depth data for three-dimensional (3D) video data represented in a multiview plus depth format using depth coding modes that are different than high-efficiency video coding (HEVC) coding modes. Examples of additional depth intra coding modes available in a 3D-HEVC process include at least two of a Depth Modeling Mode (DMM), a Simplified Depth Coding (SDC) mode, and a Chain Coding Mode (CCM). In addition, an example of an additional depth inter coding mode includes an Inter SDC mode. In one example, the techniques include signaling depth intra coding modes used to code depth data for 3D video data in a depth modeling table that is separate from the HEVC syntax. In another example, the techniques of this disclosure include unifying signaling of residual information of depth data for 3D video data across two or more of the depth coding modes.
Abstract:
A video coder stores only one derived disparity vector (DDV) for a slice of a current picture of the video data. The video coder uses the DDV for the slice in a Neighboring Block Based Disparity Vector (NBDV) derivation process to determine a disparity vector for a particular block. Furthermore, the video coder stores, as the DDV for the slice, the disparity vector for the particular block.
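The single-DDV bookkeeping can be sketched as below; the class and callback names are hypothetical, and the NBDV derivation itself is abstracted away:

```python
class SliceDisparityState:
    """Store a single derived disparity vector (DDV) per slice, used as
    an input to NBDV derivation and updated after each block."""

    def __init__(self):
        self.ddv = (0, 0)  # the one DDV stored for the whole slice

    def disparity_for_block(self, nbdv_derive):
        # NBDV derivation may consult the stored DDV as a candidate.
        dv = nbdv_derive(self.ddv)
        # Store the block's disparity vector as the slice's new DDV.
        self.ddv = dv
        return dv
```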
Abstract:
When a current view is a dependent texture view, a current coding unit (CU) is not intra coded, and a partitioning mode of the current CU is equal to PART_2N×2N, a video coder obtains, from a bitstream that comprises an encoded representation of the video data, a weighting factor index for the current CU, wherein the current CU is in a picture belonging to the current view. When the current view is not a dependent texture view, or the current CU is intra coded, or the partitioning mode of the current CU is not equal to PART_2N×2N, the video coder infers that the weighting factor index is equal to a particular value indicating that residual prediction is not applied with regard to the current CU.
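The parse-or-infer rule above can be sketched as follows (names are hypothetical, and the inferred value is assumed here to be 0 purely for illustration):

```python
def weighting_factor_index(is_dependent_texture_view, is_intra_coded,
                           part_mode, parse_index_from_bitstream,
                           inferred_value=0):
    """Obtain the weighting factor index for a CU from the bitstream
    when all three conditions hold; otherwise infer the value that
    indicates residual prediction is not applied (assumed 0 here)."""
    if (is_dependent_texture_view and not is_intra_coded
            and part_mode == "PART_2Nx2N"):
        return parse_index_from_bitstream()
    # Index is not signaled; residual prediction is not applied.
    return inferred_value
```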