Abstract:
In accordance with one or more techniques of this disclosure, a video coder may divide a current prediction unit (PU) into a plurality of sub-PUs. Each of the sub-PUs may have a size smaller than a size of the PU. Furthermore, the current PU may be in a depth view of the multi-view video data. For each respective sub-PU from the plurality of sub-PUs, the video coder may identify a reference block for the respective sub-PU. The reference block may be co-located with the respective sub-PU in a texture view corresponding to the depth view. The video coder may use motion parameters of the identified reference block to determine motion parameters for the respective sub-PU.
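The per-sub-PU inheritance described above can be sketched as follows. The flat dictionary used as the texture view's motion field and the fixed 8x8 sub-PU size are illustrative assumptions; the disclosure does not fix these structures.

```python
SUB_PU_SIZE = 8  # assumed sub-PU dimension, for illustration only

def derive_sub_pu_motion(pu_x, pu_y, pu_w, pu_h, texture_motion_field):
    """Split a depth-view PU into sub-PUs and copy motion parameters
    from the co-located block in the corresponding texture view."""
    sub_pu_motion = {}
    for y in range(pu_y, pu_y + pu_h, SUB_PU_SIZE):
        for x in range(pu_x, pu_x + pu_w, SUB_PU_SIZE):
            # The reference block is co-located: same (x, y) in the texture view.
            sub_pu_motion[(x, y)] = texture_motion_field.get((x, y))
    return sub_pu_motion
```

Each sub-PU thus receives its own motion parameters rather than sharing a single set across the whole PU.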
Abstract:
A device for processing video data stores one or more context statuses for a binary arithmetic coder at a bit depth of K, initializes an N-bit binary arithmetic coder with values for context variables from one of the stored context statuses of previously coded blocks, and codes one or more blocks of the video data with the initialized N-bit binary arithmetic coder, wherein N and K are both positive integer values and K is smaller than N. A device for processing video data determines a set of one or more fixed filters with K-bit precision and determines a set of one or more derived filters with N-bit precision based on the set of fixed filters with K-bit precision, wherein K and N are integers and K is less than N.
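One simple way to obtain N-bit-precision filters from K-bit fixed filters is to left-shift each coefficient by N - K, which preserves the filters' relative shape at the higher precision. This is an illustrative assumption, not necessarily the derivation the device uses.

```python
def derive_filters(fixed_filters, k_bits, n_bits):
    """Lift K-bit fixed filter coefficients to N-bit precision by a
    left shift of (N - K). Assumed derivation, for illustration only."""
    assert 0 < k_bits < n_bits
    shift = n_bits - k_bits
    return [[c << shift for c in filt] for filt in fixed_filters]
```

For example, a K = 6-bit filter [1, 2, 1] lifted to N = 8 bits becomes [4, 8, 4].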
Abstract:
A video decoder selects a source affine block. The source affine block is an affine-coded block that spatially neighbors a current block. Additionally, the video decoder extrapolates motion vectors of control points of the source affine block to determine motion vector predictors for control points of the current block. The video decoder inserts, into an affine motion vector predictor (MVP) set candidate list, an affine MVP set that includes the motion vector predictors for the control points of the current block. The video decoder also determines, based on an index signaled in a bitstream, a selected affine MVP set in the affine MVP set candidate list. The video decoder obtains, from the bitstream, motion vector differences (MVDs) that indicate differences between motion vectors of the control points of the current block and motion vector predictors in the selected affine MVP set.
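The extrapolation step can be illustrated with the standard 4-parameter affine motion model: the source block's model is evaluated at the current block's control-point positions. Floating-point arithmetic is used here for clarity; a real codec would use fixed-point.

```python
def affine_mv(x, y, x0, y0, w, v0, v1):
    """Evaluate the 4-parameter affine motion model of a source block
    (top-left corner (x0, y0), width w, control-point MVs v0 at the
    top-left and v1 at the top-right) at an arbitrary position (x, y)."""
    ax = (v1[0] - v0[0]) / w
    ay = (v1[1] - v0[1]) / w
    mvx = v0[0] + ax * (x - x0) - ay * (y - y0)
    mvy = v0[1] + ay * (x - x0) + ax * (y - y0)
    return (mvx, mvy)

def extrapolate_control_points(src, cur_x0, cur_y0, cur_w):
    """Extrapolate the source block's affine model to the current
    block's top-left and top-right control points, yielding the MVPs
    that go into the affine MVP set."""
    x0, y0, w, v0, v1 = src
    p0 = affine_mv(cur_x0, cur_y0, x0, y0, w, v0, v1)
    p1 = affine_mv(cur_x0 + cur_w, cur_y0, x0, y0, w, v0, v1)
    return p0, p1
```

Evaluating the model at the source block's own control points reproduces the original MVs, which is a quick sanity check on the extrapolation.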
Abstract:
An example device for decoding video data includes a video decoder configured to decode one or more syntax elements at a region-tree level of a region-tree of a tree data structure for a coding tree block (CTB) of video data. The region-tree has one or more region-tree nodes, including region-tree leaf and non-leaf nodes, and each of the region-tree non-leaf nodes has at least four child region-tree nodes. The video decoder is further configured to decode one or more syntax elements at a prediction-tree level for each of the region-tree leaf nodes of one or more prediction trees of the tree data structure for the CTB. The prediction trees each have one or more prediction-tree leaf and non-leaf nodes; each of the prediction-tree non-leaf nodes has at least two child prediction-tree nodes, and each of the prediction-tree leaf nodes defines a respective coding unit (CU). The video decoder is also configured to decode video data for each of the CUs.
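The two-level tree can be sketched as a recursive parse over a flat list of split flags (1 = split, 0 = leaf): region-tree non-leaf nodes spawn four children, region-tree leaves root a prediction tree whose non-leaf nodes spawn two children, and prediction-tree leaves are CUs. The flat flag layout is an assumption for illustration.

```python
def count_cus(flags):
    """Parse a region tree followed by per-leaf prediction trees from a
    flat list of split flags and return the number of CUs
    (prediction-tree leaves). Flag layout is hypothetical."""
    it = iter(flags)

    def prediction_tree():
        if next(it):               # binary split: two child prediction-tree nodes
            return prediction_tree() + prediction_tree()
        return 1                   # prediction-tree leaf -> one CU

    def region_tree():
        if next(it):               # quad split: four child region-tree nodes
            return sum(region_tree() for _ in range(4))
        return prediction_tree()   # region-tree leaf roots a prediction tree

    return region_tree()
```

For instance, a quad-split root whose second region leaf binary-splits once yields five CUs in total.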
Abstract:
Techniques are described for sub-prediction unit (PU) based motion prediction for video coding in HEVC and 3D-HEVC. In one example, the techniques include an advanced temporal motion vector prediction (TMVP) mode to predict sub-PUs of a PU in single layer coding for which motion vector refinement may be allowed. The advanced TMVP mode includes determining motion vectors for the PU in at least two stages to derive motion information for the PU that includes different motion vectors and reference indices for each of the sub-PUs of the PU. In another example, the techniques include storing separate motion information derived for each sub-PU of a current PU predicted using a sub-PU backward view synthesis prediction (BVSP) mode even after motion compensation is performed. The additional motion information stored for the current PU may be used to predict subsequent PUs for which the current PU is a neighboring block.
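The two-stage idea of the advanced TMVP mode can be sketched as: stage one locates a corresponding region in a motion source picture with a single vector, and stage two reads per-sub-PU motion (vector plus reference index) from that region. The dict-based motion field and the square sub-PU grid are illustrative assumptions.

```python
def advanced_tmvp(pu_pos, pu_size, sub_size, stage1_vector, ref_motion_field):
    """Two-stage sub-PU motion derivation sketch: offset each sub-PU
    position by the stage-one vector and fetch that location's motion
    (MV, reference index) from the motion source picture's field."""
    dx, dy = stage1_vector
    motion = {}
    for y in range(0, pu_size, sub_size):
        for x in range(0, pu_size, sub_size):
            src = (pu_pos[0] + x + dx, pu_pos[1] + y + dy)
            motion[(x, y)] = ref_motion_field.get(src)
    return motion
```

Because each sub-PU fetches its own entry, the PU ends up with different motion vectors and reference indices per sub-PU, as the abstract describes.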
Abstract:
An example video coding device is configured to determine a depth value associated with a block of video data included in a dependent depth view, based on one or more neighboring pixels positioned adjacent to the block of video data in the dependent depth view, and generate a disparity vector associated with the block of video data, based at least in part on the determined depth value associated with the block of video data. The video coding device may further be configured to use the disparity vector to generate an inter-view disparity motion vector candidate (IDMVC), generate an inter-view predicted motion vector candidate (IPMVC) associated with the block of video data based on a corresponding block of video data in a base view, and determine whether to add either the IDMVC or the IPMVC to a merge candidate list associated with the block of video data.
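The depth-to-disparity step can be sketched with the linear mapping commonly used in 3D-HEVC-style codecs, where scale, offset, and shift are camera-derived parameters. Estimating the block's depth as the maximum of the neighboring depth pixels is one plausible estimator, shown here only as an assumption.

```python
def derive_disparity_vector(neighbor_depths, scale, offset, shift):
    """Estimate a block's depth from adjacent neighboring depth pixels
    (max estimator, assumed) and map it to a horizontal disparity with a
    linear scale/offset/shift model (camera-derived, assumed)."""
    depth = max(neighbor_depths)
    disparity_x = (depth * scale + offset) >> shift
    return (disparity_x, 0)  # disparity is horizontal-only in this sketch
```

The resulting vector could then seed the IDMVC and locate the corresponding base-view block for the IPMVC.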
Abstract:
A video coder uses illumination compensation (IC) to generate a non-square predictive block of a current prediction unit (PU) of a current coding unit (CU) of a current picture of the video data. In doing so, the video coder sub-samples a first set of reference samples such that a total number of reference samples in the first sub-sampled set of reference samples is equal to 2^m. Additionally, the video coder sub-samples a second set of reference samples such that a total number of reference samples in the second sub-sampled set of reference samples is equal to 2^m. The video coder determines a first IC parameter based on the first sub-sampled set of reference samples and the second sub-sampled set of reference samples. The video coder uses the first IC parameter to determine a sample of the non-square predictive block.
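Sub-sampling each reference set to exactly 2^m samples lets later divisions in the parameter derivation become shifts. The uniform-stride sub-sampling and the least-squares scale parameter of a linear model cur ≈ a * ref + b below are illustrative choices, not necessarily the patented ones.

```python
def subsample_to_pow2(samples, m):
    """Sub-sample a reference sample list so exactly 2**m samples remain
    (uniform stride; the sub-sampling pattern is an assumption)."""
    n = 1 << m
    stride = len(samples) // n
    return samples[::stride][:n]

def ic_scale(ref_neigh, cur_neigh):
    """Least-squares scale parameter a of the linear IC model
    cur ~ a * ref + b, computed over two equally sized sample sets."""
    n = len(ref_neigh)
    sx = sum(ref_neigh)
    sy = sum(cur_neigh)
    sxx = sum(r * r for r in ref_neigh)
    sxy = sum(r * c for r, c in zip(ref_neigh, cur_neigh))
    denom = n * sxx - sx * sx
    return (n * sxy - sx * sy) / denom if denom else 1.0
```

With both sets sub-sampled to the same power-of-two length, the scale (and a corresponding offset) can be derived cheaply and applied per predicted sample.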
Abstract:
As part of a video encoding process or a video decoding process, a video coder may determine a first available disparity motion vector among spatial neighboring blocks of a current block of the video data. Furthermore, the video coder may shift a horizontal component of the first available disparity motion vector to derive a shifted disparity motion vector candidate (DSMV). The video coder may add the DSMV into a merge candidate list.
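The DSMV derivation above can be sketched directly: scan the spatial neighbors in order, take the first disparity motion vector found, and shift its horizontal component. The candidate tuple layout and the scan order are assumptions for illustration.

```python
def derive_dsmv(spatial_candidates, shift):
    """Return the DSMV formed by shifting the horizontal component of
    the first available disparity motion vector among spatial neighbors.
    Each candidate is a (mv_x, mv_y, is_disparity) tuple in this sketch."""
    for mv_x, mv_y, is_disparity in spatial_candidates:
        if is_disparity:
            return (mv_x + shift, mv_y)
    return None  # no disparity motion vector available
```

The resulting DSMV, when available, is then appended to the merge candidate list.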
Abstract:
A device for decoding video data includes a memory configured to store the video data; and one or more processors configured to decode syntax information that indicates a selected intra prediction mode for a block of video data from among a plurality of intra prediction modes. The one or more processors apply an N-tap intra interpolation filter to neighboring reconstructed samples of the block of video data according to the selected intra prediction mode, wherein N is greater than 2. The one or more processors reconstruct the block of video data based on the filtered neighboring reconstructed samples according to the selected intra prediction mode.
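An N-tap filter with N = 4 can be sketched as below; the kernel coefficients are illustrative (they are not the actual HEVC/JEM filter tables), and edge clamping plus integer rounding follow common codec practice.

```python
FOUR_TAP = [-1, 5, 5, -1]  # illustrative 4-tap kernel (sum = 8), assumed

def filter_reference(samples, i):
    """Apply a 4-tap interpolation filter (N = 4 > 2) centered between
    reference samples i and i+1, with edge clamping and rounding."""
    total = sum(FOUR_TAP)
    acc = 0
    for k, t in enumerate(FOUR_TAP):
        idx = min(max(i - 1 + k, 0), len(samples) - 1)  # clamp at edges
        acc += t * samples[idx]
    return (acc + total // 2) // total  # round to nearest
```

A longer tap length lets the filter use more neighboring reconstructed samples than the 2-tap bilinear case, which is the point of requiring N > 2.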
Abstract:
A video coding device includes a memory configured to store video data and processor(s) configured to process at least a portion of the stored video data. The processor(s) are configured to identify a coefficient group (CG) that includes a current transform coefficient of the video data, the CG representing a subset of transform coefficients within a transform unit. The processor(s) are further configured to determine a size of the CG based on a combination of a transform size and one or both of (i) a coding mode associated with the transform unit, or (ii) a transform matrix associated with the transform unit.
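A rule combining transform size with coding mode or transform matrix to pick a CG size might look like the sketch below. The specific mapping is a hypothetical design choice for illustration, not the mapping claimed by the disclosure.

```python
def coeff_group_size(transform_w, transform_h, mode, transform_matrix):
    """Pick a coefficient-group (CG) size from the transform size plus
    the coding mode / transform matrix. All thresholds and mode names
    here are assumed, for illustration only."""
    if transform_w * transform_h <= 16:
        return (transform_w, transform_h)  # small transform: one CG covers it
    if mode == "transform_skip" or transform_matrix == "identity":
        return (2, 2)                      # finer grouping, assumed
    return (4, 4)                          # default 4x4 CG, as in HEVC
```

The key point the abstract makes is that the CG size is not a function of the transform size alone; the mode and matrix inputs above reflect that combination.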