Abstract:
A video coder may reconstruct a current picture of video data. A current region of the current picture is associated with a temporal index indicating a temporal layer to which the current region belongs. Furthermore, for each respective array of a plurality of arrays that correspond to different temporal layers, the video coder may store, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of regions of pictures of the video data that are decoded prior to the current region and that are in the temporal layer corresponding to the respective array or a lower temporal layer than the temporal layer corresponding to the respective array. The video coder determines, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters.
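The storage scheme described above can be sketched as follows. This is a minimal illustration, not the patent's exact design: the number of temporal layers, the per-array capacity, and the FIFO eviction policy are all assumptions made only to show the flow of storing a parameter set into the arrays of its own layer and every higher layer, then selecting from the current layer's array.

```python
NUM_LAYERS = 5   # assumed number of temporal layers
MAX_STORED = 6   # assumed capacity of each per-layer array

class AlfParamStore:
    def __init__(self):
        # One array of stored ALF parameter sets per temporal layer.
        self.arrays = [[] for _ in range(NUM_LAYERS)]

    def store(self, temporal_id, alf_params):
        # A set used at layer t may be referenced by layer t and every
        # higher layer, so it is stored in all of those arrays.
        for layer in range(temporal_id, NUM_LAYERS):
            arr = self.arrays[layer]
            arr.append(alf_params)
            if len(arr) > MAX_STORED:
                arr.pop(0)  # FIFO eviction (an assumption)

    def select(self, temporal_id, index):
        # The applicable set is chosen only from the array of the
        # current region's own temporal layer.
        return self.arrays[temporal_id][index]
```

Note that a set stored for a low layer becomes selectable by all higher layers, while a set from a high layer never appears in a lower layer's array.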
Abstract:
A device for decoding video data determines mode information for a current block of a current picture of the video data; derives weights for use in a bilateral filter based on the mode information for the current block; applies the bilateral filter to a current sample of the current block by assigning the weights to neighboring samples of the current sample of the current block and to the current sample of the current block, and modifying a sample value for the current sample based on sample values of the neighboring samples, the weights assigned to the neighboring samples, the sample value for the current sample, and the weight assigned to the current sample; and based on the modified sample value for the current sample, outputs a decoded version of the current picture.
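The filtering step above reduces to a weighted combination of the current sample and its neighbors. The sketch below makes that concrete; the specific weight values and the rule that intra-coded blocks get stronger neighbor weights are illustrative assumptions standing in for the mode-based derivation, not the patent's actual formula.

```python
def derive_weights(mode, num_neighbors):
    # Assumption: intra-coded blocks receive stronger neighbor weights
    # than inter-coded blocks. Exact binary fractions are used so the
    # arithmetic below is exact.
    w_neighbor = 0.125 if mode == "intra" else 0.0625
    w_center = 1.0 - w_neighbor * num_neighbors
    return w_center, [w_neighbor] * num_neighbors

def bilateral_filter(center, neighbors, mode):
    # Modify the current sample as a weighted sum of itself and its
    # neighboring samples.
    w_center, w_neighbors = derive_weights(mode, len(neighbors))
    acc = w_center * center
    for sample, w in zip(neighbors, w_neighbors):
        acc += w * sample
    return acc
```

Because the weights sum to one, a flat region passes through unchanged, while a sample that differs from its neighbors is pulled toward them.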
Abstract:
To encode video data, a video encoder partitions a 2N×2N block of video data into four N×N blocks, determines encoding modes for each of the four N×N blocks, calculates values representative of encoded versions of the four N×N blocks using the encoding modes for each of the four N×N blocks, determines whether to skip testing of at least one non-square partitioning mode for the 2N×2N block based on the calculated values, and encodes the 2N×2N block based at least in part on the determination of whether to skip testing of the at least one non-square partitioning mode.
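A decision of the kind described above can be sketched as a small predicate over the four N×N results. The particular rule used here (skip non-square partitions when all four N×N blocks chose the same mode and their combined cost already beats the 2N×2N cost) is an assumed example of such a criterion, not the patent's actual test.

```python
def skip_nonsquare_testing(nxn_modes, nxn_costs, cost_2nx2n):
    # nxn_modes: encoding modes chosen for the four NxN blocks.
    # nxn_costs: rate-distortion costs computed for those blocks.
    # Assumption: if the four NxN blocks agree on a mode and their
    # summed cost beats the 2Nx2N cost, non-square partitions
    # (2NxN, Nx2N, ...) are unlikely to help and testing them is skipped.
    same_mode = len(set(nxn_modes)) == 1
    return same_mode and sum(nxn_costs) < cost_2nx2n
```

Skipping the non-square modes saves encoder time at a possible small rate-distortion cost, which is the trade-off the calculated values are used to manage.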
Abstract:
Techniques and systems are provided for processing video data. For example, a current block of a picture of the video data can be obtained for processing by an encoding device or a decoding device. A pre-defined set of weights for template matching based motion compensation is also obtained. A plurality of metrics associated with one or more spatially neighboring samples of the current block and one or more spatially neighboring samples of at least one reference frame are determined. A set of weights is selected from the pre-defined set of weights to use for the template matching based motion compensation. The set of weights is determined based on the plurality of metrics. The template matching based motion compensation is performed for the current block using the selected set of weights.
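The metric-driven selection above can be illustrated as follows. The choice of sum of absolute differences (SAD) between the two templates as the metric, the candidate weight values, and the thresholds are all assumptions used only to make the flow concrete.

```python
PREDEFINED_WEIGHTS = [0.25, 0.5, 0.75]  # assumed pre-defined candidate set

def template_sad(cur_template, ref_template):
    # Metric over spatially neighboring samples of the current block
    # and of the reference frame (SAD is an assumed choice).
    return sum(abs(a - b) for a, b in zip(cur_template, ref_template))

def select_weight(cur_template, ref_template, thresholds=(8, 32)):
    # Assumption: a smaller template mismatch favors a larger weight
    # for the template-matched prediction.
    sad = template_sad(cur_template, ref_template)
    if sad < thresholds[0]:
        return PREDEFINED_WEIGHTS[2]
    if sad < thresholds[1]:
        return PREDEFINED_WEIGHTS[1]
    return PREDEFINED_WEIGHTS[0]
```

Because both encoder and decoder can compute the same metric from already-reconstructed neighboring samples, the selection needs no explicit signaling.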
Abstract:
A video decoder selects a source affine block. The source affine block is an affine-coded block that spatially neighbors a current block. Additionally, the video decoder extrapolates motion vectors of control points of the source affine block to determine motion vector predictors for control points of the current block. The video decoder inserts, into an affine motion vector predictor (MVP) set candidate list, an affine MVP set that includes the motion vector predictors for the control points of the current block. The video decoder also determines, based on an index signaled in a bitstream, a selected affine MVP set in the affine MVP set candidate list. The video decoder obtains, from the bitstream, motion vector differences (MVDs) that indicate differences between motion vectors of the control points of the current block and motion vector predictors in the selected affine MVP set.
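The extrapolation step above can be sketched with a standard 6-parameter affine motion model: the source block's three control-point motion vectors define a motion field that is evaluated at the current block's corner positions. The coordinates and the floating-point arithmetic here are illustrative; the abstract does not fix this exact parameterization.

```python
def extrapolate_affine_mv(src_x, src_y, src_w, src_h, mv0, mv1, mv2, x, y):
    # mv0, mv1, mv2: motion vectors at the source affine block's
    # top-left, top-right, and bottom-left control points, as
    # (mvx, mvy) tuples. (x, y): position at which to evaluate the
    # affine motion field, e.g. a control point of the current block.
    dx = (x - src_x) / src_w
    dy = (y - src_y) / src_h
    mvx = mv0[0] + dx * (mv1[0] - mv0[0]) + dy * (mv2[0] - mv0[0])
    mvy = mv0[1] + dx * (mv1[1] - mv0[1]) + dy * (mv2[1] - mv0[1])
    return (mvx, mvy)
```

Evaluating this field at the current block's control points yields the motion vector predictors that populate the affine MVP set; the signaled MVDs then refine those predictors.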
Abstract:
Techniques are described in which a video decoder is configured to partition, into a plurality of sub-blocks, a block of a picture of the video data. The video decoder is further configured to, for each respective sub-block of the plurality of sub-blocks, derive a respective first motion vector of the respective sub-block based on motion information for at least two blocks neighboring the respective sub-block. The video decoder also determines, based on a respective motion vector difference for the respective sub-block signaled in a bitstream, a second motion vector for the respective sub-block. Additionally, the video decoder generates, based on the first motion vector of the respective sub-block and the second motion vector of the respective sub-block, a predictive block for the respective sub-block.
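The per-sub-block flow above can be reduced to two small steps: derive a first motion vector from neighboring blocks, then apply a signaled motion vector difference (MVD) to obtain the second motion vector. Averaging the neighbors' motion vectors is an assumed stand-in for the derivation; the abstract does not specify the combination rule.

```python
def derive_first_mv(neighbor_mvs):
    # First motion vector for a sub-block, derived from the motion
    # information of at least two neighboring blocks.
    # Assumption: a simple average of the neighbors' motion vectors.
    n = len(neighbor_mvs)
    return (sum(mv[0] for mv in neighbor_mvs) / n,
            sum(mv[1] for mv in neighbor_mvs) / n)

def second_mv(first_mv, mvd):
    # Second motion vector: the derived vector refined by the MVD
    # signaled in the bitstream for this sub-block.
    return (first_mv[0] + mvd[0], first_mv[1] + mvd[1])
```

The predictive block for each sub-block is then generated from both vectors, per the abstract.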
Abstract:
A video coder uses illumination compensation (IC) to generate a non-square predictive block of a current prediction unit (PU) of a current coding unit (CU) of a current picture of the video data. In doing so, the video coder sub-samples a first set of reference samples such that a total number of reference samples in the first sub-sampled set of reference samples is equal to 2^m. Additionally, the video coder sub-samples a second set of view reference samples such that a total number of reference samples in the second sub-sampled set of reference samples is equal to 2^m. The video coder determines a first IC parameter based on the first sub-sampled set of reference samples and the second sub-sampled set of reference samples. The video coder uses the first IC parameter to determine a sample of the non-square predictive block.
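The point of sub-sampling to exactly 2^m samples is that the averaging division in the IC parameter derivation becomes a bit shift. The sketch below assumes a uniform-stride sub-sampling and a simple offset-only IC model; both are illustrative choices, not the patent's derivation.

```python
def subsample_pow2(samples, m):
    # Reduce a reference-sample set to exactly 2**m entries with a
    # uniform stride (assumes len(samples) >= 2**m).
    target = 1 << m
    stride = len(samples) // target
    return samples[::stride][:target]

def ic_offset(cur_neighbors, ref_neighbors, m):
    # Offset-only IC parameter (an assumed model): the mean difference
    # between the two sub-sampled sets. With 2**m samples in each set,
    # the division is an exact right shift.
    cur = subsample_pow2(cur_neighbors, m)
    ref = subsample_pow2(ref_neighbors, m)
    return (sum(cur) - sum(ref)) >> m
```

The offset is then added to the motion-compensated prediction to form each sample of the non-square predictive block.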
Abstract:
In some examples, a video coder employs a two-level technique to code information that identifies the position, within a block of transform coefficients, of the coefficient that is the last significant coefficient (LSC) for the block according to a scanning order associated with the block of transform coefficients. For example, the video coder may code a sub-block position that identifies the position, within the block, of the sub-block that includes the LSC, and code a coefficient position that identifies the position of the LSC within that sub-block.
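The two-level split above amounts to expressing the LSC coordinates as a sub-block index plus an offset inside that sub-block. The 4×4 sub-block size below is the usual transform sub-block choice in HEVC-style designs and is assumed here for illustration.

```python
SUB = 4  # assumed sub-block dimension

def split_lsc_position(x, y):
    # Level 1: which sub-block contains the LSC.
    sub_block_pos = (x // SUB, y // SUB)
    # Level 2: where the LSC lies inside that sub-block.
    in_block_pos = (x % SUB, y % SUB)
    return sub_block_pos, in_block_pos

def join_lsc_position(sub_block_pos, in_block_pos):
    # Recover the full-block LSC coordinates from the two levels.
    return (sub_block_pos[0] * SUB + in_block_pos[0],
            sub_block_pos[1] * SUB + in_block_pos[1])
```

Coding the two levels separately lets the coarse sub-block position and the fine in-block position use differently sized (and differently contexted) codes.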
Abstract:
A reduction in the number of binarizations and/or contexts used in context adaptive binary arithmetic coding (CABAC) for video coding is proposed. In particular, this disclosure proposes techniques that may lower the number of contexts used in CABAC by up to 56.