Abstract:
A device for video coding is configured to determine a characteristic of a predictive block of a current block of a current picture; identify a transform for decoding the current block based on the characteristic; inverse transform coefficients to determine a residual block for the current block; and add the residual block to the predictive block of the current block to decode the current block.
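For concreteness, the following sketch models the decoding flow this abstract describes, under the purely illustrative assumptions that the characteristic is the sample variance of the predictive block and that the choice is between DST-VII and DCT-II bases; the abstract does not fix either the characteristic or the transform set.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis; row k is the k-th basis vector."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

def dst7_matrix(n):
    """Orthonormal DST-VII basis; row k is the k-th basis vector."""
    k = np.arange(1, n + 1).reshape(-1, 1)
    i = np.arange(1, n + 1).reshape(1, -1)
    return (2.0 / np.sqrt(2 * n + 1)) * np.sin(np.pi * (2 * i - 1) * k / (2 * n + 1))

def decode_block(pred_block, coeffs, variance_threshold=100.0):
    """Determine a characteristic of the prediction (here: variance), pick the
    inverse transform accordingly, inverse transform the coefficients, and add
    the resulting residual block to the predictive block."""
    n = pred_block.shape[0]
    characteristic = np.var(pred_block)                 # illustrative characteristic
    t = dst7_matrix(n) if characteristic < variance_threshold else dct2_matrix(n)
    residual = t.T @ coeffs @ t                         # inverse 2-D separable transform
    return pred_block + residual
```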
Abstract:
Techniques are described in which a video coder is configured to determine, using one or more characteristics of an interpolation filter, a number of reference samples to be stored in a reference buffer. The video coder is further configured to generate a plurality of values corresponding to the number of reference samples in the reference buffer. The video coder is further configured to generate prediction information for intra-prediction of a block of the video data using the interpolation filter and the plurality of values. The video coder is further configured to reconstruct the block of video data based on the prediction information.
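A minimal sketch of how a reference-sample count might follow from an interpolation filter's tap length, and how the corresponding values might be generated. The specific rule (extending the usual 2W + 2H + 1 intra reference samples by half the filter length at each end) and the replication padding are assumptions made for illustration, not the claimed behaviour.

```python
def num_reference_samples(block_width, block_height, filter_taps):
    """The usual 2*W + 2*H + 1 intra reference samples, extended at both ends so
    an N-tap interpolation filter never reads past the stored range (this exact
    rule is an assumption of the sketch)."""
    base = 2 * block_width + 2 * block_height + 1
    extension = filter_taps // 2          # extra samples per end of the reference line
    return base + 2 * extension

def generate_reference_values(available_neighbors, count, default=128):
    """Produce `count` reference values: use reconstructed neighbours where
    available and replicate the last available value (or a default) otherwise."""
    values, last = [], default
    for i in range(count):
        if i < len(available_neighbors) and available_neighbors[i] is not None:
            last = available_neighbors[i]
        values.append(last)
    return values

# Example: an 8x8 block with a 4-tap interpolation filter.
count = num_reference_samples(8, 8, filter_taps=4)    # 33 + 4 = 37 samples
refs = generate_reference_values([120, 122, None, 125], count)
```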
Abstract:
Processing circuitry of a device is configured to determine that a plurality of derived modes (DMs) available for predicting a luma block of video data are also available for predicting a chroma block of the video data, the chroma block corresponding to the luma block, to form a candidate list of prediction modes with respect to the chroma block, the candidate list including one or more DMs of the plurality of DMs, and to determine to code the chroma block using any DM of the candidate list. The processing circuitry may, based on the determination to code the chroma block using any DM of the one or more DMs of the candidate list, code an indication identifying a selected DM of the candidate list to be used for coding the chroma block. The processing circuitry may code the chroma block according to the selected DM of the candidate list.
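One way such a DM candidate list could be assembled is sketched below, assuming the DMs are the intra modes found at a few co-located luma positions (centre and corners) with duplicates removed, and that the coded indication is an index into the list; the positions, list size, and signalling are illustrative choices, not those claimed.

```python
def build_chroma_dm_candidates(luma_mode_at, luma_x, luma_y, luma_w, luma_h, max_dms=4):
    """Collect derived modes (DMs) from co-located luma positions
    (centre and three corners here, purely as an example), dropping duplicates."""
    positions = [
        (luma_x + luma_w // 2, luma_y + luma_h // 2),  # centre
        (luma_x, luma_y),                              # top-left
        (luma_x + luma_w - 1, luma_y),                 # top-right
        (luma_x, luma_y + luma_h - 1),                 # bottom-left
    ]
    candidates = []
    for x, y in positions:
        mode = luma_mode_at(x, y)
        if mode not in candidates:
            candidates.append(mode)
        if len(candidates) == max_dms:
            break
    return candidates

def decode_chroma_mode(candidates, dm_index):
    """The coded indication is modelled here as an index into the DM candidate list."""
    return candidates[dm_index]

# Usage: luma_mode_at looks up the intra mode stored for a luma position.
# candidates = build_chroma_dm_candidates(luma_mode_at, 0, 0, 16, 16)
# chroma_mode = decode_chroma_mode(candidates, dm_index=parsed_index)
```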
Abstract:
A device for decoding video data determines that a current block of video data is coded using a linear model prediction mode; for the luma component of the current block, determines reconstructed luma samples; based on luma samples in a luma component of one or more already decoded neighboring blocks and chroma samples in a chroma component of the one or more already decoded neighboring blocks, determines values for linear parameters, wherein the luma samples in the luma component of the one or more already decoded neighboring blocks comprise luma samples from a starting line in the luma component of the one or more already decoded neighboring blocks, wherein the starting line in the luma component of the one or more already decoded neighboring blocks is at least one line removed from a border line of the luma component of the current block.
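The sketch below illustrates one common derivation of linear-model parameters, a least-squares fit of chroma ≈ alpha * luma + beta over neighbouring samples, followed by prediction from the reconstructed luma samples. The "starting line" requirement would be met by drawing `neighbor_luma` from a luma line at least one line away from the block boundary (e.g. the second line above the block rather than the first); that selection, and the least-squares fit itself, are assumptions rather than the claimed derivation.

```python
def derive_linear_model(neighbor_luma, neighbor_chroma):
    """Least-squares fit of chroma ~ alpha * luma + beta over neighbouring samples
    (one common way to derive linear-model parameters; the abstract does not fix
    the derivation)."""
    n = len(neighbor_luma)
    sum_l = sum(neighbor_luma)
    sum_c = sum(neighbor_chroma)
    sum_ll = sum(l * l for l in neighbor_luma)
    sum_lc = sum(l * c for l, c in zip(neighbor_luma, neighbor_chroma))
    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:
        return 0.0, sum_c / n
    alpha = (n * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta

def predict_chroma(reconstructed_luma_block, alpha, beta):
    """Apply the linear model to the reconstructed (downsampled) luma samples."""
    return [[alpha * l + beta for l in row] for row in reconstructed_luma_block]
```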
Abstract:
An example device for decoding video data includes a memory configured to store video data and one or more processors implemented in circuitry and configured to determine a maximum possible value for a secondary transform syntax element for a block of video data, entropy decode a value for the secondary transform syntax element of the block to form a binarized value representative of the secondary transform for the block, reverse binarize the value for the secondary transform syntax element using a common binarization scheme regardless of the maximum possible value to determine the secondary transform for the block, and inverse-transform transform coefficients of the block using the determined secondary transform.
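As an illustration of reverse binarization with a common scheme, the sketch below decodes a truncated-unary bin string against a fixed maximum that does not depend on the block's actual maximum possible value; the choice of truncated unary and of a fixed maximum are assumptions of this sketch, not the claimed binarization.

```python
def reverse_binarize_truncated_unary(read_bin, common_max):
    """Reverse a truncated-unary binarization: count leading 1-bins until a 0-bin
    is read or `common_max` bins have been consumed. Using the same `common_max`
    for every block is one way to realise a common binarization scheme that is
    independent of the block's maximum possible value."""
    value = 0
    while value < common_max and read_bin() == 1:
        value += 1
    return value

# Example: the bin string 1,1,0 decodes to secondary-transform index 2.
bins = iter([1, 1, 0])
assert reverse_binarize_truncated_unary(lambda: next(bins), common_max=3) == 2
```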
Abstract:
An example device for decoding encoded video data includes storage media and processing circuitry. The storage media are configured to store a portion of the encoded video data. The processing circuitry is configured to determine a block-level threshold for the portion of the encoded video data stored to the storage media, to determine that an encoded block of the portion of the encoded video data has a size that is equal to or greater than the threshold, to receive a syntax element indicating that a portion of the encoded block is to be reconstructed using a coding tool, to determine, based on the encoded block having the size that is equal to or greater than the threshold, that the syntax element applies to all samples of a plurality of samples included in the encoded block, and to reconstruct the encoded block based on the coding tool.
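A rough sketch of the threshold logic, assuming the size comparison is made on the block's sample count and letting `apply_tool` stand in for whatever coding tool the syntax element enables; both are illustrative stand-ins rather than the claimed behaviour.

```python
def syntax_element_scope(block_width, block_height, threshold):
    """If the block's size is equal to or greater than the block-level threshold,
    the single signalled syntax element is taken to apply to every sample of the
    block (comparing on sample count is an assumption of this sketch)."""
    return "all_samples" if block_width * block_height >= threshold else "signalled_partition"

def reconstruct_block(samples, tool_enabled, scope, apply_tool):
    """Apply the coding tool to all samples when the flag's scope covers the block."""
    if tool_enabled and scope == "all_samples":
        return [[apply_tool(s) for s in row] for row in samples]
    return samples

# Example: a 32x32 block against a 256-sample threshold.
scope = syntax_element_scope(32, 32, threshold=256)          # "all_samples"
out = reconstruct_block([[100, 101], [102, 103]], True, scope, lambda s: min(s + 1, 255))
```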
Abstract:
As part of a video encoding or decoding process, a device applies a transformation to input data elements to derive output data elements for a current block. The transformation comprises a sequence of vector transformations. For each respective vector transformation of the sequence of vector transformations other than a first vector transformation of the sequence of vector transformations, input values for the respective vector transformation comprise output values of the respective previous vector transformation of the sequence of vector transformations. Each respective vector transformation of the sequence of vector transformations further takes, as input, a respective parameter vector for the respective vector transformation, the respective parameter vector for the respective vector transformation comprising one or more parameters.
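One concrete family of such parametric vector transformations is a pass of Givens rotations, where each pass's parameter vector holds the rotation angles. The sketch below uses that family purely as an example; the abstract does not restrict the transformations to rotations, and the pairings and angles shown are arbitrary.

```python
import math

def givens_pass(values, angles, pairing):
    """One vector transformation: rotate disjoint pairs of elements by the angles
    in this pass's parameter vector (Givens rotations used as an illustrative
    parametric transformation)."""
    out = list(values)
    for (i, j), theta in zip(pairing, angles):
        c, s = math.cos(theta), math.sin(theta)
        out[i], out[j] = c * values[i] - s * values[j], s * values[i] + c * values[j]
    return out

def apply_transformation(data, passes):
    """Apply a sequence of vector transformations; each pass takes the previous
    pass's output values and its own parameter vector as input."""
    values = list(data)
    for pairing, angles in passes:
        values = givens_pass(values, angles, pairing)
    return values

# Two passes over a 4-element vector, each with its own parameter vector.
passes = [
    ([(0, 1), (2, 3)], [math.pi / 4, math.pi / 8]),
    ([(0, 2), (1, 3)], [math.pi / 6, math.pi / 3]),
]
output = apply_transformation([1.0, 2.0, 3.0, 4.0], passes)
```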
Abstract:
A video coder determines that a coding unit (CU) is partitioned into transform units (TUs) of the CU based on a tree structure. As part of determining that the CU is partitioned into the TUs of the CU based on the tree structure, the video coder determines that a node in the tree structure has exactly two child nodes in the tree structure. A root node of the tree structure corresponds to a coding block of the CU, each respective non-root node of the tree structure corresponds to a respective block that is a partition of a block that corresponds to a parent node of the respective non-root node, and leaf nodes of the tree structure correspond to the TUs of the CU.
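The sketch below models a transform tree in which a node may be given exactly two child nodes by splitting its block in half, with the leaves taken as the TUs; the binary half-split geometry is an illustrative choice rather than the claimed partitioning rule.

```python
class TUNode:
    """A node of the transform tree: the root corresponds to the CU's coding
    block, each non-root node to a partition of its parent's block, and the
    leaf nodes to the TUs of the CU."""
    def __init__(self, x, y, w, h, children=None):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.children = children or []

def split_binary(node, horizontal=True):
    """Give a node exactly two child nodes by splitting its block in half."""
    if horizontal:
        h = node.h // 2
        node.children = [TUNode(node.x, node.y, node.w, h),
                         TUNode(node.x, node.y + h, node.w, node.h - h)]
    else:
        w = node.w // 2
        node.children = [TUNode(node.x, node.y, w, node.h),
                         TUNode(node.x + w, node.y, node.w - w, node.h)]
    return node.children

def leaf_tus(node):
    """Collect the leaf nodes, i.e. the TUs of the CU."""
    if not node.children:
        return [node]
    return [tu for child in node.children for tu in leaf_tus(child)]

# A 32x32 CU whose root node is split once into two 32x16 TUs.
root = TUNode(0, 0, 32, 32)
split_binary(root, horizontal=True)
tus = leaf_tus(root)  # two leaf TUs
```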
Abstract:
An example method of decoding video data includes obtaining, from a coded video bitstream and for a current block of the video data, an indication of an intra-prediction mode that identifies an initial predictive block; filtering, in parallel, samples in a current line of a plurality of lines of the initial predictive block based on filtered values of samples in a preceding line of the plurality of lines and unfiltered values of samples in the current line to generate filtered values for the samples in the current line; and reconstructing, using intra prediction, values of samples of the current block based on the filtered values of the samples of the initial predictive block and residual data for the current block that represents a difference between the filtered values of the samples of the initial predictive block and the values of samples of the current block.
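The dependency structure described here, where each filtered sample depends only on the already-filtered preceding line and on unfiltered samples of the current line, is what allows all samples of a line to be filtered in parallel. The sketch below uses a simple two-tap vertical blend with made-up weights to make that structure concrete; the actual filter is not specified by this summary.

```python
def filter_predictive_block(pred, w_prev=0.25, w_cur=0.75):
    """Filter the initial predictive block line by line. Each filtered sample uses
    only the filtered preceding line and the unfiltered current line, so every
    sample of a line can be computed in parallel (the 2-tap weights here are
    illustrative, not the claimed filter)."""
    filtered = [list(pred[0])]                       # first line kept unfiltered here
    for row in pred[1:]:
        prev = filtered[-1]
        filtered.append([w_prev * p + w_cur * c for p, c in zip(prev, row)])
    return filtered

def reconstruct(pred, residual):
    """Reconstruct the current block from the filtered prediction and the residual."""
    filtered = filter_predictive_block(pred)
    return [[f + r for f, r in zip(frow, rrow)]
            for frow, rrow in zip(filtered, residual)]
```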
Abstract:
This disclosure provides systems and methods for low-complexity quarter-pel generation in motion search for video coding. The method can include storing full-pixel position information related to a plurality of rows of video information of a reference frame in a memory. The method can also include applying a vertical interpolation filter to the full-pixel position information related to the reference frame to determine first sub-pel position information. The method can also include applying a horizontal interpolation filter to the first sub-pel position information to determine second sub-pel position information for every other row of video data. The method can also include generating a syntax element indicating pixel motion of a current frame based on the first sub-pel position information. The method can also include encoding a block based on the generated syntax element.
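The two-stage order of operations, vertical interpolation over the stored full-pel rows followed by horizontal interpolation over that intermediate result, can be sketched as below. Simple 2-tap averaging filters are used for brevity; practical codecs use longer interpolation filters, and the exact filters and row handling here are assumptions.

```python
def vertical_interp(full_pel_rows):
    """Vertical interpolation over the stored full-pel rows: each output row is
    the rounded average of two adjacent full-pel rows (2-tap filter for brevity)."""
    return [[(a + b + 1) >> 1 for a, b in zip(top, bottom)]
            for top, bottom in zip(full_pel_rows, full_pel_rows[1:])]

def horizontal_interp(rows):
    """Horizontal interpolation over the vertically interpolated rows."""
    return [[(row[i] + row[i + 1] + 1) >> 1 for i in range(len(row) - 1)]
            for row in rows]

# Full-pel rows of a small reference region stored in memory.
ref = [
    [10, 20, 30, 40],
    [12, 22, 32, 42],
    [14, 24, 34, 44],
]
first_sub_pel = vertical_interp(ref)            # first sub-pel position information
second_sub_pel = horizontal_interp(first_sub_pel)  # second sub-pel position information
```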