Abstract:
A video coder may determine reference samples based on a location of a current block of a current picture of the 360-degree video data and a packing arrangement that defines an arrangement of a plurality of regions in the current picture. The current picture is in a projected domain, and each respective region of the plurality of regions is a respective face defined by a projection of the 360-degree video data. The regions are arranged in the current picture according to the packing arrangement. Based on the location of the current block being at a border of a first region that is adjacent to a second region, and there being a discontinuity at that border due to the packing arrangement, the reference samples are samples of the current picture that spatially neighbor the current block in a spherical domain rather than in the projected domain.
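A minimal sketch of the idea, assuming a cubemap-style projection and a hypothetical adjacency table (the face names, the `packed_right_of` map, and the function are illustrative, not the patented method): when the face packed next to the current face differs from the face that neighbors it on the sphere, the border is discontinuous, and reference samples are drawn from the spherical neighbor.

```python
# Hypothetical cubemap face adjacency on the sphere: each face's true
# right-hand neighbor when walking along the sphere's surface.
SPHERE_RIGHT = {"left": "front", "front": "right", "right": "back", "back": "left"}

def reference_face(packed_right_of, face):
    """Pick the face that supplies reference samples for a block at the
    right border of `face`. If the packing places a different face next
    door than the sphere does, the border is discontinuous and the
    spherical neighbor is used instead of the projected one."""
    spherical = SPHERE_RIGHT[face]
    packed = packed_right_of.get(face)
    if packed != spherical:  # discontinuity in the projected domain
        return spherical
    return packed
```

In this sketch, a packing that places "top" to the right of "front" triggers the spherical-neighbor rule, while a packing that happens to match the spherical layout does not.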
Abstract:
An example method includes receiving an encoded picture of 360-degree video data, the encoded picture of 360-degree video data being arranged in packed faces obtained from a projection of a sphere of the 360-degree video data; decoding the picture of encoded 360-degree video data to obtain a reconstructed picture of 360-degree video data, the reconstructed picture of 360-degree video data being arranged in the packed faces; padding the reconstructed picture of 360-degree video data to generate a padded reconstructed picture of 360-degree video data; in-loop filtering the padded reconstructed picture of 360-degree video data to generate a padded and filtered reconstructed picture of 360-degree video data; and storing the padded and filtered reconstructed picture of 360-degree video data in a reference picture memory for use in predicting subsequent pictures of 360-degree video data.
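The pad-before-filter step can be sketched as follows, using simple edge replication on a picture held as a list of rows; this is a simplified stand-in for the geometry-aware padding of packed faces, and the function name and representation are assumptions:

```python
def pad_picture(pic, pad):
    """Replicate-pad a reconstructed picture (list of pixel rows) on all
    four sides, so in-loop filtering near borders sees valid samples."""
    # Extend each row left and right by repeating its edge samples.
    rows = [row[:1] * pad + row + row[-1:] * pad for row in pic]
    # Extend top and bottom by repeating the first and last padded rows.
    return rows[:1] * pad + rows + rows[-1:] * pad
```

Filtering would then run over the padded picture, and the padded, filtered result would be stored in the reference picture memory as the abstract describes.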
Abstract:
This disclosure describes techniques for generating reference frames packed with extended faces from a cubemap projection or adjusted cubemap projection of 360-degree video data. The reference frames packed with the extended faces may be used for inter-prediction of subsequent frames of 360-degree video data.
Abstract:
A device may include a video coder to determine an equivalent quantization parameter (QP) for a decoded block of video data using a quantization matrix for the decoded block of video data, determine deblocking parameters based on the determined equivalent QP, and deblock an edge of the decoded block based on the determined deblocking parameters. In particular, the video coder may determine equivalent QPs for two neighboring blocks defining a common edge, and deblock the common edge based on the equivalent QPs. The video coder may determine deblocking parameters, such as β and tc values, based on the equivalent QPs. The video coder may then deblock the common edge based on the deblocking parameters, e.g., determine whether to deblock the common edge, determine whether to apply a strong or a weak filter to the common edge, and determine a width (in number of pixels) for a weak filter.
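The shape of this computation can be sketched as below. The toy formulas stand in for the spec's actual β and tc lookup tables, and the quantization-matrix mapping is a hypothetical one; only the structure (equivalent QPs in, deblocking thresholds out) follows the abstract:

```python
import math

def equivalent_qp(base_qp, matrix_scale):
    """Hypothetical mapping: a quantization-matrix scaling factor shifts
    the effective QP (each 6 QP steps doubles the quantizer step size)."""
    return base_qp + round(6 * math.log2(matrix_scale))

def deblock_params(qp_p, qp_q, beta_offset=0, tc_offset=0):
    """Derive toy beta/tc thresholds from the equivalent QPs of the two
    blocks sharing the edge; real codecs use monotone lookup tables."""
    qp_avg = (qp_p + qp_q + 1) // 2
    beta = max(0, 2 * (qp_avg + beta_offset) - 16)
    tc = max(0, qp_avg + tc_offset - 18)
    return beta, tc
```

The resulting beta and tc values would then drive the filter decisions the abstract lists: whether to filter the edge, strong versus weak filtering, and the weak filter's width.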
Abstract:
Techniques for coding video data include coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a coding mode that is one of an intra pulse code modulation (IPCM) coding mode and a lossless coding mode. In some examples, the lossless coding mode may use prediction. The techniques further include assigning a non-zero quantization parameter (QP) value for the at least one block coded using the coding mode. The techniques also include performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block.
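A minimal sketch of the QP-assignment rule, with a hypothetical block representation (`mode`, `qp`, and `assigned_qp` fields are assumptions): IPCM and lossless blocks carry no conventional QP, so a signaled non-zero value is substituted when deriving deblocking behavior at their edges.

```python
def qp_for_deblocking(block):
    """Return the QP used when deblocking this block's edges. For IPCM
    and lossless blocks the assigned (non-zero) QP is used in place of
    a conventional quantization parameter."""
    if block["mode"] in ("ipcm", "lossless"):
        qp = block["assigned_qp"]
        assert qp != 0  # the described assignment is always non-zero
        return qp
    return block["qp"]
```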
Abstract:
This disclosure describes techniques for performing sample adaptive offset signaling and coding in a video coding process. Techniques of the disclosure include both a merge-based and prediction-based signaling process for sample adaptive offset information (i.e., offset values and offset type). The techniques include determining offset information for a current partition, comparing the offset information of the current partition with offset information of one or more neighbor partitions, coding a merge instruction in the case that the offset information of one of the one or more neighbor partitions is the same as the offset information of the current partition, and coding one of a plurality of prediction instructions in the case that the offset information of the one or more neighbor partitions is not the same as the offset information of the current partition.
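The merge-or-predict decision can be sketched as follows; the tuple representation of SAO information and the instruction names are illustrative assumptions, not the disclosed syntax:

```python
def signal_sao(current, neighbors):
    """`current` and each neighbor are (sao_type, offsets) pairs.
    Return a merge instruction naming the matching neighbor, or a
    prediction-style instruction carrying the current values."""
    for idx, nb in enumerate(neighbors):
        if nb == current:
            return ("merge", idx)   # copy that neighbor's SAO info
    return ("pred", current)        # code offset type and values
```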
Abstract:
In general, techniques are described for performing transform dependent de-blocking filtering, which may be implemented by a video encoding device. The video encoding device may apply a transform to a video data block to generate a block of transform coefficients, apply a quantization parameter to quantize the transform coefficients, and reconstruct the block of video data from the quantized transform coefficients. The video encoding device may further determine at least one offset used in controlling de-blocking filtering based on the size of the applied transform, and perform de-blocking filtering on the reconstructed block of video data based on the determined offset. Additionally, the video encoding device may specify a flag in a picture parameter set (PPS) that indicates whether the offset is specified in one or both of the PPS and a header of an independently decodable unit.
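The transform-size dependency can be sketched as below. The offset values and the threshold formula are hypothetical; the point is only that the filtering threshold varies with the size of the applied transform, as the abstract describes:

```python
# Hypothetical per-transform-size deblocking offsets: larger transforms
# are typically used on smooth areas, where blocking is more visible.
TX_SIZE_OFFSET = {4: 0, 8: 1, 16: 2, 32: 3}

def beta_threshold(qp, tx_size):
    """Toy threshold derivation: the transform-size offset shifts the
    QP used in a simplified beta computation."""
    return max(0, 2 * (qp + TX_SIZE_OFFSET[tx_size]) - 16)
```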
Abstract:
A device for coding video data includes a video coder configured to: determine for a chroma transform block (TB) a sub-sampling format for the chroma TB; based on the sub-sampling format for the chroma TB, identify one or more corresponding luma TBs; determine, for each of the one or more corresponding luma TBs, if the corresponding luma TB is coded using a transform skip mode; and, based on a number of the one or more corresponding luma TBs coded using the transform skip mode being greater than or equal to a threshold value, determine that the chroma TB is coded in the transform skip mode.
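The counting rule reduces to a short predicate; the function below is a sketch under the assumption that the luma transform-skip flags are given as a list of 0/1 values (e.g. four flags for one chroma TB in 4:2:0):

```python
def chroma_is_transform_skip(luma_ts_flags, threshold):
    """Infer the chroma TB's transform-skip mode from its corresponding
    luma TBs: skip when the count of transform-skipped luma TBs is
    greater than or equal to the threshold."""
    return sum(luma_ts_flags) >= threshold
```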
Abstract:
Techniques described herein are related to harmonizing the signaling of coding modes and filtering in video coding. In one example, a method of decoding video data is provided that includes decoding a first syntax element to determine whether PCM coding mode is used for one or more video blocks, wherein the PCM coding mode refers to a mode that codes pixel values as PCM samples. The method further includes decoding a second syntax element to determine whether in-loop filtering is applied to the one or more video blocks. Responsive to the first syntax element indicating that the PCM coding mode is used, the method further includes applying in-loop filtering to the one or more video blocks based at least in part on the second syntax element and decoding the one or more video blocks based at least in part on the first and second syntax elements.
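A minimal sketch of the harmonized rule, with the two decoded flags passed as booleans (the function and its simplified non-PCM branch are assumptions, not the disclosed decoding process):

```python
def apply_in_loop_filtering(pcm_flag, loop_filter_flag):
    """When the first syntax element says PCM coding is used, the second
    syntax element still decides whether in-loop filtering is applied to
    the PCM-coded blocks."""
    if pcm_flag:
        return bool(loop_filter_flag)
    return True  # simplification: non-PCM blocks follow the normal path
```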
Abstract:
This disclosure describes techniques for signaling deblocking filter parameters for a current slice of video data with reduced bitstream overhead. Deblocking filter parameters may be coded in one or more of a picture layer parameter set and a slice header. The techniques reduce a number of bits used to signal the deblocking filter parameters by coding a first syntax element that indicates whether deblocking filter parameters are present in both the picture layer parameter set and the slice header, and only coding a second syntax element in the slice header when both sets of deblocking filter parameters are present. Coding the second syntax element is eliminated when deblocking filter parameters are present in only one of the picture layer parameter set or the slice header. The second syntax element indicates which set of deblocking filter parameters to use to define a deblocking filter applied to a current slice.
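The two-syntax-element scheme can be sketched as a resolution function; the dictionary field names are hypothetical stand-ins for the actual syntax elements:

```python
def resolve_deblock_params(pps, slice_header):
    """`params_in_both` models the first syntax element (in the PPS);
    `override_flag` models the second, which is coded in the slice
    header only when both sets of parameters are present."""
    if pps["params_in_both"]:
        if slice_header["override_flag"]:        # second syntax element
            return slice_header["params"]
        return pps["params"]
    # Parameters present in only one place: no second syntax element is
    # coded, and the single available set is used directly.
    return slice_header.get("params", pps.get("params"))
```

The bit saving comes from the second branch: when only one parameter set exists, the per-slice selection flag is never transmitted.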