Abstract:
Techniques and systems are provided for encoding and decoding video data. For example, a method of encoding video data including a plurality of pictures is described. The method includes performing intra-picture prediction on a block of one of the pictures to generate a prediction unit. Performing the intra-picture prediction includes selecting a reference block for intra-block copy prediction of a coding tree unit (CTU). The reference block is selected from a plurality of encoded blocks, and blocks within the CTU encoded with bi-prediction are excluded from selection as the reference block. Performing the intra-picture prediction further includes performing intra-block copy prediction with the selected reference block to generate the prediction unit. The method also includes generating syntax elements encoding the prediction unit based on the performed intra-picture prediction.
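A minimal sketch of the reference-block selection described above, assuming a hypothetical Block structure that records each candidate's prediction mode and using sum of absolute differences as the matching cost (neither detail is specified by the abstract):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Block:
    samples: List[int]   # flattened sample values of the block
    mode: str            # e.g. "intra", "uni_prediction", "bi_prediction"

def select_ibc_reference(current: Block, candidates: List[Block]) -> Optional[Block]:
    """Pick an intra-block-copy reference from previously encoded blocks,
    skipping any candidate within the CTU that was coded with bi-prediction."""
    best_ref, best_cost = None, float("inf")
    for cand in candidates:
        if cand.mode == "bi_prediction":          # excluded from selection
            continue
        cost = sum(abs(a - b) for a, b in zip(current.samples, cand.samples))
        if cost < best_cost:
            best_ref, best_cost = cand, cost
    return best_ref
```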
Abstract:
Techniques and systems are provided for encoding and decoding video data. For example, a method of encoding video data includes obtaining video data at an encoder, and determining to perform intra-picture prediction on the video data, using intra-block copy prediction, to generate a plurality of encoded video pictures. The method also includes performing the intra-picture prediction on the video data using the intra-block copy prediction, and, in response to determining to perform the intra-picture prediction on the video data using the intra-block copy prediction, disabling at least one of inter-picture bi-prediction or inter-picture uni-prediction for the plurality of encoded video pictures. The method also includes generating the plurality of encoded video pictures based on the obtained video data according to the performed intra-block copy prediction.
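As a rough illustration of the mode gating described above, using hypothetical configuration flags rather than any codec's actual API:

```python
def configure_prediction(use_intra_block_copy: bool, config: dict) -> dict:
    """When intra-block copy is chosen for the pictures, disable at least one of
    inter-picture bi-prediction or inter-picture uni-prediction (both here)."""
    config["intra_block_copy"] = use_intra_block_copy
    if use_intra_block_copy:
        config["inter_bi_prediction"] = False    # disabled alongside IBC
        config["inter_uni_prediction"] = False   # abstract requires "at least one of"
    return config
```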
Abstract:
An example device for accessing image data includes a memory configured to store image data, the memory comprising a first region and a second region, and one or more processing units implemented in circuitry and configured to code most significant bits (MSBs) of a plurality of residuals of samples of a block of an image, each of the residuals representing a respective difference value between a respective raw sample value and a respective predicted value for the respective raw sample value, access the coded MSBs in the first region of the memory, determine whether to represent the residuals using both the MSBs and least significant bits (LSBs) of the plurality of residuals of the samples, and in response to determining not to represent the residuals using the LSBs, prevent access of the LSBs in the second region of the memory.
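A sketch of the MSB/LSB split and the conditional access to the second memory region, assuming non-negative residual magnitudes and an illustrative split point (sign handling and bit widths are simplifications, not from the abstract):

```python
def split_residual(residual: int, lsb_bits: int = 4):
    """Split a non-negative residual magnitude into MSB and LSB parts."""
    return residual >> lsb_bits, residual & ((1 << lsb_bits) - 1)

def reconstruct(msb_region, lsb_region, use_lsbs: bool, lsb_bits: int = 4):
    """Read MSBs from the first region; access the second (LSB) region only
    when the residuals are represented with both MSBs and LSBs."""
    values = []
    for i, msb in enumerate(msb_region):
        value = msb << lsb_bits
        if use_lsbs:                  # otherwise the LSB region is never touched
            value |= lsb_region[i]
        values.append(value)
    return values
```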
Abstract:
In general, the disclosure describes a video coding device (e.g., a video encoder or a video decoder) configured to perform various transformations on video data. The video coding device applies a primary transform, having a first size, to a block of the video data. The video coding device determines whether application of a secondary transform, having a second size, to a sub-block of the block is allowed, the sub-block being at least a portion of the block. Application of the secondary transform is disallowed when the first size is equal to the second size. Based on the application of the secondary transform being allowed, the video coding device applies the secondary transform to the sub-block. Application of the primary transform and the secondary transform constructs a residual block in a pixel domain.
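A sketch of the size check that gates the secondary transform; the transforms themselves are left abstract (the `apply_secondary` callable is a placeholder, not part of the disclosure):

```python
def maybe_apply_secondary_transform(sub_block, primary_size, secondary_size,
                                    apply_secondary):
    """Apply the secondary transform to the sub-block only when allowed;
    application is disallowed when the secondary size equals the primary size."""
    if secondary_size == primary_size:
        return sub_block                      # secondary transform not applied
    return apply_secondary(sub_block)
```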
Abstract:
Provided are techniques for low complexity video coding. For example, a video coder may be configured to calculate a first sum of absolute difference (SAD) value between a CU block and a corresponding block in a reference frame for the largest coding unit (LCU). The video coder may define conditions (e.g., background and/or homogeneous conditions) for branching based at least in part on the first SAD value. The video coder may also determine the branching based on detecting the background or homogeneous condition, the branching including a first branch corresponding to both a first CU size of the CU block and a second CU size of a sub-block of the CU block. The video coder may then set the first branch to correspond to the first CU size if the first CU size or the second CU size satisfies the background condition.
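One way to read the branching decision, sketched with hypothetical threshold values and CU sizes for the background condition (the abstract does not specify either):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between co-located samples."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def choose_branch(cu_block, ref_block, bg_threshold=64,
                  first_cu_size=64, second_cu_size=32):
    """Compute the SAD for the largest coding unit and, when the background
    (or homogeneous) condition holds, collapse the branch to the first CU size."""
    first_sad = sad(cu_block, ref_block)
    background = first_sad < bg_threshold        # illustrative condition only
    branch = {"sizes": [first_cu_size, second_cu_size]}
    if background:
        branch["sizes"] = [first_cu_size]        # sub-block size is not evaluated
    return branch
```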
Abstract:
In an example, a method of transforming video data in video coding includes applying a first stage of a two-dimensional transform to a block of video data values to generate a block of first stage results, and applying a second stage of the two-dimensional transform to the block of first stage results without reordering the first stage results to generate a block of second stage results.
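A minimal sketch of a separable two-stage transform operating directly on the first-stage results, with no transpose or other reordering between the stages; the basis matrix is purely illustrative (e.g., any N x N DCT approximation):

```python
def transform_2d(block, basis):
    """Apply a two-dimensional transform in two stages: transform the rows,
    then transform the columns of the first-stage results in place,
    without reordering the first-stage results between the stages."""
    n = len(block)
    # First stage: transform each row of the input block.
    stage1 = [[sum(basis[k][j] * block[i][j] for j in range(n)) for k in range(n)]
              for i in range(n)]
    # Second stage: transform each column of the first-stage results directly.
    stage2 = [[sum(basis[k][i] * stage1[i][j] for i in range(n)) for j in range(n)]
              for k in range(n)]
    return stage2
```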
Abstract:
A device for encoding or decoding video data may clip first residual data based on a bit depth of the first residual data. The device may generate second residual data at least in part by applying an inverse Adaptive Color Transform (IACT) to the first residual data. Furthermore, the device may reconstruct, based on the second residual data, a coding block of a coding unit (CU) of the video data.
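A sketch of the decode-side steps, assuming the color transform is a YCgCo-style transform (as used by the Adaptive Color Transform in screen content coding) and a clipping range tied to the residual bit depth; the exact range and transform variant here are illustrative assumptions:

```python
def clip_residual(r: int, bit_depth: int) -> int:
    """Clip a residual to a signed range implied by its bit depth (illustrative)."""
    lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
    return max(lo, min(hi, r))

def inverse_act(y: int, cg: int, co: int, bit_depth: int):
    """Clip the first residual data, then apply an inverse YCgCo-style color
    transform to obtain the second (RGB-domain) residual data."""
    y, cg, co = (clip_residual(v, bit_depth) for v in (y, cg, co))
    tmp = y - cg
    g = y + cg
    b = tmp - co
    r = tmp + co
    return r, g, b
```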
Abstract:
A method for decoding video data provided in a bitstream, where the bitstream includes a coding unit (CU) coded in palette mode, includes: parsing a palette associated with the CU provided in the bitstream; parsing one or more run lengths provided in the bitstream that are associated with the CU; parsing one or more index values provided in the bitstream that are associated with the CU; and parsing one or more escape pixel values provided in the bitstream that are associated with the CU. The escape pixel values may be parsed from consecutive positions in the bitstream, the consecutive positions being in the bitstream after all of the run lengths and the index values associated with the CU. The method may further include decoding the CU based on the parsed palette, parsed run lengths, parsed index values, and parsed escape values.
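A sketch of the parsing order described above, using a hypothetical bitstream-reader interface (`read_palette`, `read_run`, `read_index`, and `read_escape` are placeholders, not actual syntax-element names):

```python
def parse_palette_cu(reader, num_runs: int, num_escapes: int):
    """Parse a palette-mode CU: the palette first, then all run lengths and
    index values, and finally all escape pixel values, which sit at
    consecutive positions after the runs and indices in the bitstream."""
    palette = reader.read_palette()
    runs = [reader.read_run() for _ in range(num_runs)]
    indices = [reader.read_index() for _ in range(num_runs)]
    escapes = [reader.read_escape() for _ in range(num_escapes)]  # grouped at the end
    return palette, runs, indices, escapes
```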
Abstract:
A device for decoding video data includes a memory configured to store video data; and one or more processors implemented in circuitry and configured to: determine a deterministic bounding box from which to retrieve reference samples of reference pictures of video data for performing decoder-side motion vector derivation (DMVD) for a current block of the video data; derive a motion vector for the current block according to DMVD using the reference samples within the deterministic bounding box; form a prediction block using the motion vector; and decode the current block using the prediction block.
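A sketch of constraining the motion refinement, where the deterministic bounding box is taken as a fixed margin around the initial motion vector; the margin, search pattern, and cost callable are illustrative assumptions, not details from the abstract:

```python
def derive_mv_within_box(initial_mv, cost_fn, margin=8, search_range=2):
    """Refine a motion vector for DMVD while restricting every candidate to a
    deterministic bounding box around the initial motion vector, so only
    reference samples inside the box are ever fetched."""
    x0, y0 = initial_mv
    box = (x0 - margin, y0 - margin, x0 + margin, y0 + margin)
    best_mv, best_cost = initial_mv, cost_fn(initial_mv)
    for dx in range(-search_range, search_range + 1):
        for dy in range(-search_range, search_range + 1):
            cand = (x0 + dx, y0 + dy)
            if not (box[0] <= cand[0] <= box[2] and box[1] <= cand[1] <= box[3]):
                continue                       # never reach outside the bounding box
            cost = cost_fn(cand)
            if cost < best_cost:
                best_mv, best_cost = cand, cost
    return best_mv
```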