Abstract:
Systems and methods are provided for video encoding and decoding using intra-block copy mode when constrained intra-prediction is enabled. In various implementations, a video encoding device can determine a current coding unit for a picture from a plurality of pictures. The video encoding device can further determine that constrained intra-prediction mode is enabled. The video encoding device can further encode the current coding unit using one or more reference samples. The one or more reference samples are determined based on whether a reference sample has been predicted using intra-block copy mode without using any inter-predicted samples. When the reference sample is predicted using intra-block copy mode without using any inter-predicted samples, the reference sample is available for predicting the current coding unit. When the reference sample is predicted using intra-block copy mode with at least one inter-predicted sample, the reference sample is not available for predicting the current coding unit.
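The availability rule above lends itself to a simple check. The following is a minimal C++ sketch, with hypothetical names (PredMode, usesInterSamples, isReferenceSampleAvailable), of how a coder might mark a reference sample as available or unavailable under constrained intra-prediction; it illustrates the stated rule, not the patented implementation.

#include <cstdint>

// Hypothetical prediction-mode tag for the block a reference sample came from.
enum class PredMode : std::uint8_t { Intra, Inter, IntraBlockCopy };

// A reference sample is usable for the current coding unit under constrained
// intra-prediction only if it was intra-coded, or if it was coded with
// intra-block copy and its prediction chain contains no inter-predicted
// samples (tracked here by usesInterSamples).
bool isReferenceSampleAvailable(PredMode mode, bool usesInterSamples,
                                bool constrainedIntraPredEnabled) {
    if (!constrainedIntraPredEnabled) {
        return true;  // No restriction when constrained intra-prediction is off.
    }
    switch (mode) {
        case PredMode::Intra:
            return true;
        case PredMode::IntraBlockCopy:
            return !usesInterSamples;  // Available only if the IBC chain is inter-free.
        case PredMode::Inter:
        default:
            return false;
    }
}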
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores difference video information associated with a difference video layer of pixel information derived from a difference between an enhancement layer and a corresponding base layer of the video information. The processor determines an enhancement layer weight and a base layer weight, and determines a value of a current video unit based on the difference video layer, a value of a video unit in the enhancement layer weighted by the enhancement layer weight, and a value of a video unit in the base layer weighted by the base layer weight.
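As an illustration only, one plausible reading of the weighted combination above is that the current sample equals the difference-layer sample plus the weighted enhancement-layer and base-layer samples. The C++ sketch below assumes that reading, 8-bit samples, and hypothetical names (predictCurrentSample, elWeight, blWeight).

#include <algorithm>
#include <cmath>

// One plausible combination implied by the abstract: difference-layer sample
// plus the enhancement-layer and base-layer samples, each scaled by its
// weight, rounded and clipped to an assumed 8-bit range.
int predictCurrentSample(int diffSample, int elSample, int blSample,
                         double elWeight, double blWeight) {
    double value = diffSample + elWeight * elSample + blWeight * blSample;
    return std::clamp(static_cast<int>(std::lround(value)), 0, 255);
}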
Abstract:
In one example, a device for coding video data includes a video coder configured to: code data indicating whether tile boundaries of different layers of video data are aligned and whether inter-layer prediction is allowed along or across tile boundaries of enhancement layer blocks; code an enhancement layer block in an enhancement layer tile of the video data without using inter-layer prediction from a collocated base layer block for which inter-layer filtering, or reference layer filtering across tile boundaries, is enabled in a reference layer picture of an access unit that includes both the enhancement layer tile and the base layer block; and code the collocated base layer block.
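A minimal C++ sketch of the prediction constraint described above, with a hypothetical function name; it expresses only the condition under which inter-layer prediction from the collocated base layer block would be used, not the full coding process.

// Sketch of the constraint on one enhancement layer block: inter-layer
// prediction from the collocated base layer block is not used when
// inter-layer filtering or reference layer filtering across tile boundaries
// is enabled in the reference layer picture of the same access unit. The
// separately coded flags (tile-boundary alignment, whether inter-layer
// prediction is allowed across tile boundaries) would be checked elsewhere
// and are omitted here.
bool useInterLayerPredictionFromCollocatedBaseBlock(
        bool crossTileBoundaryFilteringEnabledInRefLayer) {
    return !crossTileBoundaryFilteringEnabledInRefLayer;
}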
Abstract:
A device for coding video data includes a memory storing video data and a video coder including one or more processors configured to determine that a current coding unit of the video data is coded in a palette mode; and determine a palette for the coding unit by, for a first entry of the palette, choosing a predictor sample from a reconstructed neighboring block of the coding unit and coding a difference between one or more color values of the first entry and one or more color values of the predictor sample.
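As an illustration, the decoder-side counterpart of the first-entry coding described above can be sketched in a few lines of C++; the Color struct and the names predictorSample and codedDiff are assumptions, not the patent's terms.

// Hypothetical per-sample color value (e.g., Y, Cb, Cr or R, G, B components).
struct Color { int comp[3]; };

// The first palette entry is signaled as a difference against a predictor
// sample taken from a reconstructed neighboring block, so a decoder would
// rebuild it by adding the coded differences back to that predictor sample.
Color reconstructFirstPaletteEntry(const Color& predictorSample,
                                   const Color& codedDiff) {
    Color entry{};
    for (int c = 0; c < 3; ++c) {
        entry.comp[c] = predictorSample.comp[c] + codedDiff.comp[c];
    }
    return entry;
}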
Abstract:
A device for decoding video data is configured to determine, based on a chroma sampling format for the video data, that adaptive color transform is enabled for one or more blocks of the video data; determine a quantization parameter for the one or more blocks based on determining that the adaptive color transform is enabled; and dequantize transform coefficients based on the determined quantization parameter. A device for decoding video data is configured to determine, for one or more blocks of the video data, that adaptive color transform is enabled; receive, in a picture parameter set, one or more offset values in response to adaptive color transform being enabled; determine a quantization parameter for a first color component of a first color space based on a first of the one or more offset values; and dequantize transform coefficients based on the quantization parameter.
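A minimal C++ sketch of the offset-based derivation described above, assuming the offsets signaled in the picture parameter set are simply added to the block quantization parameter per color component; the names and the additive rule are assumptions for illustration.

// Hypothetical derivation: when adaptive color transform is enabled, the
// quantization parameter for a color component is adjusted by the offset
// received in the picture parameter set for that component before the
// transform coefficients are dequantized.
int deriveComponentQp(int blockQp, bool adaptiveColorTransformEnabled,
                      const int ppsActQpOffsets[3], int colorComponent) {
    if (!adaptiveColorTransformEnabled) {
        return blockQp;
    }
    return blockQp + ppsActQpOffsets[colorComponent];
}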
Abstract:
A device for decoding video data is configured to determine, for one or more blocks of the video data, that adaptive color transform is enabled; determine a quantization parameter for the one or more blocks; in response to a value of the quantization parameter being below a threshold, modify the quantization parameter to determine a modified quantization parameter; and dequantize transform coefficients based on the modified quantization parameter.
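A minimal C++ sketch of the modification described above; clamping to the threshold is one plausible modification, and the names, the threshold value, and the exact rule are assumptions rather than the patent's text.

// Sketch: when adaptive color transform is enabled and the derived QP falls
// below a threshold, a modified QP is substituted before dequantization.
// Clamping to the threshold is an assumed form of the modification.
int deriveDequantQp(int derivedQp, bool adaptiveColorTransformEnabled,
                    int qpThreshold) {
    if (adaptiveColorTransformEnabled && derivedQp < qpThreshold) {
        return qpThreshold;  // Modified QP used to dequantize the coefficients.
    }
    return derivedQp;
}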
Abstract:
A video coder may include a current picture and a reference picture in a reference picture list. The video coder may determine a co-located block of the reference picture. The co-located block is co-located with a current block of the current picture. Furthermore, the video coder may derive a temporal motion vector predictor from the co-located block and may determine that the temporal motion vector predictor has sub-pixel precision. The video coder may right-shift the temporal motion vector predictor determined to have sub-pixel precision. In addition, the video coder may determine, based on the right-shifted temporal motion vector predictor, a predictive block within the current picture.
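A minimal C++ sketch of the right-shift step described above, with hypothetical names; the shift amount of 2 (quarter-pel to integer precision) is an assumption for illustration, and an arithmetic right shift is assumed for negative components.

struct MotionVector { int x; int y; };

// If the temporal motion vector predictor derived from the co-located block
// carries sub-pixel precision, it is right-shifted before being used to
// locate a predictive block in the current picture.
MotionVector adjustTemporalMvp(MotionVector tmvp, bool hasSubPixelPrecision) {
    if (hasSubPixelPrecision) {
        tmvp.x >>= 2;  // Assumed quarter-pel -> integer-pel conversion.
        tmvp.y >>= 2;
    }
    return tmvp;
}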
Abstract:
A method for motion vector difference (MVD) coding of screen content video data is disclosed. In one aspect, the method includes determining an MVD between a predicted motion vector and a current motion vector and generating a binary string comprising n bins via binarizing the MVD. The method further includes determining whether an absolute value of the MVD is greater than a threshold value and encoding a subset of the n bins via an exponential Golomb code having an order that is greater than one in response to the absolute value of the MVD being greater than the threshold value.
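A minimal C++ sketch of the escape path described above: the bins covering the part of the MVD magnitude beyond the threshold are coded with an exponential Golomb code whose order is greater than one. The function names and the exact mapping from magnitude to the Exp-Golomb input are illustrative assumptions.

#include <cstdlib>
#include <string>

// Standard k-th order Exp-Golomb encoding of a non-negative value: prefix of
// zeros giving the length, then the value bits of (value + 2^k), MSB first.
std::string expGolombEncode(unsigned value, unsigned k) {
    unsigned shifted = value + (1u << k);
    unsigned numBits = 0;
    for (unsigned v = shifted; v != 0; v >>= 1) ++numBits;
    std::string bits(numBits - k - 1, '0');            // Prefix zeros.
    for (int i = static_cast<int>(numBits) - 1; i >= 0; --i) {
        bits += ((shifted >> i) & 1u) ? '1' : '0';     // Value bits.
    }
    return bits;
}

// Hypothetical suffix coder: when |MVD| exceeds the threshold, the remaining
// magnitude is coded with an Exp-Golomb code of order greater than one.
std::string encodeMvdSuffix(int mvd, int threshold, unsigned order /* > 1 */) {
    unsigned magnitude = static_cast<unsigned>(std::abs(mvd));
    if (magnitude <= static_cast<unsigned>(threshold)) {
        return "";  // No escape suffix; the magnitude fits in the regular bins.
    }
    return expGolombEncode(magnitude - threshold - 1, order);
}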
Abstract:
Techniques and systems are provided for encoding and decoding video data. For example, a method of encoding video data including a plurality of pictures is described. The method includes performing intra-picture prediction on a block of one of the pictures to generate a prediction unit. Performing the intra-picture prediction includes selecting a reference block for intra-block copy prediction of a coding tree unit (CTU). The reference block is selected from a plurality of encoded blocks, and blocks within the CTU encoded with bi-prediction are excluded from selection as the reference block. Performing the intra-picture prediction further includes performing intra-block copy prediction with the selected reference block to generate the prediction unit. The method also includes generating syntax elements encoding the prediction unit based on the performed intra-picture prediction.
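A minimal C++ sketch of the selection constraint described above, with hypothetical names and a cost-based selection that is purely illustrative: candidate blocks inside the current CTU that were encoded with bi-prediction are excluded from selection as the intra-block copy reference.

#include <optional>
#include <vector>

struct CandidateBlock {
    int id;
    bool insideCurrentCtu;
    bool codedWithBiPrediction;
    double matchCost;  // Lower is a better intra-block copy match (assumed metric).
};

// Pick the best intra-block copy reference among previously encoded blocks,
// skipping blocks within the current CTU that were encoded with bi-prediction.
std::optional<CandidateBlock> selectIbcReference(
        const std::vector<CandidateBlock>& encodedBlocks) {
    std::optional<CandidateBlock> best;
    for (const auto& block : encodedBlocks) {
        if (block.insideCurrentCtu && block.codedWithBiPrediction) {
            continue;  // Excluded from selection as the reference block.
        }
        if (!best || block.matchCost < best->matchCost) {
            best = block;
        }
    }
    return best;
}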
Abstract:
Techniques and systems are provided for encoding and decoding video data. For example, a method of encoding video data includes obtaining video data at an encoder, and determining to perform intra-picture prediction on the video data, using intra-block copy prediction, to generate a plurality of encoded video pictures. The method also includes performing the intra-picture prediction on the video data using the intra-block copy prediction, and, in response to determining to perform the intra-picture prediction on the video data using the intra-block copy prediction, disabling at least one of inter-picture bi-prediction or inter-picture uni-prediction for the plurality of encoded video pictures. The method also includes generating the plurality of encoded video pictures based on the obtained video data according to the performed intra-block copy prediction.
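A minimal C++ sketch of the configuration step described above, with a hypothetical EncoderConfig; disabling bi-prediction is shown here, which is one of the two options ("at least one of inter-picture bi-prediction or inter-picture uni-prediction") the abstract allows.

struct EncoderConfig {
    bool useIntraBlockCopy = false;
    bool allowBiPrediction = true;
    bool allowUniPrediction = true;
};

// Once the encoder decides to use intra-block copy prediction for the
// pictures, it disables at least one of the inter-picture prediction modes;
// bi-prediction is disabled in this illustrative choice.
void configureForIntraBlockCopy(EncoderConfig& cfg) {
    cfg.useIntraBlockCopy = true;
    cfg.allowBiPrediction = false;
}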