Abstract:
A device for decoding video data includes a memory configured to store video data, and at least one processor. The at least one processor is configured to: determine a first bit-depth of luma residual samples for a block of video data, determine a second bit-depth of predicted chroma residual samples for the block of video data, adjust the luma residual samples based on the first bit-depth and the second bit-depth to produce bit-depth adjusted luma residual samples, determine chroma residual samples for the block of video data based on the bit-depth adjusted luma residual samples and the predicted chroma residual samples, and decode the block of video data based on the luma residual samples and the chroma residual samples.
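A minimal sketch of the bit-depth alignment step described above, assuming integer residual samples and a simple arithmetic-shift alignment between the luma and chroma bit depths; the cross-component scale factor (alpha) and its right shift by 3 are illustrative placeholders, not values taken from the abstract.

    #include <cstdint>

    // Align a luma residual sample to the chroma residual bit depth.
    // (Assumed alignment rule: shift by the bit-depth difference.)
    static int32_t alignLumaResidual(int32_t lumaResi, int bitDepthLuma, int bitDepthChroma) {
        int shift = bitDepthLuma - bitDepthChroma;
        if (shift > 0) return lumaResi >> shift;          // luma stored at higher precision
        if (shift < 0) return lumaResi * (1 << (-shift)); // luma stored at lower precision
        return lumaResi;
    }

    // Reconstruct one chroma residual sample from the predicted chroma residual
    // and the bit-depth adjusted luma residual (hypothetical scale alpha, >> 3).
    static int32_t reconstructChromaResidual(int32_t predChromaResi, int32_t lumaResi,
                                             int bitDepthLuma, int bitDepthChroma, int alpha) {
        int32_t adjLuma = alignLumaResidual(lumaResi, bitDepthLuma, bitDepthChroma);
        return predChromaResi + ((alpha * adjLuma) >> 3);
    }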
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with an enhancement layer having a first block and a base layer having a second block, the second block in the base layer corresponding to the first block in the enhancement layer. The processor is configured to predict, by inter-layer prediction, the first block in the enhancement layer based on information derived from the second block in the base layer. At least a portion of the second block is located outside of a reference region of the base layer, the reference region being available for use for the inter-layer prediction of the first block. The processor may encode or decode the video information.
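A small sketch of the reference-region handling implied above, assuming a rectangular reference region in the base layer and simple border clamping for the portion of the corresponding base-layer block that falls outside that region; the structure names and the clamping policy are illustrative assumptions.

    #include <algorithm>
    #include <cstdint>

    struct Rect { int x0, y0, x1, y1; };   // inclusive reference region in the base layer

    // Fetch a base-layer sample for inter-layer prediction. Coordinates that fall
    // outside the reference region are clamped to its border (assumed policy), so
    // the enhancement-layer block can still be predicted even when part of the
    // corresponding base-layer block lies outside the region.
    static int16_t fetchBaseLayerSample(const int16_t* base, int stride,
                                        int x, int y, const Rect& refRegion) {
        int cx = std::min(std::max(x, refRegion.x0), refRegion.x1);
        int cy = std::min(std::max(y, refRegion.y0), refRegion.y1);
        return base[cy * stride + cx];
    }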
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video information associated with a reference layer. The processor determines a value of a current video unit based, at least in part, on a reconstruction value associated with the reference layer and an adjusted difference prediction value. The adjusted difference prediction value is equal to the difference between a prediction of a current layer and a prediction of the reference layer, multiplied by a weighting factor that is different from 1.
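A one-line sketch of the prediction rule described above, assuming integer samples and a weighting factor expressed as a rational number (wNum/wDen) so that values such as 1/2 can be represented exactly; the fixed-point form is an assumption.

    #include <cstdint>

    // Adjusted difference prediction for one sample:
    //   value = reconRef + w * (predCur - predRef),  with w != 1.
    // The weight is applied as wNum/wDen (assumed representation).
    static int32_t adjustedDifferencePrediction(int32_t reconRef, int32_t predCur,
                                                int32_t predRef, int wNum, int wDen) {
        int32_t diff = predCur - predRef;
        return reconRef + (wNum * diff) / wDen;   // e.g. wNum = 1, wDen = 2 gives w = 0.5
    }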
Abstract:
An example device includes a memory device configured to store encoded video data, and processing circuitry coupled to the memory device. The processing circuitry is configured to determine that a rectangular transform unit (TU) of the stored video data includes a number of pixel rows denoted by a first integer value ‘K’ and a number of pixel columns denoted by a second integer value ‘L,’ where K has a value equal to one left shifted by an integer value ‘m,’ and where L has a value equal to one left shifted by an integer value ‘n.’ The processing circuitry is further configured to determine that the sum of m and n is an odd number and, based on the sum of m and n being odd, to add a delta quantization parameter (QP) value to a QP value for the rectangular TU to obtain a modified QP value for the rectangular TU.
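A compact sketch of the QP adjustment described above, reading K as 1 << m and L as 1 << n; the particular delta value used here (3) is an illustrative assumption and is not taken from the abstract.

    #include <cstdint>

    // QP adjustment for a rectangular TU of size K x L with K = 1 << m, L = 1 << n.
    // When m + n is odd, the TU area is not a power of four and a delta QP is
    // added to compensate (assumed delta of 3; the exact value is not specified here).
    static int modifiedQpForRectangularTu(int qp, int m, int n) {
        const int kDeltaQp = 3;              // illustrative delta QP value
        if (((m + n) & 1) != 0) {            // sum of the exponents is odd
            return qp + kDeltaQp;
        }
        return qp;                           // even-sum case: QP unchanged
    }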
Abstract:
A device for decoding video data includes one or more processors configured to derive M most probable modes (MPMs) for intra prediction of a block of video data, wherein M is greater than 3. The one or more processors decode a syntax element that indicates whether an MPM index or a non-MPM index is used to indicate a selected intra prediction mode, from among a plurality of intra prediction modes, for intra prediction of the block of video data. The one or more processors decode the indicated one of the MPM index or the non-MPM index. Furthermore, the one or more processors reconstruct the block of video data based on the selected intra prediction mode.
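A schematic sketch of the mode-selection signalling described above, assuming the M derived MPMs are already available and using a hypothetical entropy-decoder interface (readFlag, readMpmIndex, readNonMpmIndex) as a stand-in for the actual bitstream parsing.

    #include <vector>

    struct BitReader {                         // stand-in entropy-decoder interface
        virtual bool readFlag() = 0;           // 1-bit syntax element
        virtual int  readMpmIndex(int m) = 0;  // index into the M most probable modes
        virtual int  readNonMpmIndex() = 0;    // index into the remaining (non-MPM) modes
        virtual ~BitReader() = default;
    };

    // Decode the selected intra prediction mode, where mpmList holds M > 3 MPMs and
    // nonMpmModes holds the remaining modes of the plurality, in order.
    static int decodeIntraMode(BitReader& br,
                               const std::vector<int>& mpmList,
                               const std::vector<int>& nonMpmModes) {
        bool useMpm = br.readFlag();           // MPM index or non-MPM index?
        if (useMpm) {
            return mpmList[br.readMpmIndex((int)mpmList.size())];
        }
        return nonMpmModes[br.readNonMpmIndex()];
    }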
Abstract:
A device for decoding video data includes one or more processors configured to decode syntax information that indicates a selected intra prediction mode for a block of video data from among a plurality of intra prediction modes. The plurality of intra prediction modes includes more than 33 angular intra prediction modes. The angular intra prediction modes are defined such that interpolation is performed with 1/32-pel accuracy. The one or more processors reconstruct the block of video data based on the selected intra prediction mode.
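A small sketch of 1/32-pel angular interpolation, assuming the HEVC-style two-tap filter form pred = ((32 - frac) * ref[idx] + frac * ref[idx + 1] + 16) >> 5; the projected reference-array layout and indexing are simplified for illustration.

    #include <cstdint>

    // Two-tap angular interpolation at 1/32-pel accuracy (HEVC-style form).
    // 'ref' is the 1-D projected reference sample array for the angular mode and
    // 'intraPredAngle' is the per-mode angle step in 1/32-pel units (assumed
    // already looked up; ref[] is assumed to cover idx and idx + 1).
    static int16_t predictAngularSample(const int16_t* ref, int x, int intraPredAngle) {
        int pos  = (x + 1) * intraPredAngle;  // position in 1/32-pel units
        int idx  = pos >> 5;                  // integer reference index
        int frac = pos & 31;                  // 1/32-pel fractional part
        return (int16_t)(((32 - frac) * ref[idx] + frac * ref[idx + 1] + 16) >> 5);
    }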
Abstract:
A device includes one or more processors configured to derive M most probable modes (MPMs) for intra prediction of a block of video data. As part of deriving the M most probable modes, the one or more processors define a representative intra prediction mode for a left neighboring column and use the representative intra prediction mode for the left neighboring column as an MPM for the left neighboring column, and/or define a representative intra prediction mode for an above neighboring row and use the representative intra prediction mode for the above neighboring row as an MPM for the above neighboring row. The one or more processors decode a syntax element that indicates whether an MPM index or a non-MPM index is used to indicate a selected intra prediction mode for intra prediction of the block. The one or more processors reconstruct the block based on the selected intra prediction mode.
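A sketch of one plausible way to form the representative mode described above: take the most frequently occurring intra mode among the neighboring blocks that cover the left neighboring column (or the above neighboring row). The abstract does not fix the derivation rule, so the majority-vote choice and tie handling here are assumptions.

    #include <map>
    #include <vector>

    // Derive a representative intra prediction mode for a neighboring column or row,
    // given the intra modes of the neighboring blocks that cover it. The rule used
    // here (most frequently occurring mode; on a tie, the mode that reaches that
    // count first) is an assumption. The result is then inserted into the MPM list.
    static int representativeIntraMode(const std::vector<int>& neighborModes) {
        std::map<int, int> count;
        int best = neighborModes.empty() ? 0 : neighborModes.front();
        int bestCount = 0;
        for (int mode : neighborModes) {
            int c = ++count[mode];
            if (c > bestCount) { bestCount = c; best = mode; }
        }
        return best;
    }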
Abstract:
A video coding device includes processor(s) configured to determine, for each of a plurality of bins of a value for a syntax element of a current transform coefficient, contexts using respective corresponding bins of values for the syntax element of previously coded transform coefficients. The processor(s) are configured to determine a context for an ith bin of the value for the syntax element of the current transform coefficient using a corresponding ith bin of a value for the syntax element of a previously coded transform coefficient. To use the corresponding ith bin of the value for the syntax element of the previously coded transform coefficient, the processor(s) are configured to use only the ith bin, and no other bins, of the value for the syntax element of the previously coded transform coefficient. ‘i’ represents a non-negative integer.
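A schematic sketch of the per-bin context selection described above, assuming the binarized value of the previously coded transform coefficient is stored as a bit vector and that the context index is a base offset plus a per-position term; the offset scheme and the fallback for a missing bin are illustrative assumptions.

    #include <cstdint>
    #include <vector>

    // Select a context for the i-th bin of the current coefficient's syntax-element
    // value using only the i-th bin of a previously coded coefficient's value for
    // the same syntax element (no other bins of that value are consulted).
    // 'prevBins' holds the binarized value of the previously coded coefficient.
    static int contextForBin(const std::vector<uint8_t>& prevBins, int i, int baseCtxOffset) {
        // If the previous value has no i-th bin, fall back to 0 (assumed policy).
        int prevBinI = (i < (int)prevBins.size()) ? prevBins[i] : 0;
        // Hypothetical mapping: two contexts per bin position, chosen by prevBinI.
        return baseCtxOffset + 2 * i + prevBinI;
    }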
Abstract:
In an example, a method of decoding video data includes selecting a motion information derivation mode from a plurality of motion information derivation modes for determining motion information for a current block, where each motion information derivation mode of the plurality comprises performing a motion search for a first set of reference data that corresponds to a second set of reference data outside of the current block, and where the motion information indicates motion of the current block relative to reference video data. The method also includes determining the motion information for the current block using the selected motion information derivation mode. The method also includes decoding the current block using the determined motion information and without decoding syntax elements representative of the motion information.
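A sketch of the derivation step, modelling a motion information derivation mode as a matching-cost function that compares a first set of reference data (selected by a candidate motion vector) against a second set of reference data outside the current block, such as a template or samples from another reference picture. This modelling and the exhaustive candidate loop are illustrative assumptions; only the matching principle comes from the abstract.

    #include <cstdint>
    #include <functional>
    #include <limits>
    #include <vector>

    struct MotionVector { int x, y; };

    // Matching cost of one candidate motion vector under the selected derivation mode.
    using MatchingCost = std::function<uint64_t(const MotionVector&)>;

    // Derive motion information with the selected mode by minimizing its matching
    // cost over a set of candidate motion vectors (no motion syntax is decoded).
    static MotionVector deriveMotion(const MatchingCost& selectedModeCost,
                                     const std::vector<MotionVector>& candidates) {
        MotionVector best{0, 0};
        uint64_t bestCost = std::numeric_limits<uint64_t>::max();
        for (const MotionVector& mv : candidates) {
            uint64_t cost = selectedModeCost(mv);
            if (cost < bestCost) { bestCost = cost; best = mv; }
        }
        return best;
    }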
Abstract:
Examples include a device for coding video data, the device including a memory configured to store video data, and one or more processors configured to obtain adaptive loop filtering (ALF) information for a current coding tree unit (CTU) from one or more of: (i) one or more spatial neighbor CTUs of the current CTU or (ii) one or more temporal neighbor CTUs of the current CTU, to form a candidate list based at least partially on the obtained ALF information for the current CTU, and to perform a filtering operation on the current CTU using ALF information associated with a candidate from the candidate list. Coding video data includes encoding video data, decoding video data, or both encoding and decoding video data.
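A sketch of the candidate-list construction described above, assuming each spatial or temporal neighbor CTU contributes a set of ALF parameters and that duplicate entries are simply skipped; the ALF parameter representation, the pruning rule, and the list-size limit are assumptions.

    #include <vector>

    struct AlfParams {
        std::vector<int> coeffs;              // filter coefficients (representation assumed)
        bool operator==(const AlfParams& o) const { return coeffs == o.coeffs; }
    };

    // Build an ALF candidate list for the current CTU from the ALF information of
    // spatial and temporal neighbor CTUs, skipping duplicates (assumed pruning).
    static std::vector<AlfParams> buildAlfCandidateList(
            const std::vector<AlfParams>& spatialNeighborAlf,
            const std::vector<AlfParams>& temporalNeighborAlf,
            size_t maxCandidates) {
        std::vector<AlfParams> list;
        auto addUnique = [&](const AlfParams& p) {
            if (list.size() >= maxCandidates) return;
            for (const AlfParams& q : list) if (q == p) return;
            list.push_back(p);
        };
        for (const AlfParams& p : spatialNeighborAlf)  addUnique(p);
        for (const AlfParams& p : temporalNeighborAlf) addUnique(p);
        return list;
    }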