Abstract:
An apparatus for coding video information according to certain aspects includes a memory and a processor. The memory is configured to store video information associated with one or more layers. The processor is configured to code a current access unit (AU) in a bitstream including a plurality of layers, the plurality of layers including a reference layer and at least one corresponding enhancement layer. The processor is further configured to code a first end of sequence (EOS) network abstraction layer (NAL) unit associated with the reference layer in the current AU, the first EOS NAL unit having the same layer identifier (ID) as the reference layer. The processor is also configured to code a second EOS NAL unit associated with the enhancement layer in the current AU, the second EOS NAL unit having the same layer ID as the enhancement layer.
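A minimal sketch, not the patented implementation, of the behavior described above: one EOS NAL unit is emitted per layer of the current access unit, each carrying that layer's own layer ID. The dictionary-based NAL-unit representation and the example layer IDs are illustrative assumptions; the EOS NAL unit type value 36 is the one defined in HEVC.

    EOS_NUT = 36  # end-of-sequence NAL unit type in HEVC

    def append_eos_nal_units(access_unit, layer_ids):
        """Append one EOS NAL unit per layer, reusing each layer's own ID."""
        for layer_id in layer_ids:
            access_unit.append({"nal_unit_type": EOS_NUT, "nuh_layer_id": layer_id})
        return access_unit

    # Example: a reference layer (ID 0) and one enhancement layer (ID 1).
    au = [{"nal_unit_type": 1, "nuh_layer_id": 0},
          {"nal_unit_type": 1, "nuh_layer_id": 1}]
    append_eos_nal_units(au, [0, 1])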
Abstract:
A computing device determines whether a prediction unit (PU) in a B slice is restricted to uni-directional inter prediction. In addition, the computing device generates a merge candidate list for the PU and determines a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, the computing device generates a predictive video block for the PU based on no more than one reference block associated with motion information specified by the selected merge candidate. If the PU is not restricted to uni-directional inter prediction, the computing device generates the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.
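A minimal sketch of that branch, assuming grayscale numpy pictures and a merge candidate holding an optional (reference picture, motion vector) pair per prediction list; keeping list 0 when the PU is restricted is an assumption made here for illustration, the point being that at most one reference block then contributes to the predictor.

    import numpy as np

    def fetch_block(ref_pic, mv, pos, size):
        """Copy a size x size block from ref_pic at pos displaced by mv
        (assumes the displaced block lies inside the picture)."""
        y, x = pos[0] + mv[0], pos[1] + mv[1]
        return ref_pic[y:y + size, x:x + size].astype(np.int32)

    def predict_pu(candidate, restricted_to_uni, pos, size):
        blk0 = blk1 = None
        if candidate.get("l0") is not None:
            blk0 = fetch_block(*candidate["l0"], pos, size)
        if candidate.get("l1") is not None and not restricted_to_uni:
            blk1 = fetch_block(*candidate["l1"], pos, size)
        if blk0 is not None and blk1 is not None:
            return (blk0 + blk1 + 1) >> 1        # bi-prediction: average both blocks
        return blk0 if blk0 is not None else blk1  # uni-prediction: a single block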
Abstract:
A method of coding video data can include receiving video information associated with a reference layer, an enhancement layer, or both, and generating a plurality of inter-layer reference pictures using a plurality of inter-layer filters and one or more reference layer pictures. The generated plurality of inter-layer reference pictures may be inserted into a reference picture list. A current picture in the enhancement layer may be coded using the reference picture list. The inter-layer filters may comprise default inter-layer filters or alternative inter-layer filters signaled in a sequence parameter set, video parameter set, or slice header.
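A minimal sketch, assuming grayscale numpy pictures, of generating several inter-layer reference pictures from one reference-layer picture with a set of inter-layer filters and inserting them into the reference picture list; the identity and 3-tap smoothing filters below merely stand in for the default or signaled alternative filters.

    import numpy as np

    def smooth_3tap(pic):
        """[1 2 1]/4 smoothing applied horizontally, then vertically (edges wrap)."""
        p = pic.astype(np.int32)
        p = (np.roll(p, 1, 1) + 2 * p + np.roll(p, -1, 1) + 2) // 4
        return (np.roll(p, 1, 0) + 2 * p + np.roll(p, -1, 0) + 2) // 4

    inter_layer_filters = [lambda pic: pic.copy(), smooth_3tap]

    def build_reference_picture_list(ref_layer_pics, existing_refs):
        ref_list = list(existing_refs)
        for pic in ref_layer_pics:
            for apply_filter in inter_layer_filters:
                ref_list.append(apply_filter(pic))  # one inter-layer reference per filter
        return ref_list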
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with an enhancement layer having a first block and a base layer having a second block, the second block in the base layer corresponding to the first block in the enhancement layer. The processor is configured to predict, by inter-layer prediction, the first block in the enhancement layer based on information derived from the second block in the base layer. At least a portion of the second block is located outside of a reference region of the base layer, the reference region being available for use in the inter-layer prediction of the first block. The processor may encode or decode the video information.
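A minimal sketch, assuming grayscale numpy pictures, of one way to handle the situation described above: the enhancement-layer block is predicted from the co-located base-layer block, and base-layer samples that fall outside the reference region are replaced by the nearest sample inside it. The clamping is an illustrative choice, not necessarily the patented derivation.

    import numpy as np

    def predict_from_base_layer(base_pic, region, y0, x0, size):
        top, left, bottom, right = region   # reference region usable for prediction
        pred = np.empty((size, size), base_pic.dtype)
        for dy in range(size):
            for dx in range(size):
                y = min(max(y0 + dy, top), bottom - 1)   # clamp into the region
                x = min(max(x0 + dx, left), right - 1)
                pred[dy, dx] = base_pic[y, x]
        return pred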
Abstract:
In one example, a device for coding video data includes a video coder configured to determine values for coded sub-block flags of one or more neighboring sub-blocks to a current sub-block, determine a context for coding a transform coefficient of the current sub-block based on the values for the coded sub-block flags, and entropy code the transform coefficient using the determined context.
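A minimal sketch of deriving a coding context from the coded sub-block flags of the right and below neighboring sub-blocks, similar in spirit to HEVC significance-flag coding; the offsets below are illustrative and do not reproduce any standardized context table.

    def coefficient_context(csbf_right, csbf_below, pos_in_subblock):
        """csbf_* are 1 if that neighboring sub-block holds any non-zero coefficient."""
        neighbour_pattern = (csbf_right << 1) | csbf_below
        # Each neighbour pattern selects a different set of context indices.
        return 4 * neighbour_pattern + (pos_in_subblock & 3)

    ctx = coefficient_context(1, 0, 5)   # -> context index 9
    # A hypothetical entropy_code(coeff, ctx) call would then use this context.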
Abstract:
Techniques and systems are provided for encoding video data. For example, restrictions on certain prediction modes can be applied for video coding. A restriction can be imposed that prevents bi-directional inter-prediction (bi-prediction) from being performed on video data when certain conditions are met. For example, the bi-prediction restriction can be based on whether intra-block copy prediction is enabled for one or more coding units or blocks of the video data, whether a value of a syntax element indicates that one or more motion vectors have non-integer accuracy, whether both motion vectors of a bi-prediction block have non-integer accuracy, whether the motion vectors of a bi-prediction block are not identical and/or do not use the same reference index, or any combination thereof. If one or more of these conditions are met, the restriction on bi-prediction can be applied, preventing bi-prediction from being performed on certain coding units or blocks.
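A minimal sketch of a predicate that applies the kinds of conditions listed above; the block fields and the quarter-sample motion vector units are illustrative assumptions, and an encoder or decoder would simply skip bi-prediction whenever the predicate returns False.

    def bi_prediction_allowed(blk, intra_block_copy_enabled, fractional_mv_signaled):
        if not blk["is_bi_predicted"]:
            return True                      # nothing to restrict
        if intra_block_copy_enabled:
            return False                     # intra-block copy prediction is enabled
        if fractional_mv_signaled:
            return False                     # syntax element indicates non-integer MVs
        mv0, mv1 = blk["mv_l0"], blk["mv_l1"]
        # Assuming quarter-sample MV units, a non-multiple of 4 is fractional.
        if any(c % 4 for c in mv0) and any(c % 4 for c in mv1):
            return False                     # both motion vectors are non-integer
        if mv0 != mv1 or blk["ref_idx_l0"] != blk["ref_idx_l1"]:
            return False                     # MVs differ or use different references
        return True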
Abstract:
A method of coding delta quantization parameter values is described. In one example, a video decoder may receive a delta quantization parameter (dQP) value for a current quantization block of video data, wherein the dQP value is received whether or not there are non-zero transform coefficients in the current quantization block. In another example, a video decoder may receive the dQP value for the current quantization block of video data only in the case that the QP predictor for the current quantization block has a value of zero, and infer the dQP value to be zero in the case that the QP predictor for the current quantization block has a non-zero value and there are no non-zero transform coefficients in the current quantization block.
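A minimal sketch of the second decoding behavior above: the dQP is parsed only when the QP predictor is zero, and is inferred to be zero when the predictor is non-zero and the block has no non-zero transform coefficients. The parsing helper passed in is hypothetical, and the remaining case is treated conventionally here as an assumption.

    def decode_dqp(qp_predictor, has_nonzero_coeffs, read_dqp):
        if qp_predictor == 0:
            return read_dqp()        # dQP is always signalled in this case
        if not has_nonzero_coeffs:
            return 0                 # inferred to be zero, nothing is parsed
        return read_dqp()            # otherwise parse as usual (assumption)

    def block_qp(qp_predictor, dqp):
        return qp_predictor + dqp    # the block's QP is predictor plus delta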
Abstract:
A device for decoding video data is configured to determine, based on first entropy encoded data in the bitstream, a set of run-related syntax element groups for a current block of a current picture of the video data; determine, based on second entropy encoded data in the bitstream, a set of palette index syntax elements for the current block, the first entropy encoded data occurring in the bitstream before the second entropy encoded data, wherein: each respective run-related syntax element group of the set of run-related syntax element groups indicates a respective type of a respective run of identical palette mode type indicators and a respective length of the respective run, and each respective palette index syntax element of the set of palette index syntax elements indicates an entry in a palette comprising a set of sample values; and reconstruct, based on the sample values in the palette, the current block.
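A minimal sketch of reconstructing a palette-coded block once both bitstream portions have been parsed: the run-related groups (a run type plus a run length) come first, the palette index syntax elements second, and the samples are rebuilt from palette entries. The flat lists, the mode names, and the raster-order COPY_ABOVE handling are illustrative simplifications of the actual syntax.

    COPY_INDEX, COPY_ABOVE = 0, 1     # illustrative palette mode type indicators

    def reconstruct_palette_block(run_groups, palette_indices, palette, width):
        index_map, next_index = [], 0
        for run_type, run_length in run_groups:          # parsed first in the bitstream
            if run_type == COPY_INDEX:
                index_map.extend([palette_indices[next_index]] * run_length)
                next_index += 1                          # one palette index per INDEX run
            else:                                        # COPY_ABOVE run
                for _ in range(run_length):
                    index_map.append(index_map[-width])  # copy from the sample above
        return [palette[i] for i in index_map]           # map indices to sample values

    # Example: a 4-wide block, one INDEX run followed by one COPY_ABOVE run.
    print(reconstruct_palette_block([(COPY_INDEX, 4), (COPY_ABOVE, 4)], [2],
                                    [(0, 0, 0), (255, 255, 255), (30, 90, 200)], 4))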
Abstract:
An example method of encoding video data includes determining a resolution that will be used for a motion vector that identifies a predictor block in a current picture of video data for a current block in the current picture of video data; determining, based on the determined resolution, a search region for the current block such that a size of the search region is smaller where the resolution is fractional-pixel than where the resolution is integer-pixel; selecting, from within the search region, a predictor block for the current block; determining the motion vector that identifies the selected predictor block for the current block; and encoding, in a coded video bitstream, a representation of the motion vector.
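A minimal sketch, assuming grayscale numpy pictures, of the encoder decision described above: the search region in the current picture is made smaller when the motion vector will use fractional-pixel resolution, and the predictor block is then chosen by an exhaustive SAD search. The concrete region sizes and the above-and-left region shape are illustrative assumptions.

    import numpy as np

    def choose_search_region(block_pos, resolution_is_fractional):
        reach = 32 if resolution_is_fractional else 64   # smaller region for fractional MVs
        y0, x0 = block_pos
        return (max(0, y0 - reach), max(0, x0 - reach), y0, x0)  # already-coded area only

    def find_predictor_mv(cur_pic, block_pos, size, region):
        y0, x0 = block_pos
        cur = cur_pic[y0:y0 + size, x0:x0 + size].astype(np.int64)
        top, left, bottom, right = region
        best = None
        for y in range(top, bottom - size + 1):
            for x in range(left, right - size + 1):
                cand = cur_pic[y:y + size, x:x + size].astype(np.int64)
                cost = int(np.abs(cur - cand).sum())          # SAD cost
                if best is None or cost < best[0]:
                    best = (cost, (y - y0, x - x0))            # MV as a displacement
        return best[1] if best else None   # this MV would then be encoded in the bitstream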
Abstract:
In one example, a device includes a video coder configured to: determine, for each reference picture in one or more reference picture lists for a current picture, whether the reference picture is to be included in a plurality of reference pictures based on types for the reference pictures in the reference picture lists; compare picture order count (POC) values of each of the plurality of reference pictures to a POC value of the current picture to determine a motion vector predictor for a current block based on motion vectors of a co-located block of video data in a reference picture of the plurality of reference pictures; determine whether a forward motion vector or a backward motion vector of the co-located block is to be initially used to derive the motion vector predictor; and code a motion vector for the current block of video data relative to the motion vector predictor.
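A minimal sketch of that selection logic: reference pictures are filtered by type, their POC values are compared against the current picture's POC, and either the forward or the backward motion vector of the co-located block is tried first when deriving the predictor. The short-term type filter and the "all references on one side" rule are illustrative assumptions, not the exact derivation claimed above; the current block's motion vector would then be coded relative to the returned predictor.

    def derive_temporal_mv_predictor(current_poc, ref_pics, colocated):
        # Keep only reference pictures of the eligible types (the filter on
        # short-term references here is an illustrative assumption).
        eligible = [p for p in ref_pics if p["type"] == "short_term"]
        before = all(p["poc"] <= current_poc for p in eligible)
        after = all(p["poc"] >= current_poc for p in eligible)
        if before or after:
            # All eligible references lie on one side of the current picture:
            # start from the forward motion vector of the co-located block.
            return colocated.get("mv_forward") or colocated.get("mv_backward")
        # References on both sides: start from the backward motion vector.
        return colocated.get("mv_backward") or colocated.get("mv_forward")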