Abstract:
An example method includes determining whether an encoded block of residual video data was encoded losslessly in accordance with a lossless coding mode, based on whether transform operations were skipped during encoding of the block of residual video data. If the block of residual video data was encoded losslessly, the method decodes the encoded block according to the lossless coding mode to form a reconstructed block of residual video data. Decoding the encoded block of residual data comprises bypassing quantization and sign hiding while decoding the encoded block, and bypassing all loop filters with respect to the reconstructed block of residual video data.
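The decision path above can be sketched as follows. This is a minimal illustration, not the actual codec: the function names, the fixed dequantization step, and the list-based sample model are all assumptions.

```python
# Hedged sketch of the described decoding path. The simplified lossy branch
# (fixed quantization step, no inverse transform) is an assumption used only
# to contrast it with the lossless bypass path.

def decode_residual_block(coded_coeffs, prediction, transform_skipped):
    """Decode one block of residual data.

    When transform operations were skipped (taken here as the indication of
    lossless coding), quantization, sign hiding, and all loop filters are
    bypassed for the block.
    """
    if transform_skipped:
        # Lossless path: the coded values ARE the residual samples.
        residual = list(coded_coeffs)            # no inverse quantization
        reconstructed = [p + r for p, r in zip(prediction, residual)]
        apply_loop_filters = False               # bypass deblocking/SAO
    else:
        # Lossy path (greatly simplified): dequantize with a fixed step.
        qstep = 2
        residual = [c * qstep for c in coded_coeffs]
        reconstructed = [p + r for p, r in zip(prediction, residual)]
        apply_loop_filters = True
    return reconstructed, apply_loop_filters
```

With `transform_skipped=True`, the reconstruction is exact (prediction plus coded residual) and the block is flagged so that loop filtering skips it.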
Abstract:
Techniques are described for providing continuous control of a deblocking filter for a video block using a beta offset parameter. Deblocking filters are defined based on one or more deblocking decisions. Conventionally, a quantization parameter and a beta offset parameter are used to identify a beta parameter (“β”) value that determines threshold values of the deblocking decisions. The value of the beta offset parameter results in a change or increment of the β value. For small increments of the β value, rounding of the threshold values may result in no change and discontinuous control of the deblocking decisions. The techniques include calculating at least one deblocking decision for the deblocking filter according to a threshold value that has been modified based on a multiplier value of the beta offset parameter. The multiplier value applied to the beta offset parameter causes an integer change in the modified threshold value.
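The rounding problem and the multiplier fix can be shown with a small sketch. The linear beta table and the multiplier value of 8 are illustrative assumptions, not values taken from any standard.

```python
# Hedged sketch of continuous deblocking control. BETA_TABLE is a placeholder
# monotone table indexed by QP; real codecs use a standardized table.

BETA_TABLE = [min(q, 63) for q in range(64)]

def deblock_threshold(qp, beta_offset, multiplier=1):
    """Derive one deblocking-decision threshold from the beta parameter."""
    qp = max(0, min(63, qp))
    beta = BETA_TABLE[qp] + beta_offset * multiplier
    return beta >> 3  # rounding step that can swallow small beta increments
```

With `multiplier=1`, offsets in the range 1..7 may leave the `beta >> 3` threshold unchanged (discontinuous control); with `multiplier=8`, every unit step of the beta offset changes the threshold by exactly one integer.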
Abstract:
Techniques are described for determining a scan order for transform coefficients of a block. The techniques may determine context for encoding or decoding significance syntax elements for the transform coefficients based on the determined scan order. A video encoder may encode the significance syntax elements and a video decoder may decode the significance syntax elements based on the determined contexts.
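One way to picture scan-dependent context derivation is below. The neighbor positions chosen per scan type and the three-context cap are assumptions for illustration, not the codec's actual rules.

```python
# Hedged sketch: derive a context index for a significance flag from the
# coefficient position and the determined scan order, by counting already-
# coded significant neighbors. Neighbor sets per scan are assumptions.

def sig_context(x, y, scan, coded_sig):
    """Return a context index in {0, 1, 2} for the flag at (x, y).

    coded_sig maps (x, y) positions to 1 if that coefficient was already
    coded as significant.
    """
    nbr_sets = {
        "diagonal":   [(x + 1, y), (x, y + 1)],
        "horizontal": [(x + 1, y), (x + 2, y)],
        "vertical":   [(x, y + 1), (x, y + 2)],
    }
    count = sum(1 for n in nbr_sets[scan] if coded_sig.get(n, 0))
    return min(count, 2)
```

The same position can thus map to different contexts depending on the determined scan order, which is the behavior the abstract describes for encoding and decoding significance syntax elements.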
Abstract:
In some embodiments of a video coder, if some prediction information is not available for a first block in a current layer, the video coder uses corresponding information (e.g., intra prediction direction and motion information), if available, from the first block's co-located second block in the base layer as if it were the prediction information for the first block. The corresponding information can then be used in the current layer to determine the prediction information of succeeding blocks in the current layer.
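The fallback described above can be sketched as follows; the dict-based layer model and block indexing are assumptions for illustration.

```python
# Hedged sketch of the inter-layer fallback: when prediction information is
# missing in the current layer, borrow it from the co-located base-layer
# block and treat it as this block's own information thereafter.

def prediction_info(block_idx, current_layer, base_layer):
    """Return prediction info (e.g., intra direction or motion) for a block.

    If unavailable in the current layer, the co-located base-layer info is
    adopted and stored so that succeeding blocks in the current layer can
    use it when determining their own prediction information.
    """
    info = current_layer.get(block_idx)
    if info is None:
        info = base_layer.get(block_idx)      # co-located block, if available
        if info is not None:
            current_layer[block_idx] = info   # acts as this block's info
    return info
```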
Abstract:
Techniques are described for a video coder (e.g., video encoder or video decoder) that is configured to select a context pattern from a plurality of context patterns that are the same for a plurality of scan types. Techniques are also described for a video coder that is configured to select a context pattern that is stored as a one-dimensional context pattern and identifies contexts for two or more scan types.
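A single one-dimensional pattern serving multiple scan types can be sketched like this. The pattern values and the transpose rule for vertical scans are illustrative assumptions.

```python
# Hedged sketch: one context pattern stored as a 1-D array for a 4x4
# sub-block, shared across scan types via an index mapping.

CONTEXT_PATTERN = [2, 1, 1, 0,
                   1, 1, 0, 0,
                   1, 0, 0, 0,
                   0, 0, 0, 0]

def pattern_context(x, y, scan_type):
    """Map a position (x, y) to a context via the shared 1-D pattern."""
    if scan_type == "vertical":
        x, y = y, x  # transpose so the same stored pattern serves this scan
    return CONTEXT_PATTERN[y * 4 + x]
```

Because the diagonal, horizontal, and vertical scans all index into the same stored array, only one pattern needs to be kept rather than one per scan type.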
Abstract:
A video encoder signals, in a bitstream, a syntax element that indicates whether a current video unit is predicted from a VSP picture. The current video unit is a macroblock or a macroblock partition. The video encoder determines, based at least in part on whether the current video unit is predicted from the VSP picture, whether to signal, in the bitstream, motion information for the current video unit. A video decoder decodes the syntax element from the bitstream and determines, based at least in part on the syntax element, whether the bitstream includes the motion information.
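The conditional signaling can be illustrated with a toy bitstream of (name, value) pairs; the syntax-element names here are assumptions, not those of any standard.

```python
# Hedged sketch: motion information is written only when the VSP flag says
# the unit is NOT predicted from the view-synthesis-prediction picture, and
# the decoder uses the decoded flag to know whether motion info follows.

def encode_unit(unit):
    bits = [("vsp_flag", unit["vsp"])]           # always signaled
    if not unit["vsp"]:
        bits.append(("motion", unit["motion"]))  # only for non-VSP units
    return bits

def decode_unit(bits):
    stream = iter(bits)
    _, vsp = next(stream)
    unit = {"vsp": vsp, "motion": None}
    if not vsp:                    # flag determines whether motion is present
        _, unit["motion"] = next(stream)
    return unit
```

A round trip through `encode_unit`/`decode_unit` shows that a VSP-predicted unit carries one fewer element, which is the bit saving the conditional signaling provides.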
Abstract:
In one example, a device includes a video coder configured to determine a context for entropy coding a bin of a value indicative of a last significant coefficient of a block of video data using a function of an index of the bin, and code the bin using the determined context. The video coder may encode or decode the bin using context-adaptive binary arithmetic coding (CABAC). The function may also depend on a size of the block. In this manner, a table indicating context indexes for the contexts need not be stored by the device.
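A table-free context derivation of this kind can be sketched as below. The shift and offset formulas follow a commonly described derivation for last-significant-coefficient prefix bins, but they should be treated as assumptions here rather than the device's exact method.

```python
# Hedged sketch: compute the CABAC context index for a bin of the
# last-significant-coefficient value as a function of the bin index and the
# block size, so no context-index table needs to be stored.

def last_sig_context(bin_idx, log2_block_size):
    """Context index for prefix bin `bin_idx` of a 2^n x 2^n block."""
    ctx_offset = 3 * (log2_block_size - 2) + ((log2_block_size - 1) >> 2)
    ctx_shift = (log2_block_size + 1) >> 2
    return ctx_offset + (bin_idx >> ctx_shift)
```

For a 4x4 block the offset and shift are both zero, so the context index equals the bin index; larger blocks share one context across groups of bins via the shift.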
Abstract:
Techniques are described for determining a disparity vector for a current block based on disparity motion vectors of one or more spatially and temporally neighboring regions of the current block to be predicted. Each spatially or temporally neighboring region includes one block or a plurality of blocks, and the disparity motion vector represents a single vector in one reference picture list for the plurality of blocks within the region. The determined disparity vector can be used by coding tools that utilize information between different views, such as merge mode, advanced motion vector prediction (AMVP) mode, inter-view motion prediction, and inter-view residual prediction.
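The derivation can be sketched as a scan over neighboring regions. The fixed spatial-before-temporal order, the dict-based region model, and the zero-vector fallback are all assumptions for illustration.

```python
# Hedged sketch: scan spatially and temporally neighboring regions in a
# fixed (assumed) order and adopt the first disparity motion vector found
# as the current block's disparity vector.

def derive_disparity_vector(spatial_regions, temporal_regions):
    """Each region is a dict; 'disparity_mv' holds the single vector that
    represents all blocks in the region for one reference picture list."""
    for region in list(spatial_regions) + list(temporal_regions):
        dmv = region.get("disparity_mv")
        if dmv is not None:
            return dmv     # usable by merge/AMVP/inter-view prediction tools
    return (0, 0)          # assumed fallback when no neighbor has one
```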