Abstract:
Quantization parameter (QP) update classification techniques for display stream compression (DSC) are disclosed. In one aspect, a method for determining a quantization parameter (QP) value includes determining whether a current block includes a transition from a flat region to a complex region or is a flat block and determining whether a previous block includes a transition from a flat region to a complex region or is a flat block. The method may also include selecting a default technique or an alternative technique for calculating a QP adjustment value for the current block based on whether the previous and current blocks include a transition from a flat region to a complex region or are flat blocks.
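A minimal Python sketch of the selection step described above, assuming placeholder tests for flatness and flat-to-complex transitions; the thresholds, the default/alternative adjustment rules, and which combination of conditions triggers the alternative technique are illustrative assumptions, not values from the disclosure:

```python
def is_flat(block, flat_range=8):
    """Placeholder flatness test: small sample range within the block."""
    return max(block) - min(block) < flat_range

def has_flat_to_complex_transition(block, jump=32):
    """Placeholder transition test: a quiet run followed by a large sample jump."""
    if len(block) < 2:
        return False
    diffs = [abs(b - a) for a, b in zip(block, block[1:])]
    return min(diffs) < 4 and max(diffs) > jump

def default_qp_adjustment(bits_spent, target_bits):
    """Default technique: nudge QP up when over budget, down when under."""
    return 1 if bits_spent > target_bits else -1

def alternative_qp_adjustment():
    """Alternative technique: lower QP more aggressively to protect the block."""
    return -2

def qp_adjustment(prev_block, cur_block, bits_spent, target_bits):
    """Select the default or alternative technique based on whether the previous
    and current blocks are flat or contain a flat-to-complex transition.
    (Which combination selects which technique is assumed here.)"""
    prev_special = is_flat(prev_block) or has_flat_to_complex_transition(prev_block)
    cur_special = is_flat(cur_block) or has_flat_to_complex_transition(cur_block)
    return (alternative_qp_adjustment() if prev_special or cur_special
            else default_qp_adjustment(bits_spent, target_bits))
```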
Abstract:
A video coding device may encode a video signal using intra-block copy prediction. A first picture prediction unit of a first picture may be identified. A second picture may be coded and identified. The second picture may be temporally related to the first picture, and the second picture may include second picture prediction units. A second picture prediction unit that is collocated with the first picture prediction unit may be identified. Prediction information for the first picture prediction unit may be generated. The prediction information may be based on a block vector of the second picture prediction unit that is collocated with the first picture prediction unit.
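A brief Python sketch of the collocated block-vector lookup; the PredictionUnit container, the position-keyed picture map, and the fallback vector are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class PredictionUnit:
    x: int                                            # top-left sample position in the picture
    y: int
    block_vector: Optional[Tuple[int, int]] = None    # intra-block-copy vector, if coded

def collocated_pu(second_picture: Dict[Tuple[int, int], PredictionUnit],
                  first_pu: PredictionUnit) -> Optional[PredictionUnit]:
    """Find the prediction unit at the same spatial position in the second picture."""
    return second_picture.get((first_pu.x, first_pu.y))

def predict_block_vector(second_picture, first_pu, fallback=(0, 0)):
    """Generate prediction information for the first-picture PU from the block
    vector of its collocated PU in the temporally related second picture."""
    co = collocated_pu(second_picture, first_pu)
    if co is not None and co.block_vector is not None:
        return co.block_vector
    return fallback

# Example: the collocated PU at (64, 32) supplies its block vector as the predictor.
second = {(64, 32): PredictionUnit(64, 32, block_vector=(-16, 0))}
print(predict_block_vector(second, PredictionUnit(64, 32)))   # -> (-16, 0)
```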
Abstract:
Disclosed is a parameter set generation method that obtains common information commonly inserted into at least two lower parameter sets having the same upper parameter set, determines whether to add the common information to at least one of the upper parameter set or the lower parameter sets, and, according to the result of the determination, adds the common information to at least one of the upper parameter set or the lower parameter sets.
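As a rough Python illustration, the decision can be pictured as hoisting fields that are identical across the lower parameter sets into the shared upper parameter set; the SPS/PPS-style dictionaries and the hoist-everything-common policy below are assumptions for illustration only:

```python
def factor_common_info(upper, lowers):
    """Hoist fields whose values are identical across all lower parameter sets
    into the shared upper parameter set, then drop the duplicated copies."""
    if len(lowers) < 2:
        return
    common = {k: v for k, v in lowers[0].items()
              if all(low.get(k) == v for low in lowers[1:])}
    for key, value in common.items():
        upper[key] = value           # add the common information to the upper set
        for low in lowers:
            del low[key]             # ...instead of repeating it in every lower set

# Example with SPS/PPS-style dictionaries: "tiles_enabled" is common to both
# lower sets, so it moves up; "init_qp" differs and stays where it is.
sps = {"profile": "main"}
pps_list = [{"tiles_enabled": 0, "init_qp": 26},
            {"tiles_enabled": 0, "init_qp": 30}]
factor_common_info(sps, pps_list)
```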
Abstract:
The invention relates to a method for decoding at least one coded current block of a first image with respect to a reference block of a second image having at least one element in common with the first image, the reference block having been decoded beforehand. The method comprises the steps of: filtering (14) the decoded reference block; estimating (15), solely on the basis of the filtered decoded reference block, at least one value of a local characteristic of the filtered decoded reference block; determining (16), as a function of the estimated local-characteristic value, a set of decoding information to be used for decoding the current block and a method for decoding the current block; and decoding (17) the current block according to the determined set of decoding information and the determined decoding method.
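A toy Python sketch of steps (14)-(16), using a 3-tap averaging filter and sample variance as stand-ins for the actual filter and local characteristic; the threshold and the returned decoding information and method names are illustrative assumptions:

```python
import statistics

def filter_reference(block):
    """Step (14): filter the decoded reference block (3-tap average as a stand-in)."""
    filtered = []
    for i in range(len(block)):
        window = block[max(0, i - 1): i + 2]
        filtered.append(sum(window) / len(window))
    return filtered

def estimate_local_characteristic(filtered):
    """Step (15): estimate a local characteristic solely from the filtered block;
    sample variance is used here purely as an example."""
    return statistics.pvariance(filtered)

def select_decoding(characteristic, threshold=10.0):
    """Step (16): derive both the decoding information and the decoding method
    from the estimated value (threshold and choices are illustrative)."""
    if characteristic < threshold:
        return {"transform": "DST"}, "low_activity_decode"
    return {"transform": "DCT"}, "high_activity_decode"

# Steps (14)-(16) chained; step (17) would then decode the current block
# using the returned information set and method.
reference = [100, 101, 103, 160, 162, 90, 95, 99]
info, method = select_decoding(estimate_local_characteristic(filter_reference(reference)))
```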
Abstract:
An image coding apparatus configured to divide an image into one or more slices each including a plurality of blocks and to code each slice on a block-by-block basis includes a first coding unit configured to code blocks included in a first portion of the slice, and a second coding unit configured to code blocks included in a second portion of the slice, wherein, when the second coding unit codes an initial block in the second portion, the second coding unit codes the initial block included in the second portion by referring to a first quantization parameter provided to the slice as an initial value and referred to by the first coding unit when the first coding unit codes the initial block in the first portion.
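A small Python sketch of the shared initialization, assuming a hypothetical CodingUnitEngine in which each engine predicts per-block QPs from a running previous QP; the point mirrored from the abstract is that both engines start from the same first quantization parameter provided to the slice:

```python
class CodingUnitEngine:
    """Hypothetical per-portion coding engine that predicts each block's QP
    from the previously coded QP, starting from the slice initial QP."""
    def __init__(self, slice_initial_qp):
        self.prev_qp = slice_initial_qp

    def code_block(self, block_qp):
        """Return the delta-QP that would be signalled for this block."""
        delta = block_qp - self.prev_qp
        self.prev_qp = block_qp
        return delta

slice_qp = 26
first_engine = CodingUnitEngine(slice_qp)    # codes blocks of the first portion
second_engine = CodingUnitEngine(slice_qp)   # codes the second portion, referring to
                                             # the same slice QP for its initial block
```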
Abstract:
Weighted predictions may be used in a video encoder or decoder to improve the quality of motion predictions. Systems and methods of video processing with weighted predictions based on motion information are discussed. Specifically, systems and methods of video processing with iterated and refined weighted predictions based on motion information are shown.
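As a loose illustration only, the sketch below applies an explicit weight to a motion-compensated reference block and refines that weight over a few iterations; the damped least-squares update is an assumption standing in for whatever refinement the systems and methods actually use:

```python
def weighted_prediction(reference, weight, offset=0.0):
    """Apply an explicit weight/offset to a motion-compensated reference block."""
    return [weight * s + offset for s in reference]

def refine_weight(reference, target, iterations=10):
    """Refine the weight over several iterations with a damped least-squares
    update (an illustrative stand-in for the actual refinement loop)."""
    weight = 1.0
    denom = sum(r * r for r in reference) or 1.0
    for _ in range(iterations):
        best = sum(r * t for r, t in zip(reference, target)) / denom
        weight += 0.5 * (best - weight)   # move halfway toward the least-squares fit
    return weight

# Example: the motion-compensated reference is uniformly dimmer than the target,
# so the refined weight drifts above 1.0 and the weighted prediction improves.
ref = [80.0, 82.0, 85.0, 90.0]
tgt = [88.0, 90.0, 94.0, 99.0]
prediction = weighted_prediction(ref, refine_weight(ref, tgt))
```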
Abstract:
In a video coding/decoding system, reference picture caches in a video coder and decoder may be partitioned dynamically based on camera and background motion, which can lead to improved coding efficiency and coding quality. When a camera is fixed and therefore exhibits low motion, a system may allocate larger portions of the reference picture cache to storage of long term reference frames. In this case, foreground elements of an image (for example, a person) may move in front of a relatively fixed background. Increasing the number of long term reference frames can increase the chances that, no matter where the foreground elements are within a frame currently being coded, the reference picture cache will contain at least one frame that provides an adequate prediction match to background elements within the new frame. Thus the background elements uncovered in the current frame can be coded at high quality with a low number of bits. When a camera exhibits high motion, the system may allocate larger portions of the reference picture cache to storage of short term reference frames.
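A simple Python sketch of the partitioning decision; the motion threshold and the 3/4 versus 1/4 split ratios are illustrative assumptions rather than values from the disclosure:

```python
def partition_reference_cache(total_slots, global_motion, motion_threshold=0.25):
    """Split the reference picture cache between long-term and short-term
    reference frames based on an estimated global (camera) motion level."""
    if global_motion < motion_threshold:
        long_term = (3 * total_slots) // 4   # near-static camera: favour long-term refs
    else:
        long_term = total_slots // 4         # high camera motion: favour short-term refs
    return long_term, total_slots - long_term

# Example: a fixed camera with 16 cache slots -> (12 long-term, 4 short-term).
print(partition_reference_cache(16, global_motion=0.05))
```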
Abstract:
The present invention relates to a simplified pipeline for Sample Adaptive Offset (SAO) and Adaptive Loop Filtering (ALF) in the in-loop decoding of a video encoder and a video decoder. According to the present invention, filter parameter setting regions and filtering processing windows are aligned, to reduce the required amount of memory for parameter sets necessary for delayed filtering. This is preferably achieved by a displacement of the filter parameter setting regions with respect to LCU boundaries in at least one (preferably: vertical) or both vertical and horizontal directions.
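One way to picture the displacement is as a shifted mapping from sample positions to parameter-setting regions; in the Python sketch below the LCU size, the vertical shift of a few lines, and the function name are all assumptions for illustration:

```python
def parameter_region_index(x, y, lcu_size=64, vertical_shift=4):
    """Map a sample position to the SAO/ALF parameter-setting region it belongs to,
    with the region grid displaced vertically relative to the LCU grid so that it
    lines up with the delayed in-loop filtering window (shift value is illustrative)."""
    return (x // lcu_size, (y + vertical_shift) // lcu_size)

# Samples in the last few lines of an LCU row fall into the next region down,
# matching a filtering window that lags the LCU boundary by a few lines.
print(parameter_region_index(10, 61))   # -> (0, 1)
print(parameter_region_index(10, 10))   # -> (0, 0)
```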
Abstract:
In one embodiment, a spatial merge mode or a temporal merge mode for a block of video content may be used in merging motion parameters. Both spatial and temporal merge parameters are considered concurrently, without requiring bits, flags, or indexing to signal a decoder. If the spatial merge mode is determined, the method merges the block of video content with a spatially-located block, where merging shares motion parameters between the spatially-located block and the block of video content. If the temporal merge mode is determined, the method merges the block of video content with a temporally-located block, where merging shares motion parameters between the temporally-located block and the block of video content.
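A minimal Python sketch of the two merge paths, assuming a hypothetical MotionParams container; it only shows the block inheriting motion parameters from either a spatially-located or a temporally-located block, not how the mode itself is determined:

```python
from dataclasses import dataclass

@dataclass
class MotionParams:
    mv: tuple      # motion vector (x, y)
    ref_idx: int   # reference picture index

def merge_motion(mode, spatial_neighbor, temporal_collocated):
    """Share motion parameters with either a spatially-located block or a
    temporally-located block, depending on the selected merge mode."""
    if mode == "spatial":
        return spatial_neighbor        # reuse the neighbour's MV and reference index
    if mode == "temporal":
        return temporal_collocated     # reuse the collocated block's parameters
    raise ValueError("mode must be 'spatial' or 'temporal'")

# The current block simply inherits whichever candidate's parameters apply.
current = merge_motion("spatial",
                       MotionParams(mv=(3, -1), ref_idx=0),
                       MotionParams(mv=(0, 2), ref_idx=1))
```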
Abstract:
Disclosed is a video encoding method using prediction units based on encoding units determined in accordance with a tree structure. The video encoding method involves: dividing an image of a video into one or more maximum encoding units; for each maximum encoding unit, encoding the image on the basis of encoding units for each coded depth, which are divided hierarchically in accordance with the coded depths, and on the basis of partition types determined from the coded depths of those encoding units, so as to determine the encoding units in accordance with a tree structure; and outputting the data encoded on the basis of the encoding units determined in accordance with the tree structure and the partition types, information on the coded depths and encoding modes, and information on an encoding unit structure, which indicates encoding unit sizes and variable coded depths.
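A compact Python sketch of determining encoding units according to a tree structure for one maximum encoding unit, where cost_fn stands for any rate-distortion style measure and the split-versus-keep comparison is a generic assumption about how the coded depths might be chosen:

```python
def split_into_four(block):
    """Split a square 2-D block (list of rows) into its four quadrants."""
    half = len(block) // 2
    return [[row[:half] for row in block[:half]],
            [row[half:] for row in block[:half]],
            [row[:half] for row in block[half:]],
            [row[half:] for row in block[half:]]]

def determine_tree_structure(block, depth, max_depth, cost_fn):
    """Decide, for one maximum encoding unit, whether the encoding unit at the
    current coded depth stays whole or is split into four deeper encoding units."""
    keep_cost = cost_fn(block, depth)
    if depth == max_depth:
        return {"depth": depth, "split": False, "cost": keep_cost}
    children = [determine_tree_structure(sub, depth + 1, max_depth, cost_fn)
                for sub in split_into_four(block)]
    split_cost = sum(child["cost"] for child in children)
    if split_cost < keep_cost:
        return {"depth": depth, "split": True, "cost": split_cost,
                "children": children}
    return {"depth": depth, "split": False, "cost": keep_cost}
```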