Abstract:
Methods and apparatus for managing a decoded picture buffer (DPB) in a video decoding system using Intra Block Copy (IBC) mode are disclosed. In one embodiment, one or more previously reconstructed pictures, after in-loop filtering, are stored in the DPB. To decode a current picture, first and second picture buffers are allocated in the DPB, and the unfiltered and filtered versions of the reconstructed current picture are stored in the first and second picture buffers, respectively. After the current picture is decoded, the unfiltered version is removed from the DPB. In another embodiment, unfiltered and filtered versions of the reconstructed current picture are also stored, but one is kept in the DPB and the other in a temporary buffer. After the current picture is decoded, the unfiltered version is removed from the DPB or from the temporary buffer.
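As a rough illustration of the first embodiment, the following Python sketch tracks two buffers for the current picture and releases the unfiltered one once decoding finishes; the class and method names are invented for this example and are not taken from any decoder implementation.

class DecodedPictureBuffer:
    def __init__(self):
        self.buffers = {}                  # (poc, version) -> picture data

    def start_picture(self, poc):
        # Allocate the two picture buffers for the current picture: the
        # unfiltered reconstruction (used as the IBC reference) and the
        # in-loop-filtered reconstruction.
        self.buffers[(poc, "unfiltered")] = bytearray()
        self.buffers[(poc, "filtered")] = bytearray()

    def ibc_reference(self, poc):
        # IBC prediction reads from the unfiltered version of the current picture.
        return self.buffers[(poc, "unfiltered")]

    def finish_picture(self, poc):
        # Once the current picture is fully decoded, remove the unfiltered
        # version; only the filtered picture stays in the DPB.
        del self.buffers[(poc, "unfiltered")]

dpb = DecodedPictureBuffer()
dpb.start_picture(poc=7)
_ = dpb.ibc_reference(poc=7)     # predict blocks from the unfiltered current picture
dpb.finish_picture(poc=7)        # the filtered version remains for inter prediction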
Abstract:
A video signal processing method according to the present invention generates a reference picture list based on a current picture reference flag for the current picture, obtains motion information for a current block in the current picture, reconstructs the current block using the reference picture list for the current picture and the motion information of the current block, and applies a deblocking filter to the reconstructed current block.
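A minimal Python sketch of these four steps follows; the dictionary-based pictures and the motion_compensate helper are simplifications invented for this illustration, not the data structures of the described method.

def motion_compensate(ref_pic, mv):
    # Stand-in predictor fetch: look up the sample at the displaced position.
    return ref_pic.get(mv, 0)

def decode_block(prev_pictures, current_picture, curr_pic_ref_flag,
                 motion_info, residual):
    # 1) Generate the reference picture list based on the current picture
    #    reference flag: the current picture may be added as a reference.
    ref_list = list(prev_pictures)
    if curr_pic_ref_flag:
        ref_list.append(current_picture)
    # 2) Obtain the motion information and 3) reconstruct the current block.
    ref_pic = ref_list[motion_info["ref_idx"]]
    pred = motion_compensate(ref_pic, motion_info["mv"])
    recon = pred + residual
    # 4) A real decoder would now apply the deblocking filter across the
    #    boundaries of the reconstructed block; omitted in this sketch.
    return recon

prev = [{(0, 0): 100}]                     # one previously decoded picture
curr = {(0, 0): 90}                        # reconstructed current picture so far
print(decode_block(prev, curr, curr_pic_ref_flag=True,
                   motion_info={"ref_idx": 1, "mv": (0, 0)}, residual=5))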
Abstract:
A method and apparatus for inter-view ARP (advanced residual prediction) are disclosed. According to one embodiment, a first inter-view reference block of a first inter-view reference picture in a first reference view is determined using a current MV (motion vector) of the current block in an inter-view direction. A first MV associated with the first inter-view reference block is derived and used as the derived MV. If the first MV points to a second inter-view reference picture in a second reference view, the derived MV is instead set to a default derived MV. A second temporal reference block corresponding to the current block is identified in a second temporal reference picture using the derived MV. An inter-view residual predictor, corresponding to the difference between a second inter-view reference block in the first reference view and the second temporal reference block, is generated and used as a predictor for the current inter-view residual of the current block.
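A deliberately simplified sketch of the MV derivation fallback and the residual predictor is given below; DEFAULT_DERIVED_MV and the list-based block representation are assumptions made for this example only.

DEFAULT_DERIVED_MV = (0, 0)   # assumed default derived MV

def derive_mv(first_mv, first_mv_points_to_inter_view_picture):
    # If the MV of the first inter-view reference block points to another
    # inter-view reference picture (second reference view), fall back to
    # the default derived MV; otherwise keep the first MV.
    return DEFAULT_DERIVED_MV if first_mv_points_to_inter_view_picture else first_mv

def inter_view_residual_predictor(second_inter_view_block, second_temporal_block):
    # Predictor for the current inter-view residual: difference between the
    # second inter-view reference block (first reference view) and the
    # second temporal reference block located with the derived MV.
    return [a - b for a, b in zip(second_inter_view_block, second_temporal_block)]

print(derive_mv((3, -1), first_mv_points_to_inter_view_picture=True))   # (0, 0)
print(inter_view_residual_predictor([10, 12, 9], [8, 9, 9]))            # [2, 3, 0]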
Abstract:
An example method for encoding or decoding video data includes storing, by a video coder, a version of a current picture of the video data in a reference picture buffer; including the current picture in a reference picture list (RPL) used to predict the current picture; and coding, by the video coder and based on the RPL, a block of video data in the current picture based on a predictor block included in the version of the current picture stored in the reference picture buffer.
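A minimal sketch of reading the predictor block from the stored version of the current picture, assuming pictures are stored as lists of pixel rows and the reference picture buffer is keyed by POC; all names here are illustrative.

def code_block_from_current_picture(reference_picture_buffer, rpl, current_poc,
                                    block_pos, block_vector, size):
    # The current picture is included in the RPL, so the predictor block is
    # read from the stored version of the current picture itself.
    assert current_poc in rpl
    picture = reference_picture_buffer[current_poc]
    bx, by = block_pos
    dx, dy = block_vector            # points into already-reconstructed samples
    return [row[bx + dx: bx + dx + size]
            for row in picture[by + dy: by + dy + size]]

pic = [[c + 10 * r for c in range(8)] for r in range(8)]   # toy 8x8 picture
buf = {3: pic}
print(code_block_from_current_picture(buf, rpl=[0, 3], current_poc=3,
                                      block_pos=(4, 4), block_vector=(-4, -4),
                                      size=2))              # [[0, 1], [10, 11]]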
Abstract:
A computer-implemented method for video coding comprises obtaining frames of pixel data, including a current frame and a decoded reference frame to be used as a motion compensation reference frame for the current frame; forming a warped global compensated reference frame by displacing at least one portion of the decoded reference frame using global motion trajectories; determining a motion vector indicating the motion of the at least one portion from a position based on the warped global compensated reference frame to a position in the current frame; and forming a prediction portion, corresponding to a portion of the current frame, based at least in part on the motion vector.
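The two-stage prediction could look roughly like the following sketch, assuming an affine model for the global motion trajectories; the parameter names and the row-list frame layout are illustrative assumptions, not the method's actual representation.

def warp_position(x, y, affine):
    # Displace a position of the decoded reference frame by the global motion.
    a, b, c, d, e, f = affine
    return a * x + b * y + c, d * x + e * y + f

def predict_block(reference_frame, block_origin, affine, motion_vector, size):
    # The motion vector is measured from the warped (globally compensated)
    # position to the block's position in the current frame.
    x0, y0 = block_origin
    wx, wy = warp_position(x0, y0, affine)
    px = int(round(wx + motion_vector[0]))
    py = int(round(wy + motion_vector[1]))
    return [row[px: px + size] for row in reference_frame[py: py + size]]

frame = [[c + 10 * r for c in range(8)] for r in range(8)]
identity = (1, 0, 0, 0, 1, 0)        # identity "global motion" for the demo
print(predict_block(frame, block_origin=(2, 2), affine=identity,
                    motion_vector=(1, 1), size=2))   # [[33, 34], [43, 44]]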
Abstract:
Particular embodiments provide a variable, BitDepth, that may be set at a value based on the number of bits used to represent pixels in pictures of a video. The variable may be used in syntax elements in HEVC, such as the HEVC range extension, but other coding standards may be used. By using the variable, different resolutions for the video may be accommodated during the encoding and decoding process. For example, the pixels in the pictures may be represented with 8 bits, 10 bits, 12 bits, or another number of bits, depending on the resolution. Using the BitDepth variable in the syntax provides flexibility in the motion estimation and motion compensation process. For example, syntax elements used in the weighted prediction process may take into account different numbers of bits used to represent the pictures.
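One way such a BitDepth-dependent range for a weighted-prediction offset could be derived is sketched below; this follows the usual signed-range pattern and does not reproduce the exact ranges of the HEVC specification.

def luma_offset_range(bit_depth):
    # With an 8-bit depth this gives (-128, 127); with 10 or 12 bits the
    # allowed offset range widens accordingly.
    half = 1 << (bit_depth - 1)
    return -half, half - 1

for bd in (8, 10, 12):
    print(bd, luma_offset_range(bd))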
Abstract:
Particular embodiments may remove a condition check on a high-precision data flag from the semantics. This simplifies the semantics used in the encoding and decoding process. In this case, even if the high-precision data flag is not set, the value of the weighted prediction syntax element is set by the BitDepth variable. However, when the BitDepth does not correspond to high-precision data, such as 8 bits, the range for the weighted prediction syntax element is still the same as the fixed range. For example, the syntax elements luma_offset_l0[i], luma_offset_l1[i], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above whether or not the flag extended_precision_processing_flag, which indicates whether the bit depth is above a threshold, is enabled.
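To make the simplification concrete, the sketch below contrasts range derivation with and without the condition check; the ranges follow the pattern of the previous sketch rather than any particular specification text.

def offset_range_with_check(bit_depth, extended_precision_processing_flag):
    # Original form: the range depends on BitDepth only when the flag is set.
    if extended_precision_processing_flag:
        half = 1 << (bit_depth - 1)
        return -half, half - 1
    return -128, 127                       # fixed 8-bit range otherwise

def offset_range_simplified(bit_depth, extended_precision_processing_flag):
    # Condition check removed: the range always follows BitDepth.  For an
    # 8-bit depth the result still equals the fixed range (-128, 127).
    half = 1 << (bit_depth - 1)
    return -half, half - 1

assert offset_range_simplified(8, False) == offset_range_with_check(8, False)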