Abstract:
Embodiments of the present invention provide multi-view video coding and decoding methods and corresponding apparatuses. The multi-view video coding method includes: minimizing an error between a currently coded view image and a warped view image of a front view image to obtain an optimal warping offset; calculating disparity information between the front view image and the currently coded view image by using the optimal warping offset, a camera parameter of a view, and depth image information of the front view image; and calculating the warped view image of the front view image by using the disparity information and the front view image, and predicting a current view image by using the warped view image as a prediction signal.
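As a rough illustration of the flow described above, the sketch below (Python, not the claimed implementation) derives per-pixel disparity from the front view's depth map and assumed camera parameters (focal_len, baseline), searches a small set of warping offsets for the one that minimizes the error against the current view, and uses the warped front view as the prediction signal; all function and parameter names are illustrative assumptions.

    import numpy as np

    def disparity_from_depth(depth, focal_len, baseline, offset):
        # Pinhole relation disparity = f * B / Z, plus the warping offset being searched.
        return focal_len * baseline / np.maximum(depth, 1e-6) + offset

    def warp_view(front, disparity):
        # Shift each pixel of the front view horizontally by its (rounded) disparity.
        h, w = front.shape
        cols = np.clip(np.arange(w)[None, :] - np.rint(disparity).astype(int), 0, w - 1)
        rows = np.repeat(np.arange(h)[:, None], w, axis=1)
        return front[rows, cols]

    def find_best_offset(cur, front, depth, focal_len, baseline, offsets):
        # Minimize the SAD between the currently coded view and the warped front view.
        def sad(o):
            warped = warp_view(front, disparity_from_depth(depth, focal_len, baseline, o))
            return np.abs(cur.astype(int) - warped.astype(int)).sum()
        return min(offsets, key=sad)

    # Tiny synthetic check: the current view is the front view shifted by 2 samples,
    # so the search should recover a warping offset of 1 on top of f*B/Z = 1.
    front = np.arange(64).reshape(8, 8)
    depth = np.full((8, 8), 100.0)
    cur = warp_view(front, disparity_from_depth(depth, 1000.0, 0.1, 1.0))
    best = find_best_offset(cur, front, depth, 1000.0, 0.1, offsets=range(-3, 4))
    prediction = warp_view(front, disparity_from_depth(depth, 1000.0, 0.1, best))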
Abstract:
An encoder is provided that comprises a partitioner and an entropy coder. The partitioner is configured to receive a current block of a frame and obtain a list of candidate geometric partitioning (GP) lines. Each of the candidate GP lines is generated based on information of one or more candidate neighbor blocks of the current block. The partitioner is further configured to determine a final GP line that partitions the current block into two segments, select a GP line from the list of candidate GP lines to obtain a selected GP line, and generate a GP parameter for the current block. The GP parameter includes offset information indicating an offset between the final GP line and the selected GP line. The entropy coder is configured to encode the GP parameter.
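A minimal sketch of the selection and offset signaling described above, assuming a GP line can be parameterized by an angle and a signed distance from the block center and that the selected candidate is identified by its index; these representational choices are assumptions, not taken from the abstract.

    from dataclasses import dataclass

    @dataclass
    class GPLine:
        angle: float   # orientation of the partitioning line
        dist: float    # signed distance of the line from the block center

    def make_gp_parameter(final, candidates):
        # Pick the candidate GP line closest to the final line and signal only the
        # residual offset (angle and distance differences) plus the candidate index.
        def closeness(c):
            return abs(final.angle - c.angle) + abs(final.dist - c.dist)
        idx = min(range(len(candidates)), key=lambda i: closeness(candidates[i]))
        sel = candidates[idx]
        offset = GPLine(final.angle - sel.angle, final.dist - sel.dist)
        return idx, offset   # this pair is the GP parameter handed to the entropy coder

    # Example: two candidates derived from neighbor blocks; the final line found by the
    # partitioner differs only slightly from the second candidate, so a small offset is coded.
    idx, offset = make_gp_parameter(GPLine(45.0, 2.0), [GPLine(0.0, 0.0), GPLine(40.0, 1.0)])
    print(idx, offset)   # 1 GPLine(angle=5.0, dist=1.0)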
Abstract:
An apparatus, a method, and a computer program perform image coding with selective loop-filtering. That is, loop filters that operate on samples across discontinuous face boundaries can be disabled. The loop-filter operation may be deferred until all samples across a face boundary are known. Then, the loop filter can use the correct samples according to the 3D arrangement. This may be implemented at the coding block level or at a higher level.
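A sketch of how such a per-edge decision might look, assuming the caller knows whether an edge crosses a discontinuous face boundary and whether the geometrically correct neighboring samples are already available; the function and parameter names are hypothetical.

    def loop_filter_edge(left, right, crosses_face_boundary, correct_neighbors, deblock):
        # Loop-filtering across a discontinuous face boundary is disabled (or deferred)
        # until the geometrically correct neighboring samples are available; once they
        # are, those samples are used instead of the spatially adjacent ones.
        if crosses_face_boundary:
            if correct_neighbors is None:
                return left, right               # filter disabled / deferred for this edge
            return deblock(left, correct_neighbors)
        return deblock(left, right)              # ordinary in-loop filtering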
Abstract:
A system for encoding and decoding a video coding block of a multi-view video signal is provided. A decoder is configured to decode a texture-depth video coding block (t0, d0) of a first texture frame and a first depth map associated with a first view for providing a decoded texture-depth video coding block (t0, d0) and the first depth map. A synthesized predicted texture-depth video coding block (tsyn, dsyn) of a view synthesis texture frame and a view synthesis depth map associated with a second view is generated. An inpainted synthesized predicted texture-depth video coding block is generated. Based on the inpainted synthesized predicted texture-depth video coding block, the decoder reconstructs a texture-depth video coding block (t1, d1) of a second texture frame and a second depth map associated with the second view. An encoder is configured to encode the texture-depth video coding block in a manner that complements the decoding provided by the decoder.
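The sketch below illustrates only the last two steps, inpainting the synthesized block and reconstructing the second-view block from it, using a deliberately simple left-neighbor fill as a stand-in for whatever inpainting the system actually applies; hole_mask, residual, and the 0..255 sample range are assumptions.

    import numpy as np

    def inpaint(texture, hole_mask):
        # Very simple hole filling: copy the nearest valid sample from the left
        # (a stand-in for the actual inpainting method).
        out = texture.astype(float)
        for y, x in zip(*np.nonzero(hole_mask)):
            out[y, x] = out[y, x - 1] if x > 0 else 0.0
        return out

    def reconstruct_second_view_block(t_syn, hole_mask, residual):
        # The inpainted synthesized block serves as the prediction for the second view;
        # adding the decoded residual yields the reconstructed block (d1 analogously).
        prediction = inpaint(t_syn, hole_mask)
        return np.clip(prediction + residual, 0, 255)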
Abstract:
A method for encoding a video signal includes generating an extension region of a first face of a reference frame, where the extension region includes a plurality of extension samples, and a sample value of each extension sample is based on a sample value of a sample of a second face of the reference frame; determining a use of the extension region; providing, based on the use, picture-level extension usage information for the extension region; and encoding the picture-level extension usage information into an encoded video signal.
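A rough sketch of building such an extension region and signaling its use at picture level, where map_to_face2 stands in for the geometric mapping between the two faces and the bitstream is modeled as a plain list of flag bits; both are assumptions for illustration only.

    import numpy as np

    def build_extension_region(face1, face2, pad, map_to_face2):
        # Surround face 1 with `pad` extension samples; every extension sample takes its
        # value from a sample of face 2, located via the assumed mapping
        # map_to_face2(x, y) -> (x2, y2).
        h, w = face1.shape
        ext = np.zeros((h + 2 * pad, w + 2 * pad), dtype=face1.dtype)
        ext[pad:pad + h, pad:pad + w] = face1
        for y in range(ext.shape[0]):
            for x in range(ext.shape[1]):
                if pad <= y < pad + h and pad <= x < pad + w:
                    continue                     # interior sample, not part of the extension
                x2, y2 = map_to_face2(x - pad, y - pad)
                ext[y, x] = face2[y2 % face2.shape[0], x2 % face2.shape[1]]
        return ext

    def signal_extension_usage(bitstream, extension_used):
        # Picture-level signaling: one flag per picture telling the decoder whether
        # extension regions are used.
        bitstream.append(1 if extension_used else 0)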
Abstract:
The present disclosure provides a video encoder and a video decoder, which may both be used for partitioning a block in a current picture based on at least one partitioning predictor. The encoder and decoder are configured to select at least one reference picture and a plurality of blocks in the at least one reference picture. They are further configured to calculate, for each selected block, a projected location in the current picture based on a motion vector associated with the selected block in the reference picture. They are then configured to determine each selected block whose projected location spatially overlaps with the block in the current picture to be a reference block, and to generate, for at least one reference block, a partitioning predictor based on partitioning information associated with, for example stored in, the at least one reference picture.
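A compact sketch of the projection-and-overlap test, assuming each stored block carries its position, size, motion vector, and partitioning information; the RefBlock container and its field names are illustrative, not taken from the disclosure.

    from dataclasses import dataclass

    @dataclass
    class RefBlock:
        x: int
        y: int
        w: int
        h: int
        mv: tuple             # motion vector stored with the block in the reference picture
        partitioning: object  # partitioning information stored with the block

    def overlaps(a, b):
        # Axis-aligned rectangle overlap test; a and b are (x, y, w, h) tuples.
        return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
                a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

    def partitioning_predictors(cur_block, ref_blocks):
        # Project every selected reference block into the current picture along its motion
        # vector; each block whose projected location overlaps the current block becomes a
        # reference block and contributes its stored partitioning as a predictor.
        predictors = []
        for rb in ref_blocks:
            projected = (rb.x + rb.mv[0], rb.y + rb.mv[1], rb.w, rb.h)
            if overlaps(cur_block, projected):
                predictors.append(rb.partitioning)
        return predictors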
Abstract:
A decoding apparatus partitions a video coding block, based on coding information, into two or more segments including a first segment and a second segment. The coding information comprises a first segment motion vector associated with the first segment and a second segment motion vector associated with the second segment. A co-located first segment in a first reference frame is determined based on the first segment motion vector, and a co-located second segment in a second reference frame is determined based on the second segment motion vector. A predicted video coding block is generated based on the co-located first segment and the co-located second segment. A divergence measure is determined based on the first segment motion vector and the second segment motion vector, and a first or a second filter is applied to the predicted video coding block depending on the divergence measure.
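The filter switch might be sketched as below; the Euclidean distance between the two segment motion vectors is only one plausible divergence measure, and the threshold and the two filters are assumed inputs.

    import numpy as np

    def divergence_measure(mv1, mv2):
        # One possible divergence measure: the Euclidean distance between the two
        # segment motion vectors (the abstract does not fix a particular measure).
        return float(np.hypot(mv1[0] - mv2[0], mv1[1] - mv2[1]))

    def filter_predicted_block(pred, mv1, mv2, threshold, first_filter, second_filter):
        # Apply the first or the second filter to the predicted video coding block
        # depending on how strongly the two segment motion vectors diverge.
        if divergence_measure(mv1, mv2) > threshold:
            return first_filter(pred)
        return second_filter(pred)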
Abstract:
A method, an apparatus, and a system for a rapid motion search applied in template matching are disclosed. The method includes: selecting motion vectors of blocks related to a current block as candidate motion vectors of the current block; after duplicate candidates are removed so that only unique candidate motion vectors of the current block remain, calculating the cost function of each candidate motion vector over the corresponding template area of a reference frame, and obtaining the motion vector of the best-matching template from the candidate motion vectors of the current block. In the embodiments of the present invention, there is no need to determine a large search range or a corresponding search path template; a search only needs to be performed within a much smaller range.
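A sketch of the candidate-only search, assuming the template area is given as explicit pixel coordinates and the cost is a SAD over that area (both assumptions); deduplicating the candidate motion vectors is what keeps the evaluated set small.

    import numpy as np

    def fast_template_search(cur_template, ref_frame, candidate_mvs, template_coords):
        # Keep each candidate motion vector only once, then evaluate the matching cost
        # at those few positions instead of scanning a large search range along a
        # predefined search path.
        ys, xs = template_coords                  # pixel coordinates of the template area
        best_mv, best_cost = None, float("inf")
        for mv in dict.fromkeys(candidate_mvs):   # keeps first occurrence of each (dx, dy)
            cand = ref_frame[ys + mv[1], xs + mv[0]]
            cost = np.abs(cur_template.astype(int) - cand.astype(int)).sum()
            if cost < best_cost:
                best_mv, best_cost = mv, cost
        return best_mv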
Abstract:
An inter-frame prediction coding method, device, and system are provided. The inter-frame prediction coding method includes: calculating distortions between a template area of a current encoding block and each of M matching templates in L reference frames to determine M offset vectors; acquiring, according to the determined M offset vectors, M hypothesis prediction values of the encoding block to which the M matching templates correspond, and calculating the template matching prediction value of the current encoding block according to the M hypothesis prediction values; and comparing the template matching prediction value with the original value of the current encoding block to acquire the residual of the current encoding block, and encoding the residual. The technical solution improves the prediction performance of the video coding system and increases coding efficiency.
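The steps above might be sketched as follows, where template_at and block_at are assumed helpers that fetch, for a given reference frame and offset vector, the matching template area and the corresponding block prediction; averaging the M hypotheses is one simple way to combine them.

    import numpy as np

    def multi_hypothesis_residual(cur_block, cur_template, ref_frames, candidate_mvs, M,
                                  template_at, block_at):
        # 1) Score every (reference frame, offset vector) pair by template distortion.
        scored = sorted(
            ((np.abs(cur_template.astype(int) - template_at(f, mv).astype(int)).sum(), f, mv)
             for f in ref_frames for mv in candidate_mvs),
            key=lambda item: item[0])
        best = scored[:M]              # the M offset vectors with the lowest distortion
        # 2) The template matching prediction is the mean of the M hypothesis predictions.
        prediction = np.mean([block_at(f, mv) for _, f, mv in best], axis=0)
        # 3) The residual between the original block and this prediction is what gets coded.
        residual = cur_block - prediction
        return residual, [mv for _, _, mv in best]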
Abstract:
The present disclosure provides a video encoder and a video decoder, which may both be used for partitioning a block in a current picture based on at least one partitioning predictor. The encoder and decoder are configured to select at least one reference picture and a plurality of blocks in the at least one reference picture. They are further configured to calculate, for each selected block, a projected location in the current picture based on a motion vector associated with the selected block in the reference picture. They are then configured to determine each selected block whose projected location spatially overlaps with the block in the current picture to be a reference block, and to generate, for at least one reference block, a partitioning predictor based on partitioning information associated with, for example stored in, the at least one reference picture.
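As this abstract describes the same projection-and-overlap scheme as the earlier one, only a brief numeric illustration of the overlap test is added here; all block positions, sizes, and the motion vector are made-up values.

    def projected_overlaps(block, ref_block, mv):
        # Project the reference block by its motion vector and test for spatial overlap
        # with the current block; both blocks are (x, y, w, h) tuples, mv is (dx, dy).
        px, py = ref_block[0] + mv[0], ref_block[1] + mv[1]
        return (block[0] < px + ref_block[2] and px < block[0] + block[2] and
                block[1] < py + ref_block[3] and py < block[1] + block[3])

    # Example: a 16x16 reference block at (32, 32) with motion vector (-20, -24)
    # projects to (12, 8), which overlaps a 16x16 current block at (16, 16).
    print(projected_overlaps((16, 16, 16, 16), (32, 32, 16, 16), (-20, -24)))  # True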