METHOD AND DEVICE FOR ENCODING OR DECODING BASED ON INTER-FRAME PREDICTION

    Publication No.: US20230109825A1

    Publication Date: 2023-04-13

    Application No.: US17768212

    Application Date: 2019-10-25

    Abstract: A method and a device for encoding or decoding based on inter-frame prediction. The method includes: determining a temporal motion vector prediction value of a to-be-processed coding unit, where the temporal motion vector prediction value is that of a sub-block whose temporal motion vector is obtainable through prediction, among the sub-blocks adjacent to the to-be-processed coding unit and/or the sub-blocks within it; determining a motion vector residual prediction value of the coding unit according to the temporal motion vector prediction value; determining the motion vector of each sub-block in the coding unit according to the temporal motion vector prediction value and the motion vector residual prediction value; and performing motion compensation according to these sub-block motion vectors to determine a prediction block of the to-be-processed coding unit.
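
    For illustration, a minimal Python sketch of the per-sub-block derivation outlined in this abstract. It is not the claimed method itself: the sub-block grid, the collocated-MV lookup table, and the single shared residual prediction are assumptions made for the example.

        # Illustrative sketch only; data layout and helper names are assumptions,
        # not details taken from the patent application.

        def derive_subblock_motion(subblock_positions, collocated_mvs, mv_residual_pred):
            """Add the temporal MV prediction of each sub-block (where obtainable)
            to the MV residual prediction to get that sub-block's motion vector."""
            motion = {}
            for pos in subblock_positions:
                tmvp = collocated_mvs.get(pos)   # temporal MV prediction, may be absent
                if tmvp is None:
                    continue                     # no obtainable temporal MV for this sub-block
                motion[pos] = (tmvp[0] + mv_residual_pred[0],
                               tmvp[1] + mv_residual_pred[1])
            return motion

        def motion_compensate(reference, motion, block_size=4):
            """Copy one block per sub-block from the reference frame (a 2-D list
            of samples) to assemble the prediction block of the coding unit."""
            prediction = {}
            for (x, y), (mvx, mvy) in motion.items():
                rows = [reference[y + mvy + dy][x + mvx:x + mvx + block_size]
                        for dy in range(block_size)]
                prediction[(x, y)] = rows
            return prediction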

    METHOD, SYSTEM, DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM FOR INVERSE QUANTIZATION

    Publication No.: US20220116607A1

    Publication Date: 2022-04-14

    Application No.: US16610467

    Application Date: 2019-03-06

    Abstract: The application discloses a method, system, device, and computer-readable storage medium for inverse quantization. In some embodiments, an initial weighted inverse quantization matrix is determined, having the same size as the quantized block; some of its matrix elements are set to zero to obtain the weighted inverse quantization matrix, the elements to be zeroed being determined according to the size of the quantized block; and the quantized coefficients in the quantized block are weighted inverse quantized to generate the corresponding inverse transform coefficients, where the value of the matrix element at the position of each quantized coefficient in the weighted inverse quantization matrix is used as the weight coefficient of the weighted inverse quantization.
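
    A short numpy sketch of the weighted inverse quantization described above. The zeroing rule (keeping only a top-left region whose size depends on the block size) and the flat quantization step are assumptions for the example; the application determines the zeroed positions in its own way.

        import numpy as np

        def weighted_inverse_quantize(quant_block, init_weight_matrix, qstep):
            """Zero part of the weight matrix according to the block size, then use
            the remaining weights while inverse quantizing the coefficients."""
            h, w = quant_block.shape
            weights = init_weight_matrix.astype(np.float64)

            # Assumed rule: keep only the top-left 32x32 low-frequency region of
            # large blocks; every other weight is zeroed before inverse quantization.
            keep_h, keep_w = min(h, 32), min(w, 32)
            weights[keep_h:, :] = 0.0
            weights[:, keep_w:] = 0.0

            # The weight at each coefficient's position scales that coefficient
            # during inverse quantization to give the inverse transform coefficient.
            return quant_block * weights * qstep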

    Intra-frame and Inter-frame Combined Prediction Method for P Frames or B Frames

    Publication No.: US20200314432A1

    Publication Date: 2020-10-01

    Application No.: US16629777

    Application Date: 2018-09-25

    Abstract: An intra-frame and inter-frame combined prediction method for P frames or B frames. The method comprises: adaptively selecting, by means of a rate-distortion optimization (RDO) decision, whether to use the intra-frame and inter-frame combined prediction; weighting an intra prediction block and an inter prediction block in the combined prediction to obtain a final prediction block; and obtaining the weighting coefficients of the intra prediction block and the inter prediction block from the prediction distortion statistics of each prediction method. Prediction precision is thereby improved, as is coding and decoding efficiency. The invention exploits the respective strengths of intra and inter prediction: the better-predicted parts of the two methods are combined, so that regions with excessive distortion in either the intra or the inter prediction block are largely excluded, yielding a better prediction effect with good practicality and robustness.
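
    A minimal Python sketch of the weighted combination and the RDO-style selection described above. The SSD distortion measure, the toy rate terms, and the scalar weight are assumptions made for the example, not the patent's own cost model.

        import numpy as np

        def combine_intra_inter(intra_block, inter_block, w_intra):
            """Blend the intra and inter prediction blocks; w_intra in [0, 1] would
            come from prediction-distortion statistics of the two methods."""
            return w_intra * intra_block + (1.0 - w_intra) * inter_block

        def choose_prediction(original, intra_block, inter_block, w_intra,
                              lam=0.0, combined_bits=1, inter_bits=0):
            """Toy rate-distortion decision: keep whichever candidate has the lower
            SSD + lambda * rate cost, mirroring the adaptive on/off selection."""
            combined = combine_intra_inter(intra_block, inter_block, w_intra)
            cost_combined = np.sum((original - combined) ** 2) + lam * combined_bits
            cost_inter = np.sum((original - inter_block) ** 2) + lam * inter_bits
            return combined if cost_combined <= cost_inter else inter_block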

    VIRTUAL VIEWPOINT SYNTHESIS METHOD BASED ON LOCAL IMAGE SEGMENTATION

    Publication No.: US20200099911A1

    Publication Date: 2020-03-26

    Application No.: US16489209

    Application Date: 2017-08-14

    Abstract: Disclosed is a virtual viewpoint synthesis method based on local image segmentation, relating to digital image processing. The input left and right images are mapped to the virtual viewpoint and fused to obtain a synthesized image, and the rough, noisy depth maps are smoothed and denoised using object segmentation information of the scene. By handling occlusion through local area segmentation during viewpoint synthesis, the method prevents the subjective quality of the synthesized view from degrading significantly when the depth map contains relatively large flaws, and preserves the geometric information of the scene as far as possible to produce a convincing sense of immersion. It thereby remedies the significant loss of synthesis quality that conventional methods suffer when the scene's depth information contains errors and noise, and offers relatively strong robustness to such depth-map errors. The method may be applied to video surveillance systems, image processing software, and the like.
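
    As a rough illustration of the fusion step, the Python sketch below blends two views already warped to the virtual viewpoint, taking a pixel from the other view wherever one view is occluded. The NaN hole marker and the fixed baseline weight are assumptions; the segmentation-guided depth smoothing of the disclosed method is not reproduced here.

        import numpy as np

        def fuse_virtual_views(left_warp, right_warp, alpha=0.5):
            """Fuse two warped views: pixels visible in only one view (holes marked
            as NaN in the other) are copied across, pixels visible in both are
            blended with the baseline weight alpha."""
            return np.where(np.isnan(left_warp), right_warp,
                            np.where(np.isnan(right_warp), left_warp,
                                     alpha * left_warp + (1.0 - alpha) * right_warp))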

    ENCODING METHOD, DECODING METHOD, ENCODER, AND DECODER

    Publication No.: US20190373281A1

    Publication Date: 2019-12-05

    Application No.: US16474879

    Application Date: 2017-07-24

    Abstract: The present disclosure provides an encoding method, a decoding method, an encoder, and a decoder. The encoding method comprises: performing inter-frame prediction on each inter-frame coded block to obtain the corresponding inter-frame predicted blocks; writing the information of each inter-frame predicted block into a code stream; if an inter-frame coded block is adjacent to an intra-frame coded block on the right, below, or to the lower right, performing intra-frame prediction on the intra-frame coded block based on at least one reconstructed coded block adjacent to its left, above, and/or upper-left side and at least one inter-frame coded block adjacent to its right, lower, and/or lower-right side to obtain the intra-frame predicted blocks; and writing the information of each intra-frame predicted block into the code stream.
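
    The sketch below gives one concrete (assumed) instance of such two-sided intra prediction: each row is interpolated between the reconstructed sample on the left boundary and the inter-predicted sample on the right boundary. The linear weighting is an illustrative choice, not the prediction formula of the disclosure.

        def two_sided_horizontal_intra(left_col, right_col, width):
            """Predict a block row by row, interpolating between the reconstructed
            left-boundary sample and the inter-predicted right-boundary sample."""
            block = []
            for left, right in zip(left_col, right_col):
                row = [round(left + (right - left) * (x + 1) / (width + 1))
                       for x in range(width)]
                block.append(row)
            return block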

    METHOD AND DEVICE FOR VIDEO ENCODING OR DECODING BASED ON DICTIONARY DATABASE

    Publication No.: US20160212448A1

    Publication Date: 2016-07-21

    Application No.: US15081930

    Application Date: 2016-03-27

    Abstract: A method for video encoding based on a dictionary database, the method including: 1) dividing a current image frame to be encoded in a video stream into a plurality of image blocks; 2) recovering the encoding distortion of the decoded and reconstructed image of the frame preceding the current frame using a texture dictionary database, and performing temporal prediction with the distortion-recovered image as the reference image to obtain prediction blocks for the image blocks to be encoded, where the texture dictionary database includes clear image dictionaries and the distorted image dictionaries corresponding to them; and 3) subtracting the prediction blocks from the image blocks to be encoded to obtain residual blocks, and processing the residual blocks to obtain a video bit stream.
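
    A small numpy sketch of the distortion-recovery step in 2): each patch of the decoded previous frame is matched against the distorted-image atoms and replaced by the paired clear-image atom before the frame serves as the temporal reference. The patch size, the non-overlapping tiling, and the L2 matching are assumptions for the example.

        import numpy as np

        def restore_with_dictionary(decoded_frame, distorted_atoms, clear_atoms, patch=8):
            """Replace each non-overlapping patch with the clear atom paired with
            its nearest distorted atom (atoms shaped (N, patch, patch))."""
            restored = decoded_frame.astype(np.float64)
            h, w = restored.shape
            flat_atoms = distorted_atoms.reshape(len(distorted_atoms), -1)
            for y in range(0, h - patch + 1, patch):
                for x in range(0, w - patch + 1, patch):
                    p = restored[y:y + patch, x:x + patch].reshape(1, -1)
                    idx = np.argmin(np.sum((flat_atoms - p) ** 2, axis=1))
                    restored[y:y + patch, x:x + patch] = clear_atoms[idx]
            return restored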

    METHOD AND DEVICE FOR VIDEO ENCODING OR DECODING BASED ON IMAGE SUPER-RESOLUTION

    Publication No.: US20160191940A1

    Publication Date: 2016-06-30

    Application No.: US15060627

    Application Date: 2016-03-04

    Abstract: A method for video encoding based on image super-resolution, the method including: 1) performing super-resolution interpolation on a video image to be encoded using a pre-trained texture dictionary database to yield a reference image, where the texture dictionary database includes one or more dictionary bases, each comprising a mapping group formed by a relatively high-resolution image block of a training image and the corresponding relatively low-resolution image block; 2) performing motion estimation and motion compensation of the image blocks of the video image against the reference image to acquire the corresponding prediction blocks; 3) subtracting the corresponding prediction blocks from the image blocks of the video image to yield prediction residual blocks; and 4) encoding the prediction residual blocks.
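
    A compact numpy sketch of the super-resolution interpolation in step 1): each low-resolution patch is matched to its closest low-resolution atom and the paired high-resolution atom is pasted into the upscaled reference. The patch size, the x2 scale, and the L2 matching are assumptions for illustration.

        import numpy as np

        def super_resolve(lr_image, lr_atoms, hr_atoms, patch=4, scale=2):
            """Build an upscaled reference image from paired dictionary atoms;
            lr_atoms: (N, patch, patch), hr_atoms: (N, patch*scale, patch*scale)."""
            h, w = lr_image.shape
            sr = np.zeros((h * scale, w * scale))
            flat_lr = lr_atoms.reshape(len(lr_atoms), -1)
            for y in range(0, h - patch + 1, patch):
                for x in range(0, w - patch + 1, patch):
                    p = lr_image[y:y + patch, x:x + patch].reshape(1, -1)
                    idx = np.argmin(np.sum((flat_lr - p) ** 2, axis=1))
                    sr[y * scale:(y + patch) * scale,
                       x * scale:(x + patch) * scale] = hr_atoms[idx]
            return sr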

    VIRTUAL VIEWPOINT SYNTHESIS METHOD AND SYSTEM

    Publication No.: US20160150208A1

    Publication Date: 2016-05-26

    Application No.: US15009854

    Application Date: 2016-01-29

    IPC Classification: H04N13/00

    CPC Classification: H04N13/111 H04N2013/0081

    Abstract: A virtual viewpoint synthesis method and system, including: establishing a left-viewpoint virtual view and a right-viewpoint virtual view; searching for candidate pixels in a reference view, and marking any pixel block in which no candidate pixel is found as a hole point; ranking the found candidate pixels by depth, and successively calculating a foreground coefficient and a background coefficient for weighted summation; enlarging the hole-point regions of the left and/or right viewpoint virtual view toward the background to remove ghost pixels; performing viewpoint synthesis on the left-viewpoint virtual view and the right-viewpoint virtual view; and filling the hole points of the composite image.
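
    The Python sketch below illustrates the depth-ordered weighted summation of candidate pixels for a single target position; the particular foreground/background coefficient formula is an assumption, and returning None stands in for marking a hole point.

        def blend_candidates(candidates):
            """candidates: list of (value, depth) pairs found in the reference view,
            with smaller depth meaning closer to the camera (depths assumed > 0)."""
            if not candidates:
                return None                                  # hole point, filled later
            ordered = sorted(candidates, key=lambda c: c[1]) # nearest (foreground) first
            fg_value, fg_depth = ordered[0]
            if len(ordered) == 1:
                return fg_value
            bg_value = sum(v for v, _ in ordered[1:]) / (len(ordered) - 1)
            bg_depth = sum(d for _, d in ordered[1:]) / (len(ordered) - 1)
            # Assumed coefficients: the closer the foreground candidate, the more
            # weight it receives in the weighted summation.
            fg_coeff = bg_depth / (fg_depth + bg_depth)
            return fg_coeff * fg_value + (1.0 - fg_coeff) * bg_value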

    METHOD FOR DERIVING MOTION VECTOR, AND ELECTRONIC DEVICE

    Publication No.: US20220191503A1

    Publication Date: 2022-06-16

    Application No.: US17645698

    Application Date: 2021-12-22

    Abstract: A method for deriving motion vectors is provided. The method includes: obtaining the spatial-domain motion vector prediction and the temporal-domain motion vector prediction of blocks adjacent to a coding unit in a predetermined direction; performing a filtering operation on these spatial-domain and temporal-domain motion vector predictions to obtain the filtered adjacent blocks' spatial-domain and temporal-domain motion vector predictions; determining, according to a predetermined inter-frame prediction mode, reference motion vectors of a current block in the four side directions by using the filtered adjacent blocks' spatial-domain and temporal-domain motion vector predictions and the coordinate position of the current block within the coding unit; and deriving the motion vectors of the current block according to the reference motion vectors and that coordinate position.
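
    The sketch below shows one assumed way of carrying out the last two steps: the motion vector of a block is obtained by bilinearly combining the four side reference MVs according to the block's coordinates inside the coding unit. The bilinear weighting is an illustrative choice, not the derivation rule claimed in the application.

        def derive_block_mv(ref_left, ref_right, ref_top, ref_bottom, x, y, cu_w, cu_h):
            """Combine the four side reference MVs ((mvx, mvy) tuples) with weights
            that reflect how close block (x, y) is to each side of the coding unit."""
            wx = (x + 0.5) / cu_w        # 0 near the left edge, 1 near the right edge
            wy = (y + 0.5) / cu_h        # 0 near the top edge,  1 near the bottom edge
            horiz = tuple((1 - wx) * l + wx * r for l, r in zip(ref_left, ref_right))
            vert = tuple((1 - wy) * t + wy * b for t, b in zip(ref_top, ref_bottom))
            return tuple((h + v) / 2 for h, v in zip(horiz, vert))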