2. Method of Disparity Derived Depth Coding in 3D Video Coding
    Invention Application (Pending, Published)

    Publication No.: US20160182883A1

    Publication Date: 2016-06-23

    Application No.: US14891129

    Filing Date: 2014-07-02

    Abstract: A method and apparatus for three-dimensional video encoding and decoding using disparity derived depth prediction are disclosed. Embodiments of the present invention determine a disparity vector related to a collocated texture block in the dependent view and generate converted depth samples from the disparity vector. The generated converted depth samples are used as a predictor or Merge candidate for the current depth block. The Merge candidate corresponding to the converted depth samples can be placed in the merging candidate list at a location before the TMVP (temporal motion vector predictor) merging candidate. The converted depth samples can be generated from the disparity vector according to a function of the disparity vector. Information associated with the function can be signaled explicitly to a decoder or derived implicitly by the decoder. One aspect of the present invention addresses simplified disparity-to-depth conversion, specifically division-free disparity-to-depth conversion.
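
    The division-free conversion mentioned in the abstract can be illustrated with a small fixed-point sketch. Assuming a 3D-HEVC-style relation disparity = (s * depth + o) >> shift between depth samples and disparity, the exact inverse needs a division by s; replacing 1/s with a precomputed fixed-point reciprocal removes the per-sample division. All parameter values and names below (DispDepthParams, invS, prec) are illustrative assumptions, not the claimed method.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical camera-parameter triple such that
//   disparity = ( s * depth + o ) >> shift .
// Inverting exactly would need depth = ((disparity << shift) - o) / s;
// the division-free variant multiplies by a precomputed reciprocal instead.
struct DispDepthParams {
    int32_t s;      // depth-to-disparity scale
    int32_t o;      // depth-to-disparity offset
    int32_t shift;  // right shift applied after scaling
};

int32_t depthFromDisparityDivFree(int32_t disparity, const DispDepthParams& p,
                                  int32_t invS, int32_t prec) {
    // (disparity << shift) - o, then multiply by the fixed-point reciprocal
    // of s and round back to an integer depth value.
    int64_t num   = (static_cast<int64_t>(disparity) << p.shift) - p.o;
    int64_t depth = (num * invS + (1LL << (prec - 1))) >> prec;
    if (depth < 0)   depth = 0;      // clip to an 8-bit depth sample range
    if (depth > 255) depth = 255;
    return static_cast<int32_t>(depth);
}

int main() {
    DispDepthParams p{ 128, 64, 8 };        // illustrative values only
    int32_t prec = 16;
    int32_t invS = (1 << prec) / p.s;       // computed once, e.g. per slice
    for (int32_t disp : { 4, 16, 64 })
        std::cout << disp << " -> " << depthFromDisparityDivFree(disp, p, invS, prec) << "\n";
    return 0;
}
```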


3. Method for Depth Lookup Table Signaling in 3D Video Coding Based on High Efficiency Video Coding Standard
    Invention Application (Granted)

    Publication No.: US20170019682A1

    Publication Date: 2017-01-19

    Application No.: US15123882

    Filing Date: 2015-03-17

    Abstract: A method and apparatus for depth lookup table (DLT) signaling in a three-dimensional and multi-view coding system are disclosed. According to the present invention, if the pictures contain only texture data, no DLT information is incorporated in the picture parameter set (PPS) corresponding to the pictures. On the other hand, if the pictures contain depth data, the DLT associated with the pictures is determined. If a previous DLT required for predicting the DLT exists, the DLT will be predicted based on the previous DLT. Syntax related to the DLT is included in the PPS. Furthermore, first bit-depth information related to first depth samples of the DLT is also included in the PPS, and the first bit-depth information is consistent with second bit-depth information signaled in sequence-level data for second depth samples of a sequence containing the pictures.
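
    The conditional placement of DLT syntax in the PPS can be sketched as below. The types and the set-difference style of DLT prediction are assumptions for illustration; the actual syntax element names and prediction rule are defined by the standard draft text and are not reproduced here.

```cpp
#include <algorithm>
#include <iterator>
#include <vector>
#include <iostream>

// Illustrative stand-in for the DLT-related part of a PPS payload.
struct PpsDlt {
    bool dltPresent = false;          // absent for texture-only pictures
    int  depthBitDepth = 8;           // must agree with the sequence-level value
    std::vector<int> signalledValues; // full DLT, or the difference vs. a previous DLT
    bool predictedFromPrevious = false;
};

PpsDlt buildPpsDlt(bool picturesContainDepth,
                   const std::vector<int>& currentDlt,     // sorted depth values in use
                   const std::vector<int>* previousDlt,    // sorted, or nullptr if none
                   int sequenceDepthBitDepth) {
    PpsDlt pps;
    if (!picturesContainDepth)
        return pps;                               // texture-only: no DLT syntax at all
    pps.dltPresent = true;
    pps.depthBitDepth = sequenceDepthBitDepth;    // keep consistent with sequence-level bit depth
    if (previousDlt) {
        // Predict from the previous DLT: signal only values missing from it (illustrative rule).
        pps.predictedFromPrevious = true;
        std::set_difference(currentDlt.begin(), currentDlt.end(),
                            previousDlt->begin(), previousDlt->end(),
                            std::back_inserter(pps.signalledValues));
    } else {
        pps.signalledValues = currentDlt;         // signal the DLT directly
    }
    return pps;
}

int main() {
    std::vector<int> prev{0, 16, 32, 64};
    std::vector<int> cur{0, 16, 32, 64, 128};
    PpsDlt pps = buildPpsDlt(true, cur, &prev, 8);
    std::cout << "predicted=" << pps.predictedFromPrevious
              << " signalled=" << pps.signalledValues.size() << "\n";  // one new value
    return 0;
}
```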


4. Method of Temporal Derived Bi-Directional Motion Vector for Motion Vector Prediction

    Publication No.: US20170171558A1

    Publication Date: 2017-06-15

    Application No.: US15323809

    Filing Date: 2015-07-15

    IPC Classes: H04N19/577 H04N19/517

    Abstract: A method and apparatus of deriving a temporal derived motion vector in a second direction based on a given motion vector in a first direction for motion vector prediction are disclosed. According to the present invention, a given motion vector for a current block is determined, where the given motion vector points from the current block in a first direction. A reference motion vector associated with a first reference block in a first reference frame is identified. Then, based on the reference motion vector and the given motion vector, a temporal derived motion vector is derived. The temporal derived motion vector points from the current block to a second reference block in a second reference frame in a second direction different from the first direction. The temporal derived motion vector is then used as one predictor for encoding or decoding of the motion vector of the current block.
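
    One plausible reading of the derivation, shown as a sketch below, is that the given MV locates the first reference block and that block's own MV is chained onto it to obtain a vector toward a second reference frame in the other direction. The chaining (and the absence of any distance-based scaling) is an assumption made purely for illustration, not the claimed derivation.

```cpp
#include <iostream>

// Motion vector in quarter-pel units; purely illustrative type.
struct Mv { int x = 0, y = 0; };

// Chain the given MV (current block -> first reference block) with the MV
// stored at that reference block, and use the concatenated vector as the
// derived MV in the second direction.
Mv deriveTemporalMv(const Mv& givenMv, const Mv& refBlockMv) {
    return Mv{ givenMv.x + refBlockMv.x, givenMv.y + refBlockMv.y };
}

int main() {
    Mv given{ 8, -4 };      // points into the first reference frame
    Mv refMv{ -16, 6 };     // MV stored at the first reference block
    Mv derived = deriveTemporalMv(given, refMv);
    std::cout << "derived MV = (" << derived.x << ", " << derived.y << ")\n";
    return 0;
}
```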

5. METHOD AND APPARATUS OF DISPARITY VECTOR DERIVATION IN THREE-DIMENSIONAL VIDEO CODING
    Invention Application (Pending, Published)

    Publication No.: US20150341664A1

    Publication Date: 2015-11-26

    Application No.: US14759042

    Filing Date: 2013-12-13

    Abstract: A derived disparity vector is determined based on spatial neighboring blocks and temporal neighboring blocks of the current block. The temporal neighboring blocks are searched according to a temporal search order, and the temporal search order is the same for all dependent views. Any temporal neighboring block from a CTU below the current CTU row may be omitted in the temporal search order. The derived DV can also be used for predicting a DV of a DCP (disparity-compensated prediction) block for the current block in the AMVP mode, the Skip mode or the Merge mode. The temporal neighboring blocks may correspond to a temporal CT block and a temporal BR block. In one embodiment, the temporal search order checks the temporal BR block first and the temporal CT block next.
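
    A rough sketch of the fixed temporal search order (BR checked before CT, candidates from a CTU below the current CTU row skipped) follows. The neighbour descriptor, the ordering of spatial versus temporal checks, and the fallback behaviour are illustrative assumptions.

```cpp
#include <optional>
#include <utility>
#include <vector>
#include <iostream>

// Illustrative neighbour descriptor; field names are assumptions.
struct NeighbourBlock {
    bool hasDisparityVector = false;
    int  dvX = 0, dvY = 0;
    int  ctuRow = 0;          // CTU row the block belongs to
};

// Temporal neighbours are checked in a fixed order (BR first, then CT) that
// is the same for every dependent view; a temporal candidate whose CTU lies
// below the current CTU row is skipped. Spatial neighbours are checked
// afterwards here purely for illustration.
std::optional<std::pair<int,int>>
deriveDv(const NeighbourBlock& temporalBR, const NeighbourBlock& temporalCT,
         const std::vector<NeighbourBlock>& spatialNeighbours, int currentCtuRow) {
    for (const NeighbourBlock* t : { &temporalBR, &temporalCT }) {
        if (t->ctuRow > currentCtuRow) continue;   // below current CTU row: omit
        if (t->hasDisparityVector) return std::make_pair(t->dvX, t->dvY);
    }
    for (const NeighbourBlock& s : spatialNeighbours)
        if (s.hasDisparityVector) return std::make_pair(s.dvX, s.dvY);
    return std::nullopt;                            // fall back to a default/zero DV
}

int main() {
    NeighbourBlock br{ true, 12, 0, /*ctuRow=*/3 };  // below the current row: skipped
    NeighbourBlock ct{ true, 10, 0, /*ctuRow=*/2 };
    auto dv = deriveDv(br, ct, {}, /*currentCtuRow=*/2);
    if (dv) std::cout << "DV = (" << dv->first << ", " << dv->second << ")\n";
    return 0;
}
```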


6. METHOD AND APPARATUS FOR RESIDUAL PREDICTION IN THREE-DIMENSIONAL VIDEO CODING
    Invention Application (Granted)

    Publication No.: US20150341663A1

    Publication Date: 2015-11-26

    Application No.: US14442951

    Filing Date: 2013-11-14

    Abstract: A method and apparatus using pseudo residues to predict current residues for three-dimensional or multi-view video coding are disclosed. The method first receives input data associated with a current block of a current picture in a current dependent view and determines an inter-view reference block in a first inter-view reference picture in a reference view according to a DV (disparity vector), where the current picture and the first inter-view reference picture correspond to the same time instance. Pseudo residues are then determined and used for encoding or decoding of the current block, where the pseudo residues correspond to differences between a corresponding region in an inter-time reference picture in the current dependent view and a pseudo reference region in a pseudo reference picture in the reference view, and where the inter-time reference picture and the pseudo reference picture correspond to the same time instance.
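
    The pseudo-residue computation can be pictured as a sample-wise difference between two co-timed regions, as in the sketch below; the block addressing and the handling of motion compensation inside the regions are simplified, and all names are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>
#include <iostream>

// Toy picture type: one luma plane, row-major; names are illustrative.
struct Plane {
    int width = 0, height = 0;
    std::vector<int16_t> samples;
    int16_t at(int x, int y) const { return samples[y * width + x]; }
};

// For a block located via the disparity vector, the residue predictor is the
// sample-wise difference between a corresponding region in an inter-time
// reference picture of the current dependent view and a pseudo reference
// region in a pseudo reference picture of the reference view, both at the
// same time instance. Motion handling inside the regions is omitted here.
std::vector<int16_t> pseudoResidues(const Plane& interTimeRefCurView,
                                    const Plane& pseudoRefInRefView,
                                    int x0, int y0, int dvX, int dvY,
                                    int blkW, int blkH) {
    std::vector<int16_t> res(blkW * blkH);
    for (int y = 0; y < blkH; ++y)
        for (int x = 0; x < blkW; ++x) {
            int16_t a = interTimeRefCurView.at(x0 + x, y0 + y);
            int16_t b = pseudoRefInRefView.at(x0 + x + dvX, y0 + y + dvY);
            res[y * blkW + x] = static_cast<int16_t>(a - b);  // predictor of the current residue
        }
    return res;
}

int main() {
    Plane a{ 8, 8, std::vector<int16_t>(64, 100) };
    Plane b{ 8, 8, std::vector<int16_t>(64, 90) };
    auto r = pseudoResidues(a, b, 0, 0, /*dvX=*/2, /*dvY=*/1, 4, 4);
    std::cout << "first pseudo residue = " << r[0] << "\n";   // 10 with these flat planes
    return 0;
}
```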


7. Method and Apparatus of Disparity Vector Derivation for Three-Dimensional Video Coding
    Invention Application (Granted)

    Publication No.: US20160029045A1

    Publication Date: 2016-01-28

    Application No.: US14655973

    Filing Date: 2014-04-10

    Abstract: A method and apparatus of three-dimensional/multi-view coding using aligned reference information are disclosed. The present system aligns the reference information associated with the reference view of the derived DV with the reference information associated with a selected reference view by modifying the selected reference view, or by modifying the derived DV or a converted DV derived from the depth block pointed to by the derived DV. The DV can be derived using the Neighboring Block Disparity Vector (NBDV) process. When the reference view of the derived DV is different from the selected reference view, the system scales the derived DV or changes the converted DV to refer to the selected reference view. The system may also set the selected reference view to the reference view of the derived DV.
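
    The alignment step, when the derived DV's reference view differs from the selected one, resembles HEVC-style vector scaling by distance; the sketch below assumes a simple view-order-index ratio, which is an illustrative stand-in for the actual scaling rule rather than the claimed method.

```cpp
#include <iostream>

struct Dv { int x = 0, y = 0; };   // disparity vector, illustrative type

// Scale the derived DV by the ratio of view distances (view-order-index
// differences) so that it refers to the selected reference view; if the
// views already match, no change is needed.
Dv alignDvToReferenceView(const Dv& derivedDv, int currentViewIdx,
                          int dvRefViewIdx, int selectedRefViewIdx) {
    if (dvRefViewIdx == selectedRefViewIdx)
        return derivedDv;                          // already aligned
    int distDerived  = currentViewIdx - dvRefViewIdx;
    int distSelected = currentViewIdx - selectedRefViewIdx;
    if (distDerived == 0)
        return derivedDv;                          // degenerate case: keep as-is
    Dv scaled;
    scaled.x = derivedDv.x * distSelected / distDerived;
    scaled.y = derivedDv.y * distSelected / distDerived;
    return scaled;
}

int main() {
    Dv dv{ 24, 0 };
    Dv aligned = alignDvToReferenceView(dv, /*current=*/2, /*dvRef=*/1, /*selected=*/0);
    std::cout << "aligned DV x = " << aligned.x << "\n";   // 48 with these toy view indices
    return 0;
}
```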


8. Method of Sub-Prediction Unit Inter-View Motion Prediction in 3D Video Coding
    Invention Application (Pending, Published)

    Publication No.: US20160134857A1

    Publication Date: 2016-05-12

    Application No.: US14891822

    Filing Date: 2014-07-10

    Abstract: A method for a three-dimensional encoding or decoding system incorporating sub-block based inter-view motion prediction is disclosed. The system utilizes motion or disparity parameters associated with reference sub-blocks in a reference picture of a reference view corresponding to the texture sub-PUs split from a current texture PU (prediction unit) to predict the motion or disparity parameters of the current texture PU. Candidate motion or disparity parameters for the current texture PU may comprise candidate motion or disparity parameters derived for all texture sub-PUs from splitting the current texture PU. The candidate motion or disparity parameters for the current texture PU can be used as a sub-block-based inter-view Merge candidate for the current texture PU in Merge mode. The sub-block-based inter-view Merge candidate can be inserted into a first position of a candidate list.
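
    The sub-PU splitting can be sketched as below: each sub-PU is shifted by the disparity vector into the reference view and inherits the motion parameters found there, and the per-sub-PU set then forms one sub-block-based inter-view Merge candidate. The lookup function, the sub-PU centre addressing, and all names are assumptions for illustration.

```cpp
#include <vector>
#include <iostream>

// Illustrative motion-parameter type; names are assumptions.
struct MotionParams { bool available = false; int mvX = 0, mvY = 0, refIdx = -1; };

// Stand-in for fetching the motion field of the reference-view picture at a
// given sample position.
MotionParams fetchRefViewMotion(int x, int y);

// Split the current texture PU into sub-PUs of size subW x subH, shift each
// sub-PU centre by the disparity vector into the reference view, and inherit
// the motion parameters found there, one set per sub-PU.
std::vector<MotionParams> subPuInterViewCandidate(int puX, int puY, int puW, int puH,
                                                  int subW, int subH,
                                                  int dvX, int dvY) {
    std::vector<MotionParams> perSubPu;
    for (int y = 0; y < puH; y += subH)
        for (int x = 0; x < puW; x += subW) {
            int refX = puX + x + subW / 2 + dvX;   // sub-PU centre in the reference view
            int refY = puY + y + subH / 2 + dvY;
            perSubPu.push_back(fetchRefViewMotion(refX, refY));
        }
    return perSubPu;   // would be inserted at the first position of the Merge list
}

MotionParams fetchRefViewMotion(int /*x*/, int /*y*/) {
    return MotionParams{ true, 4, -2, 0 };         // dummy motion field for illustration
}

int main() {
    auto cand = subPuInterViewCandidate(0, 0, 16, 16, 8, 8, /*dvX=*/12, /*dvY=*/0);
    std::cout << "sub-PUs in candidate: " << cand.size() << "\n";   // 4
    return 0;
}
```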


9. Method of Simplified CABAC Coding in 3D Video Coding
    Invention Application (Granted)

    Publication No.: US20160065964A1

    Publication Date: 2016-03-03

    Application No.: US14785011

    Filing Date: 2014-06-24

    Abstract: A method for reducing the storage requirement or complexity of context-based coding in three-dimensional or multi-view video encoding and decoding is disclosed. The system selects the context based on selected information associated with one or more neighboring blocks of the current block conditionally, depending on whether the one or more neighboring blocks are available. The syntax element is then encoded or decoded using context-based coding according to the context selection. The syntax element to be coded may correspond to an IC (illumination compensation) flag or an ARP (advanced residual prediction) flag. In another example, one or more syntax elements for coding a current depth block using DMM (Depth Map Model) are encoded or decoded using context-based coding, where the context-based coding selects a by-pass mode for at least one selected syntax element.
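
    The conditional context selection for a flag such as the IC or ARP flag can be sketched as below. The exact context index mapping is an assumption; the point illustrated is that unavailable neighbours simply do not contribute, so their flags never need to be stored.

```cpp
#include <iostream>

// Choose a CABAC context index for a block-level flag from the flags of the
// left and above neighbours, but only when those neighbours exist.
int selectContextIdx(bool leftAvailable, bool leftFlag,
                     bool aboveAvailable, bool aboveFlag) {
    int ctx = 0;
    if (leftAvailable && leftFlag)  ++ctx;
    if (aboveAvailable && aboveFlag) ++ctx;
    return ctx;      // 0, 1 or 2, driven only by available neighbours
}

int main() {
    // Above neighbour unavailable (e.g. outside the current CTU row): it does
    // not contribute, instead of requiring a stored flag for that position.
    std::cout << selectContextIdx(true, true, false, false) << "\n";   // prints 1
    return 0;
}
```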


10. METHOD AND APPARATUS OF LOCALIZED LUMA PREDICTION MODE INHERITANCE FOR CHROMA PREDICTION IN VIDEO CODING

    Publication No.: US20190068977A1

    Publication Date: 2019-02-28

    Application No.: US16078193

    Filing Date: 2017-02-21

    Abstract: A method and apparatus of Inter/Intra prediction for a chroma component performed by a video encoder or video decoder are disclosed. According to this method, a current chroma prediction block (e.g. a prediction unit, PU) is divided into multiple chroma prediction sub-blocks (e.g. sub-PUs). A corresponding luma prediction block is identified for each chroma prediction sub-block. A chroma prediction mode for each chroma prediction sub-block is determined from a luma prediction mode associated with the corresponding luma prediction block. A local chroma predictor for the current chroma prediction block is generated by applying a prediction process to the multiple chroma prediction sub-blocks using respective chroma prediction modes. In other words, the prediction process is applied at the chroma prediction sub-block level. After the local chroma predictor is derived, a coding block associated with the current chroma prediction block is encoded or decoded using information comprising the local chroma predictor.
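
    The per-sub-block inheritance can be sketched as below, assuming 4:2:0 sampling so that chroma positions map to luma positions by doubling. The mode-map lookup and every name in the sketch are illustrative assumptions rather than an actual codec interface.

```cpp
#include <vector>
#include <iostream>

// Illustrative prediction-mode type; names are assumptions.
enum class PredMode { Intra_DC, Intra_Planar, Intra_Angular, Inter };

// Stand-in for reading the luma prediction mode at a luma sample position,
// e.g. from the picture's already-decided luma mode map.
PredMode lumaModeAt(int lumaX, int lumaY);

// Divide the chroma prediction block into sub-blocks, map each sub-block
// centre to the corresponding luma position (4:2:0 assumed), and inherit the
// co-located luma mode per sub-block. The local chroma predictor is then
// built by predicting each sub-block with its own inherited mode.
std::vector<PredMode> inheritChromaModes(int chromaX, int chromaY,
                                         int chromaW, int chromaH,
                                         int subW, int subH) {
    std::vector<PredMode> modes;
    for (int y = 0; y < chromaH; y += subH)
        for (int x = 0; x < chromaW; x += subW) {
            int lumaX = (chromaX + x + subW / 2) * 2;   // 4:2:0 chroma-to-luma mapping
            int lumaY = (chromaY + y + subH / 2) * 2;
            modes.push_back(lumaModeAt(lumaX, lumaY));
        }
    return modes;   // one prediction mode per chroma sub-block
}

PredMode lumaModeAt(int, int) { return PredMode::Intra_Angular; }   // dummy mode map for illustration

int main() {
    auto modes = inheritChromaModes(0, 0, 8, 8, 4, 4);
    std::cout << "chroma sub-blocks: " << modes.size() << "\n";      // 4
    return 0;
}
```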