Method of Disparity Derived Depth Coding in 3D Video Coding
    2.
    Invention Application (Pending, Published)

    Publication No.: US20160182883A1

    Publication Date: 2016-06-23

    Application No.: US14891129

    Filing Date: 2014-07-02

    Abstract: A method and apparatus for three-dimensional video encoding and decoding using disparity-derived depth prediction are disclosed. Embodiments of the present invention determine a disparity vector related to a collocated texture block in the dependent view and generate converted depth samples from the disparity vector. The generated converted depth samples are used as a predictor or Merge candidate for the current depth block. The Merge candidate corresponding to the converted depth samples can be placed in the merging candidate list at a location before the TMVP (temporal motion vector predictor) merging candidate. The converted depth samples can be generated according to a function of the disparity vector. Information associated with the function can be signaled explicitly to a decoder or derived implicitly by the decoder. One aspect of the present invention addresses simplified disparity-to-depth conversion, specifically division-free disparity-to-depth conversion.
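
    As a rough illustration of the division-free conversion mentioned in the abstract, the sketch below maps a disparity value to a depth sample using only an integer multiply, add, and shift. The names inv_scale, inv_offset, and shift are hypothetical stand-ins for parameters that would be derived from camera parameters and signaled to, or derived by, the decoder; this is a sketch of the general idea, not the application's normative procedure.

```python
def disparity_to_depth(disparity, inv_scale, inv_offset, shift, bit_depth=8):
    """Division-free disparity-to-depth mapping: one multiply, one add, one
    right shift, then a clip to the valid depth-sample range (assumed linear
    relation; parameter names are illustrative)."""
    depth = (disparity * inv_scale + inv_offset) >> shift
    return max(0, min((1 << bit_depth) - 1, depth))


def converted_depth_predictor(dv_x, block_w, block_h, inv_scale, inv_offset, shift):
    """Fill a predictor for the current depth block from a single disparity
    vector; only the horizontal DV component is assumed to carry depth."""
    sample = disparity_to_depth(dv_x, inv_scale, inv_offset, shift)
    return [[sample] * block_w for _ in range(block_h)]
```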


METHOD AND APPARATUS OF DISPARITY VECTOR DERIVATION IN THREE-DIMENSIONAL VIDEO CODING
    3.
    Invention Application (Pending, Published)

    Publication No.: US20150341664A1

    Publication Date: 2015-11-26

    Application No.: US14759042

    Filing Date: 2013-12-13

    Abstract: A derived disparity vector is determined based on spatial neighboring blocks and temporal neighboring blocks of the current block. The temporal neighboring blocks are searched according to a temporal search order, and the temporal search order is the same for all dependent views. Any temporal neighboring block from a CTU below the current CTU row may be omitted from the temporal search order. The derived DV can also be used for predicting a DV of a DCP (disparity-compensated prediction) block for the current block in the AMVP mode, the Skip mode or the Merge mode. The temporal neighboring blocks may correspond to a temporal CT (center) block and a temporal BR (bottom-right) block. In one embodiment, the temporal search order checks the temporal BR block first and the temporal CT block next.
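
    A minimal sketch of the search described above, assuming that BR and CT denote the bottom-right and center temporal candidates, that a block exposes a DV only when it is DCP-coded, and that spatial neighbors serve as a fallback after the temporal ones; the Block structure and this ordering are illustrative assumptions, not the claimed derivation.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Block:
    dv: Optional[Tuple[int, int]]  # disparity vector if the block is DCP-coded
    ctu_row: int                   # CTU row containing the block

def derive_dv(spatial: List[Optional[Block]],
              temporal_br: Optional[Block],
              temporal_ct: Optional[Block],
              current_ctu_row: int) -> Optional[Tuple[int, int]]:
    # Fixed temporal order, identical for every dependent view: BR first, CT next.
    for blk in (temporal_br, temporal_ct):
        if blk is None:
            continue
        if blk.ctu_row > current_ctu_row:
            continue  # omit temporal neighbors from a CTU below the current CTU row
        if blk.dv is not None:
            return blk.dv
    # Spatial neighbors as a further source of a DV (order assumed).
    for blk in spatial:
        if blk is not None and blk.dv is not None:
            return blk.dv
    return None  # no DV found; a caller could fall back to a default DV
```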


METHOD AND APPARATUS FOR RESIDUAL PREDICTION IN THREE-DIMENSIONAL VIDEO CODING
    4.
    Invention Application (Granted)

    Publication No.: US20150341663A1

    Publication Date: 2015-11-26

    Application No.: US14442951

    Filing Date: 2013-11-14

    Abstract: A method and apparatus using pseudo residues to predict current residues for three-dimensional or multi-view video coding are disclosed. The method first receives input data associated with a current block of a current picture in a current dependent view and determines an inter-view reference block in a first inter-view reference picture in a reference view according to a DV (disparity vector), where the current picture and the first inter-view reference picture correspond to the same time instance. Pseudo residues are then determined and used for encoding or decoding of the current block, where the pseudo residues correspond to differences between a corresponding region in an inter-time reference picture in the current dependent view and a pseudo reference region in a pseudo reference picture in the reference view, and where the inter-time reference picture and the pseudo reference picture correspond to the same time instance.
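
    The sketch below shows one plausible way the pseudo residues could be formed and applied, assuming the current block is first predicted from the inter-view reference block and the pseudo residues are added on top; the NumPy helpers, the omitted weighting factor, and the 8-bit clipping range are assumptions made for illustration.

```python
import numpy as np

def pseudo_residues(corresponding_region: np.ndarray,
                    pseudo_reference_region: np.ndarray) -> np.ndarray:
    """Pseudo residues: corresponding region in the inter-time reference picture
    of the current dependent view minus the pseudo reference region in the
    pseudo reference picture of the reference view (same time instance)."""
    return corresponding_region.astype(np.int32) - pseudo_reference_region.astype(np.int32)

def predict_current_block(inter_view_reference_block: np.ndarray,
                          corresponding_region: np.ndarray,
                          pseudo_reference_region: np.ndarray) -> np.ndarray:
    """Assumed usage: inter-view reference block plus pseudo residues as the
    final predictor (a weighting factor on the residues is omitted here)."""
    pred = inter_view_reference_block.astype(np.int32) + pseudo_residues(
        corresponding_region, pseudo_reference_region)
    return np.clip(pred, 0, 255).astype(np.uint8)  # assumed 8-bit samples
```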


Method of Error-Resilient Illumination Compensation for Three-Dimensional Video Coding
    5.
    Invention Application (Pending, Published)

    Publication No.: US20160021393A1

    Publication Date: 2016-01-21

    Application No.: US14762508

    Filing Date: 2014-04-03

    Abstract: A method of illumination compensation for three-dimensional or multi-view encoding and decoding is disclosed. The method incorporates an illumination compensation flag only if illumination compensation is enabled and the current coding unit is coded with a single 2N×2N prediction unit. The illumination compensation is applied to the current coding unit according to the illumination compensation flag. The illumination compensation flag is incorporated when the current coding unit is coded in Merge mode, without checking whether a current reference picture is an inter-view reference picture.
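
    The signaling condition described above reduces to a simple predicate; the sketch below is illustrative only, and the flag and parameter names are not taken from the application.

```python
def should_signal_ic_flag(ic_enabled: bool, is_2Nx2N: bool) -> bool:
    """Signal the illumination-compensation flag only when IC is enabled and
    the current CU is coded with a single 2Nx2N prediction unit.  In Merge
    mode the flag is signaled without checking whether the current reference
    picture is an inter-view reference picture (no reference-type check here)."""
    return ic_enabled and is_2Nx2N
```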


Method of Sub-Prediction Unit Inter-View Motion Prediction in 3D Video Coding
    6.
    Invention Application (Pending, Published)

    Publication No.: US20160134857A1

    Publication Date: 2016-05-12

    Application No.: US14891822

    Filing Date: 2014-07-10

    Abstract: A method for a three-dimensional encoding or decoding system incorporating sub-block based inter-view motion prediction is disclosed. The system utilizes motion or disparity parameters associated with reference sub-blocks in a reference picture of a reference view, corresponding to the texture sub-PUs split from a current texture PU (prediction unit), to predict the motion or disparity parameters of the current texture PU. Candidate motion or disparity parameters for the current texture PU may comprise the candidate motion or disparity parameters derived for all texture sub-PUs resulting from splitting the current texture PU. The candidate motion or disparity parameters for the current texture PU can be used as a sub-block-based inter-view Merge candidate for the current texture PU in Merge mode. The sub-block-based inter-view Merge candidate can be inserted into the first position of a candidate list.
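
    A simplified sketch of the sub-PU splitting and motion fetch described above. The ref_view_motion(x, y) helper, the use of the DV-shifted sub-PU center for the lookup, and the MotionInfo fields are assumptions made for illustration; the resulting per-sub-PU motion field would form one Merge candidate placed at the first position of the candidate list.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class MotionInfo:
    mv: Tuple[int, int]   # motion or disparity vector
    ref_idx: int          # reference picture index

def sub_pu_inter_view_candidate(pu_x: int, pu_y: int, pu_w: int, pu_h: int,
                                sub_size: int, dv: Tuple[int, int],
                                ref_view_motion: Callable[[int, int], Optional[MotionInfo]],
                                default_motion: MotionInfo) -> List[List[MotionInfo]]:
    """Split the current texture PU into sub-PUs and, for each sub-PU, fetch the
    motion/disparity parameters of the reference sub-block located by the DV in
    the reference view; fall back to a default when none are available."""
    rows, cols = pu_h // sub_size, pu_w // sub_size
    candidate = []
    for r in range(rows):
        row = []
        for c in range(cols):
            cx = pu_x + c * sub_size + sub_size // 2 + dv[0]
            cy = pu_y + r * sub_size + sub_size // 2 + dv[1]
            info = ref_view_motion(cx, cy)
            row.append(info if info is not None else default_motion)
        candidate.append(row)
    return candidate
```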


Method and Apparatus of Disparity Vector Derivation for Three-Dimensional Video Coding
    7.
    Invention Application (Granted)

    Publication No.: US20160029045A1

    Publication Date: 2016-01-28

    Application No.: US14655973

    Filing Date: 2014-04-10

    Abstract: A method and apparatus of three-dimensional/multi-view coding using aligned reference information are disclosed. The present system aligns the reference information associated with the reference view of the derived DV with the reference information associated with a selected reference view by modifying the selected reference view, or by modifying the derived DV or a converted DV derived from the depth block pointed to by the derived DV. The DV can be derived using the Neighboring Block Disparity Vector (NBDV) process. When the reference view of the derived DV is different from the selected reference view, the system scales the derived DV or changes the converted DV to refer to the selected reference view. The system may also set the selected reference view to the reference view of the derived DV.
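
    As a rough illustration of the scaling step mentioned above, the sketch below rescales a derived DV toward the selected reference view by assuming disparity is proportional to the camera-position (baseline) difference; this linear rule and the position parameters are assumptions, not the application's normative derivation. The alternative noted in the abstract, setting the selected reference view to the DV's own reference view, needs no scaling at all.

```python
from typing import Tuple

def align_dv(dv: Tuple[int, int],
             pos_current: float,
             pos_dv_ref: float,
             pos_selected_ref: float) -> Tuple[int, int]:
    """Scale a derived DV so it refers to the selected reference view, assuming
    disparity grows linearly with the camera-position difference (baseline)."""
    if pos_dv_ref == pos_selected_ref:
        return dv  # reference views already aligned, nothing to do
    num = pos_selected_ref - pos_current
    den = pos_dv_ref - pos_current  # assumed non-zero: DV refers to another view
    return (int(dv[0] * num / den), int(dv[1] * num / den))
```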


Method of Simplified CABAC Coding in 3D Video Coding
    8.
    Invention Application (Granted)

    Publication No.: US20160065964A1

    Publication Date: 2016-03-03

    Application No.: US14785011

    Filing Date: 2014-06-24

    Abstract: A method for reducing the storage requirement or complexity of context-based coding in three-dimensional or multi-view video encoding and decoding is disclosed. The system selects the context based on selected information associated with one or more neighboring blocks of the current block, conditioned on whether those neighboring blocks are available. The syntax element is then encoded or decoded using context-based coding according to the context selection. The syntax element to be coded may correspond to an IC (illumination compensation) flag or an ARP (advanced residual prediction) flag. In another example, one or more syntax elements for coding a current depth block using DMM (Depth Map Model) are encoded or decoded using context-based coding, where the context-based coding selects a bypass mode for at least one selected syntax element.
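
    A minimal sketch of availability-conditioned context selection for a flag such as the IC or ARP flag: only neighbors that are actually available contribute to the context index, so no stored state is needed for unavailable ones. The two-neighbor set and the resulting three contexts are illustrative assumptions; the application may use fewer neighbors or route some DMM syntax elements through bypass coding instead.

```python
def select_context(left_available: bool, left_flag: bool,
                   above_available: bool, above_flag: bool) -> int:
    """Context index in {0, 1, 2} for a CABAC-coded flag, counting only the
    flags of neighboring blocks that are available (assumed neighbor set)."""
    ctx = 0
    if left_available and left_flag:
        ctx += 1
    if above_available and above_flag:
        ctx += 1
    return ctx
```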
