Method for encoding and decoding free viewpoint videos
    1.
    Granted invention patent
    Method for encoding and decoding free viewpoint videos (Expired)

    Publication No.: US07324594B2

    Publication date: 2008-01-29

    Application No.: US10723035

    Filing date: 2003-11-26

    IPC classification: H04N7/12

    Abstract: A system encodes videos acquired of a moving object in a scene by multiple fixed cameras. Camera calibration data of each camera are first determined. The camera calibration data of each camera are associated with the corresponding video. A segmentation mask for each frame of each video is determined. The segmentation mask identifies only foreground pixels in the frame associated with the object. A shape encoder then encodes the segmentation masks, a position encoder encodes a position of each pixel, and a color encoder encodes a color of each pixel. The encoded data can be combined into a single bitstream and transferred to a decoder. At the decoder, the bitstream is decoded to an output video having an arbitrary user-selected viewpoint. A dynamic 3D point model defines a geometry of the moving object. Splat sizes and surface normals used during rendering can be determined explicitly by the encoder, or implicitly by the decoder.
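    The per-frame pipeline the abstract describes (segmentation mask, then separate shape, position, and color coders bundled with the camera calibration into one stream) can be sketched as follows. This is a minimal illustration, not the patented codec: the background-subtraction segmentation, the `encode_frame` record layout, and the use of `np.packbits` as a stand-in shape coder are all assumptions for the example.

    ```python
    import numpy as np

    def segmentation_mask(frame, background, threshold=30):
        """Label as foreground the pixels that differ from a static background
        plate (hypothetical: the abstract does not specify the segmentation)."""
        diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=2)
        return diff > threshold  # boolean mask, True = foreground

    def encode_frame(frame, mask, calibration):
        """Bundle the three coded components plus calibration into one record."""
        ys, xs = np.nonzero(mask)
        return {
            "calibration": calibration,         # per-camera calibration data
            "shape": np.packbits(mask),         # shape coder: packed binary mask
            "position": np.stack([ys, xs], 1),  # position coder: pixel coords
            "color": frame[mask],               # color coder: RGB of foreground
        }

    # tiny synthetic example: a 4x4 frame with a 2x2 bright "object"
    bg = np.zeros((4, 4, 3), dtype=np.uint8)
    fr = bg.copy()
    fr[1:3, 1:3] = 200
    rec = encode_frame(fr, segmentation_mask(fr, bg), calibration={"cam": 0})
    print(rec["position"].shape)  # (4, 2): four foreground pixels
    ```

    A real encoder would entropy-code each component; the point here is only the separation of shape, position, and color streams per camera.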


    System and method for producing 3D video images
    2.
    Granted invention patent
    System and method for producing 3D video images (Expired)

    Publication No.: US07034822B2

    Publication date: 2006-04-25

    Application No.: US10464364

    Filing date: 2003-06-18

    IPC classification: G06T15/00

    Abstract: A method and system generates 3D video images from point samples obtained from primary video data in a 3D coordinate system. Each point sample contains 3D coordinates in a 3D coordinate system, as well as colour and/or intensity information. On subsequent rendering, the point samples are modified continuously according to updates of the 3D primary video data. The point samples are arranged in a hierarchic data structure such that each point sample is an end point, or leaf node, in a hierarchical tree, wherein each branch point in the tree is the average value of the nodes below it in the hierarchy.
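    The hierarchical data structure described above (leaves hold point samples; every branch node stores the average of its subtree) can be sketched like this. It is a simplified illustration under assumed details: the binary median split and the dict-based node layout are choices for the example, not the patent's structure, which could equally be an octree.

    ```python
    import numpy as np

    def build_tree(points, leaf_size=1):
        """Recursively split point samples into a tree; each leaf holds one
        sample, each branch node stores the average of its children."""
        if len(points) <= leaf_size:
            return {"value": points[0], "children": []}
        mid = len(points) // 2
        children = [build_tree(points[:mid], leaf_size),
                    build_tree(points[mid:], leaf_size)]
        # branch value = average of the node values one level down
        avg = np.mean([c["value"] for c in children], axis=0)
        return {"value": avg, "children": children}

    # four point samples: (x, y, z, intensity)
    pts = np.array([[0, 0, 0, 10], [2, 0, 0, 20],
                    [0, 2, 0, 30], [2, 2, 0, 40]], dtype=float)
    root = build_tree(pts)
    print(root["value"])  # average of all samples: [1. 1. 0. 25.]
    ```

    Storing averages at branch nodes lets a renderer traverse only to the depth the current viewpoint needs, drawing one averaged splat instead of every leaf beneath it.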


    Method and system for generating a 3D representation of a dynamically changing 3D scene
    3.
    Granted invention patent
    Method and system for generating a 3D representation of a dynamically changing 3D scene (In force)

    Publication No.: US09406131B2

    Publication date: 2016-08-02

    Application No.: US12302928

    Filing date: 2007-05-24

    Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects at one or more earlier instants in time. As a result, the quality, speed, and robustness of the 2D tracking in the video streams are improved.
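    The geometric core of the feedback loop in this abstract — project the object's previous 3D position into each camera to seed the 2D trackers, then fuse the resulting 2D tracks back into a 3D position by triangulation — can be sketched with two toy camera matrices. This is a minimal sketch, not the patented method: the specific camera matrices, the linear (DLT) triangulation, and the noiseless setup are assumptions for the example.

    ```python
    import numpy as np

    def project(P, X):
        """Project a homogeneous 3D point X with a 3x4 camera matrix P to 2D."""
        x = P @ X
        return x[:2] / x[2]

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen in two views."""
        A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]              # null vector of A, up to scale
        return X / X[3]         # normalise the homogeneous coordinate

    # two toy cameras: identity, and a 1-unit baseline shift along x
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_prev = np.array([0.5, 0.2, 4.0, 1.0])  # object's 3D position at t-1

    # the 3D prior projected into each view seeds that view's 2D tracker
    prior1, prior2 = project(P1, X_prev), project(P2, X_prev)
    # fusing the two (here: noiseless) 2D observations recovers the 3D point
    X_now = triangulate(P1, P2, prior1, prior2)
    print(np.round(X_now[:3], 3))  # [0.5 0.2 4. ]
    ```

    With real trackers the 2D observations at time t differ from the projected priors; the prior simply constrains each tracker's search window, which is what makes the 2D tracking faster and more robust.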


    METHOD AND SYSTEM FOR GENERATING A 3D REPRESENTATION OF A DYNAMICALLY CHANGING 3D SCENE
    4.
    Invention patent application
    METHOD AND SYSTEM FOR GENERATING A 3D REPRESENTATION OF A DYNAMICALLY CHANGING 3D SCENE (In force)

    Publication No.: US20090315978A1

    Publication date: 2009-12-24

    Application No.: US12302928

    Filing date: 2007-05-24

    IPC classification: H04N13/00 H04N5/225

    Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects at one or more earlier instants in time. As a result, the quality, speed, and robustness of the 2D tracking in the video streams are improved.
