Method for estimating a pose of an articulated object model
    1.
    Granted patent
    Method for estimating a pose of an articulated object model (in force)

    Publication No.: US08830236B2

    Publication Date: 2014-09-09

    Application No.: US13096488

    Filing Date: 2011-04-28

    IPC Classes: G06T17/00

    Abstract: A computer-implemented method for estimating a pose of an articulated object model that is a computer-based 3D model of a real-world object observed by one or more source cameras, including the steps of: obtaining a source image from a video stream; processing the source image to extract a source image segment; maintaining, in a database, a set of reference silhouettes, each being associated with an articulated object model and a corresponding reference pose; comparing the source image segment to the reference silhouettes and selecting reference silhouettes by taking into account, for each reference silhouette, a matching error that indicates how closely the reference silhouette matches the source image segment; retrieving the corresponding reference poses of the articulated object models; and computing an estimate of the pose of the articulated object model from the reference poses of the selected reference silhouettes.

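    The abstract above amounts to a nearest-silhouette lookup followed by blending the associated reference poses. The sketch below is an illustrative simplification under assumed data structures (binary silhouette masks, a list of (silhouette, pose) pairs, an inverse-error blending rule); it is not the patented method.

        import numpy as np

        def matching_error(segment, reference):
            # fraction of pixels where the binary source segment and reference silhouette disagree
            return np.mean(segment != reference)

        def estimate_pose(segment, references, k=3):
            # references: list of (silhouette, pose) pairs; pose is an array of joint positions
            scored = sorted(((matching_error(segment, sil), pose) for sil, pose in references),
                            key=lambda item: item[0])
            best = scored[:k]
            weights = np.array([1.0 / (err + 1e-6) for err, _ in best])
            weights /= weights.sum()
            poses = np.stack([np.asarray(pose, dtype=float) for _, pose in best])
            # weighted average of the k best reference poses as the pose estimate
            return np.tensordot(weights, poses, axes=1)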

    Silhouette-based pose estimation
    2.
    Patent application
    Silhouette-based pose estimation (in force)

    Publication No.: US20140219550A1

    Publication Date: 2014-08-07

    Application No.: US14117593

    Filing Date: 2012-05-08

    IPC Classes: G06K9/00 G06T7/00

    Abstract: Estimating a pose of an articulated 3D object model (4) by a computer is done by •obtaining a sequence of source images (10) and therefrom corresponding source image segments (13) with objects (14) separated from the image background; •matching such a sequence (51) with sequences (52) of reference silhouettes (13′), determining one or more selected sequences of reference silhouettes (13′) forming a best match; •for each of these selected sequences of reference silhouettes (13′), retrieving a reference pose that is associated with one of the reference silhouettes (13′); and •computing an estimate of the pose of the articulated object model (4) from the retrieved reference pose or poses. The result of these steps is an initial pose estimate, which can then be used in further steps, for example for maintaining local consistency between pose estimates from consecutive frames and global consistency over a longer sequence of frames.

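    A minimal sketch of the sequence-matching idea, assuming equal-length silhouette windows, a per-frame mask-difference distance, and a brute-force sliding window over stored reference sequences; the patented matching procedure is not reproduced here.

        import numpy as np

        def sequence_distance(src_seq, ref_window):
            # sum of per-frame silhouette differences between two equal-length sequences
            return sum(np.mean(s != r) for s, r in zip(src_seq, ref_window))

        def initial_pose_estimate(src_seq, reference_sequences):
            # reference_sequences: list of (silhouettes, poses), with silhouettes[i] paired to poses[i]
            best_error, best_pose = float("inf"), None
            n = len(src_seq)
            for silhouettes, poses in reference_sequences:
                for start in range(len(silhouettes) - n + 1):
                    error = sequence_distance(src_seq, silhouettes[start:start + n])
                    if error < best_error:
                        # take the pose linked to the last frame of the best-matching window
                        best_error, best_pose = error, poses[start + n - 1]
            return best_pose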

    Image processing method and device for instant replay
    4.
    Granted patent
    Image processing method and device for instant replay (in force)

    Publication No.: US08355083B2

    Publication Date: 2013-01-15

    Application No.: US13189136

    Filing Date: 2011-07-22

    IPC Classes: H04N5/44

    Abstract: What is disclosed is a computer-implemented image-processing system and method for the automatic generation of video sequences that can be associated with a televised event. The methods can include the steps of: defining a reference keyframe from a reference view of a source image sequence; from one or more keyframes, automatically computing one or more sets of virtual camera parameters; generating a virtual camera flight path, which is described by a change of virtual camera parameters over time, and which defines a movement of a virtual camera and a corresponding change of a virtual view; and rendering and storing a virtual video stream defined by the virtual camera flight path.

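    A minimal sketch of a virtual camera flight path as a change of camera parameters over time, assuming a simple dictionary of position, look-at point and field of view, and linear interpolation between two keyframe cameras; the parameterization used in the patent may differ.

        import numpy as np

        def flight_path(cam_a, cam_b, n_frames):
            # cam_a, cam_b: dicts with 'position' (3,), 'look_at' (3,) and 'fov' (scalar)
            # yields one interpolated set of virtual camera parameters per output frame
            for i in range(n_frames):
                t = i / max(n_frames - 1, 1)
                yield {
                    "position": (1 - t) * np.asarray(cam_a["position"]) + t * np.asarray(cam_b["position"]),
                    "look_at": (1 - t) * np.asarray(cam_a["look_at"]) + t * np.asarray(cam_b["look_at"]),
                    "fov": (1 - t) * cam_a["fov"] + t * cam_b["fov"],
                }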

    Silhouette-based pose estimation
    5.
    Granted patent
    Silhouette-based pose estimation (in force)

    Publication No.: US09117113B2

    Publication Date: 2015-08-25

    Application No.: US14117593

    Filing Date: 2012-05-08

    IPC Classes: G06K9/00 G06T7/00

    Abstract: Estimating a pose of an articulated 3D object model (4) by a computer is done by •obtaining a sequence of source images (10) and therefrom corresponding source image segments (13) with objects (14) separated from the image background; •matching such a sequence (51) with sequences (52) of reference silhouettes (13′), determining one or more selected sequences of reference silhouettes (13′) forming a best match; •for each of these selected sequences of reference silhouettes (13′), retrieving a reference pose that is associated with one of the reference silhouettes (13′); and •computing an estimate of the pose of the articulated object model (4) from the retrieved reference pose or poses. The result of these steps is an initial pose estimate, which can then be used in further steps, for example for maintaining local consistency between pose estimates from consecutive frames and global consistency over a longer sequence of frames.

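    The closing sentence mentions keeping pose estimates from consecutive frames locally consistent. Below is a minimal sketch of one such smoothing step, assuming simple exponential blending of per-frame pose estimates; this is not the consistency scheme claimed in the patent.

        import numpy as np

        def smooth_poses(frame_estimates, alpha=0.6):
            # frame_estimates: iterable of per-frame pose arrays (joints x 3)
            # alpha weights the current frame's estimate against the running result
            smoothed, previous = [], None
            for pose in frame_estimates:
                pose = np.asarray(pose, dtype=float)
                previous = pose if previous is None else alpha * pose + (1 - alpha) * previous
                smoothed.append(previous)
            return smoothed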

    Spatially adaptive photographic flash unit
    6.
    Granted patent
    Spatially adaptive photographic flash unit (expired)

    Publication No.: US08218963B2

    Publication Date: 2012-07-10

    Application No.: US12936228

    Filing Date: 2009-02-04

    IPC Classes: G03B15/03

    Abstract: Using photographic flash for candid shots often results in an unevenly lit scene, in which objects in the back appear dark. A spatially adaptive photographic flash (100) is disclosed, in which the intensity of illumination (21, 23) varies depending on the depth and reflectivity (30, 101) of features in the scene. Adaptation to changes in depth is used in a single-shot method. Adaptation to changes in reflectivity is used in a multi-shot method. The single-shot method requires only a depth image (30), whereas the multi-shot method requires at least one color image (40) in addition to the depth data (30).

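    A minimal sketch of the single-shot, depth-only adaptation described above, assuming an inverse-square light falloff and a normalized per-pixel intensity map; the patented control of the flash unit is not reproduced here.

        import numpy as np

        def flash_intensity_from_depth(depth, max_intensity=1.0):
            # depth: 2D array of scene depths; farther features get more light
            # to counter the approximately 1/d^2 falloff of flash illumination
            depth = np.asarray(depth, dtype=float)
            intensity = depth ** 2
            intensity /= intensity.max() + 1e-9
            return np.clip(intensity * max_intensity, 0.0, max_intensity)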

    IMAGE PROCESSING METHOD AND DEVICE FOR INSTANT REPLAY
    7.
    Patent application
    IMAGE PROCESSING METHOD AND DEVICE FOR INSTANT REPLAY (in force)

    Publication No.: US20120188452A1

    Publication Date: 2012-07-26

    Application No.: US13189136

    Filing Date: 2011-07-22

    IPC Classes: H04N5/44

    Abstract: What is disclosed is a computer-implemented image-processing system and method for the automatic generation of video sequences that can be associated with a televised event. The methods can include the steps of: defining a reference keyframe from a reference view of a source image sequence; from one or more keyframes, automatically computing one or more sets of virtual camera parameters; generating a virtual camera flight path, which is described by a change of virtual camera parameters over time, and which defines a movement of a virtual camera and a corresponding change of a virtual view; and rendering and storing a virtual video stream defined by the virtual camera flight path.

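    A minimal sketch of the final step, rendering and storing the virtual video stream along a flight path. render_view is a hypothetical placeholder for whatever renderer turns one set of virtual camera parameters into an image, and imageio is used only as an example video writer; neither is specified by the patent.

        import imageio.v2 as imageio

        def store_virtual_stream(flight_path_params, render_view, out_path, fps=25):
            # flight_path_params: iterable of virtual camera parameter sets along the flight path
            # render_view: hypothetical callable returning an HxWx3 uint8 image for one parameter set
            writer = imageio.get_writer(out_path, fps=fps)
            for params in flight_path_params:
                writer.append_data(render_view(params))
            writer.close()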

    Three-dimensional scene reconstruction from labeled two-dimensional images
    8.
    Granted patent
    Three-dimensional scene reconstruction from labeled two-dimensional images (expired)

    Publication No.: US07142726B2

    Publication Date: 2006-11-28

    Application No.: US10391998

    Filing Date: 2003-03-19

    IPC Classes: G06K9/00 G06K9/36 G06T17/00

    Abstract: A method constructs three-dimensional (3D) models of a scene from a set of two-dimensional (2D) input images. The 3D model can then be used to reconstruct the scene from arbitrary viewpoints. A user segments and labels a set of corresponding polygonal regions in each image using conventional photo-editing tools. The invention constructs the model so that it has the maximum volume consistent with the set of labeled regions in the input images. The method according to the invention directly constructs the polygonal model.

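    A minimal sketch of the "maximum volume consistent with the labeled regions" idea, shown here as voxel carving rather than the patent's direct polygonal construction. The project helper, camera models and label layout are assumptions introduced for illustration.

        import numpy as np

        def carve_volume(voxel_centers, labeled_views, project, target_label):
            # voxel_centers: (N, 3) array of candidate voxel positions
            # labeled_views: list of (label_image, camera) pairs
            # project: hypothetical helper mapping a 3D point and camera to integer pixel (u, v)
            keep = np.ones(len(voxel_centers), dtype=bool)
            for label_image, camera in labeled_views:
                for i, center in enumerate(voxel_centers):
                    if not keep[i]:
                        continue
                    u, v = project(center, camera)
                    inside = 0 <= v < label_image.shape[0] and 0 <= u < label_image.shape[1]
                    if not inside or label_image[v, u] != target_label:
                        keep[i] = False  # carve away voxels inconsistent with any labeled view
            return voxel_centers[keep]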

    METHOD FOR ESTIMATING A POSE OF AN ARTICULATED OBJECT MODEL
    9.
    Patent application
    METHOD FOR ESTIMATING A POSE OF AN ARTICULATED OBJECT MODEL (in force)

    Publication No.: US20110267344A1

    Publication Date: 2011-11-03

    Application No.: US13096488

    Filing Date: 2011-04-28

    IPC Classes: G06K9/00 G06T17/00

    Abstract: A computer-implemented method for estimating a pose of an articulated object model (4), wherein the articulated object model (4) is a computer-based 3D model (1) of a real-world object (14) observed by one or more source cameras (9), and wherein the pose of the articulated object model (4) is defined by the spatial location of joints (2) of the articulated object model (4), comprises the steps of: obtaining a source image (10) from a video stream; processing the source image (10) to extract a source image segment (13); maintaining, in a database, a set of reference silhouettes, each being associated with an articulated object model (4) and a corresponding reference pose; comparing the source image segment (13) to the reference silhouettes and selecting reference silhouettes by taking into account, for each reference silhouette, a matching error that indicates how closely the reference silhouette matches the source image segment (13) and/or a coherence error that indicates how much the reference pose is consistent with the pose of the same real-world object (14) as estimated from a preceding source image (10); retrieving the corresponding reference poses of the articulated object models (4); and computing an estimate of the pose of the articulated object model (4) from the reference poses of the selected reference silhouettes.

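    This abstract adds a coherence error against the pose estimated from the preceding source image. A minimal sketch of combining it with the matching error when selecting a reference silhouette, assuming simplified definitions of both errors and a fixed weighting; not the patented formulation.

        import numpy as np

        def coherence_error(reference_pose, previous_pose):
            # mean joint distance between a candidate reference pose and the previous frame's estimate
            diff = np.asarray(reference_pose, dtype=float) - np.asarray(previous_pose, dtype=float)
            return float(np.mean(np.linalg.norm(diff, axis=-1)))

        def select_reference_pose(segment, references, previous_pose, weight=0.5):
            # references: list of (silhouette, pose) pairs
            def total_error(item):
                silhouette, pose = item
                matching = np.mean(segment != silhouette)
                return matching + weight * coherence_error(pose, previous_pose)
            _, best_pose = min(references, key=total_error)
            return best_pose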

    SPATIALLY ADAPTIVE PHOTOGRAPHIC FLASH UNIT
    10.
    Patent application
    SPATIALLY ADAPTIVE PHOTOGRAPHIC FLASH UNIT (expired)

    Publication No.: US20110123183A1

    Publication Date: 2011-05-26

    Application No.: US12936228

    Filing Date: 2009-02-04

    IPC Classes: G03B15/03

    Abstract: Using photographic flash for candid shots often results in an unevenly lit scene, in which objects in the back appear dark. A spatially adaptive photographic flash (100) is disclosed, in which the intensity of illumination (21, 23) varies depending on the depth and reflectivity (30, 101) of features in the scene. Adaptation to changes in depth is used in a single-shot method. Adaptation to changes in reflectivity is used in a multi-shot method. The single-shot method requires only a depth image (30), whereas the multi-shot method requires at least one color image (40) in addition to the depth data (30).

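    A minimal sketch of the multi-shot, reflectivity-driven adaptation mentioned above, assuming image brightness from a first flash-lit color image as a proxy for reflectivity and a simple proportional correction of the next shot's intensity map; not the patented method.

        import numpy as np

        def refine_intensity(prev_intensity, color_image, target_brightness=0.5):
            # prev_intensity: per-pixel flash map used for the previous shot
            # color_image: RGB image captured with that flash, values in [0, 1]
            brightness = np.asarray(color_image, dtype=float).mean(axis=2)
            correction = target_brightness / (brightness + 1e-3)  # brighten dark regions, dim reflective ones
            return np.clip(prev_intensity * correction, 0.0, 1.0)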