Foveated wide-angle imaging system and method for capturing and viewing wide-angle images in real time
    21.
    Granted patent (lapsed)

    Publication No.: US07084904B2

    Publication Date: 2006-08-01

    Application No.: US10262292

    Filing Date: 2002-09-30

    IPC Classes: H04N5/225 H04N7/00 H04N13/02

    Abstract: The present invention includes a foveated wide-angle imaging system and method for capturing a wide-angle image and for viewing the captured wide-angle image in real time. In general, the foveated wide-angle imaging system includes a foveated wide-angle camera system having multiple cameras for capturing a scene and outputting raw output images, a foveated wide-angle stitching system for generating a stitch table, and a real-time wide-angle image correction system that creates a composed warp table from the stitch table and processes the raw output images using the composed warp table to correct distortion and perception problems. The foveated wide-angle imaging method includes using a foveated wide-angle camera system to capture a plurality of raw output images, generating a composed warp table, and processing the plurality of raw output images using the composed warp table to generate a corrected wide-angle image for viewing.
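
    One way to picture the per-frame processing described above is a lookup through the composed warp table, which maps each output pixel back to a pixel of one of the raw camera images. The Python sketch below is a minimal illustration under an assumed table encoding of (camera index, source row, source column) per output pixel; the abstract does not specify the table layout, and the names raw_images and composed_table are hypothetical.

        import numpy as np

        def apply_composed_warp(raw_images, composed_table):
            """Remap raw camera outputs into one corrected wide-angle image.

            raw_images:     list of H x W x 3 arrays, one per camera (assumed layout).
            composed_table: H_out x W_out x 3 array holding (camera index, source row,
                            source column) per output pixel -- an assumed encoding of
                            the composed stitch + warp lookup table.
            """
            h_out, w_out, _ = composed_table.shape
            out = np.zeros((h_out, w_out, 3), dtype=raw_images[0].dtype)
            for y in range(h_out):
                for x in range(w_out):
                    cam, sy, sx = composed_table[y, x].astype(int)
                    out[y, x] = raw_images[cam][sy, sx]   # nearest-neighbour lookup
            return out

    Because the stitch and warp tables are composed once ahead of time, the per-frame work reduces to table lookups, which is what makes real-time viewing of the corrected wide-angle image feasible.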


    Real-time wide-angle image correction system and method for computer image viewing
    22.
    Granted patent (in force)

    Publication No.: US07058237B2

    Publication Date: 2006-06-06

    Application No.: US10186915

    Filing Date: 2002-06-28

    IPC Classes: G06K9/36 G09G5/00 H04N5/225

    Abstract: The present invention includes a real-time wide-angle image correction system and a method for alleviating distortion and perception problems in images captured by wide-angle cameras. In general, the real-time wide-angle image correction method generates a warp table from the pixel coordinates of a wide-angle image and applies the warp table to the wide-angle image to create a corrected wide-angle image. The corrections are performed using a parametric class of warping functions that include Spatially Varying Uniform (SVU) scaling functions. The SVU scaling functions and scaling factors are used to perform vertical scaling and horizontal scaling on the wide-angle image pixel coordinates. A horizontal distortion correction is performed using the SVU scaling functions and at least two different scaling factors. This processing generates a warp table that can be applied to the wide-angle image to yield the corrected wide-angle image.
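
    As a rough picture of how such a warp table might be generated, the sketch below assigns each image column a single scale factor that blends between two scaling factors (center versus edge) and applies it to both axes, so the scaling is locally uniform. The cosine blend, the parameter names center_scale and edge_scale, and the (src_y, src_x) table layout are illustrative assumptions, not the patented SVU functions.

        import numpy as np

        def build_svu_warp_table(width, height, center_scale=1.0, edge_scale=1.3):
            """Build a warp table with a spatially varying, per-column scale factor.

            Returns an H x W x 2 table of fractional (src_y, src_x) coordinates.
            The functional form is a hypothetical stand-in for an SVU-style scaling."""
            xs = np.arange(width)
            t = np.abs(xs - (width - 1) / 2) / ((width - 1) / 2)   # 0 at center, 1 at edges
            s = center_scale + (edge_scale - center_scale) * (1 - np.cos(np.pi * t)) / 2

            # Horizontal mapping: accumulate column widths 1/s(x), then renormalize,
            # so columns with larger scale factors appear wider in the output.
            src_x = np.concatenate(([0.0], np.cumsum(1.0 / s[:-1])))
            src_x *= (width - 1) / src_x[-1]

            cy = (height - 1) / 2
            warp = np.zeros((height, width, 2), dtype=np.float32)
            for y in range(height):
                warp[y, :, 0] = cy + (y - cy) / s                   # vertical scaling per column
                warp[y, :, 1] = src_x
            return warp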


    System and method providing improved head motion estimations for animation

    Publication No.: US07020305B2

    Publication Date: 2006-03-28

    Application No.: US09731481

    Filing Date: 2000-12-06

    IPC Classes: G06K9/00

    Abstract: The system provides improved procedures to estimate head motion between two images of a face. Locations of a number of distinct facial features are identified in the two images. The identified locations can correspond to the eye corners, mouth corners, and nose tip. The locations are converted into a set of physical face parameters based on the symmetry of the identified distinct facial features. The set of physical parameters reduces the number of unknowns as compared to the number of equations used to determine the unknowns. An initial head motion estimate is determined by: (a) estimating each of the set of physical parameters, (b) estimating a first head pose transform corresponding to the first image, and (c) estimating a second head pose transform corresponding to the second image. The head motion estimate can be incorporated into a feature matching algorithm to refine the head motion estimation and the physical facial parameters. In one implementation, an inequality constraint is placed on a particular physical parameter, such as the nose tip, so that the parameter is constrained within a predetermined minimum and maximum value. The inequality constraint is converted to an equality constraint by using a penalty function. The constraint is then used during the initial head motion estimation to add robustness to the motion estimation.
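
    The penalty-function step mentioned at the end of the abstract is commonly realized as a term added to the estimation objective; the exact function used in the patent is not given, so the quadratic form below is illustrative only, with p the constrained physical parameter and lambda a penalty weight.

        \[
          \phi(p) = \max(0,\, p_{\min} - p)^2 + \max(0,\, p - p_{\max})^2,
          \qquad
          E_{\text{total}} = E_{\text{motion}} + \lambda\, \phi(p)
        \]

    The penalty vanishes while p stays inside [p_min, p_max] and grows quadratically outside, so the box constraint acts like a soft equality term during the initial head motion estimation.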

    Real-time wide-angle image correction system and method for computer image viewing

    Publication No.: US20060033999A1

    Publication Date: 2006-02-16

    Application No.: US11202298

    Filing Date: 2005-08-11

    IPC Classes: G02B15/14

    Abstract: The present invention includes a real-time wide-angle image correction system and a method for alleviating distortion and perception problems in images captured by wide-angle cameras. In general, the real-time wide-angle image correction method generates a warp table from the pixel coordinates of a wide-angle image and applies the warp table to the wide-angle image to create a corrected wide-angle image. The corrections are performed using a parametric class of warping functions that include Spatially Varying Uniform (SVU) scaling functions. The SVU scaling functions and scaling factors are used to perform vertical scaling and horizontal scaling on the wide-angle image pixel coordinates. A horizontal distortion correction is performed using the SVU scaling functions and at least two different scaling factors. This processing generates a warp table that can be applied to the wide-angle image to yield the corrected wide-angle image.
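
    For the per-frame application step, a hedged sketch is shown below: a precomputed warp table of fractional source coordinates is applied to a captured frame with bilinear sampling. The (src_y, src_x) table layout and the choice of interpolation are assumptions; the abstract only states that the warp table is applied to the wide-angle image.

        import numpy as np

        def remap_with_warp_table(image, warp):
            """Apply a precomputed warp table to one wide-angle frame.

            image: H x W x 3 source frame.
            warp:  H_out x W_out x 2 table of fractional (src_y, src_x) coordinates.
            """
            h, w = image.shape[:2]
            sy = np.clip(warp[..., 0], 0, h - 2)
            sx = np.clip(warp[..., 1], 0, w - 2)
            y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
            fy, fx = (sy - y0)[..., None], (sx - x0)[..., None]
            top = image[y0, x0] * (1 - fx) + image[y0, x0 + 1] * fx
            bottom = image[y0 + 1, x0] * (1 - fx) + image[y0 + 1, x0 + 1] * fx
            return (top * (1 - fy) + bottom * fy).astype(image.dtype)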

    System and method providing improved head motion estimations for animation
    26.
    Patent application (lapsed)

    Publication No.: US20050074145A1

    Publication Date: 2005-04-07

    Application No.: US11000603

    Filing Date: 2004-12-01

    Abstract: Systems and methods to estimate head motion between two images of a face are described. In one aspect, locations of a plurality of distinct facial features in the two images are identified. The locations correspond to a number of unknowns that are determined upon estimation of head motion. The number of unknowns is determined by a number of equations. The identified locations are converted into a set of physical face parameters based on the symmetry of the distinct facial features. The set of physical face parameters reduces the number of unknowns as compared to the number of equations used to determine the unknowns. An inequality constraint is added to a particular face parameter of the physical face parameters, such that the particular face parameter is constrained within a predetermined minimum and maximum value. The inequality constraint is converted to an equality constraint using a penalty function. Head motion is estimated from identified points in the two images. The identified points are based on the set of physical face parameters.
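
    As a small illustration of the constraint handling described above, the sketch below wraps a hypothetical residual function with a quadratic penalty on one face parameter. The function names, the quadratic form, and the weight are illustrative assumptions rather than the patented formulation.

        import numpy as np

        def penalized_objective(residual_fn, p_index, p_min, p_max, weight=1e3):
            """Return an objective combining feature residuals with a penalty that
            softly keeps params[p_index] within [p_min, p_max]."""
            def objective(params):
                p = params[p_index]
                penalty = max(0.0, p_min - p) ** 2 + max(0.0, p - p_max) ** 2
                return float(np.sum(residual_fn(params) ** 2) + weight * penalty)
            return objective

        # Hypothetical usage: residual_fn(params) would compare predicted and observed
        # feature locations (eye corners, mouth corners, nose tip); the returned
        # objective can then be handed to any generic minimizer.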


    Immersive remote conferencing
    29.
    Granted patent (in force)

    Publication No.: US08675067B2

    Publication Date: 2014-03-18

    Application No.: US13100504

    Filing Date: 2011-05-04

    IPC Classes: H04N7/18

    Abstract: The subject disclosure is directed towards an immersive conference, in which participants in separate locations are brought together into a common virtual environment (scene), such that they appear to each other to be in a common space, with geometry, appearance, and real-time natural interaction (e.g., gestures) preserved. In one aspect, depth data and video data are processed to place remote participants in the common scene from the first-person point of view of a local participant. Sound data may be spatially controlled, and parallax computed to provide a realistic experience. The scene may be augmented with various data, videos, and other effects/animations.
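
    A minimal sketch of one ingredient mentioned above, placing depth pixels into the common scene, is standard pinhole back-projection followed by a rigid transform. The calibration inputs (fx, fy, cx, cy and the camera-to-scene pose R, t) are assumed to be available; the abstract does not describe how they are obtained.

        import numpy as np

        def depth_pixel_to_scene(u, v, depth, fx, fy, cx, cy, R, t):
            """Back-project one depth pixel to a 3-D point and move it into the
            shared scene frame (pinhole model; calibration assumed)."""
            x = (u - cx) * depth / fx          # camera-space X
            y = (v - cy) * depth / fy          # camera-space Y
            p_cam = np.array([x, y, depth])    # camera-space point
            return R @ p_cam + t               # scene-space point

    Rendering the resulting geometry from the local participant's first-person viewpoint is one way the parallax mentioned in the abstract could be produced.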


    Multi-device capture and spatial browsing of conferences
    30.
    Granted patent (in force)

    Publication No.: US08537196B2

    Publication Date: 2013-09-17

    Application No.: US12245774

    Filing Date: 2008-10-06

    IPC Classes: H04N7/14 G06F15/16 G06F3/48

    CPC Classes: H04N7/157 H04N7/147

    Abstract: Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show the currently interesting event, such as the current speaker, or may extend navigation controls to a user for manually viewing selected participants or nuanced interactions between participants.
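
    As a hedged illustration of the automatic re-orientation mentioned above, the sketch below stores each participant's seat position in an assumed table-plane coordinate frame and computes the view yaw toward whoever is currently speaking. The data layout and the viewer-at-origin default are illustrative, not taken from the patent.

        import math
        from dataclasses import dataclass

        @dataclass
        class Participant:
            name: str
            x: float                 # seat position on the table plane (assumed frame)
            y: float
            is_speaking: bool = False

        def yaw_toward_speaker(participants, viewer_x=0.0, viewer_y=0.0):
            """Return the yaw (radians) that orients the 3-D meeting view toward
            the current speaker, or None if nobody is speaking."""
            speaker = next((p for p in participants if p.is_speaking), None)
            if speaker is None:
                return None
            return math.atan2(speaker.y - viewer_y, speaker.x - viewer_x)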
