-
Publication Number: US20130321396A1
Publication Date: 2013-12-05
Application Number: US13599170
Filing Date: 2012-08-30
Applicant: Adam Kirk, Kanchan Mitra, Patrick Sweeney, Don Gillett, Neil Fishman, Simon Winder, Yaron Eshet, David Harnett, Amit Mital, David Eraker
Inventor: Adam Kirk, Kanchan Mitra, Patrick Sweeney, Don Gillett, Neil Fishman, Simon Winder, Yaron Eshet, David Harnett, Amit Mital, David Eraker
IPC Classification: G06T15/00
CPC Classification: G06T15/04, G06T15/08, G06T15/205, G06T17/00, G06T2210/56, H04N7/142, H04N7/15, H04N7/157, H04N13/117, H04N13/194, H04N13/239, H04N13/243, H04N13/246, H04N13/257, H04R2227/005, H04S2400/15
Abstract: Free viewpoint video of a scene is generated and presented to a user. An arrangement of sensors generates streams of sensor data, each of which represents the scene from a different geometric perspective. The sensor data streams are calibrated. A scene proxy is generated from the calibrated sensor data streams. The scene proxy geometrically describes the scene as a function of time and includes one or more types of geometric proxy data that are matched to a first set of current pipeline conditions in order to maximize the photo-realism of the free viewpoint video resulting from the scene proxy at each point in time. A current synthetic viewpoint of the scene is generated from the scene proxy. This viewpoint generation maximizes the photo-realism of the current synthetic viewpoint based upon a second set of current pipeline conditions. The current synthetic viewpoint is displayed.
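The abstract does not spell out how geometric proxy data is matched to pipeline conditions; the sketch below is a minimal, hypothetical illustration of such a matching step, assuming made-up condition fields (bandwidth, sensor coverage, per-frame GPU budget) and three illustrative proxy types.

```python
from dataclasses import dataclass

@dataclass
class PipelineConditions:
    """Hypothetical snapshot of current pipeline conditions."""
    bandwidth_mbps: float      # available transmission bandwidth
    sensor_coverage: float     # fraction of the scene seen by >= 2 sensors
    gpu_budget_ms: float       # per-frame rendering budget on the viewer

def select_proxy_type(cond: PipelineConditions) -> str:
    """Pick the richest geometric proxy the current conditions can support.

    The thresholds and proxy names are illustrative only; the abstract states
    that proxy data is matched to pipeline conditions to maximize photo-realism,
    but does not prescribe a specific rule.
    """
    if cond.bandwidth_mbps > 50 and cond.sensor_coverage > 0.8 and cond.gpu_budget_ms > 10:
        return "full_3d_mesh"        # highest fidelity, highest cost
    if cond.bandwidth_mbps > 10 and cond.sensor_coverage > 0.5:
        return "depth_map_mesh"      # per-view depth meshes
    return "billboard"               # planar impostors as a fallback

# Example: a low-bandwidth client falls back to billboards.
print(select_proxy_type(PipelineConditions(bandwidth_mbps=5, sensor_coverage=0.9, gpu_budget_ms=8)))
```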
-
Publication Number: US20130321586A1
Publication Date: 2013-12-05
Application Number: US13588917
Filing Date: 2012-08-17
Applicant: Adam Kirk, Patrick Sweeney, Don Gillett, Neil Fishman, Kanchan Mitra, Amit Mital, David Harnett, Yaron Eshet, Simon Winder, David Eraker
Inventor: Adam Kirk, Patrick Sweeney, Don Gillett, Neil Fishman, Kanchan Mitra, Amit Mital, David Harnett, Yaron Eshet, Simon Winder, David Eraker
IPC Classification: H04N13/02
CPC Classification: G06T15/04, G06T15/08, G06T15/205, G06T17/00, G06T2210/56, H04N7/142, H04N7/15, H04N7/157, H04N13/117, H04N13/194, H04N13/239, H04N13/243, H04N13/246, H04N13/257, H04R2227/005, H04S2400/15
Abstract: Cloud-based FVV streaming technique embodiments presented herein generally employ a cloud-based FVV pipeline to create, render, and transmit FVV frames depicting a captured scene as it would be viewed from a current synthetic viewpoint selected by an end user and received from a client computing device. The FVV frames consume a level of bandwidth similar to that of a conventional streaming movie. To change viewpoints, a new viewpoint is sent from the client to the cloud, and a new streaming movie is initiated from that viewpoint. Frames associated with that viewpoint are created, rendered, and transmitted to the client until a new viewpoint request is received.
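The viewpoint-switching behavior described above can be illustrated with a small server-side loop. The queue-based request channel, the render_frame stub, and the frame rate below are assumptions made for this sketch, not the patented pipeline.

```python
import queue
import time

def render_frame(viewpoint, frame_index):
    """Stand-in for the cloud-side FVV renderer (hypothetical)."""
    return f"frame {frame_index} rendered from viewpoint {viewpoint}"

def stream_fvv(viewpoint_requests: queue.Queue, send_to_client, fps: float = 30.0):
    """Render and send frames for the current viewpoint until a new one arrives.

    `viewpoint_requests` carries viewpoints chosen by the end user on the
    client; `send_to_client` is whatever transport delivers encoded frames.
    """
    current_viewpoint = viewpoint_requests.get()   # block until the first request
    frame_index = 0
    while True:
        try:
            # Switch streams as soon as the client asks for a new viewpoint.
            current_viewpoint = viewpoint_requests.get_nowait()
            frame_index = 0
        except queue.Empty:
            pass
        send_to_client(render_frame(current_viewpoint, frame_index))
        frame_index += 1
        time.sleep(1.0 / fps)
```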
-
Publication Number: US20130095920A1
Publication Date: 2013-04-18
Application Number: US13273213
Filing Date: 2011-10-13
Applicant: Kestutis Patiejunas, Kanchan Mitra, Patrick Sweeney, Yaron Eshet, Adam G. Kirk, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, David Harnett, Amit Mital, Simon Winder
Inventor: Kestutis Patiejunas, Kanchan Mitra, Patrick Sweeney, Yaron Eshet, Adam G. Kirk, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, David Harnett, Amit Mital, Simon Winder
CPC Classification: G06T15/00, G06T7/521, G06T7/593, G06T15/04, G06T17/20, G06T2207/10021, G06T2207/10024, G06T2207/10048, G06T2207/20228, H04N13/111, H04N13/271, H04N2013/0081
Abstract: Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map.
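One intermediate step named in the abstract, turning the depth map into a point cloud, reduces to standard pinhole back-projection. A minimal sketch follows, with placeholder intrinsics; the patent does not specify camera parameters.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map (meters) into an N x 3 point cloud.

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Pixels with zero depth (no stereo match) are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Example with a synthetic 4x4 depth map and placeholder intrinsics.
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```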
-
Publication Number: US20130100256A1
Publication Date: 2013-04-25
Application Number: US13278184
Filing Date: 2011-10-21
Applicant: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
Inventor: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
IPC Classification: H04N13/02
CPC Classification: G06T7/0057, G06T7/521, G06T7/593, G06T2207/10048
Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
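As a rough stand-in for the dot-descriptor matching the abstract describes, the sketch below computes a disparity map between a rectified IR stereo pair using OpenCV's off-the-shelf semi-global block matcher; the file names and matcher settings are placeholders, and this is not the patent's descriptor-based method.

```python
import cv2

# Load a rectified IR stereo pair (placeholder file names).
left = cv2.imread("ir_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("ir_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)

# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0
cv2.imwrite("disparity.png", cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX))
```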
-
Publication Number: US09098908B2
Publication Date: 2015-08-04
Application Number: US13278184
Filing Date: 2011-10-21
Applicant: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
Inventor: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
IPC Classification: G06T7/00
CPC Classification: G06T7/0057, G06T7/521, G06T7/593, G06T2207/10048
Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
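The last step of the claimed chain, going from a disparity map to a depth map, follows the standard rectified-stereo relation Z = f·B/d. A minimal sketch with placeholder focal length and baseline values follows.

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray, fx: float, baseline_m: float) -> np.ndarray:
    """Convert a disparity map (pixels) to a depth map (meters).

    Uses the rectified-stereo relation Z = fx * B / d; pixels with
    non-positive disparity (no match) are mapped to depth 0.
    """
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = fx * baseline_m / disparity[valid]
    return depth

# Example: a 64-pixel disparity at fx = 600 px and a 7.5 cm baseline -> ~0.70 m.
print(disparity_to_depth(np.array([[64.0]]), fx=600.0, baseline_m=0.075))
```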
-
Publication Number: US09846960B2
Publication Date: 2017-12-19
Application Number: US13566877
Filing Date: 2012-08-03
Applicant: Adam G. Kirk, Yaron Eshet, David Eraker
Inventor: Adam G. Kirk, Yaron Eshet, David Eraker
CPC Classification: G06T15/04, G06T15/08, G06T15/205, G06T17/00, G06T2210/56, H04N7/142, H04N7/15, H04N7/157, H04N13/117, H04N13/194, H04N13/239, H04N13/243, H04N13/246, H04N13/257, H04R2227/005, H04S2400/15
Abstract: The automated camera array calibration technique described herein automates the calibration of an array of cameras. The technique can leverage corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment, it does this by finding common features in the depth maps from two hybrid capture devices and deriving a rough extrinsic calibration based on shared depth map features. It then uses features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration.
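The rough extrinsic calibration from shared depth-map features can be illustrated as a rigid-transform fit between corresponding 3D points, here solved with the standard SVD (Kabsch) method. This is a generic alignment sketch that assumes correspondences are already known; it is not the patent's specific procedure.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t.

    `src` and `dst` are N x 3 arrays of corresponding 3D feature points
    recovered from two devices' depth maps (correspondences assumed known).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known 90-degree rotation about Z plus a translation.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = src @ R_true.T + np.array([0.1, 0.2, 0.3])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.1, 0.2, 0.3]))  # True True
```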
-
Publication Number: US20130321589A1
Publication Date: 2013-12-05
Application Number: US13566877
Filing Date: 2012-08-03
Applicant: Adam G. Kirk, Yaron Eshet, David Eraker
Inventor: Adam G. Kirk, Yaron Eshet, David Eraker
IPC Classification: H04N17/02
CPC Classification: G06T15/04, G06T15/08, G06T15/205, G06T17/00, G06T2210/56, H04N7/142, H04N7/15, H04N7/157, H04N13/117, H04N13/194, H04N13/239, H04N13/243, H04N13/246, H04N13/257, H04R2227/005, H04S2400/15
Abstract: The automated camera array calibration technique described herein automates the calibration of an array of cameras. The technique can leverage corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment, it does this by finding common features in the depth maps from two hybrid capture devices and deriving a rough extrinsic calibration based on shared depth map features. It then uses features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration.
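The intensity-based refinement stage can be illustrated by matching features between the two devices' RGB images. ORB features and brute-force Hamming matching are assumptions chosen for this sketch (the patent does not name a specific intensity feature), and the file names are placeholders.

```python
import cv2

# RGB frames from the two hybrid capture devices (placeholder file names).
img_a = cv2.imread("device_a_rgb.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("device_b_rgb.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe intensity features in both views.
orb = cv2.ORB_create(nfeatures=2000)
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

# Cross-checked brute-force matching on the binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

# The matched pixel pairs would then drive the refinement of the rough
# extrinsics derived from the depth maps.
pairs = [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:200]]
print(f"{len(pairs)} candidate correspondences for refining the extrinsics")
```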
-