-
Publication number: US09242171B2
Publication date: 2016-01-26
Application number: US13775165
Application date: 2013-02-23
Applicant: Microsoft Corporation
Inventor: Richard Newcombe , Shahram Izadi , David Molyneaux , Otmar Hilliges , David Kim , Jamie Daniel Joseph Shotton , Pushmeet Kohli , Andrew Fitzgibbon , Stephen Edward Hodges , David Alexander Butler
CPC classification number: A63F13/00 , A63F13/06 , A63F2300/1087 , A63F2300/69 , G06K9/00 , G06K9/00664 , G06T7/251 , G06T7/30 , G06T2207/10028 , G06T2207/10048 , G06T2207/30244
Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
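For orientation only, the following sketch shows a single point-to-plane ICP iteration with projective data association of the general kind named in the abstract. It is a plain NumPy illustration under simplifying assumptions (a CPU loop rather than the GPU implementation, a small-angle linearization of the pose update, and hypothetical function and argument names), not the patented method.

```python
# A minimal point-to-plane ICP iteration with projective data association.
import numpy as np

def icp_step(src_pts, dst_vertex_map, dst_normal_map, K, T):
    """One ICP iteration; returns an updated 4x4 camera pose estimate.

    src_pts        : (N, 3) points back-projected from the current depth frame
    dst_vertex_map : (H, W, 3) vertex map of the reference surface
    dst_normal_map : (H, W, 3) normals of the reference surface
    K              : (3, 3) camera intrinsics
    T              : (4, 4) current pose estimate
    """
    H, W, _ = dst_vertex_map.shape
    A, b = [], []
    for p in src_pts:
        p_w = T[:3, :3] @ p + T[:3, 3]                 # point in the reference frame
        u = K @ p_w
        x, y = int(u[0] / u[2]), int(u[1] / u[2])      # projective data association
        if not (0 <= x < W and 0 <= y < H):
            continue
        q, n = dst_vertex_map[y, x], dst_normal_map[y, x]
        if not np.isfinite(q).all():
            continue
        # Point-to-plane residual n.(p_w - q), linearized in [rx ry rz tx ty tz]
        A.append(np.concatenate([np.cross(p_w, n), n]))
        b.append(n @ (q - p_w))
    if len(A) < 6:
        return T                                       # not enough correspondences
    xi, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    rx, ry, rz, tx, ty, tz = xi
    dT = np.eye(4)
    dT[:3, :3] = np.array([[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1]])  # small-angle rotation
    dT[:3, 3] = [tx, ty, tz]
    return dT @ T
```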
-
Publication number: US20150248764A1
Publication date: 2015-09-03
Application number: US14193686
Application date: 2014-02-28
Applicant: Microsoft Corporation
Inventor: Cem Keskin , Sean Ryan Francesco Fanello , Shahram Izadi , Pushmeet Kohli , David Kim , David Sweeney , Jamie Daniel Joseph Shotton , Duncan Paul Robertson , Sing Bing Kang
CPC classification number: H04N5/33 , G06K9/00201 , G06K9/2018 , G06K9/6219 , G06K9/6282 , G06T7/521 , G06T2200/04 , G06T2207/10028 , G06T2207/10048 , G06T2207/20081 , G06T2207/30201
Abstract: A method of sensing depth using an infrared camera is described. In an example method, an infrared image of a scene is received from an infrared camera. The infrared image is applied to a trained machine learning component which uses the intensity of image elements to assign all or some of the image elements a depth value which represents the distance between the surface depicted by the image element and the infrared camera. In various examples, the machine learning component comprises one or more random decision forests.
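As a rough illustration of regressing per-pixel depth from infrared intensity with a random decision forest, the sketch below trains an off-the-shelf forest on small intensity patches. The patch features, forest parameters and helper names are assumptions for illustration, not the trained component described in the application.

```python
# Illustrative only: regress depth from infrared intensity patches with an
# off-the-shelf random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

PATCH = 5  # assumed patch radius

def patch_features(ir_image, ys, xs, r=PATCH):
    """Stack the (2r+1)^2 intensity patch around each (y, x) as a feature vector."""
    padded = np.pad(ir_image, r, mode="edge")
    return np.stack([padded[y:y + 2 * r + 1, x:x + 2 * r + 1].ravel()
                     for y, x in zip(ys, xs)])

def train_depth_forest(ir_images, depth_maps, samples_per_image=2000):
    """Train on pixels sampled from (infrared, ground-truth depth) image pairs."""
    X, y, rng = [], [], np.random.default_rng(0)
    for ir, depth in zip(ir_images, depth_maps):
        ys = rng.integers(0, ir.shape[0], samples_per_image)
        xs = rng.integers(0, ir.shape[1], samples_per_image)
        valid = depth[ys, xs] > 0                      # keep pixels with known depth
        X.append(patch_features(ir, ys[valid], xs[valid]))
        y.append(depth[ys, xs][valid])
    forest = RandomForestRegressor(n_estimators=3, max_depth=12, n_jobs=-1)
    forest.fit(np.vstack(X), np.concatenate(y))
    return forest

def predict_depth(forest, ir_image):
    """Assign every image element a depth value from its intensity patch."""
    ys, xs = np.mgrid[0:ir_image.shape[0], 0:ir_image.shape[1]]
    feats = patch_features(ir_image, ys.ravel(), xs.ravel())
    return forest.predict(feats).reshape(ir_image.shape)
```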
-
Publication number: US20140184749A1
Publication date: 2014-07-03
Application number: US13729324
Application date: 2012-12-28
Applicant: MICROSOFT CORPORATION
Inventor: Otmar Hilliges , Malte Hanno Weiss , Shahram Izadi , David Kim , Carsten Curt Eckard Rother
CPC classification number: G06T15/00 , G01S17/89 , G06T7/246 , G06T7/586 , G06T17/00 , G06T19/006 , G06T2207/10016 , G06T2207/10024 , G06T2207/10028 , G06T2207/30244 , H04N13/20
Abstract: Detecting material properties such as reflectivity, true color and other properties of surfaces in a real-world environment is described in various examples using a single hand-held device. For example, the detected material properties are calculated using a photometric stereo system which exploits known relationships between lighting conditions, surface normals, true color and image intensity. In examples, a user moves around in an environment capturing color images of surfaces in the scene from different orientations under known lighting conditions. In various examples, surface normals of patches of surfaces are calculated using the captured data to enable fine detail such as human hair, netting and textured surfaces to be modeled. In examples, the modeled data is used to render images depicting the scene with realism or to superimpose virtual graphics on the real world in a realistic manner.
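The photometric-stereo step the abstract relies on can be illustrated with the textbook Lambertian least-squares formulation: given images under known lighting directions, solve per pixel for albedo times normal. This is a generic sketch, not the device's calibrated pipeline.

```python
# Textbook Lambertian photometric stereo: recover per-pixel albedo and surface
# normal from grayscale images taken under known lighting directions.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) intensities; light_dirs: (K, 3) unit lighting directions.
    Returns (albedo of shape (H, W), normals of shape (H, W, 3))."""
    imgs = np.asarray(images, dtype=float)
    K, H, W = imgs.shape
    I = imgs.reshape(K, -1)                       # one column of intensities per pixel
    L = np.asarray(light_dirs, dtype=float)       # (K, 3)
    # Lambertian model I = L @ g with g = albedo * normal; least-squares per pixel.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)     # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return albedo.reshape(H, W), normals
```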
-
Publication number: US20150199018A1
Publication date: 2015-07-16
Application number: US14154571
Application date: 2014-01-14
Applicant: Microsoft Corporation
Inventor: David Kim , Shahram Izadi , Vivek Pradeep , Steven Bathiche , Timothy Andrew Large , Karlton David Powell
CPC classification number: G06F3/017 , G01B11/24 , G06F3/0325 , G06F3/0416 , G06F3/0421 , G06F3/0425 , G06K9/00355 , G06K9/00362 , G06K9/2036 , G06K9/4604 , G06K9/52 , H04N5/33 , H04N13/204
Abstract: A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of pairs of images may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for each of the retroreflector and the object and these are used to generate a 3D contour for each of the retroreflector and object.
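As a loose illustration of the silhouette-extraction step, the sketch below segments the bright retroreflector and a darker occluding object in a single image of the pair and extracts their contours with OpenCV. The thresholds, the convex-hull heuristic and the function names are assumptions, not the described system.

```python
# Loose illustration: the retroreflector shows up as a very bright region under
# the co-located light source, and the occluding object as a dark region inside it.
import cv2
import numpy as np

def extract_silhouette_contours(image_gray, retro_thresh=200, obj_thresh=60):
    """Return (retroreflector contour, occluding-object contour); either may be None."""
    retro_mask = (image_gray >= retro_thresh).astype(np.uint8)
    pts = np.argwhere(retro_mask)[:, ::-1].astype(np.int32)   # (x, y) points
    if len(pts) == 0:
        return None, None
    # Look for the dark occluder only inside the hull of the bright region.
    region = np.zeros_like(retro_mask)
    cv2.fillConvexPoly(region, cv2.convexHull(pts), 1)
    obj_mask = ((image_gray <= obj_thresh) & (region == 1)).astype(np.uint8)

    def largest_contour(mask):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        return max(contours, key=cv2.contourArea) if contours else None

    return largest_contour(retro_mask), largest_contour(obj_mask)
```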
-
Publication number: US20140098018A1
Publication date: 2014-04-10
Application number: US13644701
Application date: 2012-10-04
Applicant: MICROSOFT CORPORATION
Inventor: David Kim , Shahram Izadi , Otmar Hilliges , David Alexander Butler , Stephen Hodges , Patrick Luke Olivier , Jiawen Chen , Iason Oikonomidis
IPC: G06F3/01
CPC classification number: G06F3/014 , G06F3/015 , G06F3/017 , G06F3/0304
Abstract: A wearable sensor for tracking articulated body parts is described, such as a wrist-worn device which enables 3D tracking of the fingers, and optionally also the arm and hand, without the need to wear a glove or markers on the hand. In an embodiment, a camera captures images of an articulated part of the body of a wearer of the device, and an articulated model of the body part is tracked in real time to enable gesture-based control of a separate computing device such as a smart phone, laptop computer or other computing device. In examples, the device has a structured illumination source and a diffuse illumination source for illuminating the articulated body part. In some examples, an inertial measurement unit is also included in the sensor to enable tracking of the arm and hand.
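To make the idea of an articulated model concrete, here is a minimal, hypothetical kinematic chain for one finger of the kind such a tracker might fit to its camera observations. The bone lengths and the single flexion axis per joint are illustrative assumptions only.

```python
# A minimal, hypothetical articulated finger model (kinematic chain).
import numpy as np

def rot_x(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def finger_joint_positions(base, bone_lengths, flexion_angles):
    """Chain the phalanges; each joint flexes about its local x-axis."""
    p = np.asarray(base, dtype=float)
    positions, R = [p.copy()], np.eye(3)
    for length, angle in zip(bone_lengths, flexion_angles):
        R = R @ rot_x(angle)
        p = p + R @ np.array([0.0, 0.0, length])   # each bone points along its local z-axis
        positions.append(p)
    return np.array(positions)

# Example: a three-phalanx index finger (lengths in metres), half flexed.
joints = finger_joint_positions(base=[0, 0, 0],
                                bone_lengths=[0.045, 0.025, 0.020],
                                flexion_angles=np.deg2rad([30, 45, 30]))
```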
-
Publication number: US09552673B2
Publication date: 2017-01-24
Application number: US13653968
Application date: 2012-10-17
Applicant: Microsoft Corporation
Inventor: Otmar Hilliges , David Kim , Shahram Izadi , Malte Hanno Weiss
CPC classification number: G06F3/017 , G06F3/011 , G06F17/5009 , G06T7/20 , G06T7/269 , G06T7/73 , G06T15/08 , G06T19/006 , G06T19/20 , G06T2207/10024 , G06T2207/10028 , G06T2219/2016 , G06T2219/2021
Abstract: An augmented reality system which enables grasping of virtual objects is described, for example to stack virtual cubes or to manipulate virtual objects in other ways. In various embodiments, a user's hand or another real object is tracked in an augmented reality environment. In examples, the shape of the tracked real object is approximated using at least two different types of particles, and the virtual objects are updated according to simulated forces exerted between the augmented reality environment and at least some of the particles. In various embodiments, the 3D positions of the first type of particles, kinematic particles, are updated according to the tracked real object, and passive particles move with linked kinematic particles without penetrating virtual objects. In some examples, a real-time optic flow process is used to track the motion of the real object.
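A minimal sketch of the two-particle-type idea follows: kinematic particles snap to the tracked real surface, linked passive particles follow them but are kept outside a virtual object, and the displacement needed to resolve penetration is fed back to the object as a penalty force. The sphere-shaped object, the constants and the crude integrator are assumptions, not the patented simulation.

```python
# Sketch of kinematic/passive particles interacting with one virtual object.
import numpy as np

class SphereObject:
    """A stand-in virtual object with a crude force integrator."""
    def __init__(self, center, radius, mass=1.0):
        self.center = np.asarray(center, dtype=float)
        self.radius, self.mass = radius, mass
        self.velocity = np.zeros(3)

    def resolve(self, point):
        """Project a penetrating point to the surface; return (point, penetration vector)."""
        d = point - self.center
        dist = np.linalg.norm(d)
        if dist >= self.radius or dist == 0:
            return point, np.zeros(3)
        n = d / dist
        return self.center + n * self.radius, n * (self.radius - dist)

def step(kinematic, passive, links, tracked_positions, obj, dt=1 / 30, stiffness=50.0):
    """Advance the simulation by one frame of tracking data."""
    kinematic[:] = tracked_positions                    # kinematic particles follow the tracked object
    force = np.zeros(3)
    for i, k in enumerate(links):                       # passive particle i is linked to kinematic particle k
        passive[i] = kinematic[k].copy()                # move with the linked kinematic particle...
        passive[i], penetration = obj.resolve(passive[i])   # ...but never penetrate the virtual object
        force += stiffness * penetration                # penalty force exerted on the virtual object
    obj.velocity += force / obj.mass * dt
    obj.center += obj.velocity * dt
```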
-
Publication number: US09380224B2
Publication date: 2016-06-28
Application number: US14193686
Application date: 2014-02-28
Applicant: Microsoft Corporation
Inventor: Cem Keskin , Sean Ryan Francesco Fanello , Shahram Izadi , Pushmeet Kohli , David Kim , David Sweeney , Jamie Daniel Joseph Shotton , Duncan Paul Robertson , Sing Bing Kang
CPC classification number: H04N5/33 , G06K9/00201 , G06K9/2018 , G06K9/6219 , G06K9/6282 , G06T7/521 , G06T2200/04 , G06T2207/10028 , G06T2207/10048 , G06T2207/20081 , G06T2207/30201
Abstract: A method of sensing depth using an infrared camera is described. In an example method, an infrared image of a scene is received from an infrared camera. The infrared image is applied to a trained machine learning component which uses the intensity of image elements to assign all or some of the image elements a depth value which represents the distance between the surface depicted by the image element and the infrared camera. In various examples, the machine learning component comprises one or more random decision forests.
-
Publication number: US09269018B2
Publication date: 2016-02-23
Application number: US14154825
Application date: 2014-01-14
Applicant: Microsoft Corporation
Inventor: David Kim , Shahram Izadi , Christoph Rhemann , Christopher Zach
CPC classification number: G06K9/4638 , G06T7/593 , G06T2200/04 , G06T2207/10028 , G06T2210/12 , H04N13/128 , H04N2013/0081
Abstract: A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images of at least part of an object in a scene, captured at substantially the same time by a stereo camera. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one corresponding to each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the normals of the contours and/or the distance along the scanline between the point and a centroid of the contour. A corresponding system is also described.
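The comparison metrics named in the abstract can be illustrated with a greedy per-scanline matcher that scores candidate contour-point pairs by normal compatibility and by the difference of their distances from the contour centroid along the scanline. The cost weights and the greedy strategy are illustrative assumptions, not the patented values.

```python
# Greedy scanline matching of contour points between the two contour images.
import numpy as np

def match_scanline(left_xs, right_xs, left_normals, right_normals,
                   left_centroid_x, right_centroid_x,
                   w_normal=1.0, w_centroid=0.05):
    """left_xs/right_xs: x-coordinates of contour points on one scanline;
    *_normals: matching arrays of 2D unit contour normals.
    Returns a list of (left_x, right_x) correspondences."""
    matches = []
    for i, xl in enumerate(left_xs):
        costs = []
        for j, xr in enumerate(right_xs):
            normal_cost = 1.0 - float(left_normals[i] @ right_normals[j])   # 0 when normals agree
            centroid_cost = abs((xl - left_centroid_x) - (xr - right_centroid_x))
            costs.append(w_normal * normal_cost + w_centroid * centroid_cost)
        if costs:
            matches.append((xl, right_xs[int(np.argmin(costs))]))
    return matches

def disparities(matches):
    """Disparity per matched contour point; depth follows as focal_length * baseline / disparity."""
    return [xl - xr for xl, xr in matches]
```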
-
Publication number: US20140104274A1
Publication date: 2014-04-17
Application number: US13653968
Application date: 2012-10-17
Applicant: MICROSOFT CORPORATION
Inventor: Otmar Hilliges , David Kim , Shahram Izadi , Malte Hanno Weiss
CPC classification number: G06F3/017 , G06F3/011 , G06F17/5009 , G06T7/20 , G06T7/269 , G06T7/73 , G06T15/08 , G06T19/006 , G06T19/20 , G06T2207/10024 , G06T2207/10028 , G06T2219/2016 , G06T2219/2021
Abstract: An augmented reality system which enables grasping of virtual objects is described, for example to stack virtual cubes or to manipulate virtual objects in other ways. In various embodiments, a user's hand or another real object is tracked in an augmented reality environment. In examples, the shape of the tracked real object is approximated using at least two different types of particles, and the virtual objects are updated according to simulated forces exerted between the augmented reality environment and at least some of the particles. In various embodiments, the 3D positions of the first type of particles, kinematic particles, are updated according to the tracked real object, and passive particles move with linked kinematic particles without penetrating virtual objects. In some examples, a real-time optic flow process is used to track the motion of the real object.
-
Publication number: US20130244782A1
Publication date: 2013-09-19
Application number: US13775165
Application date: 2013-02-23
Applicant: MICROSOFT CORPORATION
Inventor: Richard Newcombe , Shahram Izadi , David Molyneaux , Otmar Hilliges , David Kim , Jamie Daniel Joseph Shotton , Pushmeet Kohli , Andrew Fitzgibbon , Stephen Edward Hodges , David Alexander Butler
CPC classification number: A63F13/00 , A63F13/06 , A63F2300/1087 , A63F2300/69 , G06K9/00 , G06K9/00664 , G06T7/251 , G06T7/30 , G06T2207/10028 , G06T2207/10048 , G06T2207/30244
Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.