    3. USING PHOTOMETRIC STEREO FOR 3D ENVIRONMENT MODELING
    Invention Application (In Force)

    Publication Number: US20140184749A1

    Publication Date: 2014-07-03

    Application Number: US13729324

    Filing Date: 2012-12-28

    Abstract: Detecting material properties such as reflectivity, true color and other properties of surfaces in a real-world environment is described in various examples using a single hand-held device. For example, the detected material properties are calculated using a photometric stereo system which exploits known relationships between lighting conditions, surface normals, true color and image intensity. In examples, a user moves around an environment capturing color images of surfaces in the scene from different orientations under known lighting conditions. In various examples, surface normals of patches of surfaces are calculated from the captured data, enabling fine detail such as human hair, netting and textured surfaces to be modeled. In examples, the modeled data is used to render images depicting the scene with realism, or to superimpose virtual graphics on the real world in a realistic manner.
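
    As a rough illustration of the relationship the abstract relies on, the sketch below shows a minimal Lambertian photometric-stereo solve in Python/NumPy: given several grey-level images of the same surface under known, distant lighting directions, it recovers a per-pixel unit normal and albedo by least squares. This is only a sketch of the general technique, not the patented method; the array names and shapes are assumptions.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel unit normals and albedo from K images of a surface
    taken under K known, distant lighting directions.

    images:     (K, H, W) linear grey-level intensities
    light_dirs: (K, 3) unit lighting direction for each image

    Assumes a Lambertian surface, i.e. I_k = albedo * dot(n, l_k).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                              # (K, H*W) stacked intensities
    # Least-squares solve of light_dirs @ g = I, where g = albedo * n per pixel.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                     # reflectivity / "true color" scale
    normals = g / np.maximum(albedo, 1e-8)                 # unit surface normals
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```
    In the hand-held setting described above, the lighting directions would come from the known pose of the device's light source as the user moves around the scene.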

    4. 3D SILHOUETTE SENSING SYSTEM
    Invention Application (In Force)

    Publication Number: US20150199018A1

    Publication Date: 2015-07-16

    Application Number: US14154571

    Filing Date: 2014-01-14

    Abstract: A 3D silhouette sensing system is described which comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of pairs of images may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for each of the retroreflector and the object and these are used to generate a 3D contour for each of the retroreflector and object.
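
    A simplified sketch of the silhouette-extraction step is given below, under the assumption that the retroreflector saturates the image while an object in front of it shows up as a dark hole inside the bright region; the threshold value and function names are illustrative, not taken from the patent. Running this on both images of a stereo pair and matching the two object silhouettes scanline by scanline would then give the disparities from which the 3D contours are triangulated.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def extract_silhouettes(frame, bright_thresh=200):
    """Split one illuminated camera frame into two binary silhouettes:
    the retroreflector and any object partially occluding it.

    frame: (H, W) grey-level image captured while the light source is on.
    """
    retro = frame >= bright_thresh        # retroreflector pixels appear very bright
    filled = binary_fill_holes(retro)     # full retroreflector extent, holes included
    occluder = filled & ~retro            # dark pixels inside the bright region
    return retro, occluder
```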

    5. WEARABLE SENSOR FOR TRACKING ARTICULATED BODY-PARTS
    Invention Application (Under Examination, Published)

    Publication Number: US20140098018A1

    Publication Date: 2014-04-10

    Application Number: US13644701

    Filing Date: 2012-10-04

    CPC classification number: G06F3/014 G06F3/015 G06F3/017 G06F3/0304

    Abstract: A wearable sensor for tracking articulated body parts is described, such as a wrist-worn device which enables 3D tracking of the fingers, and optionally also the arm and hand, without the need to wear a glove or markers on the hand. In an embodiment a camera captures images of an articulated part of the wearer's body, and an articulated model of the body part is tracked in real time to enable gesture-based control of a separate computing device such as a smart phone, laptop computer or other computing device. In examples the device has a structured illumination source and a diffuse illumination source for illuminating the articulated body part. In some examples an inertial measurement unit is also included in the sensor to enable tracking of the arm and hand.
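
    As an illustration of what an articulated model can involve, here is a minimal forward-kinematics helper for one planar finger of a hand model. It is purely a hypothetical sketch (the joint names, planar simplification and parameters are assumptions, not the patent's model); a tracker would adjust the joint angles so that the projected model matches what the wrist-worn camera observes under the structured and diffuse illumination.

```python
import numpy as np

def finger_forward_kinematics(joint_angles, segment_lengths):
    """Planar forward kinematics for one finger of an articulated hand model.

    joint_angles:    flexion angles (radians) at the MCP, PIP and DIP joints
    segment_lengths: lengths of the three phalanges
    Returns the 2D positions of the finger base, the two intermediate joints
    and the fingertip in the wrist-mounted sensor's reference frame.
    """
    positions = [np.zeros(2)]
    heading = 0.0
    for angle, length in zip(joint_angles, segment_lengths):
        heading += angle                  # flexion accumulates along the chain
        step = length * np.array([np.cos(heading), np.sin(heading)])
        positions.append(positions[-1] + step)
    return np.stack(positions)            # shape (4, 2)
```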

    6. Grasping virtual objects in augmented reality
    Granted Invention Patent (In Force)

    Publication Number: US09552673B2

    Publication Date: 2017-01-24

    Application Number: US13653968

    Filing Date: 2012-10-17

    Abstract: An augmented reality system which enables grasping of virtual objects is described, for example to stack virtual cubes or to manipulate virtual objects in other ways. In various embodiments a user's hand or another real object is tracked in an augmented reality environment. In examples, the shape of the tracked real object is approximated using at least two different types of particles, and the virtual objects are updated according to simulated forces exerted between the augmented reality environment and at least some of the particles. In various embodiments the 3D positions of the first type of particles, kinematic particles, are updated according to the tracked real object, and passive particles move with linked kinematic particles without penetrating virtual objects. In some examples a real-time optic flow process is used to track motion of the real object.
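
    The following sketch illustrates, under heavy simplification, the two particle types the abstract describes: kinematic particles are placed directly from the tracked hand each frame, and each passive particle follows its linked kinematic particle but is projected out of a virtual object (modelled here as a single sphere) so that it never penetrates it. The names, the sphere-only collision model and the fixed offsets are assumptions for illustration, not the patented physics simulation.

```python
import numpy as np

def update_passive_particles(kinematic_pos, passive_offsets, sphere_center, sphere_radius):
    """One illustrative update step for the passive particles.

    kinematic_pos:   (N, 3) positions sampled from the tracked real object this frame
    passive_offsets: (N, 3) fixed offsets linking each passive particle to its
                     kinematic particle
    The virtual object is a sphere; penetrating passive particles are pushed
    back to its surface along the outward normal.
    """
    passive = kinematic_pos + passive_offsets              # move with linked kinematic particles
    to_centre = passive - sphere_center
    dist = np.linalg.norm(to_centre, axis=1, keepdims=True)
    on_surface = sphere_center + to_centre / np.maximum(dist, 1e-9) * sphere_radius
    return np.where(dist < sphere_radius, on_surface, passive)
```
    The displacement needed to resolve each penetration is one natural source of the simulated forces with which the virtual objects are updated, which is what lets virtual cubes be picked up and stacked.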

    8. Stereo image processing using contours
    Granted Invention Patent (In Force)

    Publication Number: US09269018B2

    Publication Date: 2016-02-23

    Application Number: US14154825

    Filing Date: 2014-01-14

    Abstract: A computer-implemented stereo image processing method which uses contours is described. In an embodiment, contours are extracted from two silhouette images of at least part of an object in a scene, captured at substantially the same time by a stereo camera. Stereo correspondences between contour points on corresponding scanlines in the two contour images (one for each silhouette image in the stereo pair) are calculated on the basis of contour point comparison metrics, such as the compatibility of the contour normals and/or the distance along the scanline between a point and the centroid of its contour. A corresponding system is also described.
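
    A minimal sketch of one possible contour-point comparison cost is shown below, combining the two metrics the abstract names: normal compatibility and the along-scanline distance from each point to its contour's centroid. The weights and the exact way the terms are combined are assumptions for illustration.

```python
import numpy as np

def match_cost(p_l, n_l, centroid_l, p_r, n_r, centroid_r,
               w_normal=1.0, w_centroid=0.05):
    """Comparison cost between a left-image and a right-image contour point
    lying on the same scanline; lower cost = more plausible correspondence.

    p_*:        (x, y) contour point position
    n_*:        2D unit contour normal at that point
    centroid_*: centroid of the full contour in that image
    """
    # Corresponding points in a rectified stereo pair should have similar normals.
    normal_term = 1.0 - float(np.dot(n_l, n_r))
    # Horizontal (along-scanline) distance to the contour centroid should also agree.
    d_l = abs(p_l[0] - centroid_l[0])
    d_r = abs(p_r[0] - centroid_r[0])
    return w_normal * normal_term + w_centroid * abs(d_l - d_r)
```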

    9. GRASPING VIRTUAL OBJECTS IN AUGMENTED REALITY
    Invention Application (In Force)

    Publication Number: US20140104274A1

    Publication Date: 2014-04-17

    Application Number: US13653968

    Filing Date: 2012-10-17

    Abstract: An augmented reality system which enables grasping of virtual objects is described, for example to stack virtual cubes or to manipulate virtual objects in other ways. In various embodiments a user's hand or another real object is tracked in an augmented reality environment. In examples, the shape of the tracked real object is approximated using at least two different types of particles, and the virtual objects are updated according to simulated forces exerted between the augmented reality environment and at least some of the particles. In various embodiments the 3D positions of the first type of particles, kinematic particles, are updated according to the tracked real object, and passive particles move with linked kinematic particles without penetrating virtual objects. In some examples a real-time optic flow process is used to track motion of the real object.

    10. REAL-TIME CAMERA TRACKING USING DEPTH MAPS
    Invention Application (In Force)

    Publication Number: US20130244782A1

    Publication Date: 2013-09-19

    Application Number: US13775165

    Filing Date: 2013-02-23

    Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update, in real time, a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real time. In some embodiments, a dense 3D model of the mobile camera's environment is used.
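
    The sketch below illustrates the two ingredients the abstract names, projective data association and the point-to-plane error metric, in plain NumPy: each point from the current depth frame is transformed by the current pose estimate, projected into the previous frame's vertex map to find a correspondence, and a signed point-to-plane distance is computed against that correspondence's normal. It is a simplified, CPU-only sketch with assumed array layouts; the registration parameters would then be updated by minimising these residuals (e.g. with Gauss-Newton), and the patent describes a GPU implementation to reach real-time rates.

```python
import numpy as np

def point_to_plane_residuals(src_points, pose, dst_vertex_map, dst_normal_map, K):
    """Projective data association + point-to-plane residuals for depth-map ICP.

    src_points:     (N, 3) points back-projected from the current depth frame
    pose:           4x4 current estimate of the transform into the previous frame
    dst_vertex_map: (H, W, 3) back-projected points of the previous depth frame
    dst_normal_map: (H, W, 3) surface normals of the previous depth frame
    K:              3x3 camera intrinsic matrix
    Returns one signed point-to-plane distance per successfully associated point.
    """
    H, W, _ = dst_vertex_map.shape
    p = src_points @ pose[:3, :3].T + pose[:3, 3]          # transform into previous frame
    uvw = p @ K.T                                          # project with the intrinsics
    z = np.maximum(uvw[:, 2], 1e-9)                        # guard against division by zero
    u = np.round(uvw[:, 0] / z).astype(int)                # projective association:
    v = np.round(uvw[:, 1] / z).astype(int)                # the pixel each point lands on
    ok = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    q = dst_vertex_map[v[ok], u[ok]]                       # associated destination points
    n = dst_normal_map[v[ok], u[ok]]                       # and their surface normals
    return np.einsum('ij,ij->i', p[ok] - q, n)             # point-to-plane distances
```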
