Unified framework for precise vision-aided navigation
    1.
    Granted invention patent
    Unified framework for precise vision-aided navigation (in force)

    Publication No.: US08174568B2

    Publication date: 2012-05-08

    Application No.: US11949433

    Filing date: 2007-12-03

    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.

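    The abstract's final point, integrating video-based pose estimates with GPS/IMU position measurements, can be illustrated with a toy one-dimensional Kalman filter. The function name, the 1-D state, and the noise variances below are illustrative assumptions, not the patent's actual filter:

```python
import numpy as np

def fuse_vo_with_gps(vo_deltas, gps_fixes, q=0.01, r=4.0):
    """Toy 1-D Kalman filter: visual-odometry position increments drive the
    prediction step; absolute GPS fixes (or None when none is available)
    correct it. q is the per-step VO drift variance, r the GPS measurement
    variance. Illustrative sketch only, not the patent's filter design."""
    x, p = 0.0, 1.0                  # state estimate and its variance
    track = []
    for delta, gps in zip(vo_deltas, gps_fixes):
        # Predict: integrate the VO increment, inflate the uncertainty.
        x += delta
        p += q
        # Update: blend in the absolute GPS fix when one is available.
        if gps is not None:
            k = p / (p + r)          # Kalman gain
            x += k * (gps - x)
            p *= (1.0 - k)
        track.append(x)
    return track

# VO drifts +0.1 m/step on a true +1.0 m/step path; sparse GPS pulls it back.
vo = [1.1] * 10
gps = [None, None, None, None, 5.0, None, None, None, None, 10.0]
path = fuse_vo_with_gps(vo, gps)
```

    Raw VO integration would end at 11.0 m against a true 10.0 m; each GPS update shrinks that accumulated drift in proportion to the Kalman gain.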
Unified framework for precise vision-aided navigation
    2.
    Granted invention patent
    Unified framework for precise vision-aided navigation (in force)

    Publication No.: US09121713B2

    Publication date: 2015-09-01

    Application No.: US13451037

    Filing date: 2012-04-19

    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.

UNIFIED FRAMEWORK FOR PRECISE VISION-AIDED NAVIGATION
    3.
    Invention application
    UNIFIED FRAMEWORK FOR PRECISE VISION-AIDED NAVIGATION (in force)

    Publication No.: US20080167814A1

    Publication date: 2008-07-10

    Application No.: US11949433

    Filing date: 2007-12-03

    IPC classification: G01C21/00

    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.

Stereo-based visual odometry method and system
    4.
    Granted invention patent
    Stereo-based visual odometry method and system (in force)

    Publication No.: US07925049B2

    Publication date: 2011-04-12

    Application No.: US11833498

    Filing date: 2007-08-03

    IPC classification: G06F9/00

    Abstract: A method for estimating pose from a sequence of images, which includes the steps of detecting at least three feature points in both the left image and right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and right image of a second pair of stereo images at a second point in time different from the first point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and their corresponding two-dimensional reference feature points in the stereo images. The pose is found by minimizing projection residuals of a set of three-dimensional reference feature points in an image plane.

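    The pipeline this abstract describes (match left/right features, triangulate to 3-D, then minimize reprojection residuals to recover pose) can be sketched in a few lines of NumPy. The pinhole-camera model and all numeric values are illustrative assumptions; the residual function is what a Gauss-Newton or Levenberg-Marquardt solver would minimize over R and t:

```python
import numpy as np

def triangulate(pts_left, pts_right, f, cx, cy, baseline):
    """Recover 3-D points from matched pixel coordinates in a rectified
    stereo pair: depth Z = f * B / disparity, then back-project the left
    pixel. Camera parameters here are made-up illustrative values."""
    pts_left = np.asarray(pts_left, float)
    pts_right = np.asarray(pts_right, float)
    disparity = pts_left[:, 0] - pts_right[:, 0]
    z = f * baseline / disparity
    x = (pts_left[:, 0] - cx) * z / f
    y = (pts_left[:, 1] - cy) * z / f
    return np.column_stack([x, y, z])

def reprojection_residuals(points_3d, pts_2d, R, t, f, cx, cy):
    """Residuals whose minimization over (R, t) yields the pose, as the
    abstract's final step describes."""
    cam = points_3d @ R.T + t                       # world -> camera frame
    proj = np.column_stack([f * cam[:, 0] / cam[:, 2] + cx,
                            f * cam[:, 1] / cam[:, 2] + cy])
    return (proj - np.asarray(pts_2d, float)).ravel()

# Synthetic check: project three 3-D points into both cameras, triangulate,
# and confirm the residual vanishes at the true (identity) pose.
f, cx, cy, B = 500.0, 320.0, 240.0, 0.1
world = np.array([[0.5, 0.2, 4.0], [-0.3, 0.1, 3.0], [0.2, -0.4, 5.0]])
left = np.column_stack([f * world[:, 0] / world[:, 2] + cx,
                        f * world[:, 1] / world[:, 2] + cy])
right = np.column_stack([f * (world[:, 0] - B) / world[:, 2] + cx,
                         f * world[:, 1] / world[:, 2] + cy])
recovered = triangulate(left, right, f, cx, cy, B)
res = reprojection_residuals(recovered, left, np.eye(3), np.zeros(3), f, cx, cy)
```

    With at least three such correspondences the pose has enough constraints; in practice the minimization runs inside a RANSAC loop to reject bad tracks.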
Stereo-Based Visual Odometry Method and System
    5.
    Invention application
    Stereo-Based Visual Odometry Method and System (in force)

    Publication No.: US20080144925A1

    Publication date: 2008-06-19

    Application No.: US11833498

    Filing date: 2007-08-03

    IPC classification: G06K9/00

    Abstract: A method for estimating pose from a sequence of images, which includes the steps of detecting at least three feature points in both the left image and right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and right image of a second pair of stereo images at a second point in time different from the first point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and their corresponding two-dimensional reference feature points in the stereo images. The pose is found by minimizing projection residuals of a set of three-dimensional reference feature points in an image plane.

SYSTEM AND METHOD FOR GENERATING A MIXED REALITY ENVIRONMENT
    7.
    Invention application
    SYSTEM AND METHOD FOR GENERATING A MIXED REALITY ENVIRONMENT (in force)

    Publication No.: US20100103196A1

    Publication date: 2010-04-29

    Application No.: US12606581

    Filing date: 2009-10-27

    IPC classification: G09G5/12 G09G5/00

    Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate synthetic objects and provide them to the user-worn sub-system based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting them into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect, for the user, that the synthetic objects are present in the real world.

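    The core rendering step, using the tracked pose to insert a synthetic object into the user's field of view, reduces to a world-to-head-frame transform followed by a projection. The pinhole model, yaw-only rotation, and all values below are illustrative assumptions rather than the patent's prescribed implementation:

```python
import numpy as np

def render_overlay(obj_world, user_pos, yaw, f=500.0, cx=320.0, cy=240.0):
    """Place one synthetic 3-D point into the user's view: transform its
    world coordinates into the head-pose frame given by the tracked position
    and yaw, then project with a pinhole model. Returns the display pixel,
    or None when the object lies behind the user."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, -s],
                  [0.0, 1.0, 0.0],
                  [s, 0.0, c]])              # rotation about the vertical axis
    p = R @ (np.asarray(obj_world, float) - np.asarray(user_pos, float))
    if p[2] <= 0.0:
        return None                          # behind the user: nothing to draw
    return (f * p[0] / p[2] + cx, f * p[1] / p[2] + cy)

# An object straight ahead of an unrotated user lands at the display center.
overlay = render_overlay([0.0, 0.0, 5.0], [0.0, 0.0, 0.0], yaw=0.0)
```

    A full system would run this per frame for every vertex of each synthetic object, driven by the pose stream from the user-worn sensors.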
Schottky junction source/drain transistor and method of making
    9.
    Granted invention patent
    Schottky junction source/drain transistor and method of making (expired)

    Publication No.: US08697529B2

    Publication date: 2014-04-15

    Application No.: US13508731

    Filing date: 2011-09-28

    IPC classification: H01L21/336 H01L21/338

    Abstract: A method of making a transistor, comprising: providing a semiconductor substrate; forming a gate stack over the semiconductor substrate; forming an insulating layer over the semiconductor substrate; forming a depleting layer over the insulating layer; etching the depleting layer and the insulating layer; forming a metal layer over the semiconductor substrate; performing thermal annealing; and removing the metal layer. As an advantage of the present invention, an upper outside part of each of the sidewalls includes a material that can react with the metal layer, so that metal on the two sides of the sidewalls is absorbed during the annealing process, preventing the metal from diffusing toward the semiconductor layer and ensuring that the formed Schottky junctions can be ultra-thin and uniform, with controllable and suppressed lateral growth.
