Abstract:
A method and apparatus for automatically generating a three-dimensional computer model from a "point cloud" of a scene produced by a laser radar (LIDAR) system. Given a point cloud of an indoor or outdoor scene, the method extracts certain structures from the imaged scene, e.g., ceilings, floors, furniture, rooftops, the ground, and the like, and models these structures with planes and/or prismatic structures to achieve a three-dimensional computer model of the scene. The method may then add photographic and/or synthetic texturing to give the model a realistic appearance.
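For illustration only, the following minimal Python sketch shows one ingredient such an extraction step could use: a RANSAC-style fit that pulls a single dominant plane (for example, a floor or ceiling) out of an N x 3 point cloud. The function name, thresholds, and synthetic test data are assumptions for the sketch, not details taken from the disclosure.

# Illustrative sketch only: a simple RANSAC-style plane fit for extracting one
# dominant planar structure (e.g., a floor or ceiling) from an N x 3 point cloud.
# The function name, thresholds, and synthetic data below are assumptions.
import numpy as np

def fit_dominant_plane(points, n_iters=500, inlier_tol=0.05, rng=None):
    """Return (normal, d, inlier_mask) for the plane n.x + d = 0 with most inliers."""
    rng = np.random.default_rng() if rng is None else rng
    best_mask, best_plane = None, None
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within the distance tolerance of the candidate plane.
        dist = np.abs(points @ normal + d)
        mask = dist < inlier_tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (normal, d)
    return best_plane[0], best_plane[1], best_mask

if __name__ == "__main__":
    # Synthetic cloud: a noisy horizontal "floor" plus random clutter.
    rng = np.random.default_rng(0)
    floor = np.c_[rng.uniform(0, 10, (2000, 2)), rng.normal(0.0, 0.02, 2000)]
    clutter = rng.uniform(0, 10, (500, 3))
    normal, d, inliers = fit_dominant_plane(np.vstack([floor, clutter]), rng=rng)
    print("plane normal:", np.round(normal, 3), "inliers:", int(inliers.sum()))

In a full pipeline of this kind, the fit would typically be repeated on the remaining points to peel off further planes, which could then be assembled into the planar and prismatic model elements described above.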
Abstract:
A method and apparatus are disclosed for tracking a movable object using a plurality of images, each separated from the next by an interval of time. The plurality of images includes first and second images. The method and apparatus include elements for aligning the first and second images as a function of (i) at least one feature of a first movable object captured in the first image, and (ii) at least one feature of a second movable object captured in the second image; and, after aligning the first and second images, comparing at least one portion of the first image with at least one portion of the second image.
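As a hedged illustration of the align-then-compare idea only (not the disclosed apparatus), the Python sketch below estimates a two-dimensional affine transform from matched feature locations on the tracked object, warps the first frame into the second frame's coordinates, and then differences a patch of the aligned frames. The images, correspondences, and helper names are synthetic assumptions.

# Minimal sketch (not the patented tracker): align two frames from matched
# feature locations on an object, then compare a patch of the aligned frames.
# The frames, correspondences, and helper names below are synthetic assumptions.
import numpy as np

def estimate_affine(pts_a, pts_b):
    """Least-squares 2x3 affine A mapping frame-a points onto frame-b points."""
    ones = np.ones((len(pts_a), 1))
    X = np.hstack([pts_a, ones])                   # N x 3 homogeneous points in frame a
    A, *_ = np.linalg.lstsq(X, pts_b, rcond=None)  # solve X @ A = pts_b
    return A.T                                     # 2 x 3

def warp_nearest(img, A, shape):
    """Resample img into the destination frame defined by affine A (nearest neighbour)."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    M, t = A[:, :2], A[:, 2]
    # For each destination pixel, look up the source pixel through the inverse affine.
    src = (np.stack([xs, ys], axis=-1) - t) @ np.linalg.inv(M).T
    sx = np.clip(np.round(src[..., 0]).astype(int), 0, img.shape[1] - 1)
    sy = np.clip(np.round(src[..., 1]).astype(int), 0, img.shape[0] - 1)
    return img[sy, sx]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame_a = rng.random((120, 160))
    true_A = np.array([[1.0, 0.0, 5.0],
                       [0.0, 1.0, 3.0]])           # simulated motion: a pure shift
    frame_b = warp_nearest(frame_a, true_A, frame_a.shape)
    pts_a = rng.uniform(20.0, 100.0, (30, 2))      # feature locations on the object, frame a
    pts_b = np.hstack([pts_a, np.ones((30, 1))]) @ true_A.T   # matched locations, frame b
    A = estimate_affine(pts_a, pts_b)
    aligned = warp_nearest(frame_a, A, frame_b.shape)
    # After alignment, compare corresponding portions of the two frames.
    patch = (slice(30, 90), slice(40, 120))
    print("estimated shift:", np.round(A[:, 2], 2))
    print("mean abs. difference in patch:", float(np.abs(aligned[patch] - frame_b[patch]).mean()))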
Abstract:
A method and apparatus for aligning two-dimensional video onto three-dimensional point clouds. The system recovers camera pose from the camera video, determines a depth map, converts the depth map to a Euclidean video point cloud, and registers the two-dimensional video to the three-dimensional point clouds.
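One step named in this abstract, converting a depth map into a Euclidean point cloud, can be illustrated with a short pinhole back-projection sketch in Python. The camera intrinsics (fx, fy, cx, cy) and the synthetic depth map are assumptions, and the subsequent registration of the video-derived cloud to the LIDAR point cloud is not shown.

# Illustrative sketch of one step named in the abstract: converting a depth map
# into a Euclidean point cloud by back-projecting through a pinhole camera model.
# The intrinsics (fx, fy, cx, cy) and the synthetic depth map are assumptions.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an H x W depth map (metres along the optical axis) to N x 3 points."""
    H, W = depth.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    Z = depth
    X = (us - cx) * Z / fx
    Y = (vs - cy) * Z / fy
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

if __name__ == "__main__":
    # Synthetic depth map of a flat wall 2 m away, with a few missing pixels.
    depth = np.full((480, 640), 2.0)
    depth[::50, ::50] = 0.0
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print("points:", cloud.shape, "z-range:", cloud[:, 2].min(), cloud[:, 2].max())

Registering such a video-derived cloud against the LIDAR cloud would then reduce to a rigid point-set alignment problem, solvable for instance with an iterative-closest-point style method.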