-
Publication Number: US10062010B2
Publication Date: 2018-08-28
Application Number: US14752093
Filing Date: 2015-06-26
Applicant: INTEL CORPORATION
Inventor: Gershom Kutliroff
CPC classification number: G06N3/04 , G05D1/0246 , G06K9/00664 , G06K9/6223 , G06K9/6262 , G06K9/6272 , G06N3/0454 , G06N3/08 , G06N3/084 , G06T7/73 , G06T2207/20081 , G06T2207/20084 , G06T2207/30244
Abstract: SLAM (simultaneous localization and mapping) systems are provided that utilize an artificial neural network both to map environments and to locate positions within those environments. In some example embodiments, a sensor arrangement is used to map an environment. The sensor arrangement acquires sensor data from the various sensors and associates the sensor data, or data derived from it, with spatial regions in the environment. The sensor data may include image data and inertial measurement data that effectively describe the visual appearance of a spatial region at a particular location and orientation. This diverse sensor data may be fused into camera poses, and the map of the environment comprises camera poses organized by spatial region within the environment. Further, in these examples, an artificial neural network is adapted to the features of the environment by a transfer learning process using image data associated with the camera poses.
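As a rough illustration of the map structure the abstract describes, the Python sketch below shows fused camera poses being organized by spatial region, and how (image, region) pairs could be drawn from that map to fine-tune a pretrained network on the environment's appearance. The class names, fields, and grid-cell quantization are illustrative assumptions, not details taken from the patent.

# A minimal sketch (not from the patent) of camera poses organized by spatial region.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class CameraPose:
    position: np.ndarray      # 3D position fused from image + inertial data
    orientation: np.ndarray   # orientation, e.g. as a quaternion (w, x, y, z)
    image: np.ndarray         # image observed at this pose

@dataclass
class SpatialMap:
    cell_size: float = 1.0    # edge length of one spatial region (assumed meters)
    regions: Dict[Tuple[int, int, int], List[CameraPose]] = field(default_factory=dict)

    def region_key(self, position: np.ndarray) -> Tuple[int, int, int]:
        # Quantize a 3D position into the grid cell (spatial region) containing it.
        return tuple(np.floor(position / self.cell_size).astype(int))

    def add_pose(self, pose: CameraPose) -> None:
        # Store the fused camera pose under the spatial region it falls in.
        self.regions.setdefault(self.region_key(pose.position), []).append(pose)

    def training_pairs(self):
        # Yield (image, region_key) pairs, e.g. for fine-tuning a pretrained
        # network so it learns to recognize regions of this environment.
        for key, poses in self.regions.items():
            for pose in poses:
                yield pose.image, key

Keying poses by a quantized grid cell is one simple way to realize "camera poses organized by spatial region": every observation of a region can be retrieved in constant time and handed to a transfer-learning step as labeled image data.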
-
Publication Number: US20180018805A1
Publication Date: 2018-01-18
Application Number: US15209014
Filing Date: 2016-07-13
Applicant: INTEL CORPORATION
Inventor: Gershom Kutliroff , Shahar Fleishman , Mark Kliger
IPC: G06T15/00 , G06K9/46 , G06K9/52 , G06T11/60 , G06T7/60 , H04N13/00 , G06T17/00 , H04N13/02 , G06K9/62
CPC classification number: G06T7/60 , G06K9/00671 , G06T7/33 , G06T7/50 , G06T7/90 , G06T11/60 , G06T17/00 , G06T2207/20021 , G06T2207/20048 , G06T2207/20212 , G06T2207/30244 , G06T2210/56 , H04N13/111 , H04N13/128 , H04N13/257 , H04N13/271
Abstract: Techniques are provided for context-based 3D scene reconstruction employing fusion of multiple instances of an object within the scene. A methodology implementing the techniques according to an embodiment includes receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects based on the 3D reconstruction, the camera pose, and the image frames. The method may further include classifying the detected objects into one or more object classes; grouping two or more instances of objects in one of the object classes based on a measure of similarity of features between the object instances; and combining point clouds associated with each of the grouped object instances to generate a fused object.
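A minimal sketch of the grouping-and-fusion step described above, under stated assumptions: feature similarity is taken as Euclidean distance between per-instance descriptor vectors, grouping is greedy against a threshold, and fusion is a simple concatenation of point clouds already expressed in the global coordinate system. Names such as ObjectInstance, group_instances, and fuse_group are hypothetical, not the patented method.

# Illustrative grouping of detected object instances and fusion of their point clouds.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ObjectInstance:
    label: str                # object class assigned by the classifier
    features: np.ndarray      # descriptor vector for this instance
    points: np.ndarray        # (N, 3) point cloud in global coordinates

def group_instances(instances: List[ObjectInstance],
                    max_dist: float = 0.5) -> List[List[ObjectInstance]]:
    """Greedily group same-class instances whose feature vectors are close."""
    groups: List[List[ObjectInstance]] = []
    for inst in instances:
        for group in groups:
            rep = group[0]
            if rep.label == inst.label and np.linalg.norm(rep.features - inst.features) < max_dist:
                group.append(inst)
                break
        else:
            groups.append([inst])   # no similar group found; start a new one
    return groups

def fuse_group(group: List[ObjectInstance]) -> np.ndarray:
    """Concatenate the point clouds of grouped instances into one fused cloud.

    Assumes the clouds are already in the same global coordinate system; a real
    pipeline would typically align them first (e.g. with ICP) before merging.
    """
    return np.vstack([inst.points for inst in group])

The threshold and greedy strategy are only placeholders; the point is that once instances of the same class are judged similar, their per-instance point clouds can be pooled into a single, denser "fused object".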
-
Publication Number: US09639943B1
Publication Date: 2017-05-02
Application Number: US14976021
Filing Date: 2015-12-21
Applicant: INTEL CORPORATION
Inventor: Gershom Kutliroff , Maoz Madmony
CPC classification number: G06T7/0051 , G06T7/0081 , G06T7/55 , G06T7/74 , G06T17/00 , G06T2200/04 , G06T2200/08 , G06T2207/10024 , G06T2207/10028 , G06T2207/20156
Abstract: Techniques are provided for generating a 3-Dimensional (3D) reconstruction of a handheld object. An example method may include receiving 3D image frames of the object from a static depth camera, each frame including a color image and a depth map. Each of the frames is associated with an updated pose of the object during scanning. The method may also include extracting a segment of each frame corresponding to the object and the hand; isolating the hand from each extracted segment; and filtering each frame by removing regions of the frame outside of the extracted segment and removing regions of the frame corresponding to the isolated hand. The method may further include calculating the updated object pose in each filtered frame; and calculating a 3D position for each depth pixel from the depth map of each filtered frame based on the updated object pose associated with that filtered frame.
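The final step, computing a 3D position for each depth pixel of a filtered frame from the updated object pose, could look roughly like the sketch below. The pinhole intrinsics (fx, fy, cx, cy), the boolean mask standing in for the hand-removed segment, and the 4x4 object-to-camera pose convention are assumptions made for illustration only.

# Illustrative back-projection of filtered depth pixels into the object's frame.
import numpy as np

def depth_to_object_points(depth: np.ndarray,
                           mask: np.ndarray,
                           fx: float, fy: float, cx: float, cy: float,
                           object_pose: np.ndarray) -> np.ndarray:
    """Return (N, 3) points expressed in the object's coordinate frame.

    depth       -- (H, W) depth map in meters
    mask        -- (H, W) boolean mask of pixels kept after segment extraction
                   and hand removal
    object_pose -- 4x4 transform taking object coordinates to camera coordinates
                   (the updated object pose for this frame)
    """
    v, u = np.nonzero(mask & (depth > 0))          # pixel rows/cols to back-project
    z = depth[v, u]
    x = (u - cx) * z / fx                          # pinhole back-projection to camera frame
    y = (v - cy) * z / fy
    points_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous points
    cam_to_object = np.linalg.inv(object_pose)     # map camera frame -> object frame
    points_obj = (cam_to_object @ points_cam.T).T
    return points_obj[:, :3]

Because the camera is static and the object moves, transforming each frame's points by the inverse of that frame's object pose brings every scan into a common object-centered frame, where the per-frame clouds can be accumulated into the reconstruction.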