-
Publication number: US11854231B2
Publication date: 2023-12-26
Application number: US17589523
Filing date: 2022-01-31
Applicant: Snap Inc.
Inventor: Maria Jose Garcia Sopo , Qi Pan , Edward James Rosten
CPC classification number: G06T7/75 , G06T19/006 , G06T2207/30244
Abstract: Determining the position and orientation (or “pose”) of an augmented reality device includes capturing an image of a scene having a number of features and extracting descriptors of features of the scene represented in the image. The descriptors are matched to landmarks in a 3D model of the scene to generate sets of matches between the descriptors and the landmarks. Estimated poses are determined from at least some of the sets of matches between the descriptors and the landmarks. Estimated poses having deviations from an observed location measurement that are greater than a threshold value may be eliminated. Features used in the determination of estimated poses may also be weighted by the inverse of the distance between the feature and the device, so that closer features are accorded more weight.
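The last two steps of the abstract (discarding estimated poses that deviate too far from an observed location, and inverse-distance weighting of features) can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name and inputs are hypothetical.

```python
import numpy as np

def filter_and_weight(estimated_positions, observed_position,
                      feature_distances, threshold):
    """Hypothetical sketch of two steps from the abstract:
    (1) eliminate pose estimates whose position deviates from an
        observed location measurement by more than `threshold`;
    (2) weight features by the inverse of their distance to the
        device, so closer features are accorded more weight."""
    estimated_positions = np.asarray(estimated_positions, dtype=float)
    deviations = np.linalg.norm(
        estimated_positions - np.asarray(observed_position, dtype=float), axis=1)
    kept = estimated_positions[deviations <= threshold]

    # Inverse-distance weights, normalized to sum to 1.
    weights = 1.0 / np.asarray(feature_distances, dtype=float)
    weights /= weights.sum()
    return kept, weights
```

A real system would apply the surviving pose hypotheses and feature weights inside a robust estimator (e.g. a RANSAC-style loop), which the abstract leaves unspecified.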
-
Publication number: US11810316B2
Publication date: 2023-11-07
Application number: US17714720
Filing date: 2022-04-06
Applicant: Snap Inc.
Inventor: Patrick Fox-Roberts , Richard McCormack , Qi Pan , Edward James Rosten
CPC classification number: G06T7/70 , G06T3/0093 , G06T5/006 , G06T2207/20216
Abstract: The pose of a wide-angle image is determined by dewarping regions of the wide-angle image, determining estimated poses of the dewarped regions, and deriving a pose of the wide-angle image from those estimated poses. The estimated poses of the dewarped regions may be determined by comparing features in the dewarped regions with features in prior dewarped regions from one or more prior wide-angle images, as well as by comparing features in the dewarped regions with features in a point cloud.
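The final step, deriving one pose from the per-region estimates, can be sketched under the assumption that each dewarped region behaves like a virtual pinhole camera rotated by a known, fixed offset from the wide-angle camera. The function and the naive averaging scheme below are illustrative assumptions, not the patented method.

```python
import numpy as np

def combine_region_poses(region_poses, region_offsets):
    """Hypothetical sketch: undo each region's fixed rotation offset,
    then average the resulting camera rotations and positions to
    derive a single pose for the wide-angle image.

    region_poses:   list of (R, t) for each dewarped region
    region_offsets: list of 3x3 rotation offsets of each region's
                    virtual camera relative to the wide-angle camera
    """
    rotations, positions = [], []
    for (R_region, t_region), R_offset in zip(region_poses, region_offsets):
        # Remove the region's rotation offset to recover the camera rotation.
        rotations.append(R_offset.T @ R_region)
        positions.append(np.asarray(t_region, dtype=float))

    t = np.mean(positions, axis=0)
    # Naive element-wise rotation average; a real system would average
    # on SO(3) (e.g. via quaternions). Re-orthonormalize with an SVD
    # so the result is a valid rotation matrix.
    U, _, Vt = np.linalg.svd(np.mean(rotations, axis=0))
    return U @ Vt, t
```

Element-wise averaging of rotation matrices is only a reasonable approximation when the per-region estimates are close to each other; the SVD projection guarantees the output is orthonormal regardless.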
-
Publication number: US20230177708A1
Publication date: 2023-06-08
Application number: US18061775
Filing date: 2022-12-05
Applicant: Snap Inc.
Inventor: Erick Mendez Mendez , Isac Andreas Müller Sandvik , Qi Pan , Edward James Rosten , Andrew Tristan Spek , Daniel Wagner , Jakob Zillner
Abstract: A depth estimation system to perform operations that include: receiving image data generated by a client device, the image data comprising a depiction of an environment; identifying a set of image features based on the image data; determining a pose of the client device based on the set of image features; generating a depth estimation based on the image data and the pose of the client device; and generating a mesh model of the environment based on the depth estimation.
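The last operation, generating a mesh model from a depth estimation, can be sketched as pinhole back-projection of a per-pixel depth map followed by triangulating neighboring pixels. This is a generic illustration under assumed intrinsics (fx, fy, cx, cy), not the system claimed in the application.

```python
import numpy as np

def depth_to_mesh(depth_map, fx, fy, cx, cy):
    """Hypothetical sketch: back-project each pixel of a depth map
    into a 3D vertex using a pinhole camera model, then connect
    each 2x2 pixel quad into two triangles to form a mesh."""
    h, w = depth_map.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map
    # Pinhole back-projection: pixel (u, v) at depth z -> (x, y, z).
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Two triangles per pixel quad, indexing into the flattened vertices.
    faces = []
    for v in range(h - 1):
        for u in range(w - 1):
            i = v * w + u
            faces.append([i, i + 1, i + w])
            faces.append([i + 1, i + w + 1, i + w])
    return vertices, np.array(faces)
```

A production pipeline would additionally drop invalid depths and simplify the resulting mesh, steps the abstract does not detail.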
-