-
Publication No.: US20220028093A1
Publication Date: 2022-01-27
Application No.: US16936344
Application Date: 2020-07-22
Applicant: Microsoft Technology Licensing, LLC
Abstract: A system for reducing a search area for identifying correspondences identifies an overlap region within a first match frame captured by a match camera. The overlap region includes one or more points of the first match frame that are associated with one or more same portions of an environment as one or more corresponding points of a first reference frame captured by a reference camera. The system obtains a second reference frame captured by the reference camera and a second match frame captured by the match camera. The system identifies a reference camera transformation matrix and/or a match camera transformation matrix. The system defines a search area within the second match frame based on the overlap region and the reference camera transformation matrix and/or the match camera transformation matrix.
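A minimal sketch of the idea: map the overlap-region pixels through a transform derived from the camera transformation matrices and bound the result to get the reduced search area. The 3x3 planar transform and all names here are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def define_search_area(overlap_points, transform, margin=8):
    """Map overlap-region pixels through a 3x3 planar transform
    (a stand-in for the composed reference/match camera
    transformation matrices) and bound them with a pixel margin."""
    pts = np.asarray(overlap_points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = (transform @ homog.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]            # back to pixel coords
    x_min, y_min = mapped.min(axis=0) - margin
    x_max, y_max = mapped.max(axis=0) + margin
    return tuple(float(v) for v in (x_min, y_min, x_max, y_max))

# Identity transform: the search area is just the overlap region plus margin.
area = define_search_area([(10, 20), (30, 40)], np.eye(3))
print(area)  # (2.0, 12.0, 38.0, 48.0)
```

Matching then only needs to scan this rectangle of the second match frame instead of the whole image.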
-
Publication No.: US20230154032A1
Publication Date: 2023-05-18
Application No.: US17592500
Application Date: 2022-02-03
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sudipta Narayan SINHA, Ondrej MIKSIK, Joseph Michael DEGOL, Tien DO
IPC: G06T7/70, G06V20/64, G06V10/778, G06V10/774
CPC classification number: G06T7/70, G06V10/7747, G06V10/7784, G06V20/64, G06T2207/10016, G06T2207/20081
Abstract: In various embodiments there is a method for camera localization within a scene. An image of a scene captured by the camera is input to a machine learning model, which has been trained for the particular scene to detect a plurality of 3D scene landmarks. The 3D scene landmarks are pre-specified in a pre-built map of the scene. The machine learning model outputs a plurality of predictions, each prediction comprising: either a 2D location in the image which is predicted to depict one of the 3D scene landmarks, or a 3D bearing vector, being a vector originating at the camera and pointing towards a predicted 3D location of one of the 3D scene landmarks. Using the predictions, an estimate of a position and orientation of the camera in the pre-built map of the scene is computed.
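One way the bearing-vector predictions can constrain the pose: if the camera orientation is known, each bearing to a known 3D landmark pins the camera position to a ray, and a least-squares intersection of those constraints recovers the position. This is a sketch of that sub-step under stated assumptions (bearings expressed in map coordinates, orientation already resolved); it is not the patent's full localization pipeline.

```python
import numpy as np

def camera_position_from_bearings(landmarks, bearings):
    """Least-squares camera position c from bearing vectors to known
    3D scene landmarks. Each landmark p_i must lie on the ray
    c + t * b_i, so (I - b_i b_i^T)(p_i - c) = 0; summing these
    projector constraints gives a 3x3 linear system in c."""
    A = np.zeros((3, 3))
    rhs = np.zeros(3)
    for p, b in zip(np.asarray(landmarks, float), np.asarray(bearings, float)):
        b = b / np.linalg.norm(b)
        P = np.eye(3) - np.outer(b, b)    # projector onto plane normal to b
        A += P
        rhs += P @ p
    return np.linalg.solve(A, rhs)

# Camera at the origin: the bearings are the landmark directions themselves,
# so the solver recovers the origin (within numerical tolerance).
pts = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 2.0], [-1.0, 1.0, 3.0]])
c = camera_position_from_bearings(pts, pts)
print(np.round(c, 6))
```

With 2D-location predictions instead of bearings, the same landmarks would feed a standard perspective-n-point solver.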
-
Publication No.: US20220171187A1
Publication Date: 2022-06-02
Application No.: US17108673
Application Date: 2020-12-01
Applicant: Microsoft Technology Licensing, LLC
Abstract: Techniques for updating a position of overlaid image content using IMU data to reflect subsequent changes in camera positions to minimize latency effects are disclosed. A “system camera” refers to an integrated camera that is a part of an HMD. An “external camera” is a camera that is separated from the HMD. The system camera and the external camera generate images. These images are overlaid on one another and aligned to form an overlaid image. Content from the external camera image is surrounded by a bounding element in the overlaid image. IMU data associated with both the system camera and the external camera is obtained. Based on that IMU data, an amount of movement that the system camera and/or the external camera have moved since the images were originally generated is determined. Based on that movement, the bounding element is shifted to a new position in the overlaid image.
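A minimal sketch of the latency-compensation step: convert an IMU-reported rotation since capture into a pixel shift of the bounding element using a small-angle pinhole model. The function and parameter names are assumptions for illustration, not the patent's implementation.

```python
import math

def shift_bounding_element(box, yaw_delta_rad, pitch_delta_rad, focal_px):
    """Shift a bounding element (x, y, w, h) in the overlaid image to
    compensate for camera rotation reported by the IMU since the
    images were generated. Under a pinhole model, a yaw of d-theta
    moves image content by roughly f * tan(d-theta) pixels
    horizontally (and pitch likewise vertically)."""
    x, y, w, h = box
    dx = focal_px * math.tan(yaw_delta_rad)
    dy = focal_px * math.tan(pitch_delta_rad)
    return (x + dx, y + dy, w, h)

# A 1-degree yaw with a 600 px focal length shifts the box about 10.5 px.
box = shift_bounding_element((100.0, 80.0, 64.0, 64.0),
                             math.radians(1.0), 0.0, 600.0)
print(box)
```

Applying this shift per display frame keeps the bounding element visually locked to the external-camera content even though the full image alignment runs at a lower rate.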
-
Publication No.: US20230148231A1
Publication Date: 2023-05-11
Application No.: US17524270
Application Date: 2021-11-11
Applicant: Microsoft Technology Licensing, LLC
CPC classification number: G06T15/205, G06T7/33, G06T7/97
Abstract: Techniques for aligning images generated by two cameras are disclosed. This alignment is performed by computing a relative 3D orientation between the two cameras. A first gravity vector for a first camera and a second gravity vector for a second camera are determined. A first camera image is obtained from the first camera, and a second camera image is obtained from the second camera. A first alignment process is performed to partially align the first camera's orientation with the second camera's orientation. This process is performed by aligning the gravity vectors, thereby resulting in two degrees of freedom of the relative 3D orientation being eliminated. Visual correspondences between the two images are identified. A second alignment process is performed to fully align the orientations. This process is performed by using the identified visual correspondences to identify and eliminate a third degree of freedom of the relative 3D orientation.
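The first alignment stage can be sketched as the smallest rotation taking one camera's gravity vector onto the other's (Rodrigues' formula). That fixes two of the three rotational degrees of freedom; only a rotation about gravity (yaw) remains for the correspondence-based second stage. The code below is an illustrative sketch, not the patent's implementation.

```python
import numpy as np

def rotation_aligning(a, b):
    """Smallest rotation matrix taking unit vector a to unit vector b,
    via Rodrigues' formula: R = I + K + K^2 / (1 + a.b), where K is
    the cross-product (skew-symmetric) matrix of a x b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):   # opposite vectors: axis is ambiguous
        raise ValueError("degenerate case: pick any axis perpendicular to a")
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

g1 = np.array([0.0, 0.0, -1.0])    # camera 1 gravity
g2 = np.array([0.0, -1.0, -1.0])   # camera 2 gravity (tilted)
R = rotation_aligning(g1, g2)
print(np.allclose(R @ g1, g2 / np.linalg.norm(g2)))  # True
```

After applying R, the two cameras differ only by a yaw angle, which the visual correspondences then pin down.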
-
Publication No.: US20170257565A1
Publication Date: 2017-09-07
Application No.: US15603568
Application Date: 2017-05-24
Applicant: Microsoft Technology Licensing, LLC
Inventor: Blaise Aguera y ARCAS, Markus UNGER, Donald A. BARNETT, David Maxwell GEDYE, Sudipta Narayan SINHA, Eric Joel STOLLNITZ, Johannes KOPF
CPC classification number: H04N5/23238, G06T3/4038
Abstract: One or more techniques and/or systems are provided for ordering images for panorama stitching and/or for providing a focal point indicator for image capture. For example, one or more images, which may be stitched together to create a panorama of a scene, may be stored within an image stack according to one or more ordering preferences, such as where manually captured images are stored within a first/higher priority region of the image stack as compared to automatically captured images. One or more images within the image stack may be stitched according to a stitching order to create the panorama, such as using images in the first region for a foreground of the panorama. Also, a current position of a camera may be tracked and compared with a focal point of a scene to generate a focal point indicator to assist with capturing a new/current image of the scene.
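The image-stack ordering preference can be sketched as a priority sort: manually captured images land in the first (higher-priority) region of the stack and are stitched into the foreground ahead of automatically captured ones. The dictionary fields here are illustrative assumptions, not the patent's data model.

```python
def stitching_order(images):
    """Order images in the stack so manually captured frames come
    first (higher-priority region) and automatically captured frames
    follow; ties preserve capture order."""
    return sorted(images, key=lambda im: (not im["manual"], im["index"]))

stack = [
    {"index": 0, "manual": False},
    {"index": 1, "manual": True},
    {"index": 2, "manual": False},
    {"index": 3, "manual": True},
]
print([im["index"] for im in stitching_order(stack)])  # [1, 3, 0, 2]
```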
-
Publication No.: US20230260204A1
Publication Date: 2023-08-17
Application No.: US18128322
Application Date: 2023-03-30
Applicant: Microsoft Technology Licensing, LLC
CPC classification number: G06T15/205, G06T7/33, G06T7/97
Abstract: Techniques for aligning images generated by two cameras are disclosed. This alignment is performed by computing a relative 3D orientation between the two cameras. A first gravity vector for a first camera and a second gravity vector for a second camera are determined. A first camera image is obtained from the first camera, and a second camera image is obtained from the second camera. A first alignment process is performed to partially align the first camera’s orientation with the second camera’s orientation. This process is performed by aligning the gravity vectors, thereby resulting in two degrees of freedom of the relative 3D orientation being eliminated. Visual correspondences between the two images are identified. A second alignment process is performed to fully align the orientations. This process is performed by using the identified visual correspondences to identify and eliminate a third degree of freedom of the relative 3D orientation.
-
Publication No.: US20230076331A1
Publication Date: 2023-03-09
Application No.: US17986445
Application Date: 2022-11-14
Applicant: Microsoft Technology Licensing, LLC
Abstract: Techniques for updating a position of overlaid image content using IMU data to reflect subsequent changes in camera positions to minimize latency effects are disclosed. A “system camera” refers to an integrated camera that is a part of an HMD. An “external camera” is a camera that is separated from the HMD. The system camera and the external camera generate images. These images are overlaid on one another and aligned to form an overlaid image. Content from the external camera image is surrounded by a bounding element in the overlaid image. IMU data associated with both the system camera and the external camera is obtained. Based on that IMU data, an amount of movement that the system camera and/or the external camera have moved since the images were originally generated is determined. Based on that movement, the bounding element is shifted to a new position in the overlaid image.
-
Publication No.: US20220028095A1
Publication Date: 2022-01-27
Application No.: US16936377
Application Date: 2020-07-22
Applicant: Microsoft Technology Licensing, LLC
Inventor: Michael BLEYER, Christopher Douglas EDMONDS, Michael Edward SAMPLES, Sudipta Narayan SINHA, Matthew Beaudoin KARR, Raymond Kirk PRICE
Abstract: A system for continuous image alignment of separate cameras identifies a reference camera transformation matrix between a base reference camera pose and an updated reference camera pose. The system also identifies a match camera transformation matrix between a base match camera pose and an updated match camera pose and an alignment matrix based on visual correspondences between one or more reference frames captured by the reference camera and one or more match frames captured by the match camera. The system also generates a motion model configured to facilitate mapping of a set of pixels of a reference frame captured by the reference camera to a corresponding set of pixels of a match frame captured by the match camera based on the reference camera transformation matrix, the match camera transformation matrix, and the alignment matrix.
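If the three matrices are represented in a planar (homography-like) 3x3 form, the motion model amounts to a composition that carries reference-frame pixels into match-frame pixels. This sketch assumes that representation for illustration; the patent does not specify this exact parameterization.

```python
import numpy as np

def motion_model(ref_transform, match_transform, alignment):
    """Compose a motion model mapping reference-frame pixels to
    match-frame pixels from the reference camera transformation,
    the match camera transformation, and the alignment matrix
    recovered from visual correspondences (all as 3x3 planar maps):
    undo the reference camera's motion, align, apply the match
    camera's motion."""
    return match_transform @ alignment @ np.linalg.inv(ref_transform)

def map_pixel(model, xy):
    """Apply the composed model to one pixel via homogeneous coords."""
    u = model @ np.array([xy[0], xy[1], 1.0])
    return (float(u[0] / u[2]), float(u[1] / u[2]))

# With identity camera updates, the model reduces to the alignment
# matrix: a pure 5-pixel horizontal shift in this toy example.
align = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
M = motion_model(np.eye(3), np.eye(3), align)
print(map_pixel(M, (10.0, 20.0)))  # (15.0, 20.0)
```

As either camera moves, only its transformation matrix changes, so the composed model can be refreshed continuously without recomputing the visual alignment.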
-
Publication No.: US20200005486A1
Publication Date: 2020-01-02
Application No.: US16025120
Application Date: 2018-07-02
Applicant: Microsoft Technology Licensing, LLC
Inventor: Sudipta Narayan SINHA, Pablo Alejandro SPECIALE, Sing Bing KANG, Marc Andre Leon POLLEFEYS
Abstract: Computing devices and methods for estimating a pose of a user computing device are provided. In one example a 3D map comprising a plurality of 3D points representing a physical environment is obtained. Each 3D point is transformed into a 3D line that passes through the point to generate a 3D line cloud. A query image of the environment captured by a user computing device is received, the query image comprising query features that correspond to the environment. Using the 3D line cloud and the query features, a pose of the user computing device with respect to the environment is estimated.
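The lifting step can be sketched directly: each 3D map point becomes a 3D line through it with a random direction. The line cloud still constrains a pose solver (the true point lies somewhere on its line) while the original point positions can no longer be read off. Names and the random-direction choice here are illustrative assumptions.

```python
import numpy as np

def lift_to_line_cloud(points, rng=None):
    """Lift each 3D map point to a 3D line through it with a random
    unit direction, producing a line cloud represented as
    (point-on-line, direction) pairs."""
    rng = rng if rng is not None else np.random.default_rng(0)
    points = np.asarray(points, float)
    dirs = rng.normal(size=points.shape)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    return [(p, d) for p, d in zip(points, dirs)]

cloud = lift_to_line_cloud([[0.0, 0.0, 1.0], [2.0, 1.0, 3.0]])
# Each original point lies on its line (with line parameter t = 0),
# but the line alone does not reveal where on it the point sits.
print(len(cloud))  # 2
```

Pose estimation then matches query features against lines rather than points, which is what makes the map privacy-preserving.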
-
Publication No.: US20240061251A1
Publication Date: 2024-02-22
Application No.: US18384572
Application Date: 2023-10-27
Applicant: Microsoft Technology Licensing, LLC
CPC classification number: G02B27/0172, G06T7/337, G06T7/248, G06F3/012, G06F3/017, G06T19/006, G02B2027/0138
Abstract: Techniques for updating a position of overlaid image content using IMU data to reflect subsequent changes in camera positions to minimize latency effects are disclosed. A “system camera” refers to an integrated camera that is a part of an HMD. An “external camera” is a camera that is separated from the HMD. The system camera and the external camera generate images. These images are overlaid on one another and aligned to form an overlaid image. Content from the external camera image is surrounded by a bounding element in the overlaid image. IMU data associated with both the system camera and the external camera is obtained. Based on that IMU data, an amount of movement that the system camera and/or the external camera have moved since the images were originally generated is determined. Based on that movement, the bounding element is shifted to a new position in the overlaid image.
-