Generating Immersive Trip Photograph Visualizations

    Publication number: US20210019941A1

    Publication date: 2021-01-21

    Application number: US17062079

    Filing date: 2020-10-02

    Applicant: Adobe Inc.

    Abstract: A user selects a set of photographs from a trip through an environment that he or she desires to present to other people. A collection of photographs, including the set captured during the trip and optionally augmented with additional photographs from another collection, is combined with a terrain model (e.g., a digital elevation model) to extract the geographic location of each photograph within the environment. The collection is analyzed, considering both the geolocation information and the photograph content, to register the photographs relative to one another. This information is then compared to the terrain model to accurately position the viewpoint of each photograph within the environment. Finally, a presentation of the selected photographs within the environment is generated that displays both the selected photographs and synthetic data filled in beyond their edges.
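
    The abstract above sketches a pipeline that anchors geotagged photographs to a terrain model so each viewpoint can be placed in the environment. Below is a minimal Python sketch of just the anchoring step, assuming a hypothetical GeoTIFF elevation model ("terrain_dem.tif"), photos whose GPS coordinates have already been read from EXIF, and the rasterio library; it illustrates the idea rather than the patented method.

        # Minimal sketch (assumptions noted above): sample a DEM at each photo's
        # geotagged coordinates to get a rough 3D viewpoint for that photo.
        import rasterio

        photos = [  # hypothetical geotagged photos (lon, lat in degrees, WGS84)
            {"file": "IMG_0001.jpg", "lon": 7.6586, "lat": 45.9763},
            {"file": "IMG_0002.jpg", "lon": 7.6612, "lat": 45.9801},
        ]

        EYE_HEIGHT_M = 1.7  # assume a handheld camera at roughly eye level

        # Assumes the DEM's coordinate reference system is also WGS84 lon/lat.
        with rasterio.open("terrain_dem.tif") as dem:  # hypothetical DEM path
            coords = [(p["lon"], p["lat"]) for p in photos]
            # dem.sample() yields one array of band values per (x, y) pair
            for photo, elevation in zip(photos, dem.sample(coords)):
                ground_z = float(elevation[0])
                # Rough viewpoint: geographic position plus terrain height.
                # A full pipeline would refine this pose by registering the
                # photo content against the terrain, as the abstract describes.
                photo["viewpoint"] = (photo["lon"], photo["lat"], ground_z + EYE_HEIGHT_M)
                print(photo["file"], "->", photo["viewpoint"])

    With each viewpoint positioned in the terrain, a renderer can draw synthetic terrain beyond the photograph's edges when presenting it.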

    LARGE-SCALE OUTDOOR AUGMENTED REALITY SCENES USING CAMERA POSE BASED ON LEARNED DESCRIPTORS

    Publication number: US20220114365A1

    Publication date: 2022-04-14

    Application number: US17068429

    Filing date: 2020-10-12

    Applicant: ADOBE INC.

    IPC classification: G06K9/00 G06T7/70 G06N20/00

    Abstract: Methods and systems are provided for facilitating large-scale augmented reality for outdoor scenes using estimated camera pose information. In particular, the camera pose of an image can be estimated by matching the image against renders of a ground-truth terrain model whose camera poses are known. To match images with such renders, a data-driven cross-domain feature embedding can be learned using a neural network. The resulting cross-domain feature descriptors enable efficient and accurate feature matching between the image and the terrain-model renders. This feature matching localizes the image in relation to the terrain model, and the known camera pose of the matched render can then be used to estimate the camera pose of the image.
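
    Since this abstract hinges on matching real photographs to terrain-model renders in a shared descriptor space, here is a minimal PyTorch sketch of that idea: two small CNN branches (one per domain) produce normalized descriptors, and a query photo is localized by picking the render with the highest cosine similarity. The network shape, input sizes, and pose records are hypothetical, and the training loss that would make the embedding cross-domain is omitted; this illustrates the matching step, not the patented system.

        # Minimal sketch (illustrative only): cross-domain descriptor matching
        # between a real photo and rendered terrain views with known poses.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def make_encoder() -> nn.Sequential:
            # Tiny CNN that maps a 3x64x64 crop to a 128-D descriptor.
            return nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, 128),
            )

        photo_encoder = make_encoder()   # branch for real photographs
        render_encoder = make_encoder()  # branch for terrain-model renders

        # Hypothetical inputs: one query photo and 8 candidate renders, each
        # render carrying a known camera pose from the terrain model.
        query_photo = torch.randn(1, 3, 64, 64)
        render_crops = torch.randn(8, 3, 64, 64)
        render_poses = [f"pose_{i}" for i in range(8)]  # placeholder pose records

        with torch.no_grad():
            q = F.normalize(photo_encoder(query_photo), dim=1)    # 1 x 128
            r = F.normalize(render_encoder(render_crops), dim=1)  # 8 x 128
            similarity = q @ r.t()                                 # cosine similarities
            best = int(similarity.argmax())

        # The matched render's known camera pose becomes the photo's estimate.
        print("estimated camera pose record:", render_poses[best])

    In practice the matched pose would seed a finer refinement, but the nearest-descriptor lookup above is the core of localizing a photo against renders whose poses are already known.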

    Generating immersive trip photograph visualizations

    Publication number: US10825246B2

    Publication date: 2020-11-03

    Application number: US16144487

    Filing date: 2018-09-27

    Applicant: Adobe Inc.

    Abstract: A user selects a set of photographs from a trip through an environment that he or she desires to present to other people. A collection of photographs, including the set captured during the trip and optionally augmented with additional photographs from another collection, is combined with a terrain model (e.g., a digital elevation model) to extract the geographic location of each photograph within the environment. The collection is analyzed, considering both the geolocation information and the photograph content, to register the photographs relative to one another. This information is then compared to the terrain model to accurately position the viewpoint of each photograph within the environment. Finally, a presentation of the selected photographs within the environment is generated that displays both the selected photographs and synthetic data filled in beyond their edges.

    Generating Immersive Trip Photograph Visualizations

    Publication number: US20200105059A1

    Publication date: 2020-04-02

    Application number: US16144487

    Filing date: 2018-09-27

    Applicant: Adobe Inc.

    Abstract: A user selects a set of photographs from a trip through an environment that he or she desires to present to other people. A collection of photographs, including the set captured during the trip and optionally augmented with additional photographs from another collection, is combined with a terrain model (e.g., a digital elevation model) to extract the geographic location of each photograph within the environment. The collection is analyzed, considering both the geolocation information and the photograph content, to register the photographs relative to one another. This information is then compared to the terrain model to accurately position the viewpoint of each photograph within the environment. Finally, a presentation of the selected photographs within the environment is generated that displays both the selected photographs and synthetic data filled in beyond their edges.

    Large-scale outdoor augmented reality scenes using camera pose based on learned descriptors

    Publication number: US11568642B2

    Publication date: 2023-01-31

    Application number: US17068429

    Filing date: 2020-10-12

    Applicant: ADOBE INC.

    IPC classification: G06V20/20 G06N20/00 G06T7/70

    Abstract: Methods and systems are provided for facilitating large-scale augmented reality for outdoor scenes using estimated camera pose information. In particular, the camera pose of an image can be estimated by matching the image against renders of a ground-truth terrain model whose camera poses are known. To match images with such renders, a data-driven cross-domain feature embedding can be learned using a neural network. The resulting cross-domain feature descriptors enable efficient and accurate feature matching between the image and the terrain-model renders. This feature matching localizes the image in relation to the terrain model, and the known camera pose of the matched render can then be used to estimate the camera pose of the image.

    Generating immersive trip photograph visualizations

    Publication number: US11113882B2

    Publication date: 2021-09-07

    Application number: US17062079

    Filing date: 2020-10-02

    Applicant: Adobe Inc.

    Abstract: A user selects a set of photographs from a trip through an environment that he or she desires to present to other people. A collection of photographs, including the set captured during the trip and optionally augmented with additional photographs from another collection, is combined with a terrain model (e.g., a digital elevation model) to extract the geographic location of each photograph within the environment. The collection is analyzed, considering both the geolocation information and the photograph content, to register the photographs relative to one another. This information is then compared to the terrain model to accurately position the viewpoint of each photograph within the environment. Finally, a presentation of the selected photographs within the environment is generated that displays both the selected photographs and synthetic data filled in beyond their edges.