Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity

    Publication No.: US10852902B2

    Publication Date: 2020-12-01

    Application No.: US16426323

    Filing Date: 2019-05-30

    Applicant: Fyusion, Inc.

    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Here, the multi-view interactive digital media representations capture dynamic objects against their backgrounds. A first multi-view interactive digital media representation of a dynamic object is obtained. Next, the dynamic object is tagged. Then, a second multi-view interactive digital media representation of the dynamic object is generated. Finally, the dynamic object in the second multi-view interactive digital media representation is automatically identified and tagged.
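The tag-propagation step described above (tag an object once, then automatically re-identify and re-tag it in a later representation) can be sketched as a nearest-descriptor match. This is a hypothetical illustration, not the patented method: the `propagate_tags` function, the plain-list descriptors, and the distance threshold are all assumptions standing in for whatever object recognition the actual system uses.

```python
# Hypothetical sketch: propagate a manually applied tag from objects in a
# first multi-view representation to objects detected in a second one,
# by matching simple feature descriptors (plain lists of floats).

def propagate_tags(tagged_objects, new_objects, threshold=0.5):
    """Assign each new object the tag of its closest tagged descriptor.

    tagged_objects: list of (descriptor, tag) pairs from the first
        representation, where tags were applied manually.
    new_objects: list of descriptors detected in the second
        representation.
    Returns a list of tags parallel to new_objects; None means no
    tagged object matched within the distance threshold.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    tags = []
    for desc in new_objects:
        best_tag, best_dist = None, threshold
        for ref_desc, tag in tagged_objects:
            d = distance(desc, ref_desc)
            if d < best_dist:
                best_tag, best_dist = tag, d
        tags.append(best_tag)
    return tags
```

In a real pipeline the descriptors would come from a learned recognition model rather than raw coordinates, but the propagation logic (match against previously tagged instances, fall back to untagged) has the same shape.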

    Artificially rendering images using viewpoint interpolation and extrapolation

    Publication No.: US10726593B2

    Publication Date: 2020-07-28

    Application No.: US14860983

    Filing Date: 2015-09-22

    Applicant: Fyusion, Inc.

    Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and/or extrapolation. According to particular embodiments, a transformation between a first frame and a second frame is estimated, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location positioned on a trajectory between the first location and the second location is then generated. To do so, a transformation is interpolated from the first location to the third location and from the third location to the second location, and image information is gathered from both frames: first image information is transferred from the first frame to the third frame based on the interpolated transformation, and second image information is transferred from the second frame to the third frame in the same way. The first image information and the second image information are then combined. If an occlusion is created by a change in layer placement between the first frame and the second frame, the occlusion is detected and missing data is replaced to fill it.
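The core of the abstract, interpolating a transformation partway along the trajectory and blending pixel information gathered from both frames, can be sketched in a few lines. This is a minimal illustration under strong simplifying assumptions: `interpolate_affine` blends a 2x3 affine matrix linearly with the identity (a real renderer would decompose rotation and scale before interpolating), and `render_intermediate` stands in for the per-pixel transfer-and-combine step with a plain cross-fade.

```python
import numpy as np

def interpolate_affine(T, ratio):
    """Interpolate between the identity and a 2x3 affine transform T.

    ratio=0 gives the identity (stay at the first location);
    ratio=1 gives the full transform (arrive at the second location).
    Crude linear blend; adequate for translations, not rotations.
    """
    identity = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])
    return (1.0 - ratio) * identity + ratio * np.asarray(T, dtype=float)

def render_intermediate(frame1, frame2, ratio):
    """Combine image information from both frames for a viewpoint at
    `ratio` along the trajectory, here as a simple weighted blend."""
    return (1.0 - ratio) * np.asarray(frame1, dtype=float) \
        + ratio * np.asarray(frame2, dtype=float)
```

In the patented scheme the two frames are first warped toward the third viewpoint using the interpolated transformation before being combined, and detected occlusions are filled from whichever frame still sees the missing region; the sketch above only shows the interpolation and combination skeleton.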

    Live augmented reality using tracking

    Publication No.: US10713851B2

    Publication Date: 2020-07-14

    Application No.: US16186994

    Filing Date: 2018-11-12

    Applicant: Fyusion, Inc.

    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
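The abstract's key step, using the positions of tracking points on the real object to position a virtual object in each synthetic image, can be sketched as anchoring the virtual object relative to the tracked points. The function below is a hypothetical stand-in: a real AR pipeline would estimate a full camera pose from the tracking points, whereas this sketch simply anchors at their 2D centroid plus an offset.

```python
def place_virtual_object(tracking_points, offset=(0.0, 0.0)):
    """Position a virtual object relative to tracked 2D points.

    tracking_points: list of (x, y) image coordinates tracked on the
        real object as the camera moves.
    offset: where the virtual object sits relative to the tracked
        object's centroid (a stand-in for full pose estimation).
    Returns the (x, y) at which to render the virtual object in the
    current synthetic image.
    """
    n = len(tracking_points)
    cx = sum(p[0] for p in tracking_points) / n
    cy = sum(p[1] for p in tracking_points) / n
    return (cx + offset[0], cy + offset[1])
```

Because the tracking points move with the real object from frame to frame, recomputing this anchor every frame keeps the rendered virtual object attached to the real one in the live, real-time output.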

    Live augmented reality guides

    Invention application

    Publication No.: US20190392650A1

    Publication Date: 2019-12-26

    Application No.: US16564598

    Filing Date: 2019-09-09

    Applicant: Fyusion, Inc.

    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help the user keep the image capture device moving along a desired trajectory.
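The guide's job, nudging the user back toward the desired capture trajectory, reduces to comparing the device's current position against the trajectory's target position for the current capture step and rendering a corrective hint. The sketch below is a hypothetical simplification assuming a one-dimensional (horizontal) deviation check; the function name, tolerance, and hint strings are illustrative, not from the patent.

```python
def guide_hint(device_pos, target_pos, tolerance=0.1):
    """Return a corrective hint for a synthetic on-screen guide.

    device_pos, target_pos: (x, y) positions of the capture device and
        of the desired trajectory point for the current frame.
    Returns 'left' or 'right' when the device has drifted horizontally
    beyond the tolerance, and 'ok' when it is on track.
    """
    dx = device_pos[0] - target_pos[0]
    if dx > tolerance:
        return "left"   # drifted right of the trajectory: guide left
    if dx < -tolerance:
        return "right"  # drifted left of the trajectory: guide right
    return "ok"
```

Rendered in real time into the live camera feed, such a hint (for example, an arrow or a target marker drawn as a synthetic object) is what keeps the captured images evenly distributed along the trajectory the surround view needs.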
