VIRTUAL PHOTOGRAMMETRY
    1.
    Invention Application

    Publication (Announcement) No.: US20210201576A1

    Publication (Announcement) Date: 2021-07-01

    Application No.: US17204169

    Filing Date: 2021-03-17

    Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
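The first stage the abstract describes, turning stored per-pixel depth values into a point cloud, can be sketched as a standard back-projection under a pinhole camera model. This is an illustrative reconstruction, not code from the patent; the function name `depth_to_point_cloud` and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are assumptions for the sketch.

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3-D point cloud.

    depth: list of rows of z values (depth per pixel); z <= 0 means no surface hit.
    fx, fy: focal lengths in pixels; cx, cy: principal point (pinhole model).
    Returns a list of (x, y, z) points in camera space.
    """
    points = []
    for v, row in enumerate(depth):          # v: pixel row index
        for u, z in enumerate(row):          # u: pixel column index
            if z <= 0:
                continue                     # skip pixels with no depth sample
            # Invert the pinhole projection: pixel (u, v) at depth z -> 3-D point
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Example: a 2x2 depth map where every pixel lies on a plane at depth 1.
cloud = depth_to_point_cloud([[1.0, 1.0], [1.0, 1.0]], fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Points from several snapshots, each transformed by its camera pose into a common frame, would then be merged into the single point cloud from which the mesh is built.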

    Virtual photogrammetry
    2.
    Invention Grant

    Publication (Announcement) No.: US10984587B2

    Publication (Announcement) Date: 2021-04-20

    Application No.: US16434972

    Filing Date: 2019-06-07

    Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.

    Virtual photogrammetry
    3.
    Invention Grant

    Publication (Announcement) No.: US11625894B2

    Publication (Announcement) Date: 2023-04-11

    Application No.: US17204169

    Filing Date: 2021-03-17

    Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
