System and method for generating a three-dimensional photographic image

    Publication number: US12198247B2

    Publication date: 2025-01-14

    Application number: US17806029

    Filing date: 2022-06-08

    Abstract: A method includes receiving, from a camera, one or more frames of image data of a scene comprising a background and one or more three-dimensional objects, wherein each frame comprises a raster of pixels of image data; detecting layer information of the scene, wherein the layer information is associated with a depth-based distribution of the pixels in the one or more frames; and determining a multi-layer model for the scene, the multi-layer model comprising a plurality of discrete layers comprising first and second discrete layers, wherein each discrete layer is associated with a unique depth value relative to the camera. The method further includes mapping the pixels to the layers of the plurality of discrete layers; rendering the pixels as a first image of the scene as viewed from a first perspective; and rendering the pixels as a second image of the scene as viewed from a second perspective.
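    A minimal sketch of the layered-rendering idea described above, assuming NumPy arrays for the frame and per-pixel depth: pixels are quantized into a fixed number of discrete depth layers, and each layer is shifted horizontally in proportion to its proximity to the camera to synthesize the two perspective images. The function name, layer count, and disparity model are illustrative assumptions, not the patent's specified implementation.

```python
import numpy as np

def render_stereo_from_layers(frame, depth, num_layers=8, max_disparity=12):
    """Quantize pixels into discrete depth layers, then re-render the scene
    from two horizontally offset viewpoints by shifting each layer.

    frame: (H, W, 3) uint8 image; depth: (H, W) float depth per pixel.
    The shift model and parameter values are assumptions for this sketch.
    """
    h, w, _ = frame.shape
    # Map each pixel to one of `num_layers` discrete layers (0 = nearest).
    d_min, d_max = depth.min(), depth.max()
    layer_ids = np.clip(
        ((depth - d_min) / (d_max - d_min + 1e-6) * num_layers).astype(int),
        0, num_layers - 1)

    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    # Composite layers back-to-front so nearer layers overwrite farther ones.
    for layer in range(num_layers - 1, -1, -1):
        mask = layer_ids == layer
        # Nearer layers receive larger horizontal parallax.
        shift = int(round(max_disparity * (1.0 - layer / max(num_layers - 1, 1))))
        ys, xs = np.nonzero(mask)
        xl = np.clip(xs + shift, 0, w - 1)   # left-eye view shifts right
        xr = np.clip(xs - shift, 0, w - 1)   # right-eye view shifts left
        left[ys, xl] = frame[ys, xs]
        right[ys, xr] = frame[ys, xs]
    return left, right
```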

    SYSTEM AND METHOD FOR DISOCCLUDED REGION COMPLETION FOR VIDEO RENDERING IN VIDEO SEE-THROUGH (VST) AUGMENTED REALITY (AR)

    Publication number: US20240078765A1

    Publication date: 2024-03-07

    Application number: US18353610

    Filing date: 2023-07-17

    Inventor: Yingen Xiong

    CPC classification number: G06T19/006 G06F3/14 G06T5/005 G06V10/25

    Abstract: A method includes generating a virtual view image and a virtual depth map based on an image captured using a see-through camera and a corresponding depth map. The virtual view image and the virtual depth map include holes for which image data or depth data cannot be determined. The method also includes searching one or more previous images to locate a region in at least one previous image that includes missing pixels in the holes. The method further includes at least partially filling the holes in the virtual view image and the virtual depth map with image data and depth data associated with the located region to generate a filled virtual view image and a filled virtual depth map. In addition, the method includes generating a virtual view to present on a display panel of a VST AR device using the filled virtual view image and the filled virtual depth map.
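    The hole-filling step can be illustrated with a simple sketch, assuming the virtual view image, virtual depth map, hole mask, and a list of previous frames are available as NumPy arrays. This version just copies co-located pixels from the most recent previous frame that has valid data; the patent's region search is more involved, so treat the function and its parameters as placeholders.

```python
import numpy as np

def fill_holes_from_history(virtual_img, virtual_depth, hole_mask, history):
    """Fill disocclusion holes in a reprojected (virtual) view using data
    from previously captured frames.

    virtual_img: (H, W, 3); virtual_depth: (H, W); hole_mask: (H, W) bool;
    history: list of (image, depth) tuples from earlier frames.
    """
    filled_img = virtual_img.copy()
    filled_depth = virtual_depth.copy()
    remaining = hole_mask.copy()
    for prev_img, prev_depth in reversed(history):   # newest frame first
        valid = remaining & np.isfinite(prev_depth)  # holes this frame can fill
        filled_img[valid] = prev_img[valid]
        filled_depth[valid] = prev_depth[valid]
        remaining &= ~valid
        if not remaining.any():
            break
    return filled_img, filled_depth
```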

    Reconstructing A Three-Dimensional Scene

    Publication number: US20230245322A1

    Publication date: 2023-08-03

    Application number: US17875429

    Filing date: 2022-07-28

    Abstract: In one embodiment, a method includes identifying, in each image of a stereoscopic pair of images of a scene at a particular time, every pixel as either a static pixel corresponding to a portion of a scene that does not have local motion at that time or a dynamic pixel corresponding to a portion of a scene that has local motion at that time. For each static pixel, the method includes comparing each of a plurality of depth calculations for the pixel, and when the depth calculations differ by at least a threshold amount, then re-labeling that pixel as a dynamic pixel. For each dynamic pixel, the method includes comparing a geometric 3D calculation for the pixel with a temporal 3D calculation for that pixel, and when the geometric 3D calculation and the temporal 3D calculation are within a threshold amount, then re-labeling the pixel as a static pixel.
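    A compact sketch of the re-labeling logic, assuming NumPy arrays for the initial static/dynamic mask, a stack of independent depth calculations, and geometric and temporal 3D point estimates per pixel. The threshold value and array layout are assumptions for illustration only.

```python
import numpy as np

def refine_motion_labels(static_mask, depth_estimates, geo_3d, temporal_3d, tau=0.05):
    """Re-label static/dynamic pixels using depth-consistency checks.

    static_mask: (H, W) bool, True where a pixel was initially labeled static.
    depth_estimates: (K, H, W) stack of K independent depth calculations.
    geo_3d, temporal_3d: (H, W, 3) 3D estimates from geometric stereo and
    temporal tracking, respectively.
    """
    refined = static_mask.copy()

    # Static pixels whose depth calculations disagree by >= tau become dynamic.
    depth_spread = depth_estimates.max(axis=0) - depth_estimates.min(axis=0)
    refined[static_mask & (depth_spread >= tau)] = False

    # Dynamic pixels whose geometric and temporal 3D estimates agree within
    # tau become static.
    agreement = np.linalg.norm(geo_3d - temporal_3d, axis=-1) < tau
    refined[~static_mask & agreement] = True
    return refined
```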

    SYSTEM AND METHOD FOR OPTICAL CALIBRATION OF A HEAD-MOUNTED DISPLAY

    Publication number: US20230137199A1

    Publication date: 2023-05-04

    Application number: US17696729

    Filing date: 2022-03-16

    Abstract: A system and method for display distortion calibration are configured to capture distortion with image patterns and calibrate distortion with ray tracing for an optical pipeline with lenses. The system includes an image sensor and a processor to perform the method for display distortion calibration. The method includes generating an image pattern to encode display image pixels by encoding display distortion associated with a plurality of image patterns. The method also includes determining a distortion of the image pattern resulting from a lens on a head-mounted display (HMD) and decoding the distorted image patterns to obtain distortion of pixels on a display. A lookup table is created of angular distortion of all the pixels on the display. The method further includes providing a compensation factor for the distortion by creating distortion correction based on the lookup table of angular distortion.
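    A rough sketch of the lookup-table idea, assuming decoded calibration pattern correspondences are available as point arrays. It builds a per-pixel displacement table (a simplification of the angular table the abstract describes) and uses it to pre-warp an image so the lens distortion cancels out; the nearest-neighbor spreading and function names are illustrative choices, not the patent's stated method.

```python
import numpy as np

def build_distortion_lut(ideal_xy, observed_xy, display_shape):
    """Build a per-pixel displacement lookup table from decoded calibration
    pattern points.

    ideal_xy, observed_xy: (N, 2) corresponding pattern positions on the
    display and as seen through the HMD lens.
    """
    h, w = display_shape
    delta = observed_xy - ideal_xy                       # measured displacement per point
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    pixels = np.stack([grid_x.ravel(), grid_y.ravel()], axis=1).astype(float)
    # Assign each display pixel the displacement of its nearest calibration point.
    dists = np.linalg.norm(pixels[:, None, :] - ideal_xy[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return delta[nearest].reshape(h, w, 2)               # (H, W, 2) lookup table

def compensate(image, lut):
    """Pre-warp an image by sampling at inverse-displaced coordinates
    (nearest-pixel sampling for brevity)."""
    h, w = lut.shape[:2]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(grid_x - lut[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(grid_y - lut[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]
```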

    REAL-TIME PHOTOREALISTIC VIEW RENDERING ON AUGMENTED REALITY (AR) DEVICE

    Publication number: US20240046583A1

    Publication date: 2024-02-08

    Application number: US18353579

    Filing date: 2023-07-17

    Abstract: A method includes obtaining images of a scene and corresponding position data of a device that captures the images. The method also includes determining position data and direction data associated with camera rays passing through keyframes of the images. The method further includes using a position-dependent multilayer perceptron (MLP) and a direction-dependent MLP to create sparse feature vectors. The method also includes storing the sparse feature vectors in at least one data structure. The method further includes receiving a request to render the scene on an augmented reality (AR) device associated with a viewing direction. In addition, the method includes rendering the scene associated with the viewing direction using the sparse feature vectors in the at least one data structure.
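    The feature-caching pipeline can be sketched with two toy MLPs and a voxel-keyed dictionary standing in for the sparse data structure, all in NumPy. The random (untrained) weights, ray-sampling scheme, and voxel hashing are assumptions; a real system would train the MLPs on the captured keyframes and use a more elaborate sparse structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dim_in, dim_hidden, dim_out):
    """Return a tiny two-layer MLP forward function with random placeholder
    weights; in practice these would be trained on the keyframes."""
    w1 = rng.normal(scale=0.1, size=(dim_in, dim_hidden))
    w2 = rng.normal(scale=0.1, size=(dim_hidden, dim_out))
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

position_mlp = mlp(3, 32, 16)    # position-dependent features
direction_mlp = mlp(3, 16, 8)    # direction-dependent features

def cache_sparse_features(ray_origins, ray_dirs, voxel_size=0.25):
    """Evaluate both MLPs along camera rays and store the resulting feature
    vectors sparsely, keyed by the voxel each sample falls in."""
    cache = {}
    ts = np.linspace(0.5, 4.0, 16)                       # sample depths along each ray
    for o, d in zip(ray_origins, ray_dirs):
        points = o + ts[:, None] * d                     # (16, 3) samples on the ray
        keys = np.floor(points / voxel_size).astype(int)
        pos_feat = position_mlp(points)
        dir_feat = direction_mlp(np.tile(d, (len(ts), 1)))
        for k, pf, df in zip(map(tuple, keys), pos_feat, dir_feat):
            cache[k] = np.concatenate([pf, df])          # latest sample wins
    return cache

def query_cached_features(cache, query_points, voxel_size=0.25):
    """Look up cached feature vectors for novel-view rendering; missing
    voxels fall back to zero features in this simplification."""
    feats = [cache.get(tuple(np.floor(p / voxel_size).astype(int)), np.zeros(24))
             for p in query_points]
    return np.stack(feats)
```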

    SYSTEM AND METHOD FOR DEPTH AND SCENE RECONSTRUCTION FOR AUGMENTED REALITY OR EXTENDED REALITY DEVICES

    Publication number: US20230140170A1

    Publication date: 2023-05-04

    Application number: US17811028

    Filing date: 2022-07-06

    Abstract: A method includes obtaining first and second image data of a real-world scene, performing feature extraction to obtain first and second feature maps, and performing pose tracking based on at least one of the first image data, second image data, and pose data to obtain a 6DOF pose of an apparatus. The method also includes generating, based on the 6DOF pose, first feature map, and second feature map, a disparity map between the image data and generating an initial depth map based on the disparity map. The method further includes generating a dense depth map based on the initial depth map and a camera model and generating, based on the dense depth map, a three-dimensional reconstruction of at least part of the scene. In addition, the method includes rendering an AR or XR display that includes one or more virtual objects positioned to contact one or more surfaces of the reconstruction.
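    A minimal sketch of the disparity-to-depth step under a pinhole stereo model (depth = f · B / disparity), with a naive row-wise fill standing in for the camera-model-based densification. The focal length, baseline, and hole-filling strategy are assumptions for illustration.

```python
import numpy as np

def disparity_to_dense_depth(disparity, focal_length_px, baseline_m):
    """Convert a stereo disparity map to a dense metric depth map.

    disparity: (H, W) disparities in pixels; focal_length_px: focal length in
    pixels; baseline_m: stereo baseline in meters. Invalid (non-positive)
    disparities are filled from the last valid value along each row.
    """
    depth = np.full_like(disparity, np.nan, dtype=float)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]

    # Naive densification: propagate the last valid depth along each row.
    for row in depth:
        last = np.nan
        for i in range(row.size):
            if np.isnan(row[i]):
                row[i] = last
            else:
                last = row[i]
    return depth
```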
