REAL-TIME PHOTOREALISTIC VIEW RENDERING ON AUGMENTED REALITY (AR) DEVICE

    Publication No.: US20240046583A1

    Publication Date: 2024-02-08

    Application No.: US18353579

    Application Date: 2023-07-17

    Abstract: A method includes obtaining images of a scene and corresponding position data of a device that captures the images. The method also includes determining position data and direction data associated with camera rays passing through keyframes of the images. The method further includes using a position-dependent multilayer perceptron (MLP) and a direction-dependent MLP to create sparse feature vectors. The method also includes storing the sparse feature vectors in at least one data structure. The method further includes receiving a request to render the scene on an augmented reality (AR) device associated with a viewing direction. In addition, the method includes rendering the scene associated with the viewing direction using the sparse feature vectors in the at least one data structure.
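    The pipeline this abstract describes resembles a cached neural-field renderer: MLP outputs are computed per ray sample and stored sparsely for reuse at render time. Below is a minimal Python sketch of that storage idea, with random-weight networks standing in for trained ones. The layer widths, voxel cell size, and the additive combination of position and direction features are illustrative assumptions, not details from the patent.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_mlp(widths):
            # Random-weight stand-in for a trained MLP (layer widths are assumed).
            ws = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(widths, widths[1:])]
            def forward(x):
                for w in ws[:-1]:
                    x = np.maximum(x @ w, 0.0)   # ReLU hidden layers
                return x @ ws[-1]
            return forward

        position_mlp = make_mlp([3, 64, 16])    # 3D sample position -> feature vector
        direction_mlp = make_mlp([3, 64, 16])   # unit view direction -> feature vector

        feature_cache = {}   # sparse structure: quantized voxel key -> feature vector

        def features_along_ray(origin, direction, n_samples=8, cell=0.25):
            # Sample points along one camera ray and cache a feature per voxel cell.
            d = direction / np.linalg.norm(direction)
            dir_feat = direction_mlp(d)
            feats = []
            for t in np.linspace(0.5, 4.0, n_samples):
                p = origin + t * d
                key = tuple(np.floor(p / cell).astype(int))
                if key not in feature_cache:            # compute once, reuse later
                    feature_cache[key] = position_mlp(p) + dir_feat
                feats.append(feature_cache[key])
            return feats

        feats = features_along_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
        print(len(feature_cache), "cells cached,", feats[0].shape, "features each")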

    SYSTEM AND METHOD FOR DEPTH AND SCENE RECONSTRUCTION FOR AUGMENTED REALITY OR EXTENDED REALITY DEVICES

    Publication No.: US20230140170A1

    Publication Date: 2023-05-04

    Application No.: US17811028

    Application Date: 2022-07-06

    Abstract: A method includes obtaining first and second image data of a real-world scene, performing feature extraction to obtain first and second feature maps, and performing pose tracking based on at least one of the first image data, second image data, and pose data to obtain a 6DOF pose of an apparatus. The method also includes generating, based on the 6DOF pose, first feature map, and second feature map, a disparity map between the first and second image data and generating an initial depth map based on the disparity map. The method further includes generating a dense depth map based on the initial depth map and a camera model and generating, based on the dense depth map, a three-dimensional reconstruction of at least part of the scene. In addition, the method includes rendering an AR or XR display that includes one or more virtual objects positioned to contact one or more surfaces of the reconstruction.
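    The disparity-to-depth step in this abstract follows the standard pinhole stereo relation depth = focal * baseline / disparity. The sketch below applies that relation and then densifies the result by averaging valid 4-neighbors into invalid pixels; the focal length, baseline, and the naive fill rule are placeholders for the patent's camera model and learned densification.

        import numpy as np

        FOCAL_PX = 525.0     # hypothetical focal length in pixels
        BASELINE_M = 0.06    # hypothetical stereo baseline in meters

        def disparity_to_depth(disparity):
            # Pinhole stereo geometry: depth = f * B / disparity.
            depth = np.full(disparity.shape, np.inf)
            valid = disparity > 0
            depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
            return depth

        def densify(initial, max_iters=50):
            # Fill invalid (infinite) pixels with the mean of their valid
            # 4-neighbors, a crude stand-in for dense-depth-map generation.
            d = initial.copy()
            for _ in range(max_iters):
                invalid = ~np.isfinite(d)
                if not invalid.any():
                    break
                vals = np.pad(np.where(np.isfinite(d), d, 0.0), 1)
                cnts = np.pad(np.isfinite(d).astype(float), 1)
                nsum = vals[:-2, 1:-1] + vals[2:, 1:-1] + vals[1:-1, :-2] + vals[1:-1, 2:]
                ncnt = cnts[:-2, 1:-1] + cnts[2:, 1:-1] + cnts[1:-1, :-2] + cnts[1:-1, 2:]
                fill = invalid & (ncnt > 0)
                d[fill] = nsum[fill] / ncnt[fill]
            return d

        disp = np.zeros((4, 6))
        disp[1:3, 2:4] = 12.0            # toy disparity patch of 12 px
        print(np.round(densify(disparity_to_depth(disp)), 3))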

    METHOD AND APPARATUS FOR SCENE SEGMENTATION FOR THREE-DIMENSIONAL SCENE RECONSTRUCTION

    Publication No.: US20230092248A1

    Publication Date: 2023-03-23

    Application No.: US17805828

    Application Date: 2022-06-07

    Abstract: A method includes obtaining, from an image sensor, image data of a real-world scene; obtaining, from a depth sensor, sparse depth data of the real-world scene; and passing the image data to a first neural network to obtain one or more object regions of interest (ROIs) and one or more feature map ROIs. Each object ROI includes at least one detected object. The method also includes passing the image data and sparse depth data to a second neural network to obtain one or more dense depth map ROIs; aligning the one or more object ROIs, one or more feature map ROIs, and one or more dense depth map ROIs; and passing the aligned ROIs to a fully convolutional network to obtain a segmentation of the real-world scene. The segmentation contains one or more pixelwise predictions of one or more objects in the real-world scene.
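    The fusion step here hinges on resampling each ROI from differently shaped maps onto a common grid before the fully convolutional head runs. Below is a minimal Python sketch of that alignment using nearest-neighbor crop-and-resize; the map sizes, the ROI box format, and the output grid are assumptions for illustration.

        import numpy as np

        def crop_resize(fmap, roi, out_hw=(8, 8)):
            # Nearest-neighbor ROI alignment: crop an (H, W, C) map to the box
            # (y0, x0, y1, x1) and resample it onto a fixed out_hw grid.
            y0, x0, y1, x1 = roi
            ys = np.linspace(y0, y1 - 1, out_hw[0]).round().astype(int)
            xs = np.linspace(x0, x1 - 1, out_hw[1]).round().astype(int)
            return fmap[np.ix_(ys, xs)]

        H, W = 32, 32
        image_feats = np.random.rand(H, W, 16)   # stand-in feature map (network 1)
        dense_depth = np.random.rand(H, W, 1)    # stand-in dense depth map (network 2)
        roi = (4, 6, 20, 28)                     # one detected-object box

        # Stack the aligned channels so a fully convolutional head can emit
        # pixelwise class predictions for the object inside the ROI.
        aligned = np.concatenate(
            [crop_resize(image_feats, roi), crop_resize(dense_depth, roi)], axis=-1)
        print(aligned.shape)   # (8, 8, 17)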

    System and method for depth map recovery

    Publication No.: US11468587B2

    Publication Date: 2022-10-11

    Application No.: US17093519

    Application Date: 2020-11-09

    Abstract: A method for reconstructing a downsampled depth map includes receiving, at an electronic device, image data to be presented on a display of the electronic device at a first resolution, wherein the image data includes a color image and the downsampled depth map associated with the color image. The method further includes generating a high resolution depth map by calculating, for each point making up the first resolution, a depth value based on a normalized pose difference across a neighborhood of points for the point, a normalized color texture difference across the neighborhood of points for the point, and a normalized spatial difference across the neighborhood of points. Still further, the method includes outputting, on the display, a reprojected image at the first resolution based on the color image and the high resolution depth map. The downsampled depth map is at a resolution less than the first resolution.
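    The per-point weighting this abstract describes is in the family of joint bilateral upsampling: each high-resolution depth value is a neighborhood average whose weights shrink with color and spatial differences. The sketch below implements that family with Gaussian weights; the pose-difference term is omitted, and the sigmas and neighborhood radius are assumptions rather than the patent's formulation.

        import numpy as np

        def upsample_depth(low_depth, color, scale, sigma_c=0.1, sigma_s=2.0, r=2):
            # Joint-bilateral-style upsampling: weight each neighbor by a
            # color-difference term and a spatial-distance term.
            H, W = color.shape[:2]
            near = np.kron(low_depth, np.ones((scale, scale)))[:H, :W]
            out = np.zeros((H, W))
            for y in range(H):
                for x in range(W):
                    wsum = dsum = 0.0
                    for dy in range(-r, r + 1):
                        for dx in range(-r, r + 1):
                            ny = min(max(y + dy, 0), H - 1)
                            nx = min(max(x + dx, 0), W - 1)
                            wc = np.exp(-np.sum((color[y, x] - color[ny, nx]) ** 2)
                                        / (2 * sigma_c ** 2))
                            ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                            wsum += wc * ws
                            dsum += wc * ws * near[ny, nx]
                    out[y, x] = dsum / wsum
            return out

        low = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy low-res depth map
        guide = np.random.rand(4, 4, 3)            # toy high-res color image
        print(np.round(upsample_depth(low, guide, scale=2), 2))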

    SYSTEM AND METHOD FOR DEPTH MAP RECOVERY

    Publication No.: US20210358158A1

    Publication Date: 2021-11-18

    Application No.: US17093519

    Application Date: 2020-11-09

    Abstract: A method for reconstructing a downsampled depth map includes receiving, at an electronic device, image data to be presented on a display of the electronic device at a first resolution, wherein the image data includes a color image and the downsampled depth map associated with the color image. The method further includes generating a high resolution depth map by calculating, for each point making up the first resolution, a depth value based on a normalized pose difference across a neighborhood of points for the point, a normalized color texture difference across the neighborhood of points for the point, and a normalized spatial difference across the neighborhood of points. Still further, the method includes outputting, on the display, a reprojected image at the first resolution based on the color image and the high resolution depth map. The downsampled depth map is at a resolution less than the first resolution.

    System and method for depth map
    Granted Patent

    Publication No.: US10523918B2

    Publication Date: 2019-12-31

    Application No.: US15830832

    Application Date: 2017-12-04

    Abstract: A method, electronic device, and non-transitory computer readable medium for transmitting information are provided. The method includes receiving, from each of two 360-degree cameras, image data. The method also includes synchronizing the received image data from each of the two cameras. Additionally, the method includes creating a depth map from the received image data based in part on a distance between the two cameras. The method also includes generating multi-dimensional content by combining the created depth map with the synchronized image data of at least one of the two cameras.
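    Depth from two synchronized 360-degree cameras reduces to triangulation once the same point's bearing is known in both views. The sketch below pairs frames by timestamp and applies the law of sines with the inter-camera distance; the baseline, the timing tolerance, and the angle convention are all illustrative assumptions.

        import math

        BASELINE_M = 0.20   # assumed distance between the two 360-degree cameras

        def sync_frames(frames_a, frames_b, tol=0.010):
            # Pair each camera-A frame with the closest-in-time camera-B frame,
            # keeping only pairs within `tol` seconds of each other.
            pairs, j = [], 0
            for ta, fa in frames_a:
                while (j + 1 < len(frames_b)
                       and abs(frames_b[j + 1][0] - ta) < abs(frames_b[j][0] - ta)):
                    j += 1
                if abs(frames_b[j][0] - ta) <= tol:
                    pairs.append((fa, frames_b[j][1]))
            return pairs

        def triangulate(theta_a, theta_b, baseline=BASELINE_M):
            # Law of sines: theta_a and theta_b are the angles (radians) each
            # camera's ray makes with the baseline; returns range from camera A.
            return baseline * math.sin(theta_b) / math.sin(theta_a + theta_b)

        pairs = sync_frames([(0.000, "A0"), (0.033, "A1")],
                            [(0.001, "B0"), (0.034, "B1")])
        print(pairs)                                              # [('A0','B0'), ('A1','B1')]
        print(round(triangulate(math.radians(80), math.radians(80)), 3))  # ~0.576 m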

    SYSTEM AND METHODS FOR DEVICE TRACKING
    Patent Application

    Publication No.: US20190012792A1

    Publication Date: 2019-01-10

    Application No.: US15854620

    Application Date: 2017-12-26

    Abstract: A method for tracking a position of a device is provided, wherein the method includes capturing, at a first positional resolution, based on information from a first sensor, a first position of the device within an optical tracking zone of the first sensor. The method also includes determining, based on information from the first sensor, that the device exits the optical tracking zone of the first sensor. Further, the method includes, responsive to determining that the device exits the optical tracking zone of the first sensor, capturing, at a second positional resolution, a second position of the device based on acceleration information from a second sensor, wherein the second positional resolution corresponds to a minimum threshold value for the acceleration information from the second sensor.
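    The handoff this abstract describes is optical tracking inside a bounded zone with accelerometer dead-reckoning outside it. Below is a minimal Python sketch of that handoff; the zone radius, the acceleration floor (standing in for the "minimum threshold value"), and the simple double integration are assumptions for illustration.

        import numpy as np

        OPTICAL_ZONE_RADIUS = 1.5   # assumed radius (m) of the optical tracking zone
        ACCEL_MIN = 0.05            # assumed minimum acceleration (m/s^2) the IMU resolves

        class Tracker:
            # Two-sensor handoff: use the optical pose while the device is inside
            # the tracking zone, otherwise dead-reckon from accelerometer data.
            def __init__(self):
                self.pos = np.zeros(3)
                self.vel = np.zeros(3)

            def update(self, optical_pos, accel, dt):
                if optical_pos is not None and np.linalg.norm(optical_pos) <= OPTICAL_ZONE_RADIUS:
                    self.pos = np.asarray(optical_pos, float)   # first positional resolution
                    self.vel[:] = 0.0
                else:
                    # Second positional resolution: ignore accelerations below
                    # the sensor's minimum threshold, then integrate twice.
                    a = np.where(np.abs(accel) >= ACCEL_MIN, accel, 0.0)
                    self.vel += a * dt
                    self.pos += self.vel * dt
                return self.pos

        t = Tracker()
        print(t.update([0.2, 0.0, 0.1], np.zeros(3), 1 / 100))      # inside zone: optical fix
        print(t.update(None, np.array([0.5, 0.0, 0.0]), 1 / 100))   # outside zone: IMU step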

    Method and system for video transformation for video see-through augmented reality

    Publication No.: US12154219B2

    Publication Date: 2024-11-26

    Application No.: US18052827

    Application Date: 2022-11-04

    Abstract: A method of video transformation for a video see-through (VST) augmented reality (AR) device includes obtaining video frames from multiple cameras associated with the VST AR device, where each video frame is associated with position data. The method also includes generating camera viewpoint depth maps associated with the video frames based on the video frames and the position data. The method further includes performing depth re-projection to transform the video frames from camera viewpoints to rendering viewpoints using the camera viewpoint depth maps. The method also includes performing hole filling of one or more holes created in one or more occlusion areas of at least one of the transformed video frames during the depth re-projection to generate at least one hole-filled video frame. In addition, the method includes displaying the transformed video frames including the at least one hole-filled video frame on multiple displays associated with the VST AR device.
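    Depth re-projection and hole filling can be illustrated with a one-axis parallax toy: each pixel shifts horizontally by f * shift / depth when moving from the camera viewpoint to the rendering viewpoint, and pixels that nothing maps to become occlusion holes. The focal length, viewpoint shift, and row-wise nearest-neighbor fill below are all illustrative assumptions, not the patent's method.

        import numpy as np

        def reproject(frame, depth, focal, shift_x_m):
            # Forward-warp a frame to a rendering viewpoint displaced along x,
            # using per-pixel depth and the pinhole parallax dx = f * shift / z.
            H, W = depth.shape
            out = np.zeros_like(frame)
            hole = np.ones((H, W), dtype=bool)
            for y in range(H):
                for x in range(W):
                    nx = x + int(round(focal * shift_x_m / depth[y, x]))
                    if 0 <= nx < W:
                        out[y, nx] = frame[y, x]
                        hole[y, nx] = False
            return out, hole

        def fill_holes(frame, hole):
            # Fill each hole pixel from the nearest valid pixel in the same row,
            # a simple stand-in for the patent's hole-filling step.
            out = frame.copy()
            H, W = hole.shape
            for y in range(H):
                valid = [x for x in range(W) if not hole[y, x]]
                for x in range(W):
                    if hole[y, x] and valid:
                        nearest = min(valid, key=lambda v: abs(v - x))
                        out[y, x] = out[y, nearest]
            return out

        frame = np.arange(16.0).reshape(4, 4)            # toy 4x4 grayscale frame
        depth = np.full((4, 4), 2.0)
        depth[:, 2] = 1.0                                # a near column adds parallax
        warped, hole = reproject(frame, depth, focal=100.0, shift_x_m=0.02)
        print(fill_holes(warped, hole))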
