DYNAMIC ALIGNMENT BETWEEN SEE-THROUGH CAMERAS AND EYE VIEWPOINTS IN VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)

    Publication Number: US20240346779A1

    Publication Date: 2024-10-17

    Application Number: US18630767

    Filing Date: 2024-04-09

    CPC classification number: G06T19/006 G06T5/80 H04N13/344

    Abstract: A method includes determining that an inter-pupillary distance (IPD) between display lenses of a video see-through (VST) extended reality (XR) device has been adjusted with respect to a default IPD. The method also includes obtaining an image captured using a see-through camera of the VST XR device. The see-through camera is configured to capture images of a three-dimensional (3D) scene. The method further includes transforming the image to match a viewpoint of a corresponding one of the display lenses according to a change in IPD with respect to the default IPD in order to generate a transformed image. The method also includes correcting distortions in the transformed image based on one or more lens distortion coefficients corresponding to the change in IPD in order to generate a corrected image. In addition, the method includes initiating presentation of the corrected image on a display panel of the VST XR device.
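
    As a rough illustration of the abstract above, the sketch below (plain Python with OpenCV and NumPy; the intrinsics, reference depth, and per-IPD distortion coefficients are hypothetical placeholders, not values from the patent) approximates the viewpoint shift caused by an IPD change as an image-space translation at a single reference depth and then undistorts the result.

        # Minimal sketch, assuming a pinhole camera model and a per-IPD lookup of
        # distortion coefficients; not the patented method.
        import cv2
        import numpy as np

        def transform_for_ipd(image, K, ipd_mm, default_ipd_mm, ref_depth_mm, dist_coeffs):
            # Half of the IPD change moves each display lens laterally from its default pose.
            delta_mm = 0.5 * (ipd_mm - default_ipd_mm)
            # Approximate the small lateral viewpoint shift as a translation of
            # delta * f / Z pixels at one reference depth Z.
            shift_px = delta_mm * K[0, 0] / ref_depth_mm
            M = np.float32([[1, 0, shift_px], [0, 1, 0]])
            h, w = image.shape[:2]
            warped = cv2.warpAffine(image, M, (w, h))
            # Correct lens distortion with coefficients tied to the current IPD setting.
            return cv2.undistort(warped, K, dist_coeffs)

        img = np.zeros((480, 640, 3), np.uint8)
        K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
        coeffs = np.zeros(5)  # k1, k2, p1, p2, k3 for the adjusted IPD (placeholder)
        out = transform_for_ipd(img, K, 65.0, 63.0, 1500.0, coeffs)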

    DEPTH-VARYING REPROJECTION PASSTHROUGH IN VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)

    Publication Number: US20240223742A1

    Publication Date: 2024-07-04

    Application Number: US18526726

    Filing Date: 2023-12-01

    CPC classification number: H04N13/344 G06T19/006 H04N13/128 H04N13/239

    Abstract: A method includes obtaining images of a scene captured using a stereo pair of imaging sensors of an XR device and depth data associated with the images, where the scene includes multiple objects. The method also includes obtaining volume-based 3D models of the objects. The method further includes, for one or more first objects, performing depth-based reprojection of the one or more 3D models of the one or more first objects to left and right virtual views based on one or more depths of the one or more first objects. The method also includes, for one or more second objects, performing constant-depth reprojection of the one or more 3D models of the one or more second objects to the left and right virtual views based on a specified depth. In addition, the method includes rendering the left and right virtual views for presentation by the XR device.
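
    One way to picture the two reprojection modes is the Python sketch below (the names and per-object data layout are assumptions, not the patented pipeline): objects flagged for depth-based reprojection are shifted by a disparity computed from their own depth, while the rest share a single disparity computed from the specified constant depth.

        # Minimal sketch: disparity = baseline * focal / depth, applied per object.
        import numpy as np

        def reproject_objects(objects, focal_px, baseline_mm, constant_depth_mm, shape):
            # objects: dicts with 'image' (HxWx3), 'mask' (HxW bool),
            # 'depth_mm' (float), and 'use_object_depth' (bool).
            left = np.zeros(shape, np.uint8)
            right = np.zeros(shape, np.uint8)
            for obj in objects:
                depth = obj["depth_mm"] if obj["use_object_depth"] else constant_depth_mm
                half = int(round(baseline_mm * focal_px / depth)) // 2
                layer = obj["image"] * obj["mask"][..., None]
                # Shift the object half a disparity toward each virtual eye.
                left_view = np.roll(layer, half, axis=1)
                right_view = np.roll(layer, -half, axis=1)
                left = np.where(left_view > 0, left_view, left)
                right = np.where(right_view > 0, right_view, right)
            return left, right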

    MASK GENERATION WITH OBJECT AND SCENE SEGMENTATION FOR PASSTHROUGH EXTENDED REALITY (XR)

    Publication Number: US20240223739A1

    Publication Date: 2024-07-04

    Application Number: US18360677

    Filing Date: 2023-07-27

    CPC classification number: H04N13/128 G06T19/006 H04N2013/0092

    Abstract: A method includes obtaining first and second image frames of a scene. The method also includes providing the first image frame as input to an object segmentation model, where the object segmentation model is trained to generate first object segmentation predictions for objects in the scene and a depth or disparity map based on the first image frame. The method further includes generating second object segmentation predictions for the objects in the scene based on the second image frame. The method also includes determining boundaries of the objects in the scene based on the first and second object segmentation predictions. In addition, the method includes generating a virtual view for presentation on a display of an extended reality (XR) device based on the boundaries of the objects in the scene.
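
    The sketch below gives one simplified reading of how two per-frame segmentation predictions might be fused into per-object boundaries (integer label maps and a background label of 0 are assumptions for illustration, not details from the patent).

        # Minimal sketch: keep pixels where both frames agree on an object label,
        # then take the boundary as mask pixels whose 4-neighborhood leaves the mask.
        import numpy as np

        def fuse_and_extract_boundaries(seg1, seg2):
            boundaries = {}
            for label in np.unique(seg1):
                if label == 0:   # background in this sketch
                    continue
                fused = (seg1 == label) & (seg2 == label)
                interior = np.zeros_like(fused)
                interior[1:-1, 1:-1] = (fused[1:-1, 1:-1]
                                        & fused[:-2, 1:-1] & fused[2:, 1:-1]
                                        & fused[1:-1, :-2] & fused[1:-1, 2:])
                boundaries[int(label)] = fused & ~interior
            return boundaries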

    System and method for depth map guided image hole filling

    Publication Number: US11670063B2

    Publication Date: 2023-06-06

    Application Number: US17463037

    Filing Date: 2021-08-31

    Abstract: An electronic device that reprojects two-dimensional (2D) images to three-dimensional (3D) images includes a memory configured to store instructions, and a processor configured to execute the instructions to: propagate an intensity for at least one pixel of an image based on a depth guide of neighboring pixels of the at least one pixel, wherein the at least one pixel is considered a hole during 2D to 3D image reprojection; propagate a color for the at least one pixel based on an intensity guide of the neighboring pixels of the at least one pixel; and compute at least one weight for the at least one pixel based on the intensity and color propagation.
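
    Read naively, the two propagation passes and the weight computation could look like the Python sketch below (the 4-neighborhood, the exponential weights, and the data layout are assumptions for illustration, not the claimed implementation).

        # Minimal sketch: depth-guided intensity fill, then intensity-guided color
        # fill, then a per-pixel confidence weight combining both guides.
        import numpy as np

        OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

        def fill_holes(intensity, color, depth, hole_mask, sigma_d=50.0, sigma_i=10.0):
            filled_i, filled_c = intensity.copy(), color.copy()
            weights = np.zeros(hole_mask.shape)
            h, w = hole_mask.shape
            for y, x in zip(*np.nonzero(hole_mask)):
                nbrs = [(y + dy, x + dx) for dy, dx in OFFSETS
                        if 0 <= y + dy < h and 0 <= x + dx < w
                        and not hole_mask[y + dy, x + dx]]
                if not nbrs:
                    continue
                # Intensity comes from the neighbor whose depth best matches the depth guide.
                ny, nx = min(nbrs, key=lambda p: abs(float(depth[p]) - float(depth[y, x])))
                filled_i[y, x] = intensity[ny, nx]
                # Color comes from the neighbor whose intensity best matches the propagated intensity.
                cy, cx = min(nbrs, key=lambda p: abs(float(intensity[p]) - float(filled_i[y, x])))
                filled_c[y, x] = color[cy, cx]
                # Weight for later blending, based on agreement with both guides.
                weights[y, x] = (np.exp(-abs(float(depth[ny, nx]) - float(depth[y, x])) / sigma_d)
                                 * np.exp(-abs(float(intensity[cy, cx]) - float(filled_i[y, x])) / sigma_i))
            return filled_i, filled_c, weights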

    IMAGE-GUIDED DEPTH PROPAGATION FOR SPACE-WARPING IMAGES

    Publication Number: US20220292631A1

    Publication Date: 2022-09-15

    Application Number: US17402005

    Filing Date: 2021-08-13

    Abstract: Updating an image during real-time rendering of images by a display device can include determining a depth for each pixel of a color frame received from a source device and corresponding to the image. Each pixel's depth is determined by image-guided propagation of depths of sparse points extracted from a depth map generated at the source device. With respect to pixels corresponding to an extracted sparse depth point, image-guided depth propagation can include retaining the depth of the corresponding sparse depth point unchanged from the source depth map. With respect to each pixel corresponding to a non-sparse depth point, image-guided depth propagation can include propagating to the corresponding non-sparse depth point a depth of a sparse depth point lying within a neighborhood of the non-sparse depth point. Pixel coordinates of the color frame can be transformed for generating a space-warped rendering of the image.
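
    A brute-force Python sketch of the propagation rule follows (the window radius and the color-similarity criterion are assumptions for illustration; a real implementation would be vectorized or guided-filter based).

        # Minimal sketch: sparse depths are retained; every other pixel copies the
        # depth of the most color-similar sparse point inside a local window.
        import numpy as np

        def propagate_depth(color, sparse_depth, sparse_mask, radius=4):
            dense = sparse_depth.copy()
            h, w = sparse_mask.shape
            sparse_pts = list(zip(*np.nonzero(sparse_mask)))
            for y in range(h):
                for x in range(w):
                    if sparse_mask[y, x]:
                        continue  # sparse points keep their source depth unchanged
                    window = [(sy, sx) for sy, sx in sparse_pts
                              if abs(sy - y) <= radius and abs(sx - x) <= radius]
                    if not window:
                        continue
                    # Image-guided choice: the most similar color wins.
                    sy, sx = min(window, key=lambda p: np.abs(color[p].astype(int)
                                                              - color[y, x].astype(int)).sum())
                    dense[y, x] = sparse_depth[sy, sx]
            return dense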

    SYSTEM AND METHOD FOR REDUCED COMMUNICATION LOAD THROUGH LOSSLESS DATA REDUCTION

    Publication Number: US20210311307A1

    Publication Date: 2021-10-07

    Application Number: US16990779

    Filing Date: 2020-08-11

    Abstract: A method includes obtaining, from a memory of an electronic device connected to a head mounted display (HMD), a first reference frame, wherein the first reference frame comprises a first set of pixels associated with a first time. The method includes rendering, at the electronic device, a source image as a new frame, wherein the new frame includes a second set of pixels associated with a display to be provided by the HMD at a second time, and generating, by the electronic device, a differential frame, wherein the differential frame is based on a difference operation between pixels of the new frame and pixels of the first reference frame to identify pixels unique to the new frame. Still further, the method includes sending the differential frame to the HMD, and storing the new frame in the memory of the electronic device as a second reference frame.
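
    The core difference operation is simple enough to sketch in a few lines of Python (the zero-as-unchanged convention below is an illustrative simplification; a real system would also signal which pixels changed so that genuinely black pixels are not lost).

        # Minimal sketch: zero out pixels that match the reference frame so only
        # pixels unique to the new frame remain to be compressed and sent.
        import numpy as np

        def make_differential_frame(new_frame, reference_frame):
            changed = np.any(new_frame != reference_frame, axis=-1, keepdims=True)
            return np.where(changed, new_frame, 0).astype(new_frame.dtype)

        def reconstruct_frame(differential, reference_frame):
            changed = np.any(differential != 0, axis=-1, keepdims=True)
            return np.where(changed, differential, reference_frame)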

    System and method for coded pattern communication

    Publication Number: US10691767B2

    Publication Date: 2020-06-23

    Application Number: US16362375

    Filing Date: 2019-03-22

    Abstract: An apparatus includes at least one image sensor configured to capture a plurality of images, at least one communication interface, at least one display, and at least one processor coupled to the at least one image sensor, the at least one communication interface, and the at least one display. The at least one processor is configured to identify a network address encoded into one or more first images of the plurality of captured images and transmit a first request to the network address. The at least one processor is also configured to identify a unique identifier encoded into one or more second images of the plurality of captured images and transmit a second request containing the unique identifier. The at least one processor is further configured to receive session information and output, to the at least one display, extended reality-related content obtained during a session associated with the session information.
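
    One plausible concrete reading of this flow, sketched below in Python, uses QR codes as the coded pattern and standard HTTP requests; the endpoint layout and payload are hypothetical and are not specified by the patent.

        # Minimal sketch of the two-step coded-pattern handshake.
        import json
        import urllib.request
        import cv2

        detector = cv2.QRCodeDetector()

        def decode_pattern(image):
            data, _, _ = detector.detectAndDecode(image)
            return data or None

        def run_session(first_image, second_image):
            address = decode_pattern(first_image)        # network address from the first pattern
            if not address:
                return None
            urllib.request.urlopen(address).close()      # first request to the network address
            identifier = decode_pattern(second_image)    # unique identifier from the second pattern
            req = urllib.request.Request(address + "/session",
                                         data=json.dumps({"id": identifier}).encode(),
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:    # second request carrying the identifier
                return json.load(resp)                   # session information for the XR content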

    SYSTEM AND METHOD FOR CODED PATTERN COMMUNICATION

    Publication Number: US20200142942A1

    Publication Date: 2020-05-07

    Application Number: US16362375

    Filing Date: 2019-03-22

    Abstract: An apparatus includes at least one image sensor configured to capture a plurality of images, at least one communication interface, at least one display, and at least one processor coupled to the at least one image sensor, the at least one communication interface, and the at least one display. The at least one processor is configured to identify a network address encoded into one or more first images of the plurality of captured images and transmit a first request to the network address. The at least one processor is also configured to identify a unique identifier encoded into one or more second images of the plurality of captured images and transmit a second request containing the unique identifier. The at least one processor is further configured to receive session information and output, to the at least one display, extended reality-related content obtained during a session associated with the session information.

    System and method for optical tracking

    Publication Number: US10304207B2

    Publication Date: 2019-05-28

    Application Number: US15805750

    Filing Date: 2017-11-07

    Abstract: A method, electronic device, and non-transitory computer readable medium for optical tracking are provided. The method includes identifying at least one object within an environment, from received image data. The method also includes assigning a volatility rating to the identified object. Additionally, the method includes determining one or more tracking points associated with the identified object. The method also includes determining a priority rating for each of the one or more tracking points associated with the identified object based on the assigned volatility rating of the identified object. The method also includes selecting the one or more tracking points for optical tracking based on the priority rating.
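
    The ranking step can be pictured with the short Python sketch below (the volatility scale and the priority formula are illustrative assumptions, not the claimed scoring).

        # Minimal sketch: points on low-volatility (stable) objects get the highest
        # priority and are selected first for optical tracking.
        from dataclasses import dataclass

        @dataclass
        class TrackingPoint:
            x: float
            y: float
            object_id: int

        def select_tracking_points(points, volatility_by_object, max_points=50):
            # volatility_by_object maps object_id -> volatility in [0, 1],
            # where 1.0 means highly volatile (e.g., a fast-moving object).
            def priority(pt):
                return 1.0 - volatility_by_object.get(pt.object_id, 1.0)
            return sorted(points, key=priority, reverse=True)[:max_points]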
