-
31.
Publication No.: US20230215108A1
Publication Date: 2023-07-06
Application No.: US17714866
Filing Date: 2022-04-06
Applicant: Samsung Electronics Co., Ltd.
Inventor: Christopher A Peri , Divi Schmidt , Yingen Xiong , Lu Luo
CPC classification number: G06T19/006 , G06T17/205 , G06T15/08 , G06T15/40
Abstract: A system and method for adaptive volume-based scene reconstruction for XR platform applications are provided. The system includes an image sensor and a processor to perform the method. The method includes determining a processor computation load. The method also includes, based on the determined computation load, adjusting one or more parameters of the 3D scene reconstruction to compensate for the determined computation load. The method further includes rendering a reconstructed 3D scene.
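The load-adaptive parameter adjustment described in the abstract can be sketched as a simple policy that trades reconstruction detail for frame rate. This is an illustrative sketch only; the function name, parameter names, and thresholds are assumptions, not taken from the patent.

```python
def adapt_reconstruction_params(cpu_load: float) -> dict:
    """Pick volume and mesh parameters from a normalized load in [0, 1].

    Higher load -> coarser voxels and less mesh detail, preserving frame
    rate; lower load -> finer reconstruction. Thresholds are illustrative.
    """
    if not 0.0 <= cpu_load <= 1.0:
        raise ValueError("cpu_load must be in [0, 1]")
    if cpu_load > 0.8:   # heavily loaded: coarse volume, aggressive culling
        return {"voxel_size_m": 0.08, "mesh_level": 1, "cull_backfaces": True}
    if cpu_load > 0.5:   # moderate load: mid-resolution volume
        return {"voxel_size_m": 0.04, "mesh_level": 2, "cull_backfaces": True}
    return {"voxel_size_m": 0.02, "mesh_level": 3, "cull_backfaces": False}
```

In use, the policy would be re-evaluated each frame (or every few frames) so the reconstruction degrades gracefully as load rises.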
-
32.
Publication No.: US20230088963A1
Publication Date: 2023-03-23
Application No.: US17696746
Filing Date: 2022-03-16
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
Abstract: A system and method are provided for 3D reconstruction with plane and surface reconstruction, scene parsing, and depth reconstruction with depth fusion from different sources. The system includes a display and a processor to perform the method for 3D reconstruction with plane and surface reconstruction. The method includes dividing a scene of an image frame into one or more plane regions and one or more surface regions. The method also includes generating reconstructed planes by performing plane reconstruction based on the one or more plane regions. The method also includes generating reconstructed surfaces by performing surface reconstruction based on the one or more surface regions. The method further includes creating the 3D scene reconstruction by integrating the reconstructed planes and the reconstructed surfaces.
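The plane/surface split can be sketched as a classification of scene regions by how uniform their surface normals are: nearly uniform normals indicate a planar region that can be fit cheaply with a single plane, while the rest go through full surface reconstruction. The function and threshold are illustrative assumptions, not the patent's actual criterion.

```python
import numpy as np

def classify_regions(normals_by_region, var_threshold=0.01):
    """Label each region 'plane' if its surface normals are nearly uniform,
    else 'surface'. Input: {region_id: list of (x, y, z) normals}."""
    labels = {}
    for region_id, normals in normals_by_region.items():
        n = np.asarray(normals, dtype=float)
        n /= np.linalg.norm(n, axis=1, keepdims=True)  # unit normals
        variance = float(np.var(n, axis=0).sum())      # spread of directions
        labels[region_id] = "plane" if variance < var_threshold else "surface"
    return labels
```

A wall (all normals pointing the same way) would classify as "plane"; foliage or a sculpture with scattered normals would classify as "surface".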
-
33.
Publication No.: US12154219B2
Publication Date: 2024-11-26
Application No.: US18052827
Filing Date: 2022-11-04
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
Abstract: A method of video transformation for a video see-through (VST) augmented reality (AR) device includes obtaining video frames from multiple cameras associated with the VST AR device, where each video frame is associated with position data. The method also includes generating camera viewpoint depth maps associated with the video frames based on the video frames and the position data. The method further includes performing depth re-projection to transform the video frames from camera viewpoints to rendering viewpoints using the camera viewpoint depth maps. The method also includes performing hole filling of one or more holes created in one or more occlusion areas of at least one of the transformed video frames during the depth re-projection to generate at least one hole-filled video frame. In addition, the method includes displaying the transformed video frames including the at least one hole-filled video frame on multiple displays associated with the VST AR device.
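Depth re-projection followed by hole filling can be illustrated on a single scanline: each pixel shifts in inverse proportion to its depth to approximate the viewpoint change, and disoccluded gaps are filled from the nearest valid neighbor. This 1-D toy sketch uses assumed names and a deliberately simple fill rule, not the patent's actual pipeline.

```python
import numpy as np

def reproject_and_fill(depth_row, color_row, shift_px):
    """Shift each pixel horizontally by shift_px / depth (nearer pixels move
    farther), then fill holes by propagating the last valid value from the
    left. 1-D toy version of depth re-projection with hole filling."""
    w = len(color_row)
    out = np.full(w, np.nan)
    for x in range(w):
        d = int(round(shift_px / depth_row[x]))
        nx = x + d
        if 0 <= nx < w:
            out[nx] = color_row[x]
    for x in range(w):  # hole filling pass
        if np.isnan(out[x]):
            out[x] = out[x - 1] if x > 0 and not np.isnan(out[x - 1]) else 0.0
    return out
```

A real implementation would work in 2-D, resolve depth-ordering conflicts (nearer pixels win), and use a smarter inpainting method for the occlusion holes.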
-
34.
Publication No.: US20240346779A1
Publication Date: 2024-10-17
Application No.: US18630767
Filing Date: 2024-04-09
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
IPC: G06T19/00 , G06T5/80 , H04N13/344
CPC classification number: G06T19/006 , G06T5/80 , H04N13/344
Abstract: A method includes determining that an inter-pupillary distance (IPD) between display lenses of a video see-through (VST) extended reality (XR) device has been adjusted with respect to a default IPD. The method also includes obtaining an image captured using a see-through camera of the VST XR device. The see-through camera is configured to capture images of a three-dimensional (3D) scene. The method further includes transforming the image to match a viewpoint of a corresponding one of the display lenses according to a change in IPD with respect to the default IPD in order to generate a transformed image. The method also includes correcting distortions in the transformed image based on one or more lens distortion coefficients corresponding to the change in IPD in order to generate a corrected image. In addition, the method includes initiating presentation of the corrected image on a display panel of the VST XR device.
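The viewpoint transform driven by the IPD change can be reduced, in the simplest case, to a per-eye horizontal re-centering: each eye's image shifts by half the total IPD change, in opposite directions. This sketch assumes a pure translation and illustrative names; the patent's transform and its per-IPD lens distortion coefficients are more general.

```python
def viewpoint_shift_px(ipd_mm: float, default_ipd_mm: float,
                       px_per_mm: float) -> dict:
    """Horizontal shift (pixels) to re-center each eye's see-through image
    after the display lenses move by (ipd - default_ipd). Each eye takes
    half the change, mirrored left/right."""
    delta_mm = (ipd_mm - default_ipd_mm) / 2.0
    return {"left": -delta_mm * px_per_mm, "right": +delta_mm * px_per_mm}
```

After this re-centering, a radial distortion correction keyed to the new IPD would be applied before display.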
-
35.
公开(公告)号:US20240223742A1
公开(公告)日:2024-07-04
申请号:US18526726
申请日:2023-12-01
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
IPC: H04N13/344 , G06T19/00 , H04N13/128 , H04N13/239
CPC classification number: H04N13/344 , G06T19/006 , H04N13/128 , H04N13/239
Abstract: A method includes obtaining images of a scene captured using a stereo pair of imaging sensors of an XR device and depth data associated with the images, where the scene includes multiple objects. The method also includes obtaining volume-based 3D models of the objects. The method further includes, for one or more first objects, performing depth-based reprojection of the one or more 3D models of the one or more first objects to left and right virtual views based on one or more depths of the one or more first objects. The method also includes, for one or more second objects, performing constant-depth reprojection of the one or more 3D models of the one or more second objects to the left and right virtual views based on a specified depth. In addition, the method includes rendering the left and right virtual views for presentation by the XR device.
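The split between depth-based and constant-depth reprojection exploits the fact that stereo disparity falls off with distance: nearby objects need per-object disparity, while distant ones can safely share one constant depth. A minimal sketch under a pinhole-camera assumption, with illustrative names, baseline, and thresholds:

```python
def disparity_px(depth_m, baseline_m=0.064, focal_px=500.0):
    """Stereo disparity of a point at depth_m under a pinhole model."""
    return baseline_m * focal_px / depth_m

def reproject_objects(objects, near_threshold_m=2.0, constant_depth_m=10.0):
    """Per-object disparity choice: nearby objects use their own depth
    (depth-based reprojection); distant objects are collapsed to a single
    specified depth (constant-depth reprojection), which is cheaper."""
    views = {}
    for name, depth in objects.items():
        d = depth if depth < near_threshold_m else constant_depth_m
        views[name] = disparity_px(d)
    return views
```

The left and right virtual views would then shift each object by ±disparity/2 before rendering.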
-
36.
公开(公告)号:US20240223739A1
公开(公告)日:2024-07-04
申请号:US18360677
申请日:2023-07-27
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
IPC: H04N13/128 , G06T19/00
CPC classification number: H04N13/128 , G06T19/006 , H04N2013/0092
Abstract: A method includes obtaining first and second image frames of a scene. The method also includes providing the first image frame as input to an object segmentation model, where the object segmentation model is trained to generate first object segmentation predictions for objects in the scene and a depth or disparity map based on the first image frame. The method further includes generating second object segmentation predictions for the objects in the scene based on the second image frame. The method also includes determining boundaries of the objects in the scene based on the first and second object segmentation predictions. In addition, the method includes generating a virtual view for presentation on a display of an extended reality (XR) device based on the boundaries of the objects in the scene.
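Deriving object boundaries from two frames' segmentation predictions can be sketched as a consensus mask (keep only pixels both frames agree on, suppressing single-frame flicker) followed by boundary extraction. Function names and the 4-connectivity rule are illustrative assumptions, not the patent's method.

```python
import numpy as np

def consensus_mask(pred_a, pred_b):
    """Keep pixels both frames' segmentations agree on."""
    return np.asarray(pred_a, dtype=bool) & np.asarray(pred_b, dtype=bool)

def boundary_pixels(mask):
    """Object boundary = mask pixels with at least one background neighbor
    (4-connectivity), found by shifting the padded mask in each direction."""
    m = np.asarray(mask, dtype=bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior
```

For a 3x3 solid block, only the center pixel is interior, so the boundary contains the surrounding 8 pixels.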
-
37.
公开(公告)号:US20240129448A1
公开(公告)日:2024-04-18
申请号:US18353581
申请日:2023-07-17
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
IPC: H04N13/261 , G06T3/00 , G06T5/00 , G06T7/593 , G06T7/70 , G06T15/04 , H04N13/271
CPC classification number: H04N13/261 , G06T3/0093 , G06T5/005 , G06T7/593 , G06T7/70 , G06T15/04 , H04N13/271 , G06T2207/20081 , G06T2207/20212 , G06T2207/30244
Abstract: A method includes obtaining a 2D image captured using an imaging sensor. The 2D image is associated with an imaging sensor pose. The method also includes providing the 2D image, the imaging sensor pose, and one or more additional imaging sensor poses to at least one machine learning model that is trained to generate a texture map and a depth map for the imaging sensor pose and each additional imaging sensor pose. The method further includes generating a stereo image pair based on the texture maps and the depth maps. The stereo image pair represents a 2.5D view of the 2D image. The 2.5D view includes a pair of images each including multiple collections of pixels and, for each collection of pixels, a common depth associated with the pixels in the collection of pixels. In addition, the method includes initiating display of the stereo image pair on an XR device.
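The "2.5D" idea (collections of pixels sharing a common depth) can be illustrated on one image row: quantize depths into a few layers, then shift every pixel of a layer by the same disparity to form the left and right views. This sketch is an illustrative toy with assumed names, not the patent's learned texture/depth pipeline.

```python
import numpy as np

def layered_stereo_row(color_row, depth_row, n_layers=2, max_disp=4):
    """Quantize depths into n_layers bins (each bin shares one depth), then
    shift each pixel by a per-layer disparity to build left/right views of a
    single row. Nearer layers get larger disparity."""
    depth = np.asarray(depth_row, float)
    layer = np.minimum((depth / depth.max() * n_layers).astype(int),
                       n_layers - 1)
    left = np.zeros_like(np.asarray(color_row, float))
    right = np.zeros_like(left)
    w = len(color_row)
    for x in range(w):
        disp = max_disp // (layer[x] + 1)
        xl, xr = x - disp, x + disp
        if 0 <= xl < w:
            left[xl] = color_row[x]
        if 0 <= xr < w:
            right[xr] = color_row[x]
    return left, right
```

Because every pixel in a layer moves together, the result is a stack of flat "cards" at discrete depths rather than a fully continuous 3D warp.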
-
38.
Publication No.: US20240121370A1
Publication Date: 2024-04-11
Application No.: US18302622
Filing Date: 2023-04-18
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong
IPC: H04N13/128 , G06T3/00 , G06T7/593 , G06T19/00 , H04N13/111 , H04N13/239 , H04N13/271
CPC classification number: H04N13/128 , G06T3/0093 , G06T7/593 , G06T19/006 , H04N13/111 , H04N13/239 , H04N13/271 , H04N2013/0081
Abstract: A method includes obtaining a stereo image pair including a first image and a second image. The method also includes generating a first feature map of the first image and a second feature map of the second image, the first and second feature maps including extracted positions associated with pixels in the images. The method further includes generating a disparity map between the first and second images based on a dense depth map. The method also includes generating a verified depth map based on a pixelwise comparison of predicted positions and the extracted positions associated with at least some of the pixels in at least one of the images, the predicted positions determined based on the disparity map. In addition, the method includes generating a first virtual view and a second virtual view to present on a display panel of a VST AR device based on the verified depth map.
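The pixelwise comparison of predicted and extracted positions resembles a classic left-right consistency check: a pixel's depth is trusted only if following its disparity into the other image and back lands near where it started. An illustrative one-scanline sketch (names and tolerance are assumptions):

```python
import numpy as np

def verify_depth(disparity_row, tol=1.0):
    """Left-right consistency check for one scanline: for each pixel x,
    follow its disparity to the matching column xr in the right image, then
    re-project with the right image's disparity; the depth is valid only if
    the round trip lands within tol pixels of x."""
    d = np.asarray(disparity_row, float)
    w = len(d)
    valid = np.zeros(w, bool)
    for x in range(w):
        xr = int(round(x - d[x]))          # matching column in right image
        if 0 <= xr < w:
            x_back = xr + d[xr]            # round trip back to the left image
            valid[x] = abs(x_back - x) <= tol
    return valid
```

Pixels failing the check (typically in occlusions or mismatches) would be excluded from the verified depth map or re-estimated.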
-
39.
Publication No.: US20240073514A1
Publication Date: 2024-02-29
Application No.: US18304235
Filing Date: 2023-04-20
Applicant: Samsung Electronics Co., Ltd.
Inventor: Christopher Anthony Peri , Ravindraraj Mamadgi , Yingen Xiong
CPC classification number: H04N23/64 , G06T7/246 , G06T7/73 , G06V10/44 , G06V10/70 , G06V40/174 , G06V40/23 , G06V40/28 , G06T2207/30201 , G06V2201/07
Abstract: A method includes, in response to initiating a shooting mode of a camera application on an electronic device, collecting sensor information comprising at least one of: motion data of the electronic device, position data of the electronic device, and image data captured by one or more imaging sensors of the electronic device, wherein the shooting mode represents at least one of: a video record mode and an image capture mode. The method also includes determining, using a trained machine learning model, whether a user intention is to record video or capture an image based on features extracted from the sensor information. The method further includes recording video regardless of the shooting mode in response to determining that the user intention is to record video or capturing the image regardless of the shooting mode in response to determining that the user intention is to capture the image.
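The intent decision can be pictured as a scoring function over the extracted features: sustained motion or a long press suggests video, a steady pose with a short press suggests a still capture. This rule-based stand-in is purely illustrative; the patent uses a trained machine learning model, and these feature names and weights are invented.

```python
def infer_intent(features: dict) -> str:
    """Toy stand-in for the trained intent model. Scores hypothetical
    features and returns 'record_video' or 'capture_image'."""
    score = 0.0
    score += 0.6 if features.get("press_duration_s", 0.0) > 0.5 else 0.0
    score += 0.3 if features.get("motion_energy", 0.0) > 0.2 else 0.0
    score += 0.1 if features.get("subject_moving", False) else 0.0
    return "record_video" if score >= 0.5 else "capture_image"
```

Whatever the classifier decides overrides the selected shooting mode, which is the point of the method: the device records or captures based on inferred intent, not the mode the user happened to have open.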
-
40.
Publication No.: US20240062483A1
Publication Date: 2024-02-22
Application No.: US18296095
Filing Date: 2023-04-05
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong
IPC: G06T19/00 , G06V10/74 , H04N13/117 , H04N13/383
CPC classification number: G06T19/006 , G06V10/761 , H04N13/117 , H04N13/383
Abstract: A method includes receiving first and second images from first and second see-through cameras with first and second camera viewpoints. The method also includes generating a first virtual image corresponding to a first virtual viewpoint by applying a first mapping to the first image. The first mapping is based on relative positions of the first camera viewpoint and the first virtual viewpoint corresponding to a first eye of a user. The method further includes generating a second virtual image corresponding to a second virtual viewpoint by applying a second mapping to the second image. The second mapping is based on relative positions of the second camera viewpoint and the second virtual viewpoint corresponding to a second eye of the user. In addition, the method includes presenting the first and second virtual images to the first and second virtual viewpoints on at least one display panel of an augmented reality device.
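In the simplest case, the camera-to-eye mapping can be approximated by a single image translation: project the baseline between the camera viewpoint and the virtual (eye) viewpoint at an assumed scene depth. This sketch is an assumption-laden simplification; a full implementation would warp per pixel using a real depth map.

```python
def virtual_view_shift(cam_pos_mm, eye_pos_mm, focal_px, scene_depth_mm):
    """Approximate the camera-to-virtual-viewpoint mapping by one global
    translation: the camera-to-eye offset projected at an assumed scene
    depth. Returns the (x, y) image shift in pixels."""
    dx = eye_pos_mm[0] - cam_pos_mm[0]
    dy = eye_pos_mm[1] - cam_pos_mm[1]
    return (dx * focal_px / scene_depth_mm, dy * focal_px / scene_depth_mm)
```

Applying this with each eye's own offset yields the two virtual images; errors grow for objects far from the assumed depth, which is why depth-aware warping is preferred in practice.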