-
Publication Number: US11961184B2
Publication Date: 2024-04-16
Application Number: US17696746
Filing Date: 2022-03-16
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
CPC classification number: G06T17/20 , G06T7/12 , G06T19/006 , G06T2200/04 , G06T2200/08 , G06T2207/10012 , G06T2207/20021
Abstract: A system and method for 3D reconstruction with plane and surface reconstruction, scene parsing, and depth reconstruction with depth fusion from different sources. The system includes a display and a processor to perform the method for 3D reconstruction with plane and surface reconstruction. The method includes dividing a scene of an image frame into one or more plane regions and one or more surface regions. The method also includes generating reconstructed planes by performing plane reconstruction based on the one or more plane regions. The method also includes generating reconstructed surfaces by performing surface reconstruction based on the one or more surface regions. The method further includes creating the 3D scene reconstruction by integrating the reconstructed planes and the reconstructed surfaces.
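A minimal Python sketch of the plane/surface split described in this abstract. The normal-angle heuristic, the SVD plane fit, and the helper names (split_regions, fit_plane, reconstruct_scene) are illustrative assumptions, not the patented method.

```python
import numpy as np

def split_regions(points, normals, angle_thresh_deg=10.0):
    """Illustrative split: points whose normals agree with the dominant normal
    are treated as 'plane' candidates, the rest as free-form 'surface' regions."""
    mean_n = normals.mean(axis=0)
    mean_n /= np.linalg.norm(mean_n)
    angles = np.degrees(np.arccos(np.clip(normals @ mean_n, -1.0, 1.0)))
    plane_mask = angles < angle_thresh_deg
    return points[plane_mask], points[~plane_mask]

def fit_plane(points):
    """Least-squares plane fit: centroid plus the smallest principal axis."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def reconstruct_scene(points, normals):
    """Divide the scene, reconstruct planes, keep the remainder for surface
    reconstruction, then integrate both into one result."""
    plane_pts, surface_pts = split_regions(points, normals)
    planes = [fit_plane(plane_pts)] if len(plane_pts) >= 3 else []
    surfaces = [surface_pts]   # placeholder for a meshed free-form surface
    return {"planes": planes, "surfaces": surfaces}

# Toy scene: a noisy horizontal floor plus a blob of non-planar points.
floor = np.column_stack([np.random.rand(200, 2), 0.01 * np.random.rand(200)])
blob = np.random.rand(50, 3)
pts = np.vstack([floor, blob])
nrm = np.vstack([np.tile([0.0, 0.0, 1.0], (200, 1)), np.random.rand(50, 3)])
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
print({k: len(v) for k, v in reconstruct_scene(pts, nrm).items()})
```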
-
Publication Number: US11960345B2
Publication Date: 2024-04-16
Application Number: US17750960
Filing Date: 2022-05-23
Applicant: Samsung Electronics Co., Ltd.
Inventor: Christopher A. Peri , Moiz Kaizar Sonasath , Lu Luo
IPC: G06F1/329 , G06F1/3212 , G06F1/3218
CPC classification number: G06F1/329 , G06F1/3212 , G06F1/3218
Abstract: A method includes obtaining a request for one of multiple operational modes from an application installed on an extended reality (XR) device or an XR runtime/renderer of the XR device. The method also includes selecting a first mode of the operational modes, based at least partly on a real-time system performance of the XR device. The method also includes publishing the selected first mode to the XR runtime/renderer or the application. The method also includes performing a task related to at least one of image rendering or computer vision calculations for the application, using an algorithm associated with the selected first mode.
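A small sketch of the mode-selection flow this abstract describes: honor the requested mode only when real-time system headroom allows it, then publish the result. The mode names, thresholds, and SystemStats fields are hypothetical, chosen only to make the example runnable.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    HIGH_FIDELITY = auto()
    BALANCED = auto()
    POWER_SAVER = auto()

@dataclass
class SystemStats:
    battery_pct: float    # remaining battery, 0-100
    gpu_load_pct: float   # instantaneous GPU utilization, 0-100
    temperature_c: float  # device temperature

def select_mode(requested: Mode, stats: SystemStats) -> Mode:
    """Pick an operational mode, honoring the request only when real-time
    performance allows it (illustrative thresholds, not from the patent)."""
    if stats.battery_pct < 15 or stats.temperature_c > 45:
        return Mode.POWER_SAVER
    if requested is Mode.HIGH_FIDELITY and stats.gpu_load_pct > 85:
        return Mode.BALANCED
    return requested

def publish_mode(mode: Mode) -> None:
    """Stand-in for notifying the XR runtime/renderer and the application."""
    print(f"active mode: {mode.name}")

selected = select_mode(Mode.HIGH_FIDELITY,
                       SystemStats(battery_pct=40, gpu_load_pct=92, temperature_c=38))
publish_mode(selected)  # -> active mode: BALANCED
```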
-
Publication Number: US20240046583A1
Publication Date: 2024-02-08
Application Number: US18353579
Filing Date: 2023-07-17
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
CPC classification number: G06T19/006 , G06T7/73 , G06T7/593 , G06T2207/30244 , G06T2207/10021 , G06T2207/20224
Abstract: A method includes obtaining images of a scene and corresponding position data of a device that captures the images. The method also includes determining position data and direction data associated with camera rays passing through keyframes of the images. The method further includes using a position-dependent multilayer perceptron (MLP) and a direction-dependent MLP to create sparse feature vectors. The method also includes storing the sparse feature vectors in at least one data structure. The method further includes receiving a request to render the scene on an augmented reality (AR) device associated with a viewing direction. In addition, the method includes rendering the scene associated with the viewing direction using the sparse feature vectors in the at least one data structure.
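The sketch below shows one way the data could flow through a position-dependent MLP, a sparse feature store keyed by voxel, and a direction-dependent MLP at render time. The untrained random MLPs, the voxel hashing, and all names here are assumptions for illustration; the patent's networks and storage format are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_mlp(dim_in, dim_out):
    """Random, untrained two-layer MLP used purely to illustrate the data flow."""
    w1, w2 = rng.normal(size=(dim_in, 64)), rng.normal(size=(64, dim_out))
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

position_mlp = tiny_mlp(3, 16)        # consumes ray sample positions
direction_mlp = tiny_mlp(3 + 16, 4)   # consumes view direction + position features

def build_sparse_features(sample_positions, voxel_size=0.25):
    """Keep features only for voxels that camera rays actually pass through."""
    table = {}
    for p in sample_positions:
        key = tuple(np.floor(p / voxel_size).astype(int))
        table[key] = position_mlp(p)
    return table

def render_sample(table, position, view_dir, voxel_size=0.25):
    """Look up the stored sparse feature and condition it on the viewing direction."""
    key = tuple(np.floor(position / voxel_size).astype(int))
    feat = table.get(key)
    if feat is None:
        return np.zeros(4)   # empty space: nothing stored for this voxel
    return direction_mlp(np.concatenate([view_dir, feat]))

positions = rng.uniform(-1, 1, size=(100, 3))
features = build_sparse_features(positions)
print(render_sample(features, positions[0], np.array([0.0, 0.0, 1.0])).shape)  # (4,)
```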
-
Publication Number: US20230140170A1
Publication Date: 2023-05-04
Application Number: US17811028
Filing Date: 2022-07-06
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
IPC: G06T19/00 , G06T7/593 , G06T7/73 , G06F3/01 , H04N13/239
Abstract: A method includes obtaining first and second image data of a real-world scene, performing feature extraction to obtain first and second feature maps, and performing pose tracking based on at least one of the first image data, second image data, and pose data to obtain a 6DOF pose of an apparatus. The method also includes generating, based on the 6DOF pose, first feature map, and second feature map, a disparity map between the image data and generating an initial depth map based on the disparity map. The method further includes generating a dense depth map based on the initial depth map and a camera model and generating, based on the dense depth map, a three-dimensional reconstruction of at least part of the scene. In addition, the method includes rendering an AR or XR display that includes one or more virtual objects positioned to contact one or more surfaces of the reconstruction.
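A minimal sketch of the disparity-to-depth and back-projection steps mentioned in this abstract, using the standard pinhole stereo relation depth = focal * baseline / disparity. The feature extraction, pose tracking, and rendering stages are omitted; the function names and toy values are assumptions.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Standard pinhole relation: depth = focal_length * baseline / disparity.
    Invalid (near-zero) disparities are mapped to 0 depth."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def backproject(depth, fx, fy, cx, cy):
    """Lift a dense depth map to a 3D point cloud in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy example: a 4x4 disparity map from a rig with a 6.5 cm baseline.
disp = np.full((4, 4), 16.0)
depth = disparity_to_depth(disp, focal_px=500.0, baseline_m=0.065)
cloud = backproject(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(depth[0, 0], cloud.shape)   # ~2.03 m per pixel, (4, 4, 3) points
```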
-
公开(公告)号:US20230092248A1
公开(公告)日:2023-03-23
申请号:US17805828
申请日:2022-06-07
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
Abstract: A method includes obtaining, from an image sensor, image data of a real-world scene; obtaining, from a depth sensor, sparse depth data of the real-world scene; and passing the image data to a first neural network to obtain one or more object regions of interest (ROIs) and one or more feature map ROIs. Each object ROI includes at least one detected object. The method also includes passing the image data and sparse depth data to a second neural network to obtain one or more dense depth map ROIs; aligning the one or more object ROIs, one or more feature map ROIs, and one or more dense depth map ROIs; and passing the aligned ROIs to a fully convolutional network to obtain a segmentation of the real-world scene. The segmentation contains one or more pixelwise predictions of one or more objects in the real-world scene.
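A sketch of the ROI alignment step this abstract relies on: crop the feature-map and dense-depth branches to the same detected object ROI, stack them channel-wise, and hand the result to a segmentation head. The networks are replaced by random placeholders, and every name here is an illustrative assumption.

```python
import numpy as np

def crop_roi(tensor, roi):
    """Crop an (H, W, C) feature or depth map to an ROI given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = roi
    return tensor[y0:y1, x0:x1, :]

def align_rois(feature_map, dense_depth, roi):
    """Align branches by cropping both to the same ROI and concatenating along
    the channel axis (a stand-in for a learned ROI-align step)."""
    feat = crop_roi(feature_map, roi)
    depth = crop_roi(dense_depth[..., None], roi)
    return np.concatenate([feat, depth], axis=-1)

def fake_fcn(aligned, num_classes=3):
    """Placeholder 1x1 'convolution': per-pixel class scores -> argmax labels."""
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(aligned.shape[-1], num_classes))
    return (aligned @ weights).argmax(axis=-1)

h, w = 64, 64
feature_map = np.random.rand(h, w, 8)     # stand-in for the detection backbone output
dense_depth = np.random.rand(h, w)        # stand-in for the depth-completion output
roi = (10, 10, 42, 42)                    # one detected object ROI
labels = fake_fcn(align_rois(feature_map, dense_depth, roi))
print(labels.shape)                       # (32, 32) pixelwise predictions
```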
-
Publication Number: US11468587B2
Publication Date: 2022-10-11
Application Number: US17093519
Filing Date: 2020-11-09
Applicant: Samsung Electronics Co., Ltd.
Inventor: Christopher A. Peri , Yingen Xiong
Abstract: A method for reconstructing a downsampled depth map includes receiving, at an electronic device, image data to be presented on a display of the electronic device at a first resolution, wherein the image data includes a color image and the downsampled depth map associated with the color image. The method further includes generating a high resolution depth map by calculating, for each point making up the first resolution, a depth value based on a normalized pose difference across a neighborhood of points for the point, a normalized color texture difference across the neighborhood of points for the point, and a normalized spatial difference across the neighborhood of points. Still further, the method includes outputting, on the display, a reprojected image at the first resolution based on the color image and the high resolution depth map. The downsampled depth map is at a resolution less than the first resolution.
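A joint-bilateral-style upsampling sketch in the spirit of this abstract: each high-resolution pixel averages nearby low-resolution depths with weights driven by color-texture and spatial differences. The pose term from the abstract is omitted, and the weighting function, sigmas, and names are assumptions rather than the patented formula.

```python
import numpy as np

def upsample_depth(low_depth, color, scale, sigma_c=0.1, sigma_s=2.0):
    """Upsample a low-res depth map to the color image's resolution using
    color-similarity and spatial-distance weights (illustrative only)."""
    h, w = color.shape[:2]
    lh, lw = low_depth.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            ly, lx = y // scale, x // scale
            weights, values = [], []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = ly + dy, lx + dx
                    if 0 <= ny < lh and 0 <= nx < lw:
                        c_ref = color[min(ny * scale, h - 1), min(nx * scale, w - 1)]
                        c_diff = np.linalg.norm(color[y, x] - c_ref)
                        s_diff = np.hypot(dy, dx)
                        weights.append(np.exp(-(c_diff / sigma_c) ** 2
                                              - (s_diff / sigma_s) ** 2))
                        values.append(low_depth[ny, nx])
            out[y, x] = np.dot(weights, values) / np.sum(weights)
    return out

color = np.random.rand(16, 16, 3)
low_depth = np.random.rand(4, 4)          # downsampled depth map (4x lower)
high_depth = upsample_depth(low_depth, color, scale=4)
print(high_depth.shape)                   # (16, 16)
```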
-
Publication Number: US20210358158A1
Publication Date: 2021-11-18
Application Number: US17093519
Filing Date: 2020-11-09
Applicant: Samsung Electronics Co., Ltd.
Inventor: Christopher A. Peri , Yingen Xiong
Abstract: A method for reconstructing a downsampled depth map includes receiving, at an electronic device, image data to be presented on a display of the electronic device at a first resolution, wherein the image data includes a color image and the downsampled depth map associated with the color image. The method further includes generating a high resolution depth map by calculating, for each point making up the first resolution, a depth value based on a normalized pose difference across a neighborhood of points for the point, a normalized color texture difference across the neighborhood of points for the point, and a normalized spatial difference across the neighborhood of points. Still further, the method includes outputting, on the display, a reprojected image at the first resolution based on the color image and the high resolution depth map. The downsampled depth map is at a resolution less than the first resolution.
-
Publication Number: US10523918B2
Publication Date: 2019-12-31
Application Number: US15830832
Filing Date: 2017-12-04
Applicant: Samsung Electronics Co., Ltd.
Inventor: Christopher A. Peri
IPC: H04N13/271 , H04N5/232 , H04N5/247 , H04N13/106 , H04N13/296 , H04N13/239
Abstract: A method, electronic device, and non-transitory computer readable medium for transmitting information are provided. The method includes receiving, from each of two 360-degree cameras, image data. The method also includes synchronizing the received image data from each of the two cameras. Additionally, the method includes creating a depth map from the received image data based in part on a distance between the two cameras. The method also includes generating multi-dimensional content by combining the created depth map with the synchronized image data of at least one of the two cameras.
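A small sketch of the synchronization step from this abstract: pair frames from the two cameras by nearest timestamp and drop pairs that differ by more than a skew tolerance. The tolerance value and the frame representation are assumptions for illustration.

```python
from bisect import bisect_left

def synchronize(frames_a, frames_b, max_skew_s=0.010):
    """Pair each frame from camera A with the nearest-in-time frame from
    camera B, dropping pairs whose timestamps differ by more than max_skew_s.
    Frames are (timestamp_seconds, payload) tuples, sorted by timestamp."""
    times_b = [t for t, _ in frames_b]
    pairs = []
    for t_a, img_a in frames_a:
        i = bisect_left(times_b, t_a)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frames_b)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(times_b[k] - t_a))
        if abs(times_b[j] - t_a) <= max_skew_s:
            pairs.append((img_a, frames_b[j][1]))
    return pairs

cam_a = [(0.000, "a0"), (0.033, "a1"), (0.066, "a2")]
cam_b = [(0.002, "b0"), (0.031, "b1"), (0.090, "b2")]
print(synchronize(cam_a, cam_b))   # [('a0', 'b0'), ('a1', 'b1')]
```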
-
Publication Number: US20190012792A1
Publication Date: 2019-01-10
Application Number: US15854620
Filing Date: 2017-12-26
Applicant: Samsung Electronics Co., Ltd.
Inventor: Christopher A. Peri
Abstract: A method for tracking a position of a device is provided, wherein the method includes capturing, at a first positional resolution, based on information from a first sensor, a first position of the device within an optical tracking zone of the first sensor. The method also includes determining, based on information from the first sensor, that the device exits the optical tracking zone of the first sensor. Further, the method includes, responsive to determining that the device exits the optical tracking zone of the first sensor, capturing, at a second positional resolution, a second position of the device based on acceleration information from a second sensor, wherein the second positional resolution corresponds to a minimum threshold value for the acceleration information from the second sensor.
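A sketch of the fallback logic in this abstract: use the optical fix while the device is inside the tracking zone, and switch to coarse acceleration-based updates (with a minimum acceleration threshold) once it leaves. The dead-reckoning step, threshold value, and data types are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TrackerState:
    position: Tuple[float, float, float]
    in_optical_zone: bool

def update_position(state: TrackerState,
                    optical_fix: Optional[Tuple[float, float, float]],
                    accel: Tuple[float, float, float],
                    dt: float,
                    accel_threshold: float = 0.05) -> TrackerState:
    """Use the optical fix while available; otherwise fall back to a coarse
    IMU update, ignoring accelerations below the minimum threshold."""
    if optical_fix is not None:
        return TrackerState(position=optical_fix, in_optical_zone=True)
    # Outside the optical zone: integrate acceleration (velocity omitted for brevity).
    x, y, z = state.position
    ax, ay, az = (a if abs(a) >= accel_threshold else 0.0 for a in accel)
    step = 0.5 * dt * dt
    return TrackerState(position=(x + ax * step, y + ay * step, z + az * step),
                        in_optical_zone=False)

state = TrackerState(position=(0.0, 0.0, 0.0), in_optical_zone=True)
state = update_position(state, optical_fix=(0.10, 0.00, 0.30), accel=(0, 0, 0), dt=1/60)
state = update_position(state, optical_fix=None, accel=(0.2, 0.0, 0.01), dt=1/60)
print(state)   # IMU-based update after leaving the optical zone
```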
-
Publication Number: US12154219B2
Publication Date: 2024-11-26
Application Number: US18052827
Filing Date: 2022-11-04
Applicant: Samsung Electronics Co., Ltd.
Inventor: Yingen Xiong , Christopher A. Peri
Abstract: A method of video transformation for a video see-through (VST) augmented reality (AR) device includes obtaining video frames from multiple cameras associated with the VST AR device, where each video frame is associated with position data. The method also includes generating camera viewpoint depth maps associated with the video frames based on the video frames and the position data. The method further includes performing depth re-projection to transform the video frames from camera viewpoints to rendering viewpoints using the camera viewpoint depth maps. The method also includes performing hole filling of one or more holes created in one or more occlusion areas of at least one of the transformed video frames during the depth re-projection to generate at least one hole-filled video frame. In addition, the method includes displaying the transformed video frames including the at least one hole-filled video frame on multiple displays associated with the VST AR device.
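A simplified sketch of the depth re-projection and hole-filling steps this abstract describes: forward-warp a camera-viewpoint frame toward a rendering viewpoint using its depth map, mark occlusion holes, then fill them. The purely horizontal shift and the left-neighbor fill are toy assumptions standing in for the patented transformation.

```python
import numpy as np

def reproject(depth, image, fx, baseline):
    """Forward-warp a camera-view image to a rendering viewpoint shifted by
    'baseline' along x, using the per-pixel camera-viewpoint depth map.
    Pixels nothing maps onto stay NaN, marking occlusion holes."""
    h, w = depth.shape
    out = np.full((h, w, 3), np.nan)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    shift = np.round(fx * baseline / np.maximum(depth, 1e-3)).astype(int)
    u_new = u + shift
    valid = (u_new >= 0) & (u_new < w)
    out[v[valid], u_new[valid]] = image[v[valid], u[valid]]
    return out

def fill_holes(warped):
    """Fill NaN holes from the nearest valid pixel to the left (toy in-painting)."""
    out = warped.copy()
    for y in range(out.shape[0]):
        last = np.zeros(3)
        for x in range(out.shape[1]):
            if np.isnan(out[y, x, 0]):
                out[y, x] = last
            else:
                last = out[y, x]
    return out

depth = np.full((8, 8), 2.0)                  # flat scene 2 m from the camera
image = np.random.rand(8, 8, 3)
warped = fill_holes(reproject(depth, image, fx=100.0, baseline=0.03))
print(warped.shape)                           # (8, 8, 3), occlusion holes filled
```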