-
Publication No.: US20200349684A1
Publication Date: 2020-11-05
Application No.: US16549176
Filing Date: 2019-08-23
Applicant: Samsung Electronics Co., Ltd.; Industry-Academic Cooperation Foundation, Yonsei University
Inventor: Hwiryong JUNG, Moon Gi KANG, Seung Hoon JEE, Min Sub KIM, Wooshik KIM, Hyewon MOON, Keechang LEE, Sunghoon HONG
Abstract: An image processing method acquires an image; restores, based on a first illuminance component of the image, a saturated region in which a pixel of the image has a first reference value; enhances, based on the restored saturated region and the first illuminance component, a dark region in which the value of a pixel in the image is less than a second reference value; and outputs the dark-region-enhanced image.
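As a rough illustration of the pipeline in this abstract, the sketch below estimates the illuminance with a Gaussian blur, restores saturated pixels from it, and lifts dark pixels Retinex-style. The filter choice, thresholds, and gain are assumptions for illustration, not the patented method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_dark_region(image, first_ref=250, second_ref=30, sigma=15.0):
    # Work in float to avoid uint8 overflow.
    img = image.astype(np.float32)

    # First illuminance component: a smooth, low-frequency estimate of the
    # scene illumination (a Gaussian blur stands in for whatever estimator
    # the patent actually uses).
    illuminance = gaussian_filter(img, sigma=sigma)

    # Restore the saturated region (pixels at the first reference value or
    # above) by falling back on the smooth illuminance estimate there.
    saturated = img >= first_ref
    restored = np.where(saturated, illuminance, img)

    # Enhance the dark region (pixels below the second reference value),
    # Retinex-style: divide out the illuminance so dark detail is lifted.
    reflectance = restored / (illuminance + 1e-6)
    dark = restored < second_ref
    enhanced = np.where(dark, reflectance * restored.mean(), restored)

    return np.clip(enhanced, 0.0, 255.0).astype(np.uint8)
```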
-
Publication No.: US20240054716A1
Publication Date: 2024-02-15
Application No.: US18096972
Filing Date: 2023-01-13
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Seokhwan Jang, Nahyup KANG, Jiyeon KIM, Hyewon MOON, Donghoon SAGONG, Minjung SON
Abstract: Disclosed are a method and device for representing rendered scenes. A data processing method for training a neural network model includes: obtaining spatial information of sampling data; obtaining one or more volume-rendering parameters by inputting the spatial information to the neural network model; obtaining a regularization term based on a distribution of the volume-rendering parameters; performing volume rendering based on the volume-rendering parameters; and training the neural network model to minimize a loss function determined from the regularization term and from the difference between a ground-truth image and the image estimated by the volume rendering.
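A minimal training-step sketch of the idea, assuming a NeRF-style model that maps sampled coordinates to density and color, and using an entropy penalty on the rendering weights as one plausible instance of a regularization term "based on a distribution of the volume-rendering parameters":

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, coords, gt_pixels, lam=0.01):
    # Volume-rendering parameters (density, color) from the spatial
    # information of the sampling data: coords is (rays, samples, 3).
    sigma, rgb = model(coords)                      # (R, S), (R, S, 3)

    # Standard alpha compositing along each ray (uniform step assumed).
    delta = 1.0 / sigma.shape[-1]
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10],
                  dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                         # (R, S)
    pred = (weights[..., None] * rgb).sum(dim=-2)   # (R, 3)

    # Regularization term from the distribution of the rendering weights:
    # an entropy penalty that encourages each ray's weights to concentrate.
    p = weights / (weights.sum(dim=-1, keepdim=True) + 1e-10)
    reg = -(p * torch.log(p + 1e-10)).sum(dim=-1).mean()

    # Loss: rendering error against the ground-truth pixels plus the
    # regularization term.
    loss = F.mse_loss(pred, gt_pixels) + lam * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```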
-
Publication No.: US20240257503A1
Publication Date: 2024-08-01
Application No.: US18350311
Filing Date: 2023-07-11
Applicant: Samsung Electronics Co., Ltd.
Inventor: Hyewon MOON, Nahyup KANG, Jiyeon KIM, Donghoon SAGONG, Seokhwan JANG
CPC classification number: G06V10/774, G06T7/50, G06T7/70, G06T7/90, G06T17/00, G06V10/82, G06T2207/30244
Abstract: An image processing method and an image processing apparatus are provided. The image processing method includes: receiving a camera pose of a camera corresponding to a target scene; generating prediction information including either a color of an object included in the target scene or a density of the object, the prediction information being generated by applying, to a neural network model, three-dimensional (3D) points on a camera ray formed based on the camera pose; sampling, from among the 3D points and based on the prediction information, target points corresponding to a static object; and outputting a rendered image corresponding to the target scene by projecting a pixel value corresponding to the target points onto the target scene and rendering the target scene onto which the pixel value is projected.
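One way to picture the static-object sampling is to have the model emit a per-point static score alongside density and color and suppress everything else before compositing. The score head and threshold below are assumptions for illustration; the abstract does not specify how static points are identified:

```python
import torch

def render_static(model, rays_o, rays_d, n_samples=64, near=0.1, far=6.0,
                  static_thresh=0.5):
    # 3D points on each camera ray formed from the camera pose.
    t = torch.linspace(near, far, n_samples)                       # (S,)
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]

    # Per-point prediction information: density, color, and a static-object
    # score (the score head is an assumed model output, for illustration).
    sigma, rgb, static_score = model(pts)   # (R, S), (R, S, 3), (R, S)

    # Sample target points belonging to the static object: zero out the
    # density everywhere else so dynamic content does not render.
    sigma = sigma * (static_score > static_thresh).float()

    # Project pixel values: alpha-composite the surviving target points.
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10],
                  dim=1), dim=1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)   # (R, 3) rendered pixels
```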
-
Publication No.: US20240135634A1
Publication Date: 2024-04-25
Application No.: US18113823
Filing Date: 2023-02-24
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Donghoon SAGONG, Nahyup KANG, Jiyeon KIM, Hyewon MOON, Seokhwan JANG
CPC classification number: G06T15/205, G06T7/73, G06T15/06
Abstract: A device includes a processor configured to: generate, for each of a plurality of query inputs, point information using factors individually extracted from a plurality of pieces of factor data for the corresponding query input; and generate pixel information of a pixel position using the point information of the points, the query inputs corresponding to points, in a three-dimensional (3D) space, on a view direction from a viewpoint toward the pixel position of a two-dimensional (2D) scene.
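A compact sketch of per-point factor extraction, assuming a CP-style decomposition in which one factor line per axis plays the role of the "pieces of factor data"; the nearest-neighbor lookup and the output head are illustrative choices, not the claimed design:

```python
import torch

class FactorizedField(torch.nn.Module):
    def __init__(self, res=128, n_comp=16):
        super().__init__()
        # One factor line per spatial axis: the "pieces of factor data".
        self.factors = torch.nn.ParameterList(
            [torch.nn.Parameter(torch.randn(n_comp, res)) for _ in range(3)])
        self.head = torch.nn.Linear(n_comp, 4)       # density + RGB

    def forward(self, pts):
        # Query inputs: points in [-1, 1]^3 on a view direction from the
        # viewpoint toward a pixel position of the 2D scene.
        res = self.factors[0].shape[1]
        idx = (((pts + 1.0) / 2.0) * (res - 1)).long().clamp(0, res - 1)
        # Extract one factor per axis for each point and combine them into
        # point information via an elementwise (CP-style) product.
        feat = (self.factors[0][:, idx[:, 0]] *
                self.factors[1][:, idx[:, 1]] *
                self.factors[2][:, idx[:, 2]]).T     # (n_pts, n_comp)
        out = self.head(feat)
        # Pixel information would then be composited from these per-point
        # outputs along the ray, as in standard volume rendering.
        return out[:, 0], torch.sigmoid(out[:, 1:])  # density, color
```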
-
Publication No.: US20230114734A1
Publication Date: 2023-04-13
Application No.: US17699657
Filing Date: 2022-03-21
Applicant: Samsung Electronics Co., Ltd.
Inventor: Hyewon MOON, Jiyeon KIM, Minjung SON
Abstract: A method with global localization includes: extracting a feature by applying an input image to a first network; estimating a coordinate map corresponding to the input image by applying the extracted feature to a second network; and estimating a pose corresponding to the input image based on the estimated coordinate map. One or both of the first network and the second network are trained based on one or both of: a first generative adversarial network (GAN) loss determined based on a first feature extracted by the first network from a synthetic image generated from three-dimensional (3D) map data and a second feature extracted by the first network from a real image; and a second GAN loss determined based on a first coordinate map estimated by the second network from the first feature and a second coordinate map estimated by the second network from the second feature.
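The two adversarial losses can be sketched as follows, assuming a non-saturating GAN objective and discriminators (disc_feat, disc_coord) that the abstract does not name; this is an illustrative reading, not the claimed training procedure:

```python
import torch
import torch.nn.functional as F

def adversarial_losses(first_net, second_net, disc_feat, disc_coord,
                       synthetic_img, real_img):
    # First network: extract features from both domains.
    f_synth = first_net(synthetic_img)   # feature from the 3D-map render
    f_real = first_net(real_img)         # feature from the real image

    # Second network: estimate a coordinate map from each feature.
    c_synth = second_net(f_synth)
    c_real = second_net(f_real)

    # First GAN loss: push real-image features toward the synthetic-feature
    # distribution so the feature extractor becomes domain-invariant.
    gan_loss_1 = F.binary_cross_entropy_with_logits(
        disc_feat(f_real), torch.ones_like(disc_feat(f_real)))

    # Second GAN loss: the same idea one stage later, on the coordinate maps.
    gan_loss_2 = F.binary_cross_entropy_with_logits(
        disc_coord(c_real), torch.ones_like(disc_coord(c_real)))

    # (The synthetic-side outputs f_synth and c_synth would feed the
    # discriminator updates, which are omitted from this sketch.)
    return gan_loss_1, gan_loss_2
```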