-
21.
Publication number: US20170032567A1
Publication date: 2017-02-02
Application number: US15076817
Filing date: 2016-03-22
Applicant: Samsung Electronics Co., Ltd.
Inventor: Seungin PARK , Minsu AHN , Minjung SON , Hyong Euk LEE , Inwoo HA
CPC classification number: G06T15/80 , G06T15/005
Abstract: A three-dimensional (3D) rendering method and apparatus are disclosed. The 3D rendering apparatus determines a vertex for a first shading from among vertices of a 3D model based on characteristic information of the 3D model, performs the first shading on the determined vertex, determines a pixel area for a second shading based on reference information indicating whether the first shading is applied to at least one vertex included in the pixel area, performs the second shading on the determined pixel area, and generates a rendered image based on the first shading and the second shading.
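The two-pass hybrid pipeline this abstract describes can be sketched roughly as follows. The per-vertex complexity score, the threshold, and the use of a triangle as the "pixel area" are all illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: vertices with low "characteristic" scores get cheap
# per-vertex (first) shading; any pixel area touching an unshaded vertex
# falls back to per-pixel (second) shading.

def select_vertices_for_vertex_shading(vertices, complexity, threshold=0.5):
    """Return indices of vertices whose characteristic score is low enough
    that per-vertex shading is considered sufficient."""
    return [i for i, _ in enumerate(vertices) if complexity[i] <= threshold]

def pixels_needing_pixel_shading(triangles, vertex_shaded):
    """A pixel area (here: a triangle) needs per-pixel shading if any of
    its vertices was NOT shaded in the first (vertex) pass."""
    shaded = set(vertex_shaded)
    return [t for t in triangles if not all(v in shaded for v in t)]

verts = [(0, 0), (1, 0), (0, 1), (1, 1)]
scores = [0.1, 0.2, 0.9, 0.3]   # per-vertex "characteristic information"
tris = [(0, 1, 3), (0, 2, 3)]

first_pass = select_vertices_for_vertex_shading(verts, scores)
second_pass = pixels_needing_pixel_shading(tris, first_pass)
print(first_pass)   # [0, 1, 3]
print(second_pass)  # [(0, 2, 3)]
```

The reference information in the abstract corresponds here to the `shaded` set consulted when deciding which areas still need the second pass.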
-
22.
Publication number: US20240144584A1
Publication date: 2024-05-02
Application number: US18357400
Filing date: 2023-07-24
Applicant: Samsung Electronics Co., Ltd. , The Board of Trustees of the Leland Stanford Junior University
Inventor: Minjung SON , Jeong Joon PARK , Gordon WETZSTEIN
Abstract: A method of training a neural network model to generate a three-dimensional (3D) model of a scene includes: generating the 3D model based on a latent code; based on the 3D model, sampling a camera view including a camera position and a camera angle corresponding to the 3D model of the scene; generating a two-dimensional (2D) image based on the 3D model and the sampled camera view; and training the neural network model to, using the 3D model, generate a scene corresponding to the sampled camera view based on the generated 2D image and a real 2D image.
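The training loop in the abstract can be outlined as a toy skeleton. The generator, camera sampler, and renderer below are stand-in linear maps, and the loss is an assumed scalar reconstruction loss rather than the claimed objective.

```python
# Hypothetical skeleton of: latent code -> 3D model -> sampled camera view
# (position + angle) -> rendered 2D image -> loss against a real 2D image.
import random

def generate_3d_model(latent):           # toy "decoder": latent -> 3D params
    return [x * 2.0 for x in latent]

def sample_camera(model, rng):           # camera position + angle, per step
    return {"pos": rng.uniform(-1, 1), "angle": rng.uniform(0, 360)}

def render_2d(model, cam):               # toy renderer stand-in
    return sum(model) * 0.1 + cam["pos"]

def train_step(latent, real_image, rng):
    model = generate_3d_model(latent)
    cam = sample_camera(model, rng)
    fake = render_2d(model, cam)
    loss = (fake - real_image) ** 2      # assumed reconstruction-style loss
    return loss

rng = random.Random(0)
loss = train_step([0.1, 0.2], real_image=0.5, rng=rng)
```

In a real implementation each of these stages would be differentiable so that the loss can update the neural network model.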
-
23.
Publication number: US20240127573A1
Publication date: 2024-04-18
Application number: US18312754
Filing date: 2023-05-05
Applicant: Samsung Electronics Co., Ltd.
Inventor: Sungheon PARK , Minjung SON , Nahyup KANG , Jiyeon KIM , Seokhwan JANG
CPC classification number: G06V10/46 , G06T3/4007 , G06V10/87
Abstract: An electronic device extracts a plurality of pieces of feature data from a plurality of feature extraction models based on point information and time information, obtains spacetime feature data by interpolating the pieces of feature data, and generates scene information on the target point at the target time instant from the spacetime feature data and a view direction, based on a scene information estimation model.
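The spacetime-feature step above can be sketched under loose assumptions: features from two hypothetical per-time feature extractors are linearly interpolated, then combined with a view direction by a toy scene-information head. The interpolation weights and model shapes are assumptions.

```python
# Hedged sketch of spacetime feature interpolation and scene estimation.

def interpolate_features(feat_a, feat_b, t):
    """Linear interpolation between two feature vectors at time t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(feat_a, feat_b)]

def estimate_scene(spacetime_feat, view_dir):
    """Toy scene-information head: sum of features plus a view term."""
    return sum(spacetime_feat) + 0.5 * view_dir

f_t0 = [1.0, 2.0]   # features at the earlier time sample
f_t1 = [3.0, 4.0]   # features at the later time sample
st = interpolate_features(f_t0, f_t1, t=0.5)
print(st)                       # [2.0, 3.0]
print(estimate_scene(st, 1.0))  # 5.5
```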
-
24.
Publication number: US20220108136A1
Publication date: 2022-04-07
Application number: US17190965
Filing date: 2021-03-03
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Minjung SON , Hyun Sung CHANG
Abstract: A processor-implemented method using a neural network (NN) includes: receiving input data; and determining information inferred from the input data based on state information about a state in which the NN is activated in response to the input data, wherein an embedding vector generated by encoding the input data using at least a portion of the NN comprises information used to reconstruct a first partial region of the input data with a first accuracy and to reconstruct a second partial region of the input data with a second accuracy, and wherein the first partial region is adaptively determined based on either one or both of the inferred information and the embedding vector.
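The two-accuracy reconstruction idea can be illustrated with a toy embedding that keeps a detailed copy of an adaptively chosen region and only a coarse summary of the rest. The region-selection rule here (largest-magnitude elements) is a stand-in for "determined based on the inferred information and the embedding vector".

```python
# Hypothetical illustration: high accuracy for a chosen region, low
# accuracy (a single mean value) for everything else.

def encode(x, detail_region):
    coarse = sum(x) / len(x)                   # low-accuracy summary
    detail = {i: x[i] for i in detail_region}  # high-accuracy copy
    return {"coarse": coarse, "detail": detail}

def decode(emb, n):
    return [emb["detail"].get(i, emb["coarse"]) for i in range(n)]

x = [1.0, 9.0, 2.0, 8.0]
region = sorted(range(len(x)), key=lambda i: -abs(x[i]))[:2]  # adaptive pick
emb = encode(x, region)
print(sorted(region))        # [1, 3]
print(decode(emb, len(x)))   # [5.0, 9.0, 5.0, 8.0]
```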
-
25.
Publication number: US20210182687A1
Publication date: 2021-06-17
Application number: US16911784
Filing date: 2020-06-25
Applicant: Samsung Electronics Co., Ltd.
Inventor: Minjung SON , Hyun Sung CHANG
Abstract: A processor-implemented neural network operating method includes: obtaining a neural network pre-trained in a source domain and a first style feature of the source domain; extracting a second style feature of a target domain from received input data of the target domain using the neural network; performing domain adaptation of the input data by performing style matching of the input data based on the first style feature of the source domain and the second style feature of the target domain; and processing the style-matched input data using the neural network.
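The style-matching step can be sketched if we assume "style" means per-channel feature statistics, as in adaptive instance normalization; the abstract itself only says that style features of the two domains are matched.

```python
# Hedged sketch: re-normalize target-domain features to the source
# domain's first- and second-order statistics (mean / std).
import statistics

def style_match(target_feat, source_mean, source_std, eps=1e-5):
    t_mean = statistics.fmean(target_feat)
    t_std = statistics.pstdev(target_feat) + eps  # eps avoids divide-by-zero
    return [(x - t_mean) / t_std * source_std + source_mean
            for x in target_feat]

target = [10.0, 12.0, 14.0]   # features extracted from target-domain input
matched = style_match(target, source_mean=0.0, source_std=1.0)
print([round(x, 3) for x in matched])  # [-1.225, 0.0, 1.225]
```

After this transform the target features carry source-domain statistics, so the pre-trained network can process them without retraining.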
-
26.
Publication number: US20190188882A1
Publication date: 2019-06-20
Application number: US15952389
Filing date: 2018-04-13
Applicant: Samsung Electronics Co., Ltd.
Inventor: Minjung SON , Hyun Sung CHANG , Donghoon SAGONG
CPC classification number: G06T9/002 , G06F3/048 , G06N3/0454 , G06N3/0472 , G06N3/08 , G06T3/0012 , G06T3/20 , G06T3/60
Abstract: A method and apparatus for processing an image interaction are provided. The apparatus extracts, using an encoder, an input feature from an input image, converts the input feature to a second feature based on an interaction for an application to the input image, and generates, using a decoder, a result image from the second feature.
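The encoder-transform-decoder pipeline above can be outlined minimally. The "interaction" (e.g. a user drag or rotation) is modeled as a hypothetical parameterized shift on the feature; the encoder and decoder are toy stand-ins.

```python
# Sketch of: encoder -> input feature, interaction-dependent transform
# to a second feature, decoder -> result image.

def encoder(image):                   # image -> feature vector
    return [p / 255.0 for p in image]

def apply_interaction(feature, shift):
    """Convert the input feature to the 'second feature' according to
    the requested interaction (here: an assumed cyclic shift)."""
    return feature[shift:] + feature[:shift]

def decoder(feature):                 # feature -> result image
    return [round(f * 255) for f in feature]

img = [0, 51, 102, 153]
result = decoder(apply_interaction(encoder(img), shift=1))
print(result)  # [51, 102, 153, 0]
```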
-
27.
Publication number: US20190122414A1
Publication date: 2019-04-25
Application number: US15918054
Filing date: 2018-03-12
Applicant: Samsung Electronics Co., Ltd.
Inventor: Donghoon SAGONG , Jiyeon KIM , Minjung SON , Hyun Sung CHANG
IPC: G06T15/00
Abstract: A method and apparatus for generating a virtual object are provided. The method includes acquiring a point cloud of an object for which a virtual object is to be generated, determining shape attribute information of the object based on an image of the object, changing a position of at least one point in the point cloud based on the shape attribute information, and generating the virtual object based on the changed point cloud.
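The point-adjustment step can be sketched with a single hypothetical shape attribute: a "flatness" value that pulls points toward a plane. The real attribute set in the patent is not specified this simply.

```python
# Hedged sketch: move point positions according to shape-attribute
# information inferred from an image of the object.

def adjust_points(points, flatness):
    """Move each (x, y, z) point's z toward 0 by the flatness factor."""
    return [(x, y, z * (1.0 - flatness)) for x, y, z in points]

cloud = [(0.0, 0.0, 1.0), (1.0, 0.0, -0.5)]
adjusted = adjust_points(cloud, flatness=0.5)
print(adjusted)  # [(0.0, 0.0, 0.5), (1.0, 0.0, -0.25)]
```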
-
28.
Publication number: US20180107896A1
Publication date: 2018-04-19
Application number: US15592461
Filing date: 2017-05-11
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Donghoon SAGONG , Minjung SON , Hyun Sung CHANG , Young Hun SUNG
CPC classification number: G06K9/6212 , G06K9/4604 , G06K9/4628 , G06K9/4642 , G06K9/4671 , G06K9/6256 , G06K9/66 , G06N3/0454 , G06N3/084 , G06T7/11 , G06T7/187 , G06T2207/20021 , G06T2207/20081
Abstract: Provided are methods and apparatuses related to material recognition and training. A training apparatus for material recognition generates training data associated with a material by generating a texture image having a texture attribute from an object image and recognizing material information from the texture image using a material model.
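The two steps named in the abstract can be sketched as a toy: crop a texture patch from an object image, then classify its material from simple statistics. The nearest-mean "material model" here is a stand-in for the learned model in the patent.

```python
# Hedged sketch: texture extraction followed by material recognition.

def extract_texture(image, top, left, size):
    """Crop a size x size texture patch from a 2D intensity image."""
    return [row[left:left + size] for row in image[top:top + size]]

def recognize_material(patch, material_means):
    """Nearest-mean classification on patch brightness (an assumption)."""
    mean = sum(sum(r) for r in patch) / (len(patch) * len(patch[0]))
    return min(material_means, key=lambda m: abs(material_means[m] - mean))

img = [[10, 10, 200, 200],
       [10, 10, 200, 200]]
patch = extract_texture(img, top=0, left=2, size=2)
label = recognize_material(patch, {"wood": 50, "metal": 190})
print(patch)  # [[200, 200], [200, 200]]
print(label)  # metal
```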