-
Publication No.: US11688105B2
Publication Date: 2023-06-27
Application No.: US17109762
Filing Date: 2020-12-02
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Tianchu Guo , Youngsung Kim , Hui Zhang , Byungin Yoo , Chang Kyu Choi , Jae-Joon Han , Jingtao Xu , Deheng Qian
CPC classification number: G06T11/00 , G06F17/10 , G06T7/248 , G06V10/757 , G06V40/166 , G06V40/168 , G06V40/171 , G06V40/174 , G06T2207/30201
Abstract: A processor-implemented method of processing a facial expression image includes controlling a camera to capture a first facial expression image and a second facial expression image, acquiring a first expression feature of the first facial expression image, acquiring a second expression feature of the second facial expression image, generating a new expression feature dependent on differences between the acquired first expression feature and the acquired second expression feature, and adjusting a target facial expression image based on the new expression feature.
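The abstract describes a feature-difference pipeline; the short Python sketch below illustrates that flow under the assumption that expression features are fixed-length tensors and that some decoder (here a passed-in callable, not defined in the patent) renders the adjusted image. The weighting factor is likewise an illustrative assumption.

```python
# Minimal sketch of the described flow; names and the weighting are assumptions.
import torch

def generate_new_expression_feature(feat1: torch.Tensor,
                                     feat2: torch.Tensor,
                                     weight: float = 0.5) -> torch.Tensor:
    """Form a new expression feature that depends on the difference
    between the first and second acquired expression features."""
    return feat1 + weight * (feat2 - feat1)

def adjust_target_image(target_img: torch.Tensor,
                        new_feat: torch.Tensor,
                        decoder) -> torch.Tensor:
    """Adjust the target facial expression image using the new feature;
    `decoder` is an assumed image-conditioned generator, not from the patent."""
    return decoder(target_img, new_feat)
```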
-
Publication No.: US11663728B2
Publication Date: 2023-05-30
Application No.: US17146752
Filing Date: 2021-01-12
Applicant: Samsung Electronics Co., Ltd.
Inventor: Jiaqian Yu , Jingtao Xu , Yiwei Chen , Byung In Yoo , Chang Kyu Choi , Hana Lee , Jaejoon Han , Qiang Wang
IPC: G06T7/50 , H01L27/146
CPC classification number: G06T7/50 , H01L27/14605 , G06T2207/10024
Abstract: A depth estimation method and apparatus are provided. The depth estimation method includes obtaining an image from an image sensor comprising upper pixels, each comprising N sub-pixels, obtaining N sub-images respectively corresponding to the N sub-pixels from the image, obtaining a viewpoint difference between the N sub-images using a first neural network, and obtaining a depth map of the image based on the viewpoint difference using a second neural network.
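As a rough illustration of the two-network arrangement, the sketch below assumes a 2x2 sub-pixel layout (N = 4) and uses two tiny stand-in convolutional networks; none of the shapes or layer choices come from the patent.

```python
# Illustrative sketch: sub-image extraction, a viewpoint-difference network,
# and a depth network. All architecture details are assumptions.
import torch
import torch.nn as nn

def split_subimages(image: torch.Tensor, n: int = 2) -> torch.Tensor:
    # image: (B, 1, H, W), where each upper pixel covers an n x n block
    subs = [image[:, :, i::n, j::n] for i in range(n) for j in range(n)]
    return torch.cat(subs, dim=1)                 # (B, n*n, H/n, W/n)

disparity_net = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 1, 3, padding=1))
depth_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))

subs = split_subimages(torch.rand(1, 1, 64, 64))  # N sub-images from the sensor image
viewpoint_diff = disparity_net(subs)              # first neural network
depth_map = depth_net(viewpoint_diff)             # second neural network
```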
-
Publication No.: US11636575B2
Publication Date: 2023-04-25
Application No.: US17507872
Filing Date: 2021-10-22
Applicant: Samsung Electronics Co., Ltd.
Inventor: Chang Kyu Choi , Youngjun Kwak , Seohyung Lee
Abstract: A processor-implemented method of generating feature data includes: receiving an input image; generating, based on a pixel value of the input image, at least one low-bit image having a number of bits per pixel lower than a number of bits per pixel of the input image; and generating, using at least one neural network, feature data corresponding to the input image from the at least one low-bit image.
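A minimal sketch of the low-bit preprocessing step described above; the 2-bit depth, the rescaling to [0, 1], and the tiny feature network are illustrative assumptions rather than the patented design.

```python
# Quantize the input image to fewer bits per pixel, then extract feature data.
import torch
import torch.nn as nn

def to_low_bit(image_8bit: torch.Tensor, bits: int = 2) -> torch.Tensor:
    """Quantize an 8-bit image (values 0..255) to `bits` bits per pixel."""
    levels = 2 ** bits
    q = torch.clamp((image_8bit.float() / 256.0 * levels).floor(), 0, levels - 1)
    return q / (levels - 1)                       # rescale to [0, 1] for the network

feature_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten())

low_bit = to_low_bit(torch.randint(0, 256, (1, 1, 32, 32)))
features = feature_net(low_bit)                   # feature data for the input image
```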
-
Publication No.: US11514602B2
Publication Date: 2022-11-29
Application No.: US16722304
Filing Date: 2019-12-20
Applicant: Samsung Electronics Co., Ltd.
Inventor: Tianchu Guo , Hui Zhang , Xiabing Liu , Chang Kyu Choi , Jaejoon Han , Yongchao Liu
Abstract: A gaze estimation method includes receiving, by a processor, input data including a current image and a previous image each including a face of a user, determining, by the processor and based on the input data, a gaze mode indicating a relative movement between the user and a camera that captured the current image and the previous image, and estimating, by the processor, a gaze of the user based on the determined gaze mode, wherein the determined gaze mode is one of a plurality of gaze modes comprising a stationary mode and a motion mode.
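The mode-switching idea can be sketched as follows; the frame-difference motion measure, the threshold, and the per-mode estimators (passed in as callables) are all assumptions made for illustration, not the patent's method of determining the gaze mode.

```python
# Pick a gaze mode from the relative movement between frames, then estimate gaze.
import torch

STATIONARY, MOTION = "stationary", "motion"

def determine_gaze_mode(current: torch.Tensor, previous: torch.Tensor,
                        threshold: float = 2.0) -> str:
    """Classify relative movement between user and camera from the mean
    absolute frame difference (measure and threshold are assumed)."""
    diff = (current.float() - previous.float()).abs().mean()
    return MOTION if diff.item() > threshold else STATIONARY

def estimate_gaze(current, previous, stationary_model, motion_model):
    mode = determine_gaze_mode(current, previous)
    model = motion_model if mode == MOTION else stationary_model
    return model(current)                         # gaze estimated under the selected mode
```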
-
Publication No.: US11449971B2
Publication Date: 2022-09-20
Application No.: US16223382
Filing Date: 2018-12-18
Applicant: Samsung Electronics Co., Ltd.
Inventor: Minsu Ko , Seungju Han , Jaejoon Han , Jihye Kim , SungUn Park , Chang Kyu Choi
Abstract: Disclosed is an image fusion method and apparatus. The fusion method includes detecting first feature points of an object in a first image frame; transforming the first image frame based on the detected first feature points and predefined reference points to generate a transformed first image frame; detecting second feature points of the object in a second image frame; transforming the second image frame based on the detected second feature points and the predefined reference points to generate a transformed second image frame; and generating a combined image by combining the transformed first image frame and the transformed second image frame.
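One possible reading of this flow in Python with OpenCV: the feature points come from an assumed external detector, a similarity transform to the predefined reference points stands in for the patent's unspecified transformation, and simple averaging stands in for the combining step.

```python
# Align both frames to common reference points, then blend them.
import cv2
import numpy as np

def align_to_reference(frame: np.ndarray, points: np.ndarray,
                       reference_points: np.ndarray) -> np.ndarray:
    """Warp `frame` so its detected points land on the reference points."""
    m, _ = cv2.estimateAffinePartial2D(points, reference_points)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, m, (w, h))

def fuse_frames(frame1, frame2, detect_points, reference_points):
    warped1 = align_to_reference(frame1, detect_points(frame1), reference_points)
    warped2 = align_to_reference(frame2, detect_points(frame2), reference_points)
    return cv2.addWeighted(warped1, 0.5, warped2, 0.5, 0)   # combined image
```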
-
Publication No.: US11423702B2
Publication Date: 2022-08-23
Application No.: US17089902
Filing Date: 2020-11-05
Applicant: Samsung Electronics Co., Ltd.
Inventor: SeungJu Han , SungUn Park , JaeJoon Han , Jinwoo Son , ChangYong Son , Minsu Ko , Jihye Kim , Chang Kyu Choi
Abstract: An object recognition apparatus and method are provided. The apparatus includes a processor configured to, in response to a verification of an input image failing after the object recognition of the input image succeeds, verify a target image using an object model and reference intermediate data extracted by a partial layer of the object model during the object recognition of the input image, and to perform an additional verification of the target image in response to the target image being verified.
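One way to picture the intermediate-data comparison is sketched below; the partial layers are passed in as a module, and the cosine-similarity measure and threshold are assumptions, not details from the patent.

```python
# Compare the target image's intermediate features against reference
# intermediate data cached from the earlier object recognition.
import torch
import torch.nn.functional as F

def verify_target(target_img: torch.Tensor,
                  partial_layers: torch.nn.Module,
                  reference_intermediate: torch.Tensor,
                  threshold: float = 0.7) -> bool:
    with torch.no_grad():
        target_feat = partial_layers(target_img).flatten(1)
    sim = F.cosine_similarity(target_feat, reference_intermediate.flatten(1))
    return bool((sim > threshold).all())          # verified -> run additional verification
```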
-
Publication No.: US11366978B2
Publication Date: 2022-06-21
Application No.: US16295400
Filing Date: 2019-03-07
Applicant: Samsung Electronics Co., Ltd.
Inventor: Insoo Kim , Kyuhong Kim , Chang Kyu Choi
IPC: G10L15/16 , G10L17/02 , G10L17/04 , G06K9/00 , G06N20/10 , G06K9/62 , G06N3/08 , G10L15/02 , G10L17/18
Abstract: A data recognition method includes: extracting a feature map from input data based on a feature extraction layer of a data recognition model; pooling component vectors from the feature map based on a pooling layer of the data recognition model; and generating an embedding vector by recombining the component vectors based on a combination layer of the data recognition model.
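The three named stages map naturally onto three small modules; the sketch below uses arbitrary shapes and a learned linear layer as the recombination, purely for illustration.

```python
# Feature-extraction layer -> pooling layer -> combination layer, as in the abstract.
import torch
import torch.nn as nn

feature_extraction = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
pooling = nn.AdaptiveAvgPool2d((4, 1))            # pools component vectors from the map
combination = nn.Linear(16 * 4, 64)               # recombines components into an embedding

x = torch.rand(1, 1, 40, 100)                     # e.g. a spectrogram-like input (assumed)
feature_map = feature_extraction(x)
components = pooling(feature_map).flatten(1)      # component vectors
embedding = combination(components)               # embedding vector
```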
-
Publication No.: US11222263B2
Publication Date: 2022-01-11
Application No.: US15630610
Filing Date: 2017-06-22
Applicant: Samsung Electronics Co., Ltd.
Inventor: Changyong Son , Jinwoo Son , Byungin Yoo , Chang Kyu Choi , Jae-Joon Han
Abstract: A lightened neural network method and apparatus. The neural network apparatus includes a processor configured to generate a neural network with a plurality of layers including plural nodes by applying lightened weighted connections between neighboring nodes in neighboring layers of the neural network to interpret input data applied to the neural network, wherein lightened weighted connections of at least one of the plurality of layers include weighted connections that have values equal to zero for respective non-zero values whose absolute values are less than an absolute value of a non-zero value. The lightened weighted connections also include weighted connections that have values whose absolute values are no greater than an absolute value of another non-zero value, the lightened weighted connections being lightened weighted connections of trained final weighted connections of a trained neural network whose absolute maximum values are greater than the absolute value of the other non-zero value.
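The described lightening amounts to zeroing small-magnitude weights and capping large magnitudes, which can be sketched in a few lines; the two cutoff values below are illustrative assumptions, not the values chosen in the patent.

```python
# "Lighten" trained weights: prune values below a small cutoff, cap the rest.
import torch

def lighten_weights(w: torch.Tensor, zero_cutoff: float = 0.05,
                    max_abs: float = 1.0) -> torch.Tensor:
    w = torch.where(w.abs() < zero_cutoff, torch.zeros_like(w), w)  # small values -> zero
    return torch.clamp(w, -max_abs, max_abs)                         # cap absolute values
```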
-
Publication No.: US11093805B2
Publication Date: 2021-08-17
Application No.: US16426315
Filing Date: 2019-05-30
Applicant: Samsung Electronics Co., Ltd.
Inventor: Seungju Han , Sungjoo Suh , Jaejoon Han , Chang Kyu Choi
Abstract: A method of recognizing a feature of an image may include receiving an input image including an object; extracting first feature information using a first layer of a neural network, the first feature information indicating a first feature corresponding to the input image among a plurality of first features; extracting second feature information using a second layer of the neural network, the second feature information indicating a second feature among a plurality of second features, the indicated second feature corresponding to the first feature information; and recognizing an element corresponding to the object based on the first feature information and the second feature information.
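A compact sketch of the two-stage feature extraction follows; the layer sizes, the way the second layer conditions on the first feature, and the classifier head are placeholders rather than the patented network.

```python
# First layer gives a coarse feature, second layer refines it, and both
# features drive the recognition of the element corresponding to the object.
import torch
import torch.nn as nn

first_layer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten())
second_layer = nn.Linear(16, 32)
classifier = nn.Linear(16 + 32, 10)               # 10 classes assumed for illustration

img = torch.rand(1, 3, 64, 64)
first_feature = first_layer(img)                  # first feature information
second_feature = second_layer(first_feature)      # second feature, tied to the first
element = classifier(torch.cat([first_feature, second_feature], dim=1))
```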
-
Publication No.: US10789455B2
Publication Date: 2020-09-29
Application No.: US16148587
Filing Date: 2018-10-01
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Byungin Yoo , Youngjun Kwak , Jungbae Kim , Jinwoo Son , Changkyo Lee , Chang Kyu Choi , Jaejoon Han
Abstract: A liveness test method and apparatus are disclosed. A processor-implemented liveness test method includes extracting an interest region of an object from a portion of the object in an input image, performing a liveness test on the object using a neural network model-based liveness test model, the liveness test model using image information of the interest region as a first input and determining liveness based at least on texture information extracted from the interest region, and indicating a result of the liveness test.
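A minimal sketch of that flow: the interest-region box is assumed to come from an external face or part detector, and the tiny CNN stands in for the neural network model-based liveness test model.

```python
# Crop the interest region, score it with a small liveness network, report the result.
import torch
import torch.nn as nn

liveness_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                               nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                               nn.Linear(8, 1), nn.Sigmoid())

def liveness_test(image: torch.Tensor, box) -> bool:
    """`image` is a single (1, 3, H, W) frame; `box` comes from an assumed detector."""
    x1, y1, x2, y2 = box
    interest_region = image[:, :, y1:y2, x1:x2]   # extracted interest region
    score = liveness_model(interest_region)       # relies on texture cues in the region
    return bool(score.item() > 0.5)               # indicated liveness result
```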