Abstract:
A three-dimensional (3D) display device for displaying a 3D image using at least one of a gaze direction of a user and a gravity direction includes a gaze direction measuring unit to measure the gaze direction, a data obtaining unit to obtain 3D image data for the 3D image, a viewpoint information obtaining unit to obtain information relating to a viewpoint of the 3D image, a data transform unit to transform the 3D image data, based on the gaze direction and the information relating to the viewpoint of the 3D image, and a display unit to display the 3D image, based on the transformed 3D image data.
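The abstract names the units but not how the transform is computed; the sketch below is a minimal, hypothetical illustration of re-orienting 3D image data so its viewpoint axis follows a measured gaze direction. All names and the Rodrigues-rotation choice are assumptions, not the patented implementation.

```python
# Hypothetical sketch of the data transform step: rotate the 3D image data so
# that the authored viewpoint direction aligns with the measured gaze direction.
import numpy as np

def rotation_between(src, dst):
    """Rotation matrix turning unit vector src onto unit vector dst (Rodrigues)."""
    src, dst = src / np.linalg.norm(src), dst / np.linalg.norm(dst)
    v = np.cross(src, dst)
    c = float(np.dot(src, dst))
    if np.isclose(c, -1.0):
        raise ValueError("anti-parallel directions: rotation axis must be chosen explicitly")
    k = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + k + (k @ k) / (1.0 + c)

def transform_for_display(points, gaze_dir, viewpoint_dir):
    """Re-orient the 3D image data (N x 3 points) toward the measured gaze."""
    r = rotation_between(viewpoint_dir, gaze_dir)
    return points @ r.T

# usage: data authored for a straight-ahead viewpoint, viewer gazing down-right
points = np.random.rand(100, 3)
transformed = transform_for_display(points,
                                    gaze_dir=np.array([0.3, -0.2, 1.0]),
                                    viewpoint_dir=np.array([0.0, 0.0, 1.0]))
```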
Abstract:
A method of segmenting an object from an image includes receiving an input image including an object; generating an output image corresponding to the object from the input image using an image model; and extracting an object image from the output image.
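A minimal sketch of the three steps the abstract lists, assuming the "image model" is any callable mapping an input image to a per-pixel object map (e.g. a segmentation network); the thresholding used for the extraction step is an illustrative stand-in.

```python
# Sketch: receive input image -> generate output image with a model -> extract object.
import numpy as np

def segment_object(input_image, image_model, threshold=0.5):
    """Generate an object map with the model, then extract the object image."""
    output_image = image_model(input_image)          # per-pixel object likelihood
    mask = output_image > threshold                  # keep only object pixels
    object_image = np.where(mask, input_image, 0)    # background set to zero
    return object_image, mask

# usage with a dummy "model" that simply normalizes intensity
image = np.random.randint(0, 256, (64, 64)).astype(np.float32)
obj, mask = segment_object(image, image_model=lambda x: x / 255.0)
```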
Abstract:
A mobile device configured for data transmission to a corresponding mobile device is provided. The mobile device may include a gesture input unit configured to receive a gesture, a gesture determination unit configured to determine whether the gesture corresponds to a preset gesture associated with a command to perform data transmission to the corresponding mobile device, and a data communication unit configured to transmit a data transmission request to the corresponding mobile device based on a result of the determination, configured to receive, from the corresponding mobile device, an acceptance signal indicating an input of an acceptance gesture at the corresponding mobile device, and configured to transmit data to the corresponding mobile device in response to receiving the acceptance signal.
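The request/accept handshake described above can be sketched as below. The gesture names, the `MobileDevice` class, and the toy loopback link are hypothetical; they only illustrate the "preset gesture triggers a request, acceptance signal releases the data" flow.

```python
# Hypothetical sketch of gesture-triggered data transmission with an acceptance signal.
GESTURE_SEND = "swipe_toward_peer"      # preset gesture bound to "transmit data"

class MobileDevice:
    def __init__(self, name, link):
        self.name = name
        self.link = link                # stands in for the data communication unit

    def on_gesture(self, gesture, payload):
        # gesture determination unit: only the preset gesture triggers a request
        if gesture != GESTURE_SEND:
            return False
        self.link.send(self.name, {"type": "request"})
        reply = self.link.receive(self.name)
        if reply.get("type") == "accept":           # acceptance signal from the peer
            self.link.send(self.name, {"type": "data", "payload": payload})
            return True
        return False

class LoopbackLink:
    """Toy link whose peer always answers a request with an acceptance signal."""
    def send(self, sender, message):
        self._last = message
    def receive(self, sender):
        return {"type": "accept"} if self._last.get("type") == "request" else {}

device = MobileDevice("phone_a", LoopbackLink())
assert device.on_gesture(GESTURE_SEND, payload=b"photo.jpg")
```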
Abstract:
An apparatus and method for parsing a human body image may be implemented by acquiring a depth image including a human body, and detecting a plurality of points in the acquired depth image by conducting a minimum energy skeleton scan on the depth image.
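The abstract does not define the "minimum energy skeleton scan"; as a rough stand-in, the sketch below traces a minimum-cumulative-energy path across a depth image with dynamic programming and returns one candidate point per column. This is an illustration of a low-energy scan, not the patented procedure.

```python
# Illustrative minimum-energy path over a depth image (dynamic programming).
import numpy as np

def min_energy_path(depth):
    """Return one row index per column tracing the lowest-cumulative-depth path."""
    h, w = depth.shape
    cost = depth.astype(np.float64).copy()
    for x in range(1, w):                       # accumulate column by column
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)
            cost[y, x] += cost[lo:hi, x - 1].min()
    path = [int(np.argmin(cost[:, -1]))]
    for x in range(w - 1, 0, -1):               # backtrack the cheapest neighbors
        y = path[-1]
        lo, hi = max(0, y - 1), min(h, y + 2)
        path.append(lo + int(np.argmin(cost[lo:hi, x - 1])))
    return list(reversed(path))

depth_image = np.random.rand(48, 64)            # stand-in for an acquired depth map
candidate_points = min_energy_path(depth_image) # one low-energy point per column
```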
Abstract:
An estimator training method and a pose estimating method using a depth image are disclosed, in which the estimator training method may train an estimator configured to estimate a pose of an object, based on an association between synthetic data and real data, and the pose estimating method may estimate the pose of the object using the trained estimator.
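How the association between synthetic and real data is formed is not given; a hedged sketch is shown below in which each unlabeled real sample borrows the pose label of its nearest synthetic neighbor and a simple linear estimator is fit on the union. Both the association rule and the estimator are illustrative assumptions.

```python
# Sketch: associate real depth features with synthetic pose labels, then train.
import numpy as np

def associate_and_train(syn_x, syn_y, real_x):
    # nearest-neighbor association: each real sample takes its closest synthetic label
    d = ((real_x[:, None, :] - syn_x[None, :, :]) ** 2).sum(-1)
    real_y = syn_y[d.argmin(axis=1)]
    x = np.vstack([syn_x, real_x])
    y = np.vstack([syn_y, real_y])
    w, *_ = np.linalg.lstsq(x, y, rcond=None)    # linear pose estimator
    return w

def estimate_pose(w, depth_features):
    return depth_features @ w

syn_x, syn_y = np.random.rand(200, 16), np.random.rand(200, 3)   # features, poses
real_x = np.random.rand(50, 16)                                  # unlabeled real data
w = associate_and_train(syn_x, syn_y, real_x)
pose = estimate_pose(w, np.random.rand(1, 16))
```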
Abstract:
A method of obtaining depth information and a display apparatus may adjust a sensor area of a sensor panel based on a distance from an object, and may obtain depth information of the object based on the adjusted sensor area.
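As a rough illustration only, the sketch below assumes the active sensor area shrinks linearly as the object moves away and that depth is then read from samples inside the adjusted area; the mapping, range values, and averaging are all hypothetical.

```python
# Hypothetical sketch: adjust the sensor area from distance, then obtain depth.
import numpy as np

def adjust_sensor_area(distance_mm, panel_shape=(480, 640),
                       near_mm=200.0, far_mm=2000.0):
    """Shrink the active sensor window linearly as the object moves away."""
    t = np.clip((distance_mm - near_mm) / (far_mm - near_mm), 0.0, 1.0)
    scale = 1.0 - 0.5 * t                     # full panel when near, half when far
    h, w = panel_shape
    return int(h * scale), int(w * scale)

def depth_from_area(sensor_samples, area):
    """Average the samples that fall inside the adjusted sensor area."""
    h, w = area
    return float(sensor_samples[:h, :w].mean())

samples = np.random.rand(480, 640) * 1500.0   # stand-in raw sensor readings (mm)
area = adjust_sensor_area(distance_mm=800.0)
depth = depth_from_area(samples, area)
```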
Abstract:
Provided are an apparatus and method for detecting body parts, the method including identifying a group of sub-images relevant to a body part in an image to be detected, assigning a reliability coefficient for the body part to the sub-images in the group of sub-images based on a basic vision feature of the sub-images and an extension feature of the sub-images to neighboring regions, and detecting a location of the body part by overlaying sub-images having reliability coefficients higher than a threshold value.
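The abstract does not say how the basic vision feature and the extension feature are combined into a reliability coefficient, nor how qualifying sub-images are overlaid; the weighted sum and center averaging below are assumptions used only to make the flow concrete.

```python
# Sketch: score sub-images, keep those above threshold, overlay to locate the part.
def score_sub_image(basic_feature, extension_feature, w_basic=0.7, w_ext=0.3):
    """Reliability coefficient for one sub-image relevant to the body part."""
    return w_basic * basic_feature + w_ext * extension_feature

def detect_body_part(sub_images, threshold=0.6):
    """Overlay (here: average the centers of) sub-images scored above threshold."""
    kept = [(x, y) for x, y, basic, ext in sub_images
            if score_sub_image(basic, ext) > threshold]
    if not kept:
        return None
    cx = sum(x for x, _ in kept) / len(kept)
    cy = sum(y for _, y in kept) / len(kept)
    return cx, cy

# each tuple: (center_x, center_y, basic_feature_score, extension_feature_score)
candidates = [(10, 12, 0.9, 0.8), (11, 13, 0.8, 0.7), (40, 5, 0.2, 0.1)]
location = detect_body_part(candidates)       # roughly (10.5, 12.5)
```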
Abstract:
Disclosed is a face verification method and apparatus. The method includes analyzing a current frame of a verification image, determining a current frame state score of the verification image indicating whether the current frame is in a state predetermined as being appropriate for verification, determining whether the current frame state score satisfies a predetermined validity condition, and selectively, based on a result of the determining of whether the current frame state score satisfies the predetermined validity condition, extracting a feature from the current frame and performing verification by comparing a determined similarity between the extracted feature and a registered feature to a set verification threshold.
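A minimal sketch of this selective flow: only frames whose state score passes the validity condition reach feature extraction and matching. The scoring, embedding, and similarity functions are hypothetical stand-ins, not the disclosed components.

```python
# Sketch: gate verification on a frame state score, then compare features.
import numpy as np

def verify_frame(frame, registered_feature, extract_feature, state_score,
                 validity_threshold=0.5, verification_threshold=0.8):
    score = state_score(frame)                     # is the frame usable at all?
    if score < validity_threshold:
        return None                                # skip: wait for a better frame
    feature = extract_feature(frame)
    sim = float(np.dot(feature, registered_feature) /
                (np.linalg.norm(feature) * np.linalg.norm(registered_feature)))
    return sim >= verification_threshold           # cosine similarity vs. threshold

# usage with toy stand-ins for the scorer and the feature extractor
frame = np.random.rand(112, 112)
registered = np.random.rand(128)
result = verify_frame(frame, registered,
                      extract_feature=lambda f: np.random.rand(128),
                      state_score=lambda f: float(f.mean()))
```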
Abstract:
Disclosed is a face verification method and apparatus. A mobile device may include one or more processors configured to obtain one or more images for a user, ascertain whether any of the one or more images correspond to respective user distances, from the user to the mobile device, outside of a threshold range of distances, and selectively, based on a result of the ascertaining, perform verification using a first verification threshold for any of the one or more images ascertained to correspond to the respective user distances that are outside the threshold range of distances, and perform verification using a less strict second verification threshold for any of the one or more images that have been ascertained to not correspond to the respective user distances that are outside the threshold range of distances.
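The distance-dependent threshold selection can be sketched as below: images whose user-to-device distance falls outside the allowed range are verified against the stricter first threshold, the rest against the less strict second threshold. The distance range and threshold values are illustrative only.

```python
# Sketch: pick a stricter or laxer verification threshold from the user distance.
def pick_threshold(user_distance_cm, valid_range=(25.0, 50.0),
                   strict_threshold=0.90, relaxed_threshold=0.80):
    lo, hi = valid_range
    outside = user_distance_cm < lo or user_distance_cm > hi
    return strict_threshold if outside else relaxed_threshold

def verify(similarity, user_distance_cm):
    return similarity >= pick_threshold(user_distance_cm)

print(verify(0.85, user_distance_cm=70.0))   # outside range -> strict  -> False
print(verify(0.85, user_distance_cm=35.0))   # inside range  -> relaxed -> True
```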
Abstract:
Disclosed are a face verifying method and apparatus. The face verifying method includes detecting a face region from an input image, generating a synthesized face image by combining image information of the face region and reference image information, based on a determined masking region, extracting one or more face features from the face region and the synthesized face image, performing a verification operation with respect to the one or more face features and predetermined registration information, and indicating whether verification of the input image is successful based on a result of the performed verification operation.
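A minimal sketch of the synthesis step: inside the masking region, face pixels are replaced by reference-image pixels before features are extracted from both the original face region and the synthesized image. The mask choice, feature extractor, and similarity comparison are illustrative assumptions.

```python
# Sketch: synthesize a face image under a masking region, then verify features.
import numpy as np

def synthesize_face(face_region, reference_image, mask):
    """Combine face-region and reference-image information under the masking region."""
    return np.where(mask, reference_image, face_region)

def verify_face(face_region, reference_image, mask, extract, registered,
                threshold=0.8):
    synthesized = synthesize_face(face_region, reference_image, mask)
    features = np.concatenate([extract(face_region), extract(synthesized)])
    sim = float(np.dot(features, registered) /
                (np.linalg.norm(features) * np.linalg.norm(registered) + 1e-8))
    return sim >= threshold

face = np.random.rand(112, 112)
reference = np.full_like(face, 0.5)
mask = np.zeros_like(face, dtype=bool)
mask[70:, :] = True                               # e.g. an occluded chin region
ok = verify_face(face, reference, mask,
                 extract=lambda x: x.mean(axis=0),
                 registered=np.random.rand(224))
```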