Abstract:
A method and apparatus for estimating a pose of a user using a depth image are provided. The method includes recognizing the pose of the user from the depth image and tracking the pose of the user using a user model, the recognizing and the tracking being performed exclusively of one another to enhance the precision of pose estimation.
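The abstract above implies a mode switch in which recognition and tracking run exclusively of one another. A minimal sketch of that control flow, assuming hypothetical `recognize`, `track`, and `confidence` callables supplied by the caller (none are named in the abstract):

```python
# Sketch of the exclusive recognition/tracking switch: tracking is used
# while confidence in the current pose holds, otherwise the system falls
# back to recognizing the pose directly from the frame.
def estimate_pose(frames, recognize, track, confidence, threshold=0.5):
    pose, poses = None, []
    for frame in frames:
        if pose is None or confidence(pose, frame) < threshold:
            pose = recognize(frame)        # recognition mode
        else:
            pose = track(pose, frame)      # tracking mode (user model)
        poses.append(pose)
    return poses
```

With toy scalar "poses" (recognition returns the frame value, tracking averages), a sudden jump in the frame value drops confidence and forces re-recognition.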
Abstract:
A fingerprint recognition based authentication method and apparatus is disclosed. The authentication apparatus may obtain an input fingerprint from a touch input of a user, determine an input number corresponding to the input fingerprint using preregistered fingerprint-number mapping information, and authenticate the user based on whether an input number sequence corresponding to an input fingerprint sequence is identical to a reference number sequence.
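The number-sequence comparison described above can be sketched as follows; the finger labels, mapping dictionary, and function names are illustrative stand-ins for matched fingerprints, not details from the patent:

```python
# Hypothetical sketch: map each input fingerprint to its enrolled number
# using the fingerprint-number mapping information, then compare the
# resulting number sequence to the reference number sequence.
def authenticate_sequence(input_fingerprints, mapping, reference_sequence):
    try:
        input_numbers = [mapping[fp] for fp in input_fingerprints]
    except KeyError:
        return False  # an unenrolled fingerprint cannot authenticate
    return input_numbers == reference_sequence

# Example mapping: thumb -> 1, index finger -> 7
mapping = {"thumb": 1, "index": 7}
print(authenticate_sequence(["thumb", "index", "thumb"], mapping, [1, 7, 1]))  # True
print(authenticate_sequence(["index", "thumb", "thumb"], mapping, [1, 7, 1]))  # False
```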
Abstract:
A fingerprint recognition method includes generating an enrollment modified image by modifying a fingerprint image corresponding to a fingerprint to be enrolled; extracting enrollment property information from the fingerprint image; generating mapping information that maps the enrollment modified image to the enrollment property information; and storing the enrollment modified image and the enrollment property information.
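The enrollment flow above can be sketched as below. The abstract does not specify the modification or the property information, so this uses toy stand-ins (pixel inversion as the modification, simple summary statistics as properties):

```python
# Illustrative enrollment sketch: generate a modified image, extract
# property information, and store mapping information tying them together.
def modify_image(image):
    return [255 - p for p in image]  # stand-in for the image modification

def extract_properties(image):
    return {"mean": sum(image) / len(image), "size": len(image)}  # stand-in

def enroll(fingerprint_image, store):
    modified = modify_image(fingerprint_image)
    props = extract_properties(fingerprint_image)
    entry_id = len(store)
    # mapping information: the modified image mapped to its property info
    store[entry_id] = {"modified_image": modified, "properties": props}
    return entry_id
```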
Abstract:
An object recognition system is provided. The object recognition system may include an input unit to receive, as an input, a depth image representing an object to be analyzed, and a processing unit to recognize a visible object part and a hidden object part of the object from the depth image using a classification tree. The object recognition system may further include a classification tree learning apparatus to generate the classification tree.
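A toy sketch of the classification step: a tiny tree whose leaves hold both a visible-part and a hidden-part label, so one traversal of the tree yields both recognitions. The thresholds, labels, and dictionary layout are invented for illustration:

```python
# Hypothetical classification tree: internal nodes split on a depth value,
# leaves store a (visible part, hidden part) pair.
def classify(node, depth_value):
    if "leaf" in node:
        return node["leaf"]  # (visible object part, hidden object part)
    branch = "left" if depth_value < node["threshold"] else "right"
    return classify(node[branch], depth_value)

tree = {
    "threshold": 1.5,
    "left": {"leaf": ("hand", "forearm")},
    "right": {"leaf": ("head", "torso")},
}
```

In the real system such a tree would be produced by the classification tree learning apparatus rather than written by hand.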
Abstract:
A display apparatus and method may be used to estimate a depth distance from an external object to a display panel of the display apparatus. The display apparatus may acquire a plurality of images by detecting light that is input from the external object and passes through apertures formed in the display panel, may generate one or more refocused images, and may calculate the depth from the external object to the display panel using the plurality of acquired images and the one or more refocused images.
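One common way refocused images yield depth is to select the candidate depth whose refocused image is sharpest; the sketch below assumes that approach (the abstract does not fix the criterion) and uses a crude gradient-sum sharpness measure on 1-D "images":

```python
# Toy depth-from-refocus sketch: among refocused images generated for
# candidate depths, pick the depth whose image is sharpest.
def sharpness(image):
    # sum of absolute differences between neighboring pixels
    return sum(abs(a - b) for a, b in zip(image, image[1:]))

def estimate_depth(refocused_by_depth):
    return max(refocused_by_depth, key=lambda d: sharpness(refocused_by_depth[d]))

# Candidate depths mapped to their refocused images; 20 is in focus here
refocused = {10: [1, 1, 1, 1], 20: [0, 9, 0, 9], 30: [2, 3, 2, 3]}
best = estimate_depth(refocused)  # 20
```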
Abstract:
An apparatus for processing a depth image using a relative angle between an image sensor and a target object includes an object image extractor to extract an object image from the depth image, a relative angle calculator to calculate a relative angle between an image sensor used to photograph the depth image and a target object corresponding to the object image, and an object image rotator to rotate the object image based on the relative angle and a reference angle.
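The rotation step can be sketched with plain 2-D rotation: the object points are rotated by the difference between the reference angle and the calculated relative angle. Function and parameter names are illustrative:

```python
import math

# Minimal sketch: rotate extracted object points so that the object's
# relative angle matches the reference angle.
def rotate_to_reference(points, relative_angle, reference_angle):
    theta = reference_angle - relative_angle  # correction to apply
    c, s = math.cos(theta), math.sin(theta)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

# Object photographed at 90 degrees; reference is 0 degrees
rotated = rotate_to_reference([(0.0, 1.0)], math.pi / 2, 0.0)  # ~(1.0, 0.0)
```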
Abstract:
A display device, and methods of operating and manufacturing the display device, are provided. The display device may receive input light from an object to be scanned that is positioned in front of a display for displaying an image, and may perform scanning of the object.
Abstract:
An authentication method and apparatus using a transformation model are disclosed. The authentication method includes generating, at a first apparatus, a first enrolled feature based on a first feature extractor, obtaining a second enrolled feature to which the first enrolled feature is transformed, determining an input feature by extracting a feature from input data with a second feature extractor different from the first feature extractor, and performing an authentication based on the second enrolled feature and the input feature.
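A minimal sketch of the matching step, assuming a linear transformation model and cosine-similarity matching (the abstract specifies neither); features are toy 2-D tuples:

```python
# Illustrative sketch: the first enrolled feature (from the first feature
# extractor) is transformed into the second extractor's feature space,
# then compared against the input feature from the second extractor.
def transform(feature, matrix):
    (a, b), (c, d) = matrix  # 2x2 transformation model, row-major
    x, y = feature
    return (a * x + b * y, c * x + d * y)

def cosine_similarity(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    nu = (u[0] ** 2 + u[1] ** 2) ** 0.5
    nv = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return dot / (nu * nv)

def authenticate_transformed(first_enrolled, transformation, input_feature,
                             threshold=0.9):
    second_enrolled = transform(first_enrolled, transformation)
    return cosine_similarity(second_enrolled, input_feature) >= threshold
```

Transforming the enrolled feature, rather than re-enrolling, lets the stored enrollment survive a change of feature extractor.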
Abstract:
A method and apparatus for processing an image based on partial images are provided. The method includes extracting a feature of a current partial processing region of an input image frame by inputting pixel data of the current partial processing region into a convolutional neural network (CNN), updating a hidden state of a recurrent neural network (RNN) for a context between the current partial processing region and at least one previous partial processing region by inputting the extracted feature into the RNN, and generating an image processing result for the input image frame based on the updated hidden state.
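A heavily simplified sketch of the pipeline shape: a scalar summary stands in for the CNN feature and a one-unit recurrent cell stands in for the RNN, so only the region-by-region streaming structure is real; the actual method uses full CNN and RNN models:

```python
import math

def extract_feature(partial_region):      # stand-in for the CNN
    return sum(partial_region) / len(partial_region)

def update_hidden(hidden, feature, w_h=0.5, w_x=0.5):  # stand-in RNN cell
    return math.tanh(w_h * hidden + w_x * feature)

def process_frame(partial_regions):
    hidden = 0.0
    for region in partial_regions:        # stream partial regions in order
        hidden = update_hidden(hidden, extract_feature(region))
    return hidden  # basis of the image processing result for the frame

result = process_frame([[0.2, 0.4], [0.6, 0.8]])
```

The point of the hidden state is that each partial region is processed with context carried over from the previous regions, so the frame never has to be held in memory whole.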
Abstract:
A processor-implemented liveness test method includes detecting a face region in a query image, the query image including a test object for a liveness test, determining a liveness test condition to be applied to the test object among at least one liveness test condition for at least one registered user registered in a registration database, determining at least one test region in the query image based on the detected face region and the determined liveness test condition, obtaining feature data of the test object from image data of the determined at least one test region using a neural network-based feature extractor, and determining a result of the liveness test based on the obtained feature data and registered feature data registered in the registration database and corresponding to the determined liveness test condition.
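The per-user condition lookup can be sketched as below; the database layout, region names, and similarity measure are invented for illustration (the real method extracts feature data with a neural network-based feature extractor):

```python
# Hypothetical sketch: look up the registered user's liveness test
# condition, take the query features for the region that condition selects,
# and compare them against the registered feature data.
def liveness_test(user_id, query_features_by_region, registration_db,
                  threshold=0.8):
    entry = registration_db[user_id]
    condition = entry["condition"]          # e.g. which test region to use
    features = query_features_by_region[condition]
    registered = entry["features"]
    # toy similarity: 1 minus the mean absolute difference
    diff = sum(abs(a - b) for a, b in zip(features, registered)) / len(features)
    return (1 - diff) >= threshold
```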