Abstract:
A lightened neural network, a lightening method and apparatus, and a recognition method and apparatus implementing the same. A neural network includes a plurality of layers, each comprising neurons, and plural synapses connecting neurons included in neighboring layers. Synaptic weights with values greater than zero and less than a preset value of a variable a, which is greater than zero, may be at least partially set to zero. Synaptic weights with values greater than a preset value of a variable b, which is greater than zero, may be at least partially set to the preset value of the variable b.
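The pruning-and-clipping rule described above can be sketched in a few lines of Python; the function name and threshold values below are illustrative, not from the patent.

```python
# Illustrative sketch of the weight-lightening rule: positive weights below
# a cutoff a are pruned to zero, and weights above a cutoff b are clipped to b.

def lighten_weights(weights, a, b):
    """Zero out positive weights below a; clamp weights above b down to b."""
    out = []
    for w in weights:
        if 0 < w < a:
            out.append(0.0)   # small positive weight: pruned to zero
        elif w > b:
            out.append(b)     # large weight: clipped to the preset value b
        else:
            out.append(w)     # otherwise unchanged
    return out
```

Zeroing small weights makes the weight matrix sparse, while clipping large weights bounds the dynamic range, so fewer bits suffice to store each value.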
Abstract:
Authentication methods and apparatuses are disclosed. An authentication method may include generating a quality profile of an authentication image, and determining an effective region in the authentication image based on the generated quality profile.
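A minimal sketch of the quality-profile idea, assuming a per-row variance score as the quality measure; the abstract does not specify how quality is computed, so the scoring function here is a stand-in.

```python
# Toy quality profile over a grayscale image (list of pixel rows): quality of
# each row is its intensity variance, and the effective region is the set of
# rows whose quality meets a threshold. The variance proxy is an assumption.

def quality_profile(image):
    """Per-row quality score: variance of pixel intensities (crude proxy)."""
    profile = []
    for row in image:
        mean = sum(row) / len(row)
        var = sum((p - mean) ** 2 for p in row) / len(row)
        profile.append(var)
    return profile

def effective_region(image, threshold):
    """Indices of rows whose quality score meets the threshold."""
    return [i for i, q in enumerate(quality_profile(image)) if q >= threshold]
```

Restricting matching to the effective region lets authentication ignore blurred or occluded parts of the captured image.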
Abstract:
An apparatus and method for detecting a fake fingerprint are disclosed. The apparatus may divide an input fingerprint image into blocks, determine an image quality assessment (IQA) value associated with each block, determine a confidence value based on the IQA values using a confidence determination model, and determine whether an input fingerprint in the input fingerprint image is a fake fingerprint based on the determined confidence value.
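The block-wise pipeline above can be sketched as follows; the block IQA measure (local contrast) and the confidence model (a plain callable) are placeholders for whatever the actual apparatus uses.

```python
# Toy fake-fingerprint check: split the image into blocks, score each block
# with a simple IQA proxy (contrast), feed the scores to a confidence model,
# and flag the input as fake when confidence falls below a threshold.

def split_blocks(image, bs):
    """Split a 2D image (list of rows) into bs x bs blocks."""
    h, w = len(image), len(image[0])
    blocks = []
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            blocks.append([row[c:c + bs] for row in image[r:r + bs]])
    return blocks

def block_iqa(block):
    """Toy IQA value: intensity contrast (max - min) within the block."""
    pixels = [p for row in block for p in row]
    return max(pixels) - min(pixels)

def is_fake(image, confidence_model, threshold, bs=2):
    """Fake if the model's confidence over the block IQA values is below threshold."""
    iqa_values = [block_iqa(b) for b in split_blocks(image, bs)]
    return confidence_model(iqa_values) < threshold
```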
Abstract:
A method of recognizing a feature of an image may include receiving an input image including an object; extracting first feature information using a first layer of a neural network, the first feature information indicating a first feature corresponding to the input image among a plurality of first features; extracting second feature information using a second layer of the neural network, the second feature information indicating a second feature among a plurality of second features, the indicated second feature corresponding to the first feature information; and recognizing an element corresponding to the object based on the first feature information and the second feature information.
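A compact sketch of the two-stage feature lookup described above, with each "layer" reduced to nearest-prototype matching; the prototypes and the element table are invented for illustration.

```python
# Toy two-layer recognition: the first layer picks a first feature, the second
# layer picks a second feature conditioned on the first, and the pair indexes
# the recognized element. Dot-product matching stands in for real layers.

def extract_feature(vec, prototypes):
    """Index of the prototype most similar (by dot product) to vec."""
    scores = [sum(a * b for a, b in zip(vec, p)) for p in prototypes]
    return scores.index(max(scores))

def recognize(vec, first_protos, second_protos, element_table):
    f1 = extract_feature(vec, first_protos)       # first feature information
    f2 = extract_feature(vec, second_protos[f1])  # second feature, given f1
    return element_table[(f1, f2)]                # element from both features
```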
Abstract:
A display apparatus and method may be used to estimate a depth distance from an external object to a display panel of the display apparatus. The display apparatus may acquire a plurality of images by detecting light that is incident from the external object and passes through apertures formed in the display panel, may generate one or more refocused images, and may calculate the depth from the external object to the display panel using the acquired images and the one or more refocused images.
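At its core, multi-aperture depth estimation rests on the pinhole relation that depth is inversely proportional to the disparity between views. The 1-D sum-of-absolute-differences matcher below is a deliberately simplified stand-in for the refocusing-based calculation in the abstract.

```python
# Toy depth estimation from two 1-D views captured through separated apertures:
# find the integer shift that best aligns the views, then convert disparity to
# depth via the pinhole relation depth = baseline * focal / disparity.

def disparity(img_a, img_b, max_shift):
    """Integer shift of img_b that best matches img_a (min sum of abs diffs)."""
    best_s, best_err = 0, float("inf")
    for s in range(max_shift + 1):
        err = sum(abs(p - q) for p, q in zip(img_a, img_b[s:]))
        if err < best_err:
            best_err, best_s = err, s
    return best_s

def depth_from_disparity(baseline, focal, disp):
    """Pinhole relation: depth is inversely proportional to disparity."""
    return baseline * focal / disp
```

Note the toy matcher compares a shrinking overlap as the shift grows; a real implementation would normalize by the overlap length.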
Abstract:
A mobile device configured for data transmission to a corresponding mobile device is provided. The mobile device may include a gesture input unit configured to receive a gesture, a gesture determination unit configured to determine whether the gesture corresponds to a preset gesture associated with a command to perform data transmission to the corresponding mobile device, and a data communication unit configured to transmit a data transmission request to the corresponding mobile device based on a result of the determination, configured to receive, from the corresponding mobile device, an acceptance signal indicating an input of an acceptance gesture at the corresponding mobile device, and configured to transmit data to the corresponding mobile device in response to receiving the acceptance signal.
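The request/accept handshake above can be modeled as a small state machine; the gesture name and the `accepts` flag (simulating the acceptance gesture on the peer) are invented for the sketch.

```python
# Toy model of gesture-triggered data transfer: data is sent only when the
# sender's gesture matches the preset send gesture AND the peer returns an
# acceptance signal (simulated here by a boolean flag).

class MobileDevice:
    SEND_GESTURE = "swipe_toward_peer"  # illustrative preset gesture

    def __init__(self, accepts=False):
        self.accepts = accepts  # simulates the user's acceptance gesture
        self.inbox = []

    def receive_request(self):
        """Return an acceptance signal if the acceptance gesture was input."""
        return self.accepts

    def send(self, gesture, peer, data):
        """Transmit data only for the preset gesture and after peer acceptance."""
        if gesture != self.SEND_GESTURE:
            return False        # not the send gesture: no request issued
        if not peer.receive_request():
            return False        # peer declined: nothing transmitted
        peer.inbox.append(data)
        return True
```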
Abstract:
A convolutional neural network (CNN) processing method includes selecting a survival network in a precision convolutional network based on a result of performing a high-speed convolution operation between an input and a kernel using a high-speed convolutional network, and performing a precision convolution operation between the input and the kernel using the survival network.
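The two-stage idea can be sketched in 1-D: a cheap sign-quantized convolution scores all output positions, and full-precision convolution runs only at the surviving positions. Sign quantization as the "high speed" pass is an assumption for illustration.

```python
# Toy two-stage convolution: a high-speed pass with sign-quantized operands
# ranks output positions, then precision convolution runs only on survivors.

def conv1d_at(signal, kernel, i):
    """Full-precision convolution output at position i."""
    return sum(signal[i + j] * kernel[j] for j in range(len(kernel)))

def fast_scores(signal, kernel):
    """High-speed pass: convolution with sign-quantized (+1/0/-1) operands."""
    sgn = lambda x: (x > 0) - (x < 0)
    qs, qk = [sgn(x) for x in signal], [sgn(k) for k in kernel]
    n = len(signal) - len(kernel) + 1
    return [sum(qs[i + j] * qk[j] for j in range(len(kernel))) for i in range(n)]

def two_stage_conv(signal, kernel, keep):
    """Precision convolution at only the top-`keep` fast-pass positions."""
    scores = fast_scores(signal, kernel)
    survivors = sorted(range(len(scores)), key=lambda i: scores[i],
                       reverse=True)[:keep]
    return {i: conv1d_at(signal, kernel, i) for i in sorted(survivors)}
```

The payoff is that the expensive multiply-accumulate work is spent only where the cheap pass suggests the output matters.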
Abstract:
Disclosed are a face verification method and apparatus. A mobile device may include one or more processors configured to obtain one or more images of a user, ascertain whether any of the one or more images correspond to respective user distances, from the user to the mobile device, outside of a threshold range of distances, and selectively, based on a result of the ascertaining, perform verification using a first verification threshold for any of the one or more images ascertained to correspond to user distances outside the threshold range, and perform verification using a less strict second verification threshold for any of the one or more images ascertained to correspond to user distances within the threshold range.
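The distance-dependent threshold selection reduces to a few lines; the distance range and the two threshold values below are illustrative numbers, not from the patent.

```python
# Toy distance-aware face verification: images captured outside the trusted
# distance range must clear a stricter similarity threshold than those within
# it. All numeric defaults are illustrative assumptions.

def verify(similarity, user_distance_cm,
           near_cm=20.0, far_cm=50.0,
           strict_threshold=0.8, relaxed_threshold=0.6):
    """Apply the stricter first threshold outside the distance range,
    and the less strict second threshold inside it."""
    if near_cm <= user_distance_cm <= far_cm:
        return similarity >= relaxed_threshold
    return similarity >= strict_threshold
```

The design intuition: faces captured too close or too far are distorted or low-resolution, so matches there should be trusted only at higher similarity.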
Abstract:
A lightened neural network method and apparatus. The neural network apparatus includes a processor configured to generate a neural network with a plurality of layers, each including plural nodes, by applying lightened weighted connections between neighboring nodes in neighboring layers of the neural network to interpret input data applied to the neural network. The lightened weighted connections of at least one of the plurality of layers include weighted connections set to zero in place of trained non-zero weights whose absolute values are less than the absolute value of a first non-zero value, and weighted connections whose absolute values are no greater than the absolute value of a second non-zero value. The lightened weighted connections are lightened from trained final weighted connections of a trained neural network whose maximum absolute values are greater than the absolute value of the second non-zero value.
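This is the signed counterpart of the earlier pruning-and-clipping rule: weights with small magnitude are zeroed, and weights with large magnitude are clamped while keeping their sign. A minimal sketch, with illustrative thresholds:

```python
# Illustrative signed lightening rule: |w| < a is pruned to zero, |w| > b is
# clamped to +/-b, everything else passes through unchanged.

def lighten(weights, a, b):
    """Zero weights with |w| < a; clamp weights so |w| never exceeds b."""
    out = []
    for w in weights:
        if abs(w) < a:
            out.append(0.0)                 # small magnitude: pruned
        elif abs(w) > b:
            out.append(b if w > 0 else -b)  # large magnitude: clamped, sign kept
        else:
            out.append(w)
    return out
```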
Abstract:
Disclosed are an image fusion method and apparatus. The fusion method includes detecting first feature points of an object in a first image frame; transforming the first image frame based on the detected first feature points and predefined reference points to generate a transformed first image frame; detecting second feature points of the object in a second image frame; transforming the second image frame based on the detected second feature points and the predefined reference points to generate a transformed second image frame; and generating a combined image by combining the transformed first image frame and the transformed second image frame.
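The align-then-combine steps above can be sketched with translation-only alignment (a real system would fit an affine or similarity transform to the feature points) and per-pixel averaging as the combination rule; both simplifications are assumptions.

```python
# Toy image fusion: estimate a translation taking each frame's feature points
# onto shared reference points, shift the frames, and average overlapping
# pixels. Frames are sparse dicts {(x, y): intensity}.

def translation_to_refs(points, refs):
    """Mean integer (dx, dy) mapping detected points onto reference points."""
    dx = round(sum(r[0] - p[0] for p, r in zip(points, refs)) / len(points))
    dy = round(sum(r[1] - p[1] for p, r in zip(points, refs)) / len(points))
    return dx, dy

def shift_frame(frame, dx, dy):
    """Translate a sparse frame {(x, y): intensity} by (dx, dy)."""
    return {(x + dx, y + dy): v for (x, y), v in frame.items()}

def fuse(frames):
    """Combine aligned frames by averaging intensities at each pixel."""
    total, count = {}, {}
    for frame in frames:
        for k, v in frame.items():
            total[k] = total.get(k, 0) + v
            count[k] = count.get(k, 0) + 1
    return {k: total[k] / count[k] for k in total}
```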