Abstract:
A processor-implemented method includes generating a depth-aware feature of an image dependent on image features extracted from image data of the image, and generating image data representing information corresponding to one or more segmentations of the image based on the depth-aware feature and a depth-aware representation, the depth-aware representation comprising depth-related information and visual-related information for the image.
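A minimal sketch of this flow, assuming a PyTorch setup: one encoder extracts image features, a second encoder turns the depth-aware representation (depth-related plus visual-related information) into a depth-aware feature, and a small head produces the segmentation data. The module names, layer sizes, and concatenation-based fusion are illustrative assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class DepthAwareSegmenter(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        # Extract image features from the RGB image data.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # Encode the depth-aware representation: depth-related information
        # (1 channel) stacked with visual-related information (3 channels).
        self.repr_encoder = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU())
        # Predict per-pixel segmentation logits from the fused features.
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, image, depth):
        image_feat = self.image_encoder(image)                # image features
        depth_aware_feat = self.repr_encoder(
            torch.cat([depth, image], dim=1))                 # depth-aware feature
        fused = torch.cat([image_feat, depth_aware_feat], dim=1)
        return self.head(fused)                               # segmentation data

seg = DepthAwareSegmenter()
logits = seg(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 21, 64, 64])
```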
Abstract:
A processor-implemented method of tracking a target object includes: extracting a feature from frames of an input image; selecting a neural network model from among a plurality of neural network models that are provided in advance based on feature value ranges, based on a feature value of the target object included in the feature of a previous frame among the frames; and generating a bounding box of the target object included in a current frame among the frames, based on the selected neural network model.
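A hedged sketch of the range-based model selection in Python with PyTorch. The three feature value ranges, the scalar feature value (a mean activation here), and the tiny box-regression heads are placeholders chosen only to show the selection step, not the disclosed models.

```python
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(4), nn.Flatten())

def make_bbox_head():
    return nn.Sequential(nn.Linear(8 * 4 * 4, 32), nn.ReLU(), nn.Linear(32, 4))

# A plurality of models provided in advance, one per feature value range.
models_by_range = [((0.0, 0.3), make_bbox_head()),
                   ((0.3, 0.6), make_bbox_head()),
                   ((0.6, 1.0), make_bbox_head())]

def select_model(prev_feature_value):
    for (lo, hi), model in models_by_range:
        if lo <= prev_feature_value < hi:
            return model
    return models_by_range[-1][1]

prev_frame = torch.rand(1, 3, 32, 32)
curr_frame = torch.rand(1, 3, 32, 32)

prev_feat = feature_extractor(prev_frame)
feature_value = prev_feat.sigmoid().mean().item()    # feature value from the previous frame
model = select_model(feature_value)                  # model chosen by feature value range
bbox = model(feature_extractor(curr_frame))          # bounding box for the current frame
print(bbox.shape)  # torch.Size([1, 4])
```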
Abstract:
A method and apparatus with image correction are provided. A processor-implemented method includes generating, using a neural network model provided with an input image, an illumination map including illumination values dependent on respective color casts by one or more illuminants individually affecting each pixel of the input image, and generating a white-adjusted image by removing at least a portion of the color casts from the input image using the generated illumination map.
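A minimal sketch, assuming a per-pixel RGB illumination prediction; dividing the input image by the predicted illumination (a von Kries-style correction) is one common way to remove a color cast and is used here only for illustration, not as the disclosed correction.

```python
import torch
import torch.nn as nn

class IlluminationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Softplus())  # positive illumination values

    def forward(self, image):
        # Per-pixel illumination map, one value per color channel.
        return self.net(image)

model = IlluminationNet()
image = torch.rand(1, 3, 64, 64)
illumination = model(image)                           # illumination map
white_adjusted = image / (illumination + 1e-6)        # remove the color cast pixel-wise
white_adjusted = white_adjusted.clamp(0.0, 1.0)
```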
Abstract:
A processor-implemented method includes: determining a probability that a pixel of an input image belongs to each of a plurality of preset categories; and determining the category of the pixel to be a category corresponding to either one or both of a plurality of category areas and a category determined based on the probabilities that the pixel belongs to the preset categories, the determination being based on a result of comparing, to a preset threshold value, the probability that the pixel belongs to the category corresponding to the category areas.
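An illustrative NumPy sketch of the threshold-based decision for a single pixel; the category area label (an a priori region category per pixel) and the 0.5 threshold are assumptions made only to show the comparison logic.

```python
import numpy as np

def decide_pixel_category(logits, area_category, threshold=0.5):
    # Probability that the pixel belongs to each preset category (softmax).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Compare, to the preset threshold, the probability of the category
    # corresponding to the category area.
    if probs[area_category] >= threshold:
        return area_category              # keep the area's category
    return int(np.argmax(probs))          # otherwise use the most probable category

logits = np.array([0.2, 2.1, 0.4, 1.9])   # scores for 4 preset categories
print(decide_pixel_category(logits, area_category=3))
```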
Abstract:
A method with object tracking includes: determining a first target tracking state by tracking a target from a first image frame with a first field of view (FoV); determining a second FoV based on the first FoV and the first target tracking state; and generating a second target tracking result by tracking the target from a second image frame with the second FoV.
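A small sketch of the FoV adaptation loop. The confidence-based zoom rule, the numeric FoV bounds, and the track() stub are hypothetical stand-ins; the abstract only specifies that the second FoV is derived from the first FoV and the first tracking state.

```python
from dataclasses import dataclass

@dataclass
class TrackingState:
    bbox: tuple          # (x, y, w, h) in the current field of view
    confidence: float    # how reliably the target was found

def track(frame, fov):
    # Placeholder tracker: in practice this would run a learned tracker
    # on the frame captured/cropped at the given FoV.
    return TrackingState(bbox=(120, 90, 40, 40), confidence=0.42)

def next_fov(current_fov, state, min_fov=60.0, max_fov=120.0):
    # Assumed policy: widen the FoV when tracking confidence is low,
    # narrow it when the target is tracked confidently.
    if state.confidence < 0.5:
        return min(current_fov * 1.5, max_fov)
    return max(current_fov * 0.75, min_fov)

first_fov = 90.0
first_state = track("frame_1", first_fov)       # first target tracking state
second_fov = next_fov(first_fov, first_state)   # second FoV from first FoV + state
second_result = track("frame_2", second_fov)    # second target tracking result
print(second_fov, second_result.bbox)
```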
Abstract:
Disclosed are a method and apparatus for testing liveness, where the liveness test method includes receiving a color image and a photodiode (PD) image of an object from an image sensor comprising a pixel formed of a plurality of PDs, preprocessing the color image and the PD image, and determining a liveness of the object by inputting a result of preprocessing the color image and a result of preprocessing the PD image into a neural network.
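A sketch of the two-input liveness network, assuming simple normalization as the preprocessing step and a two-branch CNN with a two-channel PD image (e.g. left/right sub-pixels); branch sizes and the score convention are illustrative assumptions.

```python
import torch
import torch.nn as nn

def preprocess(x):
    # Assumed preprocessing: per-image normalization to zero mean, unit std.
    return (x - x.mean()) / (x.std() + 1e-6)

class LivenessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.color_branch = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pd_branch = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(16, 1)   # liveness score

    def forward(self, color, pd):
        feats = torch.cat([self.color_branch(color), self.pd_branch(pd)], dim=1)
        return torch.sigmoid(self.classifier(feats))

net = LivenessNet()
color_img = torch.rand(1, 3, 64, 64)   # color image from the sensor
pd_img = torch.rand(1, 2, 64, 64)      # photodiode (PD) image
score = net(preprocess(color_img), preprocess(pd_img))
print(float(score))                    # closer to 1.0 -> live object (assumed convention)
```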
Abstract:
Disclosed are target tracking methods and apparatuses. The target tracking apparatus performs target tracking on an input image obtained in a first time period within a single time frame, using a light neural network in a second time period of the same time frame. The target tracking apparatus may thus perform target tracking on input images within the same time frame in which they are generated.
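A schematic sketch of that single-frame scheduling. The 33 ms frame time, the even capture/track split, and the stubbed capture and tracker functions are all assumptions used only to illustrate running both stages within one time frame.

```python
import time

FRAME_TIME = 1 / 30.0            # one time frame (~33 ms)
CAPTURE_BUDGET = FRAME_TIME / 2  # first time period: obtain the input image
TRACK_BUDGET = FRAME_TIME / 2    # second time period: run the light tracker

def capture_image():
    time.sleep(0.005)            # stand-in for sensor readout
    return "input_image"

def light_tracker(image):
    time.sleep(0.003)            # stand-in for a lightweight neural network
    return (100, 80, 32, 32)     # tracked bounding box

frame_start = time.monotonic()
image = capture_image()                                  # first time period
assert time.monotonic() - frame_start <= CAPTURE_BUDGET  # capture stayed in its period
bbox = light_tracker(image)                              # second time period, same frame
assert time.monotonic() - frame_start <= FRAME_TIME      # tracking finished within the frame
print(bbox)
```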
Abstract:
A method and apparatus for detecting a liveness based on a phase difference are provided. The method includes generating a first phase image based on first visual information of a first phase, generating a second phase image based on second visual information of a second phase, generating a minimum map based on a disparity between the first phase image and the second phase image, and detecting a liveness based on the minimum map.
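A hedged sketch of one way to form a "minimum map": for each pixel, take the minimum matching cost between the two phase images over a small range of candidate disparities. The shift range and the mean-threshold decision are illustrative assumptions, not the disclosed computation.

```python
import numpy as np

def minimum_map(phase_left, phase_right, max_shift=4):
    costs = []
    for d in range(-max_shift, max_shift + 1):
        shifted = np.roll(phase_right, d, axis=1)        # candidate disparity d
        costs.append(np.abs(phase_left - shifted))
    return np.min(np.stack(costs), axis=0)               # per-pixel minimum cost

def detect_liveness(min_map, threshold=0.05):
    # A simple statistic of the minimum map stands in here for the actual
    # liveness decision, which would typically be learned.
    return bool(min_map.mean() > threshold)

left = np.random.rand(64, 64)    # first phase image (first visual information)
right = np.random.rand(64, 64)   # second phase image (second visual information)
print(detect_liveness(minimum_map(left, right)))
```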
Abstract:
A user recognition method includes extracting a user feature of a current user from input data, estimating an identifier of the current user based on the extracted user feature, and, in response to an absence of an identifier corresponding to the current user, generating an identifier for the current user and controlling an updating of user data based on the generated identifier and the extracted user feature.
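A minimal sketch of the enroll-on-miss flow, assuming cosine similarity for identifier estimation and an in-memory store; the 0.7 similarity threshold and UUID identifiers are arbitrary illustrative choices.

```python
import uuid
import numpy as np

user_db = {}   # identifier -> list of stored user features

def estimate_identifier(feature, threshold=0.7):
    best_id, best_sim = None, threshold
    for user_id, feats in user_db.items():
        sim = max(float(np.dot(feature, f) /
                        (np.linalg.norm(feature) * np.linalg.norm(f))) for f in feats)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id   # None -> no identifier corresponds to the current user

def recognize(feature):
    user_id = estimate_identifier(feature)
    if user_id is None:
        user_id = str(uuid.uuid4())          # generate an identifier for the new user
        user_db[user_id] = []
    user_db[user_id].append(feature)         # update user data for that identifier
    return user_id

feature = np.random.rand(128)                # user feature extracted from input data
print(recognize(feature))                    # first call enrolls a new identifier
```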
Abstract:
Example embodiments disclose a method of generating a feature vector, a method of generating a histogram, a learning unit classifier, a recognition apparatus, and a detection apparatus, in which a feature point is detected from an input image based on a dominant direction analysis of a gradient distribution, and a feature vector corresponding to the detected feature point is generated.
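One way to picture the dominant direction analysis, as a rough NumPy sketch: a weighted orientation histogram of the gradient distribution is built per patch, the patch with the strongest dominant direction stands in for the detected feature point, and its histogram serves as the feature vector. Patch size, bin count, and the dominance score are all assumptions for illustration.

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned gradient directions
    hist, _ = np.histogram(orientation, bins=bins, range=(0, np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-9)

def detect_feature_point(image, patch=8):
    best_score, best_xy, best_vec = -1.0, (0, 0), None
    for y in range(0, image.shape[0] - patch, patch):
        for x in range(0, image.shape[1] - patch, patch):
            hist = orientation_histogram(image[y:y + patch, x:x + patch])
            score = hist.max()            # how dominant the strongest direction is
            if score > best_score:
                best_score, best_xy, best_vec = score, (x, y), hist
    return best_xy, best_vec              # feature point and its feature vector

image = np.random.rand(64, 64)
point, feature_vector = detect_feature_point(image)
print(point, feature_vector.round(2))
```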