Abstract:
A method and an electronic device with image registration are provided. The method includes generating, using an optical flow estimation model, a first optical flow between a first partial image and a second partial image captured by a first camera and a second camera, respectively; generating disparity information between the first partial image and a third partial image, captured by a third camera, based on depth information of the first partial image generated using the first optical flow; and estimating a second optical flow between the first partial image and the third partial image based on the generated disparity information to generate a registration image.
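As a rough illustration of the pipeline this abstract describes, the sketch below estimates a flow between two partial images, converts it to depth, predicts the disparity toward a third camera, and uses that disparity to pre-warp the third image for registration. The learned optical flow estimation model is replaced here with OpenCV's Farneback flow, the cameras are assumed to form rectified pairs, and the focal length and baselines are placeholder values, none of which come from the abstract.

```python
import cv2
import numpy as np

# Assumed camera parameters (placeholders, not from the abstract).
FOCAL_PX = 1000.0      # focal length in pixels
BASELINE_12 = 0.02     # metres between camera 1 and camera 2
BASELINE_13 = 0.05     # metres between camera 1 and camera 3

def register_with_flow(img1, img2, img3):
    g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))

    # First optical flow between partial images 1 and 2
    # (Farneback stands in for the abstract's learned flow model).
    flow12 = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                          0.5, 3, 21, 3, 5, 1.2, 0)

    # For a rectified pair, horizontal flow approximates disparity;
    # depth = f * B / disparity.
    disp12 = np.abs(flow12[..., 0]) + 1e-6
    depth = FOCAL_PX * BASELINE_12 / disp12

    # Disparity between images 1 and 3 predicted from that depth and
    # the wider camera-1 / camera-3 baseline.
    disp13 = FOCAL_PX * BASELINE_13 / depth

    # Use the predicted disparity to initialise the second flow, i.e.
    # shift image 3 toward image 1 before any refinement pass.
    h, w = g1.shape
    map_x = (np.tile(np.arange(w), (h, 1)) + disp13).astype(np.float32)
    map_y = np.tile(np.arange(h)[:, None], (1, w)).astype(np.float32)
    warped3 = cv2.remap(img3, map_x, map_y, cv2.INTER_LINEAR)

    # Simple registration image: blend image 1 with the warped image 3.
    return cv2.addWeighted(img1, 0.5, warped3, 0.5, 0)
```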
Abstract:
An image processing method includes receiving an image frame, detecting a face region of a user in the image frame, aligning a plurality of preset feature points in a plurality of feature portions included in the face region, performing a first check on a result of the aligning based on a first region corresponding to a combination of the feature portions, performing a second check on the result of the aligning based on a second region corresponding to an individual feature portion of the feature portions, redetecting a face region based on a determination of a failure in passing at least one of the first check or the second check, and outputting information on the face region based on a determination of a success in passing the first check and the second check.
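One way to picture the two-stage check is sketched below. The landmark aligner itself is not shown (the abstract does not specify one); here the first check is whether the combined landmark region lies inside the detected face box, and the second check is a per-portion area heuristic. Both checks, the Haar-cascade detector, and the thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def check_face(frame, landmarks):
    """landmarks: dict of feature-portion name -> (N, 2) aligned points."""
    faces = FACE_CASCADE.detectMultiScale(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.1, 5)
    if len(faces) == 0:
        return None                      # no face: redetect upstream
    x, y, w, h = faces[0]

    # First check: the combined landmark region must sit inside the face box.
    all_pts = np.vstack(list(landmarks.values()))
    first_ok = (all_pts[:, 0].min() >= x and all_pts[:, 0].max() <= x + w and
                all_pts[:, 1].min() >= y and all_pts[:, 1].max() <= y + h)

    # Second check: each individual feature portion must cover a plausible
    # fraction of the face box (an assumed heuristic).
    second_ok = all(
        0.01 * w * h < cv2.contourArea(pts.astype(np.float32)) < 0.5 * w * h
        for pts in landmarks.values())

    if first_ok and second_ok:
        return (x, y, w, h)              # output face-region information
    return None                          # failure: trigger redetection
```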
Abstract:
A processor-implemented imaging method includes: obtaining initial homography information between a plurality of tele images that together cover a field of view (FOV) of a wide image; receiving, through a screen on which the wide image is displayed, a user input zooming in on a partial region of the wide image; stitching tele images corresponding to the partial region using the initial homography information, based on whether a zoom level corresponding to the user input exceeds a maximum zoom level of the wide image; and rendering the stitched tele images and displaying an image obtained by the rendering on the screen.
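A condensed sketch of the stitch-and-render path is shown below, assuming the homographies mapping each tele tile into the wide image's coordinate frame have been precomputed (the abstract's "initial homography information"). The maximum zoom level, the overwrite-style compositing, and the final resize to a display size are placeholder choices, not details from the abstract.

```python
import cv2
import numpy as np

WIDE_MAX_ZOOM = 3.0   # assumed maximum zoom level of the wide camera

def render_zoom(wide, tele_tiles, homographies, zoom, roi,
                display_size=(1280, 720)):
    """roi = (x, y, w, h) in wide-image coordinates; homographies map each
    tele tile into the wide image's coordinate frame. Images are 3-channel."""
    x, y, w, h = roi
    if zoom <= WIDE_MAX_ZOOM:
        crop = wide[y:y + h, x:x + w]                 # wide camera suffices
    else:
        canvas = np.zeros_like(wide)
        for tile, H in zip(tele_tiles, homographies):
            warped = cv2.warpPerspective(
                tile, H, (wide.shape[1], wide.shape[0]))
            mask = warped.sum(axis=2) > 0
            canvas[mask] = warped[mask]               # naive stitching
        crop = canvas[y:y + h, x:x + w]
    # Upscale the selected region to the display resolution for rendering.
    return cv2.resize(crop, display_size, interpolation=cv2.INTER_LINEAR)
```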
Abstract:
Provided is a method of processing data corresponding to a fingerprint image, the method including obtaining first image data corresponding to a group including a plurality of pixels, and dividing the first image data into second image data corresponding to each of the plurality of pixels based on a plurality of weights corresponding to the plurality of pixels, respectively.
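The division step can be illustrated with a few lines of NumPy: a single value per pixel group is split into per-pixel values according to per-pixel weights. The weight normalisation and the example numbers are assumptions; the abstract does not state how the weights are obtained.

```python
import numpy as np

def divide_group_data(first_data, weights):
    """first_data: (H, W) values, one per pixel group (e.g. a binned sensor
    readout). weights: (H, W, k) per-pixel weights within each group."""
    w = weights / weights.sum(axis=-1, keepdims=True)   # normalise per group
    # Second image data: one value per pixel, obtained by splitting the
    # group value according to the corresponding weight.
    return first_data[..., None] * w

# Example: a 2-pixel group with weights 0.7 / 0.3.
groups = np.array([[10.0]])
weights = np.array([[[0.7, 0.3]]])
print(divide_group_data(groups, weights))   # [[[7. 3.]]]
```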
Abstract:
A fingerprint verification method and apparatus are provided. The fingerprint verification method includes performing a first matching between a fingerprint image and a first registered fingerprint image; based on a result of the first matching, performing a second matching between the fingerprint image and a second registered fingerprint image, the second registered fingerprint image being different from the first registered fingerprint image; and verifying the fingerprint based on the result of the first matching and a result of the second matching.
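A minimal sketch of the staged matching is given below, with ORB descriptor matching standing in for whatever matcher the method actually uses, and with arbitrary accept/reject thresholds. The point is the control flow: the second registered fingerprint image is only consulted when the first matching is inconclusive, and the final decision combines both results.

```python
import cv2

ORB = cv2.ORB_create()
MATCHER = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(img_a, img_b):
    """Fraction of keypoints with a close descriptor match (assumed metric)."""
    kp_a, des_a = ORB.detectAndCompute(img_a, None)
    kp_b, des_b = ORB.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    good = [m for m in MATCHER.match(des_a, des_b) if m.distance < 40]
    return len(good) / max(len(kp_a), 1)

def verify(query, registered_1, registered_2, accept=0.4, reject=0.1):
    s1 = match_score(query, registered_1)          # first matching
    if s1 >= accept:
        return True                                # clear accept
    if s1 < reject:
        return False                               # clear reject
    s2 = match_score(query, registered_2)          # second matching
    return (s1 + s2) / 2 >= accept                 # combined decision
```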
Abstract:
An image processing apparatus includes a calculator configured to calculate a respective position offset for each of a plurality of candidate areas in a second frame based on a position of a basis image in a first frame, and a determiner configured to determine a final selected area that includes a target in the second frame based on a respective weight allocated to each of the plurality of candidate areas and the calculated respective position offsets.
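The calculator/determiner split can be reduced to a few lines: the position offset of each candidate area relative to the basis position is computed, and the final selected area maximises a weighted score. The linear distance penalty and its coefficient are assumptions made only for illustration.

```python
import numpy as np

def select_target_area(basis_pos, candidates, weights, offset_penalty=0.01):
    """basis_pos: (x, y) of the basis image in the first frame.
    candidates: (N, 2) centre positions of candidate areas in the second frame.
    weights: (N,) weights allocated to the candidate areas (e.g. similarity)."""
    offsets = np.linalg.norm(candidates - np.asarray(basis_pos), axis=1)
    scores = weights - offset_penalty * offsets    # assumed scoring rule
    return int(np.argmax(scores))                  # index of final selected area

# Example: the nearby, slightly lower-weight candidate wins over a distant one.
print(select_target_area((100, 100),
                         np.array([[104., 98.], [240., 130.]]),
                         np.array([0.80, 0.85])))  # -> 0
```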
Abstract:
A method of determining eye position information includes identifying an eye area in a facial image; verifying a two-dimensional (2D) feature in the eye area; and performing a determination operation including determining a three-dimensional (3D) target model based on the 2D feature and determining 3D position information based on the 3D target model.
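One common way to realise "2D feature → 3D target model → 3D position" is a model-based pose fit, sketched below with OpenCV's EPnP solver. The five-point eye model, the pinhole intrinsics, and the use of PnP at all are assumptions; the abstract does not name a specific fitting method.

```python
import cv2
import numpy as np

# Assumed 3D target model: a few canonical eye-region points in millimetres.
MODEL_3D = np.array([[-15.0,  0.0, 0.0],   # outer eye corner
                     [ 15.0,  0.0, 0.0],   # inner eye corner
                     [  0.0,  8.0, 0.0],   # upper lid
                     [  0.0, -8.0, 0.0],   # lower lid
                     [  0.0,  0.0, 5.0]])  # pupil, slightly forward

def eye_position_3d(landmarks_2d, focal_px, image_size):
    """landmarks_2d: (5, 2) pixel coordinates of the matching 2D features."""
    w, h = image_size
    K = np.array([[focal_px, 0, w / 2],
                  [0, focal_px, h / 2],
                  [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, landmarks_2d.astype(np.float64),
                                  K, None, flags=cv2.SOLVEPNP_EPNP)
    # tvec is the fitted eye model's 3D position in camera coordinates.
    return tvec.ravel() if ok else None
```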
Abstract:
A method of modeling a structure of a coronary artery of a subject may include: forming a learning-based shape model of the structure of the artery, based on positions of landmarks acquired from three-dimensional images; receiving a target image; and/or modeling the artery structure included in the target image, using the learning-based shape model. An apparatus for modeling a structure of a coronary artery may include: a memory configured to store a learning-based shape model of the artery, the learning-based shape model being formed based on positions of a plurality of landmarks acquired from three-dimensional images, the plurality of landmarks corresponding to the artery; a communication circuit configured to receive a target image; and/or a processing circuit configured to model the artery structure included in the target image, using the learning-based shape model.
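The learning-based shape model can be pictured as a point-distribution model: mean landmark positions plus principal modes of variation learned from the training images, onto which landmarks detected in a target image are projected. The choice of PCA and the number of retained modes are assumptions for this sketch.

```python
import numpy as np

def build_shape_model(training_landmarks):
    """training_landmarks: (num_images, num_landmarks * 3) flattened 3D
    landmark positions of the coronary tree. A point-distribution model
    stands in for the abstract's learning-based shape model."""
    mean = training_landmarks.mean(axis=0)
    centered = training_landmarks - mean
    # Principal modes of variation via SVD of the centred training matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:5], s[:5]            # keep the first 5 modes (assumed)

def fit_to_target(mean, modes, target_landmarks):
    """Project roughly detected landmarks in the target image onto the
    learned shape space to obtain a regularised artery model."""
    b = modes @ (target_landmarks - mean)  # mode coefficients
    return mean + b @ modes                # reconstructed artery landmarks
```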
Abstract:
Provided is an image processing method of an image processing model, the image processing method including obtaining an input image group, the input image group including a plurality of low-resolution images corresponding to a plurality of different viewpoints, respectively, obtaining features of the low-resolution images by extracting a feature from each low-resolution image of the plurality of low-resolution images included in the input image group, obtaining a fusion residual feature by fusing the features of the low-resolution images, and obtaining a super-resolution image corresponding to the input image group based on the fusion residual feature.
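A toy PyTorch module below mirrors the described flow: per-view feature extraction, fusion into a residual feature, and upsampling to the super-resolved output. All layer shapes, the number of views, the residual connection to the first view, and the scale factor are placeholders, not details from the abstract.

```python
import torch
import torch.nn as nn

class MultiViewSR(nn.Module):
    """Minimal multi-view super-resolution sketch (assumed architecture)."""
    def __init__(self, num_views=4, channels=32, scale=2):
        super().__init__()
        # Shared per-view feature extractor.
        self.extract = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        # Fuse the concatenated per-view features into one feature map.
        self.fuse = nn.Conv2d(num_views * channels, channels, 1)
        # Upsample to the super-resolved image via pixel shuffle.
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, views):                # views: (B, V, 3, H, W)
        feats = [self.extract(views[:, v]) for v in range(views.size(1))]
        # Fusion residual feature: fused feature plus the reference view's.
        fused = self.fuse(torch.cat(feats, dim=1)) + feats[0]
        return self.upsample(fused)

out = MultiViewSR()(torch.rand(1, 4, 3, 16, 16))
print(out.shape)                             # torch.Size([1, 3, 32, 32])
```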
Abstract:
Provided are a method and an apparatus for eye tracking. An eye tracking method includes detecting an eye area corresponding to an eye of a user in a first frame of an image; determining an attribute of the eye area; selecting an eye tracker from a plurality of different eye trackers, the selected eye tracker corresponding to the determined attribute of the eye area; and tracking the eye of the user in a second frame of the image based on the selected eye tracker, the second frame being subsequent to the first frame.
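The attribute-dependent tracker selection can be sketched as a lookup from an attribute to one of several trackers, as below. The bright/dark attribute, its threshold, and the two template-matching trackers are stand-ins for whatever attributes and trackers the method actually uses; frames are assumed to be single-channel 8-bit images.

```python
import cv2

def track_bright(prev_eye, frame):
    # Intensity template matching, assumed adequate for well-lit eye areas.
    res = cv2.matchTemplate(frame, prev_eye, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(res)
    return top_left

def track_dark(prev_eye, frame):
    # Edge-based matching, assumed more robust in low light.
    res = cv2.matchTemplate(cv2.Canny(frame, 50, 150),
                            cv2.Canny(prev_eye, 50, 150),
                            cv2.TM_CCORR_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(res)
    return top_left

TRACKERS = {"bright": track_bright, "dark": track_dark}

def track_eye(first_frame, eye_box, second_frame):
    x, y, w, h = eye_box                      # eye area from the first frame
    eye = first_frame[y:y + h, x:x + w]
    attribute = "bright" if eye.mean() > 80 else "dark"   # assumed attribute
    tracker = TRACKERS[attribute]             # select the matching eye tracker
    return tracker(eye, second_frame)         # eye position in the next frame
```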