Abstract:
A method for analyzing face information in an electronic device is provided. The method includes detecting at least one face region from an image being captured by a camera module, zooming in on the at least one detected face region, and analyzing the zoomed-in face region according to at least one analysis item.
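The detect-zoom-analyze pipeline described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the face detector is stubbed out as a fixed bounding box, the image is a plain 2D list of grayscale values, and the "analysis items" (brightness, size) are illustrative placeholders.

```python
# Hypothetical sketch: detect a face region, zoom in on it, then analyze
# the zoomed-in region per analysis item. Detection is stubbed; a real
# device would obtain frames from a camera module and run a face detector.

def zoom_region(image, box, factor=2):
    """Crop box = (x, y, w, h) from a 2D grayscale image (list of lists)
    and enlarge it by `factor` using nearest-neighbor scaling."""
    x, y, w, h = box
    crop = [row[x:x + w] for row in image[y:y + h]]
    zoomed = []
    for row in crop:
        scaled_row = [px for px in row for _ in range(factor)]
        for _ in range(factor):
            zoomed.append(list(scaled_row))
    return zoomed

def analyze_region(region, items):
    """Toy 'analysis items': mean brightness and region size."""
    flat = [px for row in region for px in row]
    results = {}
    if "brightness" in items:
        results["brightness"] = sum(flat) / len(flat)
    if "size" in items:
        results["size"] = (len(region[0]), len(region))
    return results

image = [[10, 20, 30, 40],
         [50, 60, 70, 80],
         [90, 100, 110, 120],
         [130, 140, 150, 160]]
face_box = (1, 1, 2, 2)          # pretend a detector found this face region
zoomed = zoom_region(image, face_box)
report = analyze_region(zoomed, ["brightness", "size"])
```

Zooming before analysis gives the analysis step more pixels per facial feature, which is the apparent motivation for the zoom-in step in the abstract.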
Abstract:
A display apparatus is provided. The display apparatus includes: a display unit; a sensor configured to sense a touch input on the display unit; an eye direction sensor configured to sense an eye direction of a user; and a controller configured to divide a screen of the display unit into a plurality of areas, to perform, in response to the user's eye being sensed toward a first area of the plurality of areas and a first touch input being sensed in the first area, a first control operation corresponding to the first touch input, and to not perform, in response to a second touch input being sensed in an area other than the first area, a second control operation corresponding to the second touch input. Whether a function is executed is thus determined according to whether the touched area coincides with the area in which the user's eye is sensed, which prevents malfunctions of the display apparatus.
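The gaze-gated touch logic described above can be sketched as a small state machine: the screen is divided into areas, and a touch triggers its control operation only when it lands in the area where the user's gaze is currently sensed. All class and method names here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of gaze-gated touch input: a touch outside the
# gazed-at area is ignored, preventing accidental operations.

class GazeGatedScreen:
    def __init__(self, num_areas):
        self.num_areas = num_areas
        self.gaze_area = None          # area index where gaze is sensed
        self.executed = []             # log of performed control operations

    def sense_gaze(self, area):
        """Record the screen area the eye direction sensor reports."""
        self.gaze_area = area

    def touch(self, area, operation):
        """Perform `operation` only if the touch falls in the gazed area;
        otherwise ignore it. Returns whether the operation ran."""
        if area == self.gaze_area:
            self.executed.append(operation)
            return True
        return False

screen = GazeGatedScreen(num_areas=4)
screen.sense_gaze(0)
first = screen.touch(0, "open_menu")    # gaze and touch agree: performed
second = screen.touch(2, "delete_all")  # touch outside gazed area: ignored
```

The design choice is conservative: a touch with no corroborating gaze is treated as accidental and dropped rather than queued.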
Abstract:
A method and a mobile terminal for correcting a gaze of a user in an image are provided. The method includes setting eye outer points that define an eye region of the user in an original image, transforming the set eye outer points to a predetermined reference camera gaze direction, and transforming the eye region of the original image based on the transformed eye outer points.
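The point-transformation step can be illustrated with a simple linear blend: each eye outer point is moved toward its position in a predetermined reference (camera-gaze) configuration, after which the eye region would be warped to follow the moved points. The blend factor, point coordinates, and function name are illustrative assumptions; the actual warping step is omitted.

```python
# Hypothetical sketch: move eye outer points toward a reference
# camera-gaze configuration. alpha=1.0 snaps fully to the reference;
# intermediate alphas give a partial correction.

def transform_eye_points(original_points, reference_points, alpha=1.0):
    """Linearly interpolate each (x, y) eye outer point toward its
    corresponding reference position."""
    return [
        (ox + alpha * (rx - ox), oy + alpha * (ry - oy))
        for (ox, oy), (rx, ry) in zip(original_points, reference_points)
    ]

# Eye outer points in the original (downward-gazing) image ...
original = [(10.0, 22.0), (30.0, 22.0), (20.0, 26.0), (20.0, 18.0)]
# ... and where they should sit when the eye looks at the camera.
reference = [(10.0, 20.0), (30.0, 20.0), (20.0, 24.0), (20.0, 16.0)]

corrected = transform_eye_points(original, reference, alpha=1.0)
halfway = transform_eye_points(original, reference, alpha=0.5)
```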
Abstract:
A display apparatus includes an infrared outputter configured to output infrared light toward a user, an image capturer configured to photograph the user to generate a captured image, and a controller configured to detect, from the captured image, a pupil and an iris of the user and a glint area generated by the infrared light, and to determine, in response to the pupil, the iris, and the glint area being detected, a direction of the user's gaze based on the relation between the locations of the pupil and the glint area and on the size of the iris.
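One plausible reading of the gaze-determination step is sketched below: once a pupil center, an iris radius, and an infrared glint are all detected, the pupil-to-glint offset, normalized by the iris size, yields a coarse gaze direction. The threshold, direction labels, and function signature are illustrative assumptions, not the patented algorithm.

```python
# Hypothetical sketch: classify gaze direction from the pupil-glint
# offset normalized by iris radius. Returns None when any of the three
# features (pupil, glint, iris) was not detected.

def gaze_direction(pupil, glint, iris_radius, threshold=0.3):
    if pupil is None or glint is None or not iris_radius:
        return None                    # cannot determine gaze
    dx = (pupil[0] - glint[0]) / iris_radius   # normalized horizontal offset
    dy = (pupil[1] - glint[1]) / iris_radius   # normalized vertical offset
    horizontal = "left" if dx < -threshold else "right" if dx > threshold else "center"
    vertical = "up" if dy < -threshold else "down" if dy > threshold else "center"
    return (horizontal, vertical)

looking_right = gaze_direction(pupil=(54, 40), glint=(50, 40), iris_radius=10)
looking_center = gaze_direction(pupil=(50, 40), glint=(50, 40), iris_radius=10)
undetected = gaze_direction(pupil=None, glint=(50, 40), iris_radius=10)
```

Normalizing by iris size makes the offset roughly scale-invariant, so the same thresholds apply whether the user sits near or far from the camera, which matches the abstract's use of the iris size in the determination.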