Abstract:
A category selection portion selects a face orientation based on the error between the positions of feature points (the eyes and the mouth) on the registered faces of each face orientation and the positions of the corresponding feature points on a collation face image. A collation portion collates the registered face images of the face orientation selected by the category selection portion with the collation face image. The face orientations are determined so that the face orientation ranges, within which the error with respect to each individual face orientation stays within a predetermined value, are in contact with or overlap each other. As a result, the collation face image and the registered face images can be collated with each other more accurately.
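The category selection step described above can be sketched as a nearest-category search over feature-point positions. The landmark names, the candidate orientations, the coordinates, and the mean-squared-error criterion below are illustrative assumptions, not the patented algorithm itself:

```python
# Hypothetical reference feature-point positions (eyes and mouth) for
# registered face-orientation categories; all coordinates are made up.
ORIENTATION_TEMPLATES = {
    "frontal": {"left_eye": (30, 40), "right_eye": (70, 40), "mouth": (50, 75)},
    "left_30": {"left_eye": (25, 40), "right_eye": (55, 40), "mouth": (38, 75)},
    "right_30": {"left_eye": (45, 40), "right_eye": (75, 40), "mouth": (62, 75)},
}

def landmark_error(template, landmarks):
    """Mean squared distance between corresponding feature points."""
    errs = [
        (template[name][0] - landmarks[name][0]) ** 2
        + (template[name][1] - landmarks[name][1]) ** 2
        for name in template
    ]
    return sum(errs) / len(errs)

def select_orientation(landmarks):
    """Pick the face-orientation category whose template best fits."""
    return min(ORIENTATION_TEMPLATES,
               key=lambda cat: landmark_error(ORIENTATION_TEMPLATES[cat], landmarks))

# Feature points measured on a collation face image (illustrative).
probe = {"left_eye": (31, 41), "right_eye": (69, 39), "mouth": (50, 74)}
best = select_orientation(probe)
```

Only the registered images of the selected category would then be passed to the collation portion, narrowing the search before the actual face comparison.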
Abstract:
Provided are a comparison device and a comparison method that include determining whether or not a comparison target person is the subject of a registered face image by comparing an imaged face image to the registered face image, determining whether or not a blocking object is present in the imaged face image, determining whether or not removal of the blocking object is required by calculating a partial similarity score between the imaged face image and the registered face image in a partial area corresponding to the blocking object, and urging the comparison target person to remove the blocking object based on the partial similarity score.
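The partial-similarity check above can be sketched as follows, assuming grayscale images given as equally sized 2D lists, a rectangular region describing the blocking object, and an illustrative normalized inverse-difference score and threshold (none of these representation choices come from the abstract):

```python
def partial_similarity(imaged, registered, region):
    """Similarity over the sub-area (top, left, bottom, right) only."""
    top, left, bottom, right = region
    diffs, n = 0.0, 0
    for y in range(top, bottom):
        for x in range(left, right):
            diffs += abs(imaged[y][x] - registered[y][x]) / 255.0
            n += 1
    return 1.0 - diffs / n  # 1.0 means identical in that area

def should_urge_removal(imaged, registered, region, threshold=0.6):
    """Urge detachment only when the occluded area no longer
    resembles the registered image; threshold is illustrative."""
    return partial_similarity(imaged, registered, region) < threshold

# Toy data: an 8x8 registered face and an imaged face whose lower half
# is covered by a dark blocking object.
registered = [[200] * 8 for _ in range(8)]
occluded = [row[:] for row in registered]
for y in range(4, 8):
    occluded[y] = [0] * 8
urge = should_urge_removal(occluded, registered, (4, 0, 8, 8))
```

The point of scoring only the occluded partial area is that a high score there means the blocking object is not actually hurting the comparison, so the person need not be asked to remove it.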
Abstract:
Provided is a cosmetic assist device that can extract a cosmetic technique for making a user's face look like a target face image. The device includes an image capturing unit for capturing a face image of the user, an input unit for inputting a target face image, a synthesized face image generating unit for generating a plurality of synthesized face images by applying mutually different cosmetic techniques to the face image of the user, a similarity determination unit for determining a degree of similarity between each synthesized face image and the target face image, and a cosmetic technique extraction unit for extracting the cosmetic technique that was used to obtain the synthesized face image determined by the similarity determination unit to have the highest degree of similarity.
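The extraction step reduces to an argmax over candidate techniques. A minimal sketch, in which "images" are plain numbers, each technique is a function, and the similarity measure is an arbitrary illustrative stand-in:

```python
def extract_best_technique(user_face, target_face, techniques, similarity):
    """Apply every candidate technique, score each synthesized result
    against the target, and return the name of the best technique."""
    scored = {
        name: similarity(apply(user_face), target_face)
        for name, apply in techniques.items()
    }
    return max(scored, key=scored.get)

# Toy model: techniques shift a scalar "face", and similarity decays
# with the distance to the target. All names/values are hypothetical.
techniques = {
    "natural": lambda img: img + 1,
    "smokey": lambda img: img + 5,
    "bold": lambda img: img + 9,
}
sim = lambda a, b: 1.0 / (1.0 + abs(a - b))
best = extract_best_technique(10, 15, techniques, sim)
```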
Abstract:
An imaged face image of a comparison target person is compared to a registered face image. When a comparison score indicating the result of the comparison is equal to or smaller than Th1, and it is therefore determined that the comparison target person in the imaged face image is not the subject of the registered face image, it is determined whether or not the comparison score is equal to or greater than Th2 (Th2
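The two-threshold decision can be sketched as below. The abstract is cut off mid-definition, so the relation Th2 &lt; Th1, the concrete threshold values, and the action taken in the intermediate band are assumptions for illustration only:

```python
def judge(score, th1=0.8, th2=0.6):
    """Classify a comparison score into accept / retry / reject.
    th2 < th1 is assumed; the 'retry' action in the band between the
    two thresholds is a guess at the truncated abstract's intent."""
    if score > th1:
        return "accept"   # matches the registered image
    if score >= th2:
        return "retry"    # near miss: the device may act further
    return "reject"       # clearly a different person
```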
Abstract:
A monitoring device includes a process condition setter that sets a target area on a monitored moving image according to a user's operation input; a stay information acquirer that observes the staying situation of a moving object appearing in the monitored moving image and acquires stay information indicating that staying situation; a sub-image generator that generates a sub-image; a sub-image arrangement controller that controls the arrangement position of the sub-image on the monitored moving image based on the stay information acquired by the stay information acquirer; and an output controller that generates a monitoring moving image in which the sub-image is superimposed on the monitored moving image at the arrangement position determined by the sub-image arrangement controller, and outputs the monitoring moving image to a display device.
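The arrangement control can be sketched as follows, assuming the stay information is a grid of dwell counts and the sub-image is placed over the cell where moving objects stay least, so that it does not hide busy parts of the frame. The "least stay" placement rule and the grid representation are assumptions for illustration:

```python
def arrange_sub_image(stay_grid):
    """Return (row, col) of the grid cell with the lowest stay count,
    i.e. the least disruptive place to overlay the sub-image."""
    best = None
    for r, row in enumerate(stay_grid):
        for c, count in enumerate(row):
            if best is None or count < stay_grid[best[0]][best[1]]:
                best = (r, c)
    return best

# Illustrative dwell counts per cell of the monitored moving image.
stay = [[9, 4, 7],
        [8, 1, 6],
        [5, 3, 2]]
slot = arrange_sub_image(stay)
```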
Abstract:
An imaging position determination device includes an image reception unit that acquires an image and a position of a person within a monitoring area; an eye state detection unit that detects the open/closed state of the person's eyes from the image acquired by the image reception unit; an eye state map creation unit that creates an eye state map showing the eye state of the person in the monitoring area based on the open/closed state detected by the eye state detection unit; and an adjustment amount estimation unit that determines an imaging position of the person in the monitoring area based on the eye state map created by the eye state map creation unit.
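The eye state map and the resulting position estimate can be sketched as below, assuming the monitoring area is divided into a coarse grid and each observation pairs a grid cell with an eyes-open flag; the open-rate criterion for choosing the imaging position is an illustrative choice:

```python
from collections import defaultdict

def build_eye_state_map(observations):
    """observations: iterable of ((row, col), eyes_open) tuples.
    Returns cell -> fraction of observations with open eyes."""
    opened, total = defaultdict(int), defaultdict(int)
    for cell, eyes_open in observations:
        total[cell] += 1
        opened[cell] += 1 if eyes_open else 0
    return {cell: opened[cell] / total[cell] for cell in total}

def best_imaging_position(eye_state_map):
    """Cell where people are most often captured with open eyes."""
    return max(eye_state_map, key=eye_state_map.get)

# Toy observations: (cell, eyes_open) pairs across the monitoring area.
obs = [((0, 0), False), ((0, 0), True),
       ((0, 1), True), ((0, 1), True),
       ((1, 0), False)]
eye_map = build_eye_state_map(obs)
pos = best_imaging_position(eye_map)
```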
Abstract:
There is provided a collation device capable of capturing a face image suitable for collation with an imaging device while presenting a display suitable for collation to a person on the front side of a half mirror. By disposing a plurality of displays with a predetermined gap between them on the back side of the half mirror and disposing camera lenses in this gap, the line of sight for viewing the displays and the line of sight for imaging by the cameras can be made almost the same. As a result, it is possible to capture a face image suitable for collation with the imaging device while presenting a display suitable for collation to the person on the front side of the half mirror.
Abstract:
A facial authentication device (100) includes an image corrector (107) that estimates the orientation of a face based on the center position of the face and the position of an imaging unit (101), and corrects image distortion, including optical axis deviation, in visible light image data such that the orientation of the face coincides with the optical axis direction of the imaging unit (101); and a feature amount calculator (105) that extracts a face portion from the image data captured by the imaging unit (101) and calculates a feature amount of the face for output to the image corrector (107), and also calculates the feature amount of the face from the image data corrected by the image corrector (107) for output to a face collator (109).
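The orientation estimate inside the corrector can be sketched with a pinhole camera model: the apparent deviation of the face center from the image center, together with the focal length in pixels, gives the angle between the ray to the face and the camera's optical axis. The pinhole assumption, the focal length, and all coordinates are illustrative, not taken from the patent:

```python
import math

def off_axis_angles(face_center, image_center, focal_px):
    """Yaw and pitch (radians) of the ray through the face center,
    relative to the camera optical axis, under a pinhole model."""
    dx = face_center[0] - image_center[0]
    dy = face_center[1] - image_center[1]
    return math.atan2(dx, focal_px), math.atan2(dy, focal_px)

# Face detected at (960, 540) in a frame whose principal point is
# (640, 360), with an assumed focal length of 800 px.
yaw, pitch = off_axis_angles((960, 540), (640, 360), focal_px=800.0)
```

A correction step could then warp the image by these angles so the face appears as if viewed along the optical axis before feature extraction.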
Abstract:
In order to eliminate erroneous detection in a case where a plurality of facial regions are detected in a captured image, facial detection device (2) of the present disclosure detects facial regions of persons from captured images that are continuous in time series. The device includes a processor (15) that performs facial detection processing for detecting facial regions from the captured images and, in a case where a plurality of facial regions are detected, error determination processing for calculating the moving direction of each facial region between captured images that are sequential in time series and determining, for the plurality of facial regions whose moving directions have a degree of correlation equal to or larger than a predetermined threshold value, whether or not each detection as a facial region is correct.
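The direction-correlation test can be sketched as below, assuming each detected facial region is tracked as a sequence of center points and the correlation between moving directions is measured with cosine similarity; regions that move in lockstep (for example, faces printed on the side of a passing vehicle) would then be flagged for the error check. The tracking representation and the 0.95 threshold are illustrative:

```python
import math

def direction(p0, p1):
    """Unit direction vector from p0 to p1 (zero vector if no motion)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    norm = math.hypot(dx, dy)
    return (dx / norm, dy / norm) if norm else (0.0, 0.0)

def correlated_pairs(tracks, threshold=0.95):
    """Index pairs of tracks whose motion directions correlate strongly."""
    dirs = [direction(t[0], t[-1]) for t in tracks]
    pairs = []
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            cos = dirs[i][0] * dirs[j][0] + dirs[i][1] * dirs[j][1]
            if cos >= threshold:
                pairs.append((i, j))
    return pairs

# Toy tracks of facial-region centers across sequential frames.
tracks = [
    [(0, 0), (10, 0)],   # moving right
    [(5, 3), (15, 3)],   # moving right in lockstep -> suspicious pair
    [(2, 9), (2, 1)],    # moving up independently
]
suspect = correlated_pairs(tracks)
```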
Abstract:
Provided is a stay condition analyzing apparatus including a stay information acquirer that acquires stay information for each predetermined measurement period on the basis of positional information of a moving object acquired from a captured image of a target area; a heat map image generator that generates a heat map image by visualizing the stay information; a background image generator that generates a background image from the captured image; and a display image generator that generates a display image by superimposing the heat map image on the background image. The background image generator generates the background image by performing, on the captured image, image processing that reduces the discriminability of the moving object appearing in the captured image.
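The display-image composition can be sketched as below, assuming the captured frame and the stay heat map are equally sized 2D grayscale arrays (plain lists); the 3x3 box blur used to reduce discriminability and the alpha blend are illustrative choices, not the specific processing claimed:

```python
def box_blur(img):
    """3x3 mean filter, softening moving objects in the background."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def compose(frame, heat, alpha=0.5):
    """Superimpose the heat map on the blurred background frame."""
    bg = box_blur(frame)
    return [[(1 - alpha) * bg[y][x] + alpha * heat[y][x]
             for x in range(len(frame[0]))] for y in range(len(frame))]

# Toy 2x2 frame and heat map; the right column has high stay values.
frame = [[100, 100], [100, 100]]
heat = [[0, 255], [0, 255]]
display = compose(frame, heat)
```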