Abstract:
An authentication system according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: track an object included in a video captured by a first capture device; detect a candidate for biometric authentication in the object being tracked; determine whether biometric authentication has been performed for the candidate based on a record of biometric authentication performed for the object being tracked; and perform the biometric authentication for the candidate based on a video of an authentication part of the candidate when the biometric authentication has not been performed for the candidate, the video of the authentication part being captured by a second capture device having a capture range including a part of a capture range of the first capture device.
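The record-keeping step described above can be sketched in plain Python. This is an illustrative assumption, not the patent's implementation: track IDs, the record dictionary, and the `authenticate` callable (standing in for matching against the second capture device's close-up video) are all stand-ins.

```python
# Hedged sketch: deduplicating biometric authentication over tracked objects.
# The track IDs and the authenticate() callable are illustrative assumptions.

class AuthenticationSystem:
    def __init__(self):
        self.auth_record = {}  # track_id -> True once authentication succeeded

    def needs_authentication(self, track_id):
        # Consult the record of authentications already performed
        # for the object being tracked.
        return not self.auth_record.get(track_id, False)

    def on_candidate(self, track_id, authenticate):
        # `authenticate` stands in for biometric matching on the video of the
        # authentication part captured by the second capture device.
        if self.needs_authentication(track_id):
            self.auth_record[track_id] = authenticate(track_id)
            return True   # authentication was attempted
        return False      # already authenticated for this track; skip

system = AuthenticationSystem()
first = system.on_candidate(7, lambda tid: True)   # first sighting: authenticate
second = system.on_candidate(7, lambda tid: True)  # same track: skipped
```

The point of the record is the second call: the same tracked object is not re-authenticated once a successful result is stored.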
Abstract:
The disclosure includes: inputting a first image obtained by capturing an object of authentication moving in a specific direction; inputting a second image of at least one eye, obtained by capturing a right eye or a left eye of the object; determining whether the second image is of the left eye or the right eye of the object, based on information including the first image, and outputting the determination result associated with the second image as left/right information; comparing characteristic information for the eye indicated by the left/right information, acquired from a memory that stores characteristic information of the right eye and the left eye of an object to be authenticated, with characteristic information associated with the left/right information, and calculating a verification score; and authenticating the object captured in the first image and the second image, based on the verification score, and outputting an authentication result.
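The flow above can be sketched with toy stand-ins. Everything here is an assumption for illustration: the left/right heuristic (in a frontal image, the subject's right eye appears left of the face centre in image coordinates), the bit-list "characteristic information", and the similarity measure are not the patent's actual classifier or iris encoding.

```python
# Illustrative sketch of the flow: left/right determination, per-eye template
# lookup, score thresholding. All functions and data here are stand-ins.

def determine_left_right(eye_center_x, face_center_x):
    # Assumption: in a frontal first image, the subject's right eye appears
    # to the left of the face centre in image coordinates.
    return "right" if eye_center_x < face_center_x else "left"

def verification_score(template, features):
    # Toy similarity: fraction of matching positions between two codes.
    matches = sum(1 for a, b in zip(template, features) if a == b)
    return matches / len(template)

# Memory storing characteristic information of each eye of the enrolled object.
memory = {"left": [1, 0, 1, 1], "right": [0, 0, 1, 0]}

side = determine_left_right(eye_center_x=120, face_center_x=160)
score = verification_score(memory[side], [0, 0, 1, 1])
authenticated = score >= 0.7
```

The left/right information selects which stored template is compared, so a left-eye probe is never scored against the right-eye record.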
Abstract:
An image processing device according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: receive a visible image of a face; receive a near-infrared image of the face; adjust brightness of the visible image based on a frequency distribution of pixel values of the visible image and a frequency distribution of pixel values of the near-infrared image; specify a relative position that relates the visible image to the near-infrared image; invert the adjusted brightness of the visible image; detect a region of a pupil from a synthetic image obtained by adding the brightness-inverted visible image to the near-infrared image based on the relative position; and output information on the detected pupil.
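The idea behind the synthetic image can be sketched numerically: the pupil is dark in visible light but bright in near-infrared (retro-reflection), so inverting the visible image makes the pupil bright in both, and the sum peaks at the pupil. This sketch makes simplifying assumptions not in the abstract: the images are already aligned (the relative position is known), brightness matching is reduced to scaling by the two means rather than full distribution matching, and the pupil is taken as the single maximum pixel.

```python
import numpy as np

def detect_pupil(visible, near_ir):
    # Match overall brightness using the two intensity distributions
    # (here their means stand in for full histogram matching).
    adjusted = visible * (near_ir.mean() / max(visible.mean(), 1e-6))
    # Invert the adjusted visible image: the dark pupil becomes bright.
    inverted = adjusted.max() - adjusted
    # The pupil is bright in the near-infrared image, so the sum of the
    # inverted visible image and the near-infrared image peaks at the pupil.
    synthetic = inverted + near_ir
    return np.unravel_index(np.argmax(synthetic), synthetic.shape)

vis = np.full((8, 8), 0.8); vis[3, 4] = 0.1   # dark pupil in visible light
nir = np.full((8, 8), 0.2); nir[3, 4] = 0.9   # bright pupil in near-infrared
pupil = detect_pupil(vis, nir)
```

Only a location where the visible image is dark *and* the near-infrared image is bright reinforces itself in the sum, which is why the combination is more selective than either image alone.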
Abstract:
A mobile body detection device: determines, in a case where a position of a first mobile body and a position of a second mobile body are approximately the same, that a mobile body is detected at that position; and determines, in a case where the position of the first mobile body and the position of the second mobile body are different and either a first reliability or a second reliability exceeds a threshold, that a mobile body is detected at the position of the mobile body corresponding to the reliability that exceeds the threshold.
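The decision rule above is simple enough to state directly as code. The position tolerance and threshold values are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of the stated decision rule; the distance tolerance and the
# reliability threshold are illustrative assumptions.

def detect(pos1, rel1, pos2, rel2, tol=1.0, threshold=0.5):
    # Positions approximately the same: a mobile body is detected there.
    if abs(pos1[0] - pos2[0]) <= tol and abs(pos1[1] - pos2[1]) <= tol:
        return pos1
    # Positions differ: keep the detection whose reliability exceeds the
    # threshold (the more reliable one if both do).
    candidates = [(rel, pos) for rel, pos in ((rel1, pos1), (rel2, pos2))
                  if rel > threshold]
    if candidates:
        return max(candidates)[1]
    return None  # positions differ and neither reliability is sufficient
```

For example, two detections at nearly the same position fuse into one; two distant detections keep only the one whose reliability clears the threshold.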
Abstract:
Provided are an object detection device, POS terminal device, object detection method, and program capable of carrying out appropriate processing according to whether an object is covered by a translucent container. A distance measurement unit measures the distance to each position on an opposing object. An irradiation unit irradiates the object with light. A reflected light intensity measurement unit measures the intensity of the light that has been irradiated by the irradiation unit and reflected from each position on the object. An object determination unit determines whether the object is covered by a translucent container based on the distances measured by the distance measurement unit and the reflected light intensities measured by the reflected light intensity measurement unit.
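One plausible way to combine distance and reflected intensity can be sketched as follows. This is an assumed rule for illustration, not the patent's actual criterion: for a bare object the reflected intensity roughly follows an inverse-square fall-off with distance, while a translucent cover adds a surface reflection that raises intensity above that prediction. The constants `k` and `margin` are assumptions.

```python
# Illustrative rule (not the disclosed criterion): flag the object as covered
# when measured intensity consistently exceeds what distance alone predicts.

def covered_by_translucent_container(distances, intensities,
                                     k=100.0, margin=1.5):
    # Expected intensity from distance alone (inverse-square model).
    expected = [k / (d * d) for d in distances]
    # Ratio of measured to expected intensity at each position.
    ratios = [i / e for i, e in zip(intensities, expected)]
    return sum(ratios) / len(ratios) > margin

bare = covered_by_translucent_container([10.0, 10.0], [1.0, 1.1])
wrapped = covered_by_translucent_container([10.0, 10.0], [2.0, 2.2])
```

The downstream device can then branch on this result, e.g. a POS terminal switching recognition pipelines for packaged versus unpackaged goods.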
Abstract:
An image collation unit determines whether input image data matches pre-registered image data, or whether a feature quantity of the input image data matches a pre-registered feature quantity of image data, and stores in a memory at least one of the input image data determined to match and information representing that input image data. A complexity computing unit computes the complexity of the image data. An image flatness determination unit determines, on the basis of the computed complexity, whether the image data denotes a flat image. An information processing execution unit executes, when newly input image data is determined to denote a flat image and the input image data or the information representing it is stored in the memory, the information processing specified by the input image data or the like stored in the memory.
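The complexity and flatness steps can be sketched concretely. The specific measure here is an assumption: complexity is approximated as the mean absolute difference between horizontally adjacent pixels, and the flatness threshold is an illustrative value.

```python
# Hedged sketch: "complexity" as mean absolute horizontal pixel difference;
# an image is treated as flat when complexity is below an assumed threshold.

def complexity(image):
    total = count = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count

def is_flat(image, threshold=5.0):
    return complexity(image) < threshold

flat_img = [[128, 129, 128], [128, 128, 129]]   # near-uniform: low complexity
busy_img = [[0, 255, 0], [255, 0, 255]]         # high-contrast: high complexity
```

A flat input (e.g. the camera covered or pointed at a blank surface) then acts as the trigger that executes the processing associated with the previously matched image.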
Abstract:
An image comparison unit (81) compares a query image with a registered image to detect, in the registered image, a region corresponding to the query image. An action information determining unit (82), on the basis of intermediate information in which sub-region information identifying sub-regions in the registered image and action information representing information processing to be executed by a target device are associated with each other, identifies sub-regions on the basis of the sub-region information, chooses a sub-region having the highest degree of matching with the detected region among the identified sub-regions, and identifies action information corresponding to the chosen sub-region. An action information execution unit (83) causes the target device to execute information processing corresponding to the action information.
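The sub-region selection step can be sketched as follows. IoU (intersection over union) stands in for the "degree of matching", and the intermediate-information table mapping sub-regions to actions is illustrative; neither is stated in the abstract.

```python
# Hedged sketch: choose the sub-region with the highest overlap with the
# detected region, then look up its associated action information.

def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection over union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# Intermediate information: sub-region -> action for the target device
# (contents are illustrative assumptions).
intermediate = {
    (0, 0, 50, 50): "open_manual",
    (50, 0, 100, 50): "play_video",
}

def choose_action(detected):
    best = max(intermediate, key=lambda region: iou(region, detected))
    return intermediate[best]

action = choose_action((55, 5, 95, 45))
```

Because the detected region falls entirely inside the second sub-region, that sub-region wins and its action is returned.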
Abstract:
An image processing device compares multiple images, captured while rotating a transparent container, of a detection target accumulated at the bottom surface of the container or at a surface where a medium enclosed inside the container contacts another medium inside it, to determine candidate regions that move in a movement direction in accordance with the rotation. The image processing device then determines the presence or absence of the detection target by using first determination results, obtained by using a first learning model and image information of the candidate regions to determine whether or not the candidate regions are the detection target, and second determination results, obtained by using a second learning model and information indicating a chronological change in the candidate regions to determine whether or not the candidate regions are the detection target.
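The two-model combination can be sketched with trivial stand-ins. The "models" below are threshold functions, not trained learning models, and combining the two results with a logical AND is an assumption; the abstract only says both determination results are used.

```python
# Hedged sketch: two stand-in "models" whose results are combined to decide
# presence or absence of the detection target.

def first_model(appearance_score):
    # First model: judges from image information of the candidate region.
    return appearance_score > 0.5

def second_model(positions):
    # Second model: judges from the chronological change of the region —
    # a target accumulated on the rotating container should move
    # consistently with the rotation.
    return all(b > a for a, b in zip(positions, positions[1:]))

def is_detection_target(appearance_score, positions):
    # Assumption: require both determination results to agree.
    return first_model(appearance_score) and second_model(positions)

hit = is_detection_target(0.8, [0, 5, 11, 16])   # looks right, moves with rotation
miss = is_detection_target(0.8, [0, 5, 3, 16])   # looks right, motion inconsistent
```

The temporal model filters out regions that look like the target in a single frame but do not track the container's rotation, such as reflections fixed to the camera.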
Abstract:
An information providing device according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: receive a face image; determine, based on the face image, whether a person in the face image is unsuitable for iris data acquisition; and, when the person is determined to be unsuitable for the iris data acquisition, output information based on that determination.
Abstract:
An authentication device includes an image acquisition unit, an identification unit, and an authentication unit. The image acquisition unit acquires an image of an eye of a subject. The identification unit identifies the colored pattern of a colored contact lens worn by the subject by comparing a reference image with the image of the eye. The authentication unit identifies the subject using a feature of a region of the iris of the eye other than the colored region of the colored pattern.
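The masking idea can be sketched as follows. The bit-list representation of iris features and the similarity measure are illustrative assumptions, not the patent's encoding; the point is only that positions covered by the colored pattern are excluded from comparison.

```python
# Hedged sketch: authenticate from iris features outside the identified
# colored pattern. Feature encoding and similarity are stand-ins.

def masked_similarity(enrolled, probe, colored_mask):
    # Compare only positions NOT covered by the colored pattern.
    usable = [i for i, covered in enumerate(colored_mask) if not covered]
    matches = sum(1 for i in usable if enrolled[i] == probe[i])
    return matches / len(usable)

enrolled = [1, 0, 1, 1, 0, 1]
probe    = [1, 0, 0, 1, 0, 0]   # positions 2 and 5 corrupted by the lens print
mask     = [0, 0, 1, 0, 0, 1]   # colored pattern covers positions 2 and 5

score = masked_similarity(enrolled, probe, mask)
```

Without the mask, the two corrupted positions would drag the score down and could reject a genuine subject; excluding the colored region restores the match.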