Abstract:
The disclosure includes: inputting a first image obtained by capturing an image of an authentication target; inputting a second image obtained by capturing an image of a right eye or a left eye of the target; determining whether the second image is of the left eye or the right eye of the target based on information including the first image, and outputting the determination result as left/right information in association with the second image; detecting an overlap between a region including the second image and a predetermined region in the first image; calculating a verification score by comparing characteristic information related to the left/right information with iris characteristic information calculated from the second image, and calculating a first weighted verification score obtained by weighting the verification score with the detection result; and authenticating the target in the second image based on the first weighted verification score, and outputting an authentication result.
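A minimal sketch of the scoring flow described above, assuming NumPy arrays, bounding boxes in face-image coordinates, binary iris codes, and a Hamming-style comparison; the helper names (overlap_ratio, verification_score, authenticate) and the specific weighting and threshold are illustrative assumptions, not the claimed implementation.

    import numpy as np

    def overlap_ratio(eye_box, predetermined_box):
        # Boxes are (x1, y1, x2, y2) in first-image (face) coordinates.
        x1 = max(eye_box[0], predetermined_box[0])
        y1 = max(eye_box[1], predetermined_box[1])
        x2 = min(eye_box[2], predetermined_box[2])
        y2 = min(eye_box[3], predetermined_box[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        eye_area = (eye_box[2] - eye_box[0]) * (eye_box[3] - eye_box[1])
        return inter / eye_area if eye_area > 0 else 0.0

    def verification_score(enrolled_code, probe_code):
        # Hamming-style similarity between binary iris codes (assumed representation).
        return 1.0 - np.mean(enrolled_code != probe_code)

    def authenticate(enrolled, probe_code, left_right, eye_box, predetermined_box,
                     threshold=0.7):
        # enrolled: dict mapping 'left'/'right' to the registered iris code for that eye.
        score = verification_score(enrolled[left_right], probe_code)   # verification score
        weight = overlap_ratio(eye_box, predetermined_box)             # detection result
        weighted_score = weight * score                                # first weighted verification score
        return weighted_score >= threshold, weighted_score

Multiplying the score by the overlap is only one possible weighting; the abstract does not specify the form of the weighting function.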
Abstract:
The disclosure is an information processing device including: a memory; and at least one processor coupled to the memory and performing operations. The operations include: generating, based on a target image including a target and a standard image not including the target, a target structure image indicating a shape feature of an object included in the target image and a standard structure image indicating a shape feature of an object included in the standard image, for each of a plurality of imaging conditions; calculating an individual difference being a difference between the target structure image and the standard structure image, and a composite difference based on the individual differences; calculating an individual smoothness being a smoothness of the target structure image, and a composite smoothness based on the individual smoothnesses; and generating a silhouette image of the target, based on the composite difference and the composite smoothness.
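A minimal sketch of how the composite difference and composite smoothness could be combined into a silhouette, assuming one grayscale NumPy image per imaging condition, gradient magnitude as a stand-in for the structure image, and a simple relative threshold; all of these choices are assumptions for illustration only.

    import numpy as np

    def structure(image):
        # Assumed shape feature: gradient magnitude of the image.
        gy, gx = np.gradient(image.astype(float))
        return np.hypot(gx, gy)

    def smoothness(structure_image):
        # Per-pixel smoothness: inverse of the local variation of the structure image.
        gy, gx = np.gradient(structure_image)
        return 1.0 / (1.0 + np.hypot(gx, gy))

    def silhouette(target_images, standard_images, threshold=0.5):
        # target_images / standard_images: lists of images, one per imaging condition.
        diffs, smooths = [], []
        for tgt, std in zip(target_images, standard_images):
            t_struct, s_struct = structure(tgt), structure(std)
            diffs.append(np.abs(t_struct - s_struct))     # individual difference
            smooths.append(smoothness(t_struct))          # individual smoothness
        composite_diff = np.mean(diffs, axis=0)           # composite difference
        composite_smooth = np.mean(smooths, axis=0)       # composite smoothness
        # Silhouette: large difference, damped where the target structure is smooth.
        score = composite_diff * (1.0 - composite_smooth)
        return (score > threshold * score.max()).astype(np.uint8)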
Abstract:
The present invention enables a high-quality filtered image to be generated even from multimodal and multispectral images containing positional deviations. An image perturbation part generates a perturbed guide image group comprising first to K-th perturbed guide images obtained by deforming a guide image. A filtering part generates a filtered image group comprising first to K-th filtered images by applying first to K-th filtering processing to a target image by using the perturbed guide image group. A reliability calculation part calculates a reliability group comprising first to K-th reliabilities for the first to K-th filtered images of the filtered image group on the basis of first to K-th correlation values between the first to K-th perturbed guide images and the target image. A weight optimization part generates, on the basis of the first to K-th reliabilities, a weight group comprising first to K-th weights to be respectively used when compositing the first to K-th filtered images. An output image compositing part composites an output image from the weight group and the filtered image group.
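An illustrative sketch of the perturb, filter, weight, and composite steps, assuming float NumPy images, integer translations as the perturbation, a simplified per-pixel guided filter as the filtering processing, and a softmax over correlation-based reliabilities as the weight optimization; these are assumed stand-ins, not the claimed processing.

    import numpy as np

    def shift(img, dy, dx):
        # Perturbation: integer translation (wrap-around) as a stand-in for deformation.
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

    def guided_filter(target, guide, r=2, eps=1e-3):
        # Simplified (slow) guided filtering: a local linear model q = a*guide + b is
        # fitted per window and applied at the window center.
        h, w = target.shape
        out = np.zeros_like(target, dtype=float)
        for y in range(h):
            for x in range(w):
                ys, xs = slice(max(0, y - r), y + r + 1), slice(max(0, x - r), x + r + 1)
                g, t = guide[ys, xs], target[ys, xs]
                a = ((g * t).mean() - g.mean() * t.mean()) / (g.var() + eps)
                b = t.mean() - a * g.mean()
                out[y, x] = a * guide[y, x] + b
        return out

    def composite_output(target, guide, shifts=((0, 0), (1, 0), (0, 1))):
        filtered, reliabilities = [], []
        for dy, dx in shifts:
            g = shift(guide, dy, dx)                              # k-th perturbed guide image
            filtered.append(guided_filter(target, g))             # k-th filtered image
            corr = np.corrcoef(g.ravel(), target.ravel())[0, 1]   # k-th correlation value
            reliabilities.append(corr)                            # k-th reliability
        w = np.exp(np.array(reliabilities))
        w /= w.sum()                                              # k-th weights (softmax of reliabilities)
        return sum(wk * fk for wk, fk in zip(w, filtered))        # composited output image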
Abstract:
An image processing system improves output image quality so that it is suitable for a user, in order to facilitate analysis of input images acquired by sensors. It includes: a gradient calculation unit that calculates a desired gradient based on the input images; an indication function calculation unit that calculates an indication function for the input images, the indication function defining a range that can be taken by an output image and pixel values of a reference image; a pixel value renewal unit that renews pixel values of one of the input images so as to approximate the desired gradient, to produce a renewed image; and a pixel value constraint unit that updates pixel values of the renewed image so that they fall within the range that can be taken by the output image and approximate the pixel values of the reference image, to thereby obtain the output image.
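A minimal sketch of the renew-then-constrain iteration, assuming float NumPy images, the desired gradient taken directly from one of the input images, and the indication function modeled as a clip to an allowed pixel range plus a pull toward the reference image; those modeling choices are assumptions for illustration.

    import numpy as np

    def gradients(img):
        gy, gx = np.gradient(img.astype(float))
        return gy, gx

    def divergence(gy, gx):
        # Divergence of a gradient field, used to move the image toward the desired gradient.
        return np.gradient(gy, axis=0) + np.gradient(gx, axis=1)

    def fuse(input_a, input_b, reference, value_range=(0.0, 255.0),
             iterations=50, step=0.1, pull=0.05):
        desired_gy, desired_gx = gradients(input_b)          # desired gradient from one input
        out = input_a.astype(float).copy()
        for _ in range(iterations):
            gy, gx = gradients(out)
            # Renewal: gradient-descent step that brings the image's gradient
            # closer to the desired gradient.
            out += step * divergence(gy - desired_gy, gx - desired_gx)
            # Constraint: keep pixel values in the allowed range and near the reference image.
            out = np.clip(out, *value_range)
            out += pull * (reference - out)
        return out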
Abstract:
An information processing device, apparatus, method and non-transitory computer-readable storage medium are disclosed. An information processing device may include a memory storing instructions, and at least one processor configured to process the instructions to generate a comparison image by transforming a reference image, associate the comparison image with a class variable representing an object included in the reference image, calculate a degree of difference between an input patch which is an image representing a sub-region of an input image and a comparison patch which is an image representing a sub-region of the comparison image, estimate a displacement vector between the input patch and the comparison patch, calculate a first degree of reliability corresponding to the displacement vector and the class variable on the basis of the displacement vector and the degree of difference, calculate a second degree of reliability for each comparison patch on the basis of the first degree of reliability, and identify, as a recognition target, the object represented by the class variable associated with the comparison image including the comparison patch whose second degree of reliability is greater than a predetermined threshold value.
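An illustrative sketch of the patch comparison and reliability computation, assuming grayscale NumPy images, a fixed patch size, sum-of-squared-differences as the degree of difference, and Gaussian penalties on both the difference and the displacement as the first degree of reliability; the decision rule mirrors the thresholding described above but is a simplified assumption.

    import numpy as np

    PATCH = 8  # patch size (assumption)

    def patches(img, step=8):
        h, w = img.shape
        for y in range(0, h - PATCH + 1, step):
            for x in range(0, w - PATCH + 1, step):
                yield (y, x), img[y:y + PATCH, x:x + PATCH]

    def recognize(input_image, comparison_images, threshold=0.5, sigma=10.0, disp_sigma=16.0):
        # comparison_images: list of (comparison_image, class_variable) pairs, e.g.
        # generated beforehand by transforming reference images.
        best_class, best_reliability = None, 0.0
        for comp, class_var in comparison_images:
            second_reliabilities = []
            for (cy, cx), comp_patch in patches(comp):
                first = []
                for (iy, ix), in_patch in patches(input_image):
                    # Degree of difference between input patch and comparison patch.
                    diff = np.mean((in_patch.astype(float) - comp_patch.astype(float)) ** 2)
                    dy, dx = iy - cy, ix - cx                     # displacement vector
                    # First degree of reliability: high for small difference and small displacement.
                    first.append(np.exp(-diff / (2 * sigma ** 2))
                                 * np.exp(-(dy ** 2 + dx ** 2) / (2 * disp_sigma ** 2)))
                second_reliabilities.append(max(first))           # second degree of reliability
            reliability = max(second_reliabilities)
            if reliability > threshold and reliability > best_reliability:
                best_class, best_reliability = class_var, reliability
        return best_class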
Abstract:
An image processing device according to the present invention includes: a weight calculation unit that determines an area where a feature value of an input image is preserved, based on a gradient of a feature value of a pixel of the input image and a direction of the gradient, and calculates a weight for reducing a regularization constraint that is a constraint based on regularization of image processing in the area where the feature value is preserved; a regularization term calculation unit that calculates a regularization constraint of a high resolution image restored based on the input image by using the weight; a reconstruction constraint calculation unit that calculates a reconstruction constraint that is a constraint based on reconstruction of the high resolution image; and an image restoring unit that restores the high resolution image from the input image based on the regularization constraint and the reconstruction constraint.
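A minimal sketch of the restoration as a weighted energy minimization, assuming a known integer downscaling factor, NumPy images, nearest-neighbor up/downsampling, and gradient-magnitude-based weights that reduce the regularization where the feature (edge) is strong; this is a generic formulation under those assumptions, not the claimed method.

    import numpy as np

    def edge_weights(image, k=0.05):
        # Small weight where the gradient is strong, so edges are preserved from over-smoothing.
        gy, gx = np.gradient(image.astype(float))
        magnitude = np.hypot(gx, gy)
        return 1.0 / (1.0 + magnitude / (k * (magnitude.max() + 1e-8)))

    def downsample(img, s):
        return img[::s, ::s]

    def upsample(img, s):
        return np.kron(img, np.ones((s, s)))

    def restore(low_res, scale=2, iterations=100, step=0.1, lam=0.1):
        high = upsample(low_res.astype(float), scale)         # initial high-resolution estimate
        weights = edge_weights(high)                          # weights reducing regularization at edges
        for _ in range(iterations):
            # Reconstruction constraint: the downsampled estimate should match the input.
            residual = downsample(high, scale) - low_res
            recon_grad = upsample(residual, scale) / (scale * scale)
            # Regularization constraint: weighted quadratic smoothness of the estimate.
            gy, gx = np.gradient(high)
            reg_grad = -(np.gradient(weights * gy, axis=0) + np.gradient(weights * gx, axis=1))
            high -= step * (recon_grad + lam * reg_grad)
        return high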
Abstract:
An information display apparatus includes: a collation unit that specifies a position of an object in an overhead image obtained by capturing an image of a region that includes the object before a disaster, based on a result of collating the overhead image with a section in a target image that includes a situation of the object after the disaster, the section satisfying a criterion for determining that an influence of the disaster is small; and a display unit that displays an image that includes the situation of the object and information by which the position of the object in the overhead image can be specified.
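A rough sketch of the collation step, assuming grayscale NumPy images and a sliding-window normalized cross-correlation; the selection of a section of the post-disaster target image that satisfies the low-influence criterion is assumed to have been done beforehand, and the function names here are hypothetical.

    import numpy as np

    def normalized_correlation(a, b):
        a, b = a.astype(float).ravel(), b.astype(float).ravel()
        a, b = a - a.mean(), b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-8
        return float(a @ b / denom)

    def locate_object(overhead, target_section, stride=4):
        # Slide the low-damage section of the post-disaster image over the pre-disaster
        # overhead image and return the best-matching position.
        sh, sw = target_section.shape
        best_pos, best_score = None, -1.0
        for y in range(0, overhead.shape[0] - sh + 1, stride):
            for x in range(0, overhead.shape[1] - sw + 1, stride):
                score = normalized_correlation(overhead[y:y + sh, x:x + sw], target_section)
                if score > best_score:
                    best_pos, best_score = (y, x), score
        return best_pos, best_score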
Abstract:
A surface property estimation system includes an image acquisition means for acquiring an image of a surface of an object, an estimation means for estimating a surface property from the acquired image by using an estimation model obtained through machine learning with use of an image of a surface of an object and a surface property shown by the image as training data, an extraction means for extracting, from the acquired image, a feature amount unique to the image, and a registration means for storing the estimated surface property and the extracted feature amount in a storage means in association with each other.
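A schematic sketch of how the estimation, extraction, and registration means might fit together, assuming a trained scikit-learn-style model with a predict() method, a hypothetical extract_features() callable returning the image-unique feature amount, and an in-memory list as the storage means; all component names are assumptions.

    class SurfacePropertySystem:
        def __init__(self, estimation_model, extract_features):
            # estimation_model: model trained on (surface image, surface property) pairs.
            # extract_features: callable returning a feature amount unique to the image.
            self.model = estimation_model
            self.extract_features = extract_features
            self.storage = []                      # storage means: (feature, property) records

        def register(self, surface_image):
            flat = surface_image.astype(float).ravel()[None, :]
            surface_property = self.model.predict(flat)[0]    # estimation means
            feature = self.extract_features(surface_image)    # extraction means
            self.storage.append((feature, surface_property))  # registration means
            return feature, surface_property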
Abstract:
An information providing device according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: receive a face image; determine, based on the face image, whether a person in the face image is unsuitable for iris data acquisition; and output, when the person is determined to be unsuitable for the iris data acquisition, information based on the determination.
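A minimal sketch of the decision flow, assuming a hypothetical eye_openness() detector on the face image and a simple threshold as the unsuitability criterion; the abstract does not specify the actual criteria or the content of the output information.

    def check_iris_suitability(face_image, eye_openness, threshold=0.3):
        # eye_openness: hypothetical callable returning a 0..1 openness score from the face image.
        openness = eye_openness(face_image)
        unsuitable = openness < threshold       # assumed criterion: eyes too closed for iris capture
        if unsuitable:
            return "Please open your eyes wider for iris capture."   # output information
        return None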
Abstract:
An authentication device includes an image acquisition unit, an identification unit, and an authentication unit. The image acquisition unit acquires an image of an eye of a subject. The identification unit identifies a colored pattern of a colored contact lens worn by the subject by comparing a reference image with the image of the eye. The authentication unit identifies the subject using a feature in a region of the iris of the eye other than the colored region of the colored pattern.
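An illustrative sketch of excluding the colored-pattern region before iris matching, assuming boolean NumPy masks for the iris region and the identified colored pattern, and a hypothetical match_iris() comparator; the masking rule and threshold are assumptions.

    def authenticate_with_contact_lens(eye_image, iris_mask, colored_pattern_mask,
                                       enrolled_features, match_iris, threshold=0.8):
        # iris_mask / colored_pattern_mask: boolean arrays with the same shape as eye_image.
        # match_iris: hypothetical comparator returning a similarity score in 0..1.
        usable_region = iris_mask & ~colored_pattern_mask   # iris pixels not covered by the pattern
        features = eye_image * usable_region                # features taken only from the usable region
        score = match_iris(features, enrolled_features)
        return score >= threshold, score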