Abstract:
A speech determiner determines whether or not a target individual is speaking when facial images of the target individual are captured. An emotion estimator estimates the emotion of the target individual using the facial images of the target individual, on the basis of the determination results of the speech determiner.
Abstract:
An image processing device is provided with an acquiring unit configured to acquire a face image, and a control unit configured to specify a face direction in the face image acquired by the acquiring unit and to add, based on the specified face direction, a picture expressing a contour of a face component to the face image.
Abstract:
A reconstructed image that better reflects the photo-shooting information of a main object is generated. A layer determiner defines the layers of a reconstructed image. For each layer, a layer image generator reconstructs an image of the objects included in the allocated depth range from a light field image and the depth map of the light field image. A conversion pixel extractor extracts the corresponding pixels on a conversion layer, which is the layer corresponding to an object to be modified; the object to be modified is designated by an operation acquired by a modification operation acquirer. A reconstructed image generator converts the layer images using a conversion matrix defined by a conversion matrix determiner and, by superimposing the layer images, generates an image whose composition has been modified.
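As an illustrative sketch only (not the patented implementation), the conversion-and-superimposition step could look as follows. The pixel-dictionary layer representation, the 2x3 affine conversion matrix, and the overwrite-on-superimpose rule are all assumptions made for this example.

```python
def apply_conversion(layer, matrix):
    """Apply a 2x3 affine conversion matrix to each pixel coordinate
    of a layer, where a layer maps (x, y) -> pixel value."""
    (a, b, tx), (c, d, ty) = matrix
    return {(round(a * x + b * y + tx), round(c * x + d * y + ty)): v
            for (x, y), v in layer.items()}

def reconstruct(layers, conversion_layer_index, matrix):
    """Convert the designated conversion layer, then superimpose all
    layers far-to-near so nearer layers overwrite farther ones."""
    converted = list(layers)
    converted[conversion_layer_index] = apply_conversion(
        layers[conversion_layer_index], matrix)
    image = {}
    for layer in converted:  # later (nearer) layers overwrite
        image.update(layer)
    return image
```

For example, shifting a one-pixel subject layer one column to the right over a background layer moves the subject without altering the background, which mimics a composition modification.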
Abstract:
A bioinformation acquiring apparatus includes at least one processor and a memory configured to store a program to be executed by the processor. The processor acquires bioinformation in chronological order; derives outlier level parameters, each indicating the level of inclusion of outliers in the pieces of bioinformation acquired within a first duration; derives correction terms based on the bioinformation remaining after removal of outliers from the pieces of bioinformation acquired within a second duration that is longer than the first duration; selects, based on the outlier level parameters, one or both of a first correction procedure, which uses the correction terms, and a second correction procedure, which involves interpolation independent of the correction terms; and corrects the outliers of the bioinformation within the first duration by the selected correction procedure.
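A minimal sketch of the procedure selection described above follows. The heart-rate-like plausibility bounds, the 0.5 level threshold, and the median-based correction term are illustrative assumptions, not values from the patent.

```python
def outlier_level(window, lo=40.0, hi=180.0):
    """Fraction of samples in the short (first-duration) window that
    fall outside a plausible range."""
    return sum(1 for v in window if not lo <= v <= hi) / len(window)

def correction_term(long_window, lo=40.0, hi=180.0):
    """Median of the long (second-duration) window after removing outliers."""
    inliers = sorted(v for v in long_window if lo <= v <= hi)
    return inliers[len(inliers) // 2]

def correct(window, long_window, level_threshold=0.5):
    """Replace outliers using the correction term when the outlier
    level is high, otherwise by interpolating neighbouring inliers."""
    level = outlier_level(window)
    term = correction_term(long_window)
    out = []
    for i, v in enumerate(window):
        if 40.0 <= v <= 180.0:
            out.append(v)                 # inlier: keep as-is
        elif level >= level_threshold:
            out.append(term)              # first procedure: correction term
        else:
            # second procedure: interpolation independent of the term,
            # falling back to the term only when no neighbour exists
            prev = next((w for w in reversed(out) if 40.0 <= w <= 180.0), term)
            nxt = next((w for w in window[i + 1:] if 40.0 <= w <= 180.0), term)
            out.append((prev + nxt) / 2)
    return out
```

With few outliers the interpolation path is taken; once the window is dominated by outliers, interpolation has little to anchor on and the long-window correction term is used instead, which is the motivation for selecting between the two procedures.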
Abstract:
A voice recognition device is provided with a processor configured to: determine, based on a captured image of the lips of a target person, a breathing period immediately before utterance, which is a period in which the lips moved with breathing immediately before the utterance; detect, based on the captured image of the lips, a voice period, which is a period in which the target person is uttering, excluding the determined breathing period; acquire a voice of the target person; and recognize the voice of the target person based on the voice acquired within the detected voice period.
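The exclusion of the breathing period can be sketched as an interval computation over per-frame lip-motion flags. The boolean-flag input and the fixed-length breathing heuristic are illustrative assumptions for this example only.

```python
def detect_voice_period(lip_moving, breath_frames=2):
    """lip_moving: per-frame booleans derived from the lip images.
    The first `breath_frames` frames of the initial motion run are
    treated as breathing immediately before utterance and excluded;
    the remainder of the run is the voice period (start, end)."""
    try:
        start = lip_moving.index(True)
    except ValueError:
        return None                      # lips never moved
    end = start
    while end < len(lip_moving) and lip_moving[end]:
        end += 1
    voice_start = min(start + breath_frames, end)
    return (voice_start, end) if voice_start < end else None
```

Recognition would then be applied only to audio falling inside the returned frame range, so breath noise before the utterance does not reach the recognizer.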
Abstract:
The present invention reduces the time required to detect an object after the rotation of a robot's head or body is completed. A robot 100 includes a camera 111 and a control unit 127 that determines the overlapping area between a first image captured with the camera 111 at a first timing and a second image captured with the camera 111 at a second, later timing, and detects an object included in the area of the second image other than the determined overlapping area.
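A minimal sketch of restricting detection to the non-overlapping area follows. Modelling the rotation as a pure horizontal pixel shift (derivable from the known rotation angle) is an assumption made for this example.

```python
def new_region(image_width, shift_pixels):
    """Return the x-range (start, end) of the second image NOT covered
    by the first image after the view shifted by `shift_pixels`.
    Positive shift = camera panned right, so new content is on the right."""
    overlap = max(0, image_width - abs(shift_pixels))
    if shift_pixels >= 0:
        return (overlap, image_width)        # new area on the right edge
    return (0, image_width - overlap)        # new area on the left edge
```

Running the object detector only on this strip, rather than on the full second image, is what saves time after the rotation completes.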
Abstract:
A determination result is easily obtained even in expression determination on a face image that is not a front view. A robot includes a camera, a face detector, a face angle estimator, and an expression determiner. The camera acquires image data. The face detector detects a face of a person from the image data acquired by the camera. The face angle estimator estimates an angle of the face detected by the face detector. The expression determiner determines an expression of the face based on the angle estimated by the face angle estimator.
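One way such angle-dependent determination could be sketched is to dispatch to a determiner trained for the estimated angle range. The angle bins and the score-threshold determiners are hypothetical, purely for illustration.

```python
def determine_expression(face_score, angle_degrees, determiners):
    """Pick the expression determiner whose angle bin contains the
    estimated face angle, then apply it to the detected face's score.
    `determiners` maps (lo, hi) angle bins to callables."""
    for (lo, hi), determiner in determiners.items():
        if lo <= angle_degrees < hi:
            return determiner(face_score)
    return "unknown"  # no determiner covers this angle
```

Because a profile view exposes fewer expression cues than a frontal view, a determiner for an oblique bin would typically apply a stricter threshold, which is reflected in the hypothetical determiners below.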
Abstract:
A notification control apparatus is provided with a moving image acquisition unit 41 that acquires a moving image, an identification unit 44 that identifies, out of a plurality of frames constituting the moving image, a frame corresponding to a forwarding operation, a determination unit 46 that determines whether or not the identified frame is a predetermined frame, and a notification control unit 49 that notifies a determination result of the determination unit 46.
Abstract:
A multi-class identifier identifies the kind of an image, and performs detailed identification for a specified group of kinds. The multi-class identifier includes: an identification fault counter that provides a test image carrying one of the class labels to the kind identifiers so that each kind identifier individually identifies the kind of the provided image, and that counts, for each combination of an arbitrary number of kinds among the plurality of kinds, the number of incorrect determinations among the kinds belonging to the combination; and a grouping processor that, for each group formed from a combination whose count result is equal to or greater than a predetermined threshold, adds a group label corresponding to the group to every learning image carrying the class label of any of the kinds belonging to the group.
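The counting and grouping steps can be sketched as follows, under the simplifying assumptions that combinations are pairs of kinds and that labels are plain strings; the data shapes and the `group_i` naming are illustrative, not the patented scheme.

```python
from collections import Counter

def confusion_counts(true_labels, predicted_labels):
    """Count incorrect determinations per unordered pair of kinds,
    as the identification fault counter does for pairwise combinations."""
    counts = Counter()
    for truth, pred in zip(true_labels, predicted_labels):
        if truth != pred:
            counts[frozenset((truth, pred))] += 1
    return counts

def group_labels(counts, threshold):
    """For each pair confused at least `threshold` times, form a group
    and assign it a group label (most-confused pairs first)."""
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    return {"group_%d" % i: sorted(pair)
            for i, (pair, n) in enumerate(ranked) if n >= threshold}
```

Learning images whose class label belongs to a group would then additionally carry that group's label, so a second-stage identifier can be trained to separate only the easily confused kinds.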
Abstract:
An image acquisition unit 41 acquires an image including an object. A primary selection unit 42 selects at least one flower type for a natural object included as the object in the acquired image, by comparing information related to the shape of that natural object with information related to the respective shapes of a plurality of types prepared in advance. A secondary selection unit 43 then selects, for each of the at least one flower type selected by the primary selection unit 42, data of a representative image from among data of a plurality of images of different colors of the same flower type prepared in advance, based on information related to the color of the natural object included in the image acquired by the image acquisition unit 41.
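The two-stage selection can be sketched as shape-based ranking followed by color-based picking. Reducing shape and color to single scalar features, and the catalogue record layout, are assumptions made solely for this illustration.

```python
def primary_select(query_shape, catalogue, k=3):
    """Primary selection: the k flower types whose reference shape
    feature is closest to the query object's shape feature."""
    ranked = sorted(catalogue, key=lambda t: abs(t["shape"] - query_shape))
    return ranked[:k]

def secondary_select(query_colour, flower_type):
    """Secondary selection: within one type, the representative image
    whose colour feature is closest to the query object's colour."""
    return min(flower_type["images"],
               key=lambda img: abs(img["colour"] - query_colour))
```

Splitting the search this way means the expensive shape comparison runs once per type, while the colour comparison only runs over the few types that survive the primary selection.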