Abstract:
A bioinformation acquiring apparatus includes at least one processor and a memory configured to store a program to be executed by the processor. The processor acquires bioinformation in chronological order; derives outlier level parameters, each indicating the level of inclusion of outliers in pieces of bioinformation acquired within a first duration; derives correction terms based on the bioinformation remaining after removal of the outliers from pieces of bioinformation acquired within a second duration that is longer than the first duration; selects, based on the outlier level parameters, one or both of a first correction procedure and a second correction procedure, the first correction procedure using the correction terms, the second correction procedure involving interpolation independent of the correction terms; and corrects the outliers of the bioinformation within the first duration by the selected correction procedure.
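One possible realization of the described selection can be sketched as follows. This is a minimal illustration, not the patented method: the outlier test (MAD-based z-score), the thresholds, and the use of a long-term mean as the correction term are all assumptions introduced for the example.

```python
import numpy as np

def correct_window(values, long_term_mean, outlier_z=3.0, level_threshold=0.3):
    """Correct outliers in one short (first-duration) window.

    `long_term_mean` stands in for a correction term derived from the
    longer second duration after outlier removal. All names and
    thresholds are illustrative.
    """
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med)) or 1.0
    outliers = np.abs(values - med) / mad > outlier_z
    level = outliers.mean()  # outlier level parameter for this window

    corrected = values.copy()
    if level < level_threshold:
        # First procedure: replace outliers using the correction term.
        corrected[outliers] = long_term_mean
    else:
        # Second procedure: interpolate across the outliers,
        # independent of the correction term.
        good = ~outliers
        corrected[outliers] = np.interp(
            np.flatnonzero(outliers), np.flatnonzero(good), values[good])
    return corrected, level
```

With few outliers in the window, the correction term is substituted; with many, interpolation takes over, mirroring the two-procedure selection in the abstract.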
Abstract:
An object of the present invention is to reduce the time required to detect an object after the rotation of a head or a body of a robot is completed. A robot 100 includes a camera 111 and a control unit 127 that determines an overlapping area between a first image captured with the camera 111 at a first timing and a second image captured with the camera 111 at a second timing later than the first timing, and detects an object included in an area of the second image other than the determined overlapping area.
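The overlap determination can be sketched for the simple case of a pure horizontal pan. This is an illustrative assumption: the shift is estimated by minimizing the mean absolute difference over candidate offsets, and only the newly revealed strip is returned for object detection. Function and parameter names are hypothetical.

```python
import numpy as np

def new_region_after_pan(first, second, max_shift=64):
    """Return the estimated pan shift and the columns of `second`
    not covered by `first` (the only region that needs detection)."""
    h, w = first.shape
    best_shift, best_err = 0, np.inf
    for s in range(0, min(max_shift, w - 1) + 1):
        # Compare the overlapping columns for this candidate shift.
        err = np.abs(first[:, s:].astype(float)
                     - second[:, :w - s].astype(float)).mean()
        if err < best_err:
            best_shift, best_err = s, err
    # Overlap covers second[:, :w - best_shift]; detect objects only
    # in the newly revealed strip.
    return best_shift, second[:, w - best_shift:]
```

Restricting detection to the non-overlapping strip is what saves time after the rotation completes, since the overlapping area was already processed in the first image.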
Abstract:
A voice input unit has predetermined directivity for acquiring a voice. A sound source arrival direction estimation unit operating as a first direction detection unit detects a first direction, which is an arrival direction of a signal voice of a predetermined target, from the acquired voice. Moreover, a sound source arrival direction estimation unit operating as a second direction detection unit detects a second direction, which is an arrival direction of a noise voice, from the acquired voice. A sound source separation unit, a sound volume calculation unit, and a detection unit having an S/N ratio calculation unit detect a sound source separation direction or a sound source separation position, based on the first direction and the second direction.
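One simple policy for choosing a separation direction from the two detected arrival directions can be sketched as below. The scoring (steer close to the signal direction, far from the noise direction) is an illustrative stand-in for the S/N-ratio-based detection described in the abstract, not the patented method.

```python
def pick_separation_direction(signal_dir, noise_dir):
    """Choose a separation direction (degrees) given the first
    (signal) and second (noise) arrival directions."""
    candidates = range(0, 360, 5)

    def circular_dist(a, b):
        # Shortest angular distance on the circle, in [0, 180].
        return 180 - abs(abs(a - b) % 360 - 180)

    def score(d):
        # Prefer directions far from the noise and near the signal,
        # a crude proxy for maximizing the S/N ratio.
        return circular_dist(d, noise_dir) - circular_dist(d, signal_dir)

    return max(candidates, key=score)
```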
Abstract:
A mobile apparatus according to the present embodiment includes a voice input unit configured to detect a signal from a user. In a case where the voice input unit detects a signal from the user and it is determined from the detection result that there is a signal from the user, the mobile apparatus performs sound source localization and specifies the location or direction from which the detected signal is given. In a case where it is determined that the mobile apparatus is not able to move to the location where the signal from the user is given, a voice output unit configured to output a voice signal, a driving unit configured to move the mobile apparatus, and a light emitting unit configured to emit light perform predetermined control.
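The decision flow after localization can be sketched as follows. The specific fallback actions (announce, turn toward the user, blink) are hypothetical examples of the "predetermined control"; the abstract does not specify them.

```python
def respond_to_user_signal(localized_direction, can_move_to):
    """Decide the apparatus response once the user's signal is localized.

    If the localized location is reachable, the driving unit moves
    there; otherwise the voice output, driving, and light emitting
    units perform a predetermined fallback. Action names are
    illustrative.
    """
    actions = []
    if can_move_to(localized_direction):
        actions.append(("drive", localized_direction))
    else:
        actions.append(("speak", "I cannot reach you"))
        actions.append(("turn", localized_direction))
        actions.append(("blink", "on"))
    return actions
```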
Abstract:
A feature extractor extracts feature quantities from a digitized speech signal and outputs the feature quantities to a likelihood calculator. A distance determiner determines the distance between a user providing speech and a speech input unit. The likelihood calculator selects registered expressions for speech recognition from a recognition target table based on the determined distance, to be used in calculation of likelihoods at the likelihood calculator. The likelihood calculator calculates likelihoods for the selected registered expressions based on the feature quantities extracted by the feature extractor, and outputs one of the registered expressions having the maximum likelihood as a result of speech recognition.
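The distance-gated vocabulary selection and maximum-likelihood pick can be sketched as below. The recognition-target table layout (template vector plus an enabling distance per expression) and the Gaussian-style likelihood are assumptions made for the example.

```python
import numpy as np

def recognize(features, distance, vocab):
    """Pick the registered expression with maximum likelihood.

    `vocab` maps each expression to (template_features, max_distance):
    only expressions enabled at the determined user distance are
    scored. Layout and scoring are illustrative, not the patent's.
    """
    selected = {w: tpl for w, (tpl, max_d) in vocab.items()
                if distance <= max_d}
    def likelihood(tpl):
        # Simple Gaussian score on squared feature distance.
        d2 = float(np.sum((np.asarray(features) - np.asarray(tpl)) ** 2))
        return np.exp(-0.5 * d2)
    return max(selected, key=lambda w: likelihood(selected[w]))
```

Gating the vocabulary by distance shrinks the likelihood computation and biases recognition toward expressions plausible at that range.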
Abstract:
A speech determiner determines whether or not a target individual is speaking when facial images of the target individual are captured. An emotion estimator estimates the emotion of the target individual using the facial images of the target individual, on the basis of the determination results of the speech determiner.
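The speech-gated estimation can be sketched as a model switch. The two-model split (upper-face-only while speaking, since articulation deforms the mouth region) is one plausible realization introduced for illustration, not the patented method.

```python
def estimate_emotion(face_image, is_speaking, full_model, upper_face_model):
    """Estimate emotion, switching models on the speech determination.

    `face_image` is a list of pixel rows; while speaking, only the
    upper half of the face is passed to a dedicated model. The split
    is an assumption.
    """
    if is_speaking:
        return upper_face_model(face_image[: len(face_image) // 2])
    return full_model(face_image)
```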
Abstract:
A bioinformation acquiring apparatus includes at least one processor and a memory configured to store a program to be executed by the processor. The processor acquires a waveform signal representing vibrations of a target, the vibrations resulting from heartbeats of the target; extracts provisional heartbeat timings from the acquired waveform signal based on a first time window, the provisional heartbeat timings indicating provisional values of the heartbeat timings, i.e., the timings at which the heartbeats of the target occur; acquires corrective peak timings from the acquired waveform signal based on a second time window having a shorter time length than the first time window, each of the corrective peak timings serving as a discrete correction unit for correction of the provisional heartbeat timings; corrects the extracted provisional heartbeat timings into definitive heartbeat timings based on the acquired corrective peak timings; and acquires bioinformation on the heartbeats of the target based on the corrected heartbeat timings.
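The two-window refinement can be sketched as follows, under assumptions made for the example: provisional timings are taken as the strongest sample in each non-overlapping first (coarse) window, and each is then snapped to the strongest peak inside a shorter second window centered on it. Window lengths and the peak criterion are illustrative.

```python
import numpy as np

def correct_beat_timings(signal, fs, coarse_win=1.0, fine_win=0.2):
    """Refine provisional heartbeat timings with a shorter window.

    Returns (provisional, definitive) sample indices. `fs` is the
    sampling rate in Hz; window lengths are in seconds.
    """
    signal = np.asarray(signal, dtype=float)
    step = int(coarse_win * fs)
    # Provisional timings: maximum inside each first (coarse) window.
    provisional = [i + int(np.argmax(signal[i:i + step]))
                   for i in range(0, len(signal) - step + 1, step)]
    # Corrective peaks: maximum inside the second (fine) window
    # centered on each provisional timing.
    half = int(fine_win * fs / 2)
    definitive = []
    for t in provisional:
        lo, hi = max(0, t - half), min(len(signal), t + half + 1)
        definitive.append(lo + int(np.argmax(signal[lo:hi])))
    return provisional, definitive
```

The fine window can pull a provisional timing across a coarse-window boundary onto the true local peak, which is the kind of correction the abstract describes.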
Abstract:
An imaging apparatus generates a 3D model using a photographed image of a subject and generates a 3D image based on the 3D model. When a point forming the 3D model has no corresponding point in a 3D model generated using an image photographed at a different photographing position, the imaging apparatus determines that the point is noise and removes the point determined as noise from the 3D model. The imaging apparatus then generates a 3D image based on the 3D model from which the point determined as noise has been removed.
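The cross-view filtering can be sketched as a correspondence check between two point sets. The tolerance and the brute-force nearest-neighbor search are illustrative choices for the example, not the patented implementation.

```python
import numpy as np

def remove_noise_points(model_a, model_b, tol=0.05):
    """Keep only points of `model_a` with a counterpart in `model_b`.

    `model_b` is assumed to come from a different photographing
    position; a point of `model_a` with no `model_b` point within
    `tol` is treated as noise and dropped.
    """
    a = np.asarray(model_a, dtype=float)
    b = np.asarray(model_b, dtype=float)
    # Pairwise distances between every point of a and every point of b.
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    keep = dists.min(axis=1) <= tol
    return a[keep]
```

A spatial index (e.g., a k-d tree) would replace the brute-force search at scale; the logic of "no corresponding point means noise" stays the same.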
Abstract:
A CAP detection device is configured to: acquire pulse wave data of a subject; derive a baseline of the data and an envelope of the baseline; identify a local maximum point of the envelope and determine, as CAP candidate points, a first local maximum point of the baseline before the local maximum point of the envelope and a second local maximum point of the baseline after it; and, for each CAP candidate point, identify a third local maximum point of the baseline before the CAP candidate point and a local minimum point of the baseline between the CAP candidate point and the third local maximum point, and detect the CAP candidate point as a CAP based on an evaluation value obtained from the difference between the CAP candidate point and the third local maximum point and the difference between the CAP candidate point and the local minimum point.
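The candidate evaluation can be sketched on a 1-D baseline as below. The extrema detection and the simple sum used as the evaluation value are illustrative choices; the abstract specifies only that the value is obtained from the two differences.

```python
import numpy as np

def local_maxima(x):
    """Indices of strict local maxima of a 1-D sequence."""
    x = np.asarray(x, dtype=float)
    return [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]

def cap_score(baseline, cand, prev_max):
    """Evaluation value for one CAP candidate point.

    `cand` is the candidate index and `prev_max` the third local
    maximum before it. The intervening local minimum is located, and
    the two rises (from the third maximum and from the minimum) are
    combined; the plain sum is an illustrative evaluation value.
    """
    baseline = np.asarray(baseline, dtype=float)
    valley = prev_max + int(np.argmin(baseline[prev_max:cand + 1]))
    return ((baseline[cand] - baseline[prev_max])
            + (baseline[cand] - baseline[valley]))
```

A candidate would then be detected as a CAP when its score exceeds some threshold tuned on reference data.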