Abstract:
A voice input unit has predetermined directivity for acquiring a voice. A sound source arrival direction estimation unit operating as a first direction detection unit detects a first direction, which is an arrival direction of a signal voice of a predetermined target, from the acquired voice. Moreover, a sound source arrival direction estimation unit operating as a second direction detection unit detects a second direction, which is an arrival direction of a noise voice, from the acquired voice. A sound source separation unit, a sound volume calculation unit, and a detection unit having an S/N ratio calculation unit detect a sound source separation direction or a sound source separation position, based on the first direction and the second direction.
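As an illustration of the detection step only (not the patent's implementation), the following Python sketch assumes that a separation front end has already produced the target and noise signals for the first and second directions; the volume and S/N calculations, the threshold, and the rule for choosing a separation direction are assumptions made for illustration.

    import numpy as np

    def volume_db(x):
        """RMS sound volume in dB (relative to full scale)."""
        return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

    def snr_db(separated_target, separated_noise):
        """S/N ratio of the separated target against the separated noise."""
        return volume_db(separated_target) - volume_db(separated_noise)

    def choose_separation_direction(first_dir_deg, second_dir_deg,
                                    separated_target, separated_noise,
                                    snr_threshold_db=10.0):
        """Keep separating toward the first (target) direction while the S/N is
        adequate; otherwise steer away from the second (noise) direction.
        The rule and threshold are illustrative only."""
        if snr_db(separated_target, separated_noise) >= snr_threshold_db:
            return first_dir_deg
        return (second_dir_deg + 180) % 360

    # tiny synthetic check: a strong 200 Hz "voice" against weak white noise
    t = np.arange(16000) / 16000
    target = np.sin(2 * np.pi * 200 * t)
    noise = 0.1 * np.random.default_rng(0).standard_normal(16000)
    print(choose_separation_direction(30, 120, target, noise))   # -> 30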
Abstract:
A mobile apparatus according to the present embodiment includes a voice input unit configured to detect a signal from a user. In a case where the voice input unit detects a signal from the user and it is determined, from the detection result, that there is a signal from the user, the mobile apparatus performs sound source localization and specifies a location or direction in which the detected signal from the user is given. In a case where it is determined that the mobile apparatus is not able to move to the location where the signal from the user is given, a voice output unit configured to output a voice signal, a driving unit configured to move the mobile apparatus, and a light emitting unit configured to emit light perform predetermined control.
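A minimal control-flow sketch of the behaviour described above; the voice_input, localizer, driver, speaker and light objects are hypothetical stand-ins, and the concrete contents of the "predetermined control" (speaking, turning, blinking) are only illustrative assumptions.

    class MobileApparatus:
        """Hypothetical units: voice_input detects a signal from the user,
        localizer performs sound source localization, driver moves the
        apparatus, speaker outputs a voice signal, light emits light."""

        def __init__(self, voice_input, localizer, driver, speaker, light):
            self.voice_input = voice_input
            self.localizer = localizer
            self.driver = driver
            self.speaker = speaker
            self.light = light

        def step(self):
            signal = self.voice_input.detect()
            if signal is None:                           # no signal from the user
                return
            location = self.localizer.localize(signal)   # where the signal was given
            if self.driver.can_move_to(location):
                self.driver.move_to(location)
            else:
                # unable to move to the user's location: perform the
                # predetermined control (contents assumed for illustration)
                self.speaker.say("I cannot reach you from here.")
                self.driver.turn_toward(location)
                self.light.blink()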
Abstract:
A video is generated from multi-aspect images. A provisional video is created whose frames are provisional images reconstructed, using default values, from the LFIs (light field images) corresponding to the frames of the output video; the provisional images are used to designate a position of a target object on which to focus and to acquire a depth map showing a depth of the object. Information designating coordinates of focal positions of the target object on the provisional video is then acquired. A list creator acquires the depth value of the target object using a depth map created from the LFI of the current frame, obtains the reconstruction distance from the depth value, and records it in a designation list. A corrector corrects the designation list. A main video creator creates and outputs a main video whose frames are reconstructed images focused at the focal length designated by the post-correction designation list.
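The per-frame flow might look like the sketch below, in which the depth maps, the depth-to-distance conversion, and the reconstruction routine are passed in as callables because the abstract does not detail them; the median-filter correction of the designation list and the helper names are likewise only illustrative choices.

    import numpy as np
    from scipy.ndimage import median_filter

    def build_designation_list(depth_maps, designated_xy, depth_to_distance):
        """Per frame: read the target object's depth at the designated
        coordinates and convert it to a reconstruction distance."""
        return np.asarray([depth_to_distance(dm[y, x])
                           for dm, (x, y) in zip(depth_maps, designated_xy)])

    def correct_designation_list(distances, window=5):
        """Corrector: suppress outliers/jitter in the designated distances."""
        return median_filter(distances, size=window, mode="nearest")

    def create_main_video(lfis, corrected_distances, reconstruct):
        """Main video creator: frames are images reconstructed from each LFI,
        focused at the corrected reconstruction distance."""
        return [reconstruct(lfi, d) for lfi, d in zip(lfis, corrected_distances)]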
Abstract:
An image processing device is provided with an acquiring unit configured to acquire a face image, and a control unit configured to specify a face direction in the face image acquired by the acquiring unit and add, based on the specified face direction, a picture expressing a contour of a face component to the face image.
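A sketch of the idea, assuming the face direction is already available as a yaw angle and that the contour picture is a simple cheek-line arc drawn with Pillow; the coordinates, angles and colour are illustrative, not the patent's rendering method.

    from PIL import Image, ImageDraw

    def add_contour_picture(face_image, yaw_deg, face_box):
        """Overlay a cheek contour line chosen according to the face direction."""
        img = face_image.copy()
        draw = ImageDraw.Draw(img)
        left, top, right, bottom = face_box
        third_w, third_h = (right - left) // 3, (bottom - top) // 3
        if yaw_deg >= 0:
            # face turned to the right: emphasise the left cheek contour
            bbox = (left, top + third_h, left + third_w, bottom)
            draw.arc(bbox, start=90, end=180, fill=(90, 60, 50), width=2)
        else:
            # face turned to the left: emphasise the right cheek contour
            bbox = (right - third_w, top + third_h, right, bottom)
            draw.arc(bbox, start=0, end=90, fill=(90, 60, 50), width=2)
        return img

    face = Image.new("RGB", (256, 256), "white")   # stand-in face image
    decorated = add_contour_picture(face, yaw_deg=15, face_box=(48, 40, 208, 230))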
Abstract:
A CAP detection device acquires pulse wave data of a subject and derives a baseline of the data and an envelope of the baseline. The device identifies a local maximum point of the envelope and determines, as CAP candidate points, a first local maximum point of the baseline before the local maximum point of the envelope and a second local maximum point of the baseline after it. For each CAP candidate point, the device identifies a third local maximum point of the baseline before the CAP candidate point and a local minimum point of the baseline between the CAP candidate point and the third local maximum point, and detects the CAP candidate point as a CAP based on an evaluation value obtained from a difference between the CAP candidate point and the third local maximum point and a difference between the CAP candidate point and the local minimum point.
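The candidate-evaluation steps can be sketched as below; the baseline and envelope extraction (a low-pass filter and a Hilbert envelope) and the evaluation rule (a thresholded ratio of the two differences) are assumptions for illustration, since the abstract does not fix them.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert, argrelmax, argrelmin

    def detect_cap(pulse_wave, fs, threshold=0.5):
        # baseline of the pulse wave data (slow component, assumed here to be
        # obtained by low-pass filtering) ...
        b, a = butter(2, 0.5 / (fs / 2), btype="low")
        baseline = filtfilt(b, a, pulse_wave)
        # ... and an envelope of the baseline
        envelope = np.abs(hilbert(baseline - baseline.mean()))

        base_max = argrelmax(baseline)[0]
        base_min = argrelmin(baseline)[0]
        caps = []
        for env_peak in argrelmax(envelope)[0]:
            # CAP candidate points: the baseline maxima nearest before and after
            # the envelope maximum (a nearest-neighbour reading of first/second)
            before = base_max[base_max < env_peak]
            after = base_max[base_max > env_peak]
            for cand in list(before[-1:]) + list(after[:1]):
                # a third baseline maximum before the candidate, and the
                # baseline minimum lying between the two
                prior = base_max[base_max < cand]
                if len(prior) == 0:
                    continue
                third = prior[-1]
                valleys = base_min[(base_min > third) & (base_min < cand)]
                if len(valleys) == 0:
                    continue
                valley = valleys[-1]
                # evaluation value from the two differences (ratio + threshold
                # used purely as an example decision rule)
                rise = baseline[cand] - baseline[valley]
                drop = baseline[cand] - baseline[third]
                if rise / (abs(drop) + 1e-12) > threshold:
                    caps.append(int(cand))
        return sorted(set(caps))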
Abstract:
A more endearing robot includes an operation unit that causes the robot to operate, a viewing direction determiner that determines whether a viewing direction of a predetermined target is toward the robot or not, and an operation controller that controls the operation unit based on a result of determination by the viewing direction determiner.
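One way to realise the determiner, assuming the target's viewing (gaze) direction and the direction from the target to the robot are available as 2D vectors; the angular threshold and the gated operations are illustrative assumptions, not the patent's method.

    import numpy as np

    def is_viewing_robot(gaze_vector, target_to_robot_vector, threshold_deg=15.0):
        """True if the target's gaze points toward the robot within a tolerance."""
        g = np.asarray(gaze_vector, float)
        r = np.asarray(target_to_robot_vector, float)
        cos_angle = g.dot(r) / (np.linalg.norm(g) * np.linalg.norm(r))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= threshold_deg

    def control(robot_operation_unit, gaze_vector, target_to_robot_vector):
        if is_viewing_robot(gaze_vector, target_to_robot_vector):
            robot_operation_unit("wave")   # operate while the target is watching
        else:
            robot_operation_unit("idle")   # subdued behaviour otherwise

    control(print, gaze_vector=(1.0, 0.0), target_to_robot_vector=(0.99, 0.05))  # -> wave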
Abstract:
A robot determines whether a voice is a voice emanating directly from an actual person or a voice output from a speaker of an electronic device. A controller of the robot detects a voice by means of microphones, determines whether or not the voice generating source of the detected voice is a specific voice generating source, and controls the robot, based on a result of the determination, by means of a neck joint and a chassis.
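A control-flow sketch of the behaviour above; the classifier that separates a live person's voice from loudspeaker playback is a hypothetical stand-in, since the abstract does not disclose how that determination is made, and the chosen reactions are only illustrative.

    class RobotController:
        def __init__(self, microphones, classifier, neck_joint, chassis):
            self.microphones = microphones
            self.classifier = classifier    # decides: live person or device speaker
            self.neck_joint = neck_joint
            self.chassis = chassis

        def on_audio_frame(self):
            voice, direction = self.microphones.detect_voice()   # hypothetical API
            if voice is None:
                return
            if self.classifier.is_live_person(voice):
                self.neck_joint.turn_to(direction)   # face the actual person
                self.chassis.approach(direction)
            else:
                self.neck_joint.hold()               # ignore the device's speaker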
Abstract:
A drawing apparatus includes a drawing unit that forms a nail design on a nail of an object by performing a plurality of processes, and a processor that extracts a feature value of the object and controls the drawing unit. The processor refers to process management information in which the feature value of each of a plurality of objects and information on the processes performed on the nail of each of those objects are registered in association with each other. Based on the process management information and the feature value of the one object whose nail is to be formed with the nail design, the processor acquires information on the processes performed on the nail of the one object, determines a specific process to be performed on the nail of the one object, and causes the drawing unit to perform the specific process.
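A minimal sketch of the process-management bookkeeping, assuming the extracted feature value can serve as a lookup key and that a nail design is completed by the fixed sequence below; both the key and the sequence are illustrative, as the abstract does not specify them.

    PROCESS_SEQUENCE = ["base_coat", "background", "pattern", "top_coat"]

    process_management = {}   # feature value -> processes already performed

    def determine_next_process(feature_value):
        """Look up the processes already performed for this object and return
        the next process the drawing unit should perform."""
        performed = process_management.setdefault(feature_value, [])
        for process in PROCESS_SEQUENCE:
            if process not in performed:
                return process
        return None   # the nail design is complete

    def perform_on(drawing_unit, feature_value):
        process = determine_next_process(feature_value)
        if process is not None:
            drawing_unit(process)                               # perform the process
            process_management[feature_value].append(process)   # register it

    # usage with a stand-in drawing unit
    perform_on(lambda p: print("drawing:", p), feature_value="finger-3-ridge")
    perform_on(lambda p: print("drawing:", p), feature_value="finger-3-ridge")
    # -> drawing: base_coat, then drawing: background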
Abstract:
A feature extractor extracts feature quantities from a digitized speech signal and outputs the feature quantities to a likelihood calculator. A distance determiner determines the distance between a user providing speech and a speech input unit. Based on the determined distance, the likelihood calculator selects, from a recognition target table, registered expressions to be used in the likelihood calculation. The likelihood calculator calculates likelihoods for the selected registered expressions based on the feature quantities extracted by the feature extractor, and outputs the registered expression having the maximum likelihood as the result of speech recognition.
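The table lookup and the argmax over likelihoods might look like this sketch; the distance bands, the registered expressions and the Gaussian-style scoring are stand-ins for the recognition target table and the real likelihood calculation, which the abstract does not detail.

    import numpy as np

    # recognition target table: distance band (m) -> registered expressions
    RECOGNITION_TARGET_TABLE = {
        (0.0, 1.0): ["hello", "what time is it", "tell me the weather"],
        (1.0, 3.0): ["hello", "come here"],
        (3.0, np.inf): ["come here"],
    }

    REFERENCE_FEATURES = {   # illustrative per-expression feature templates
        "hello": np.array([0.2, 0.8]),
        "what time is it": np.array([0.9, 0.1]),
        "tell me the weather": np.array([0.5, 0.5]),
        "come here": np.array([0.1, 0.2]),
    }

    def select_registered_expressions(distance_m):
        for (lo, hi), expressions in RECOGNITION_TARGET_TABLE.items():
            if lo <= distance_m < hi:
                return expressions
        return []

    def log_likelihood(features, template):
        # stand-in scoring: isotropic Gaussian around the template
        return -np.sum((features - template) ** 2)

    def recognize(features, distance_m):
        candidates = select_registered_expressions(distance_m)
        scores = {e: log_likelihood(features, REFERENCE_FEATURES[e]) for e in candidates}
        return max(scores, key=scores.get) if scores else None

    print(recognize(np.array([0.15, 0.75]), distance_m=2.0))   # -> hello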