Abstract:
Included are: a captured image acquiring unit to acquire a captured image obtained by imaging an occupant; a temperature image acquiring unit to acquire a temperature image indicating a temperature of a surface of a body of the occupant measured in a non-contact manner; a motion detection unit to detect a motion of the occupant on the basis of the captured image; a temperature detection unit to detect a temperature of a hand of the occupant on the basis of the temperature image; and an awakening level estimating unit to estimate an awakening level of the occupant on the basis of the motion of the occupant detected by the motion detection unit and the temperature of the hand of the occupant detected by the temperature detection unit.
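As a rough illustration, here is a minimal Python sketch of how the detected motion and hand temperature might be fused into an awakening level; the baseline temperature, weights, and thresholds are invented assumptions, not values from the abstract.

```python
# Minimal sketch: fuse body motion and hand temperature into a coarse level.
# All numbers below are illustrative placeholders, not from the patent.

def estimate_awakening_level(motion_magnitude: float,
                             hand_temperature_c: float,
                             baseline_hand_temp_c: float = 33.0) -> str:
    """A rise in peripheral (hand) temperature together with reduced motion
    is a common correlate of drowsiness onset; weights are placeholders."""
    temp_rise = max(hand_temperature_c - baseline_hand_temp_c, 0.0)
    score = 0.7 * motion_magnitude - 0.3 * temp_rise  # higher = more awake
    if score > 0.5:
        return "awake"
    if score > 0.1:
        return "drowsy"
    return "very drowsy"


print(estimate_awakening_level(motion_magnitude=0.9, hand_temperature_c=33.2))
```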
Abstract:
A tool shape measurement device includes: a contour detection unit that detects a tool contour from a captured image of a rotating tool; an axis direction calculation unit that calculates a tool axis direction that is an axis direction of the rotating tool on the basis of the tool contour; a tool diameter measurement unit that calculates an apparent tool diameter of the rotating tool on an imaging surface on the basis of a calibrated positional and postural relationship between an imaging device and the rotating tool, the tool axis direction, and the tool contour; and a tool diameter correction unit that calculates a distance between the imaging device and the rotating tool using the tool axis direction, and corrects the apparent tool diameter to an actual tool diameter by correcting distortion in the tool contour on the basis of the distance.
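A minimal Python sketch of two of these steps under a plain pinhole-camera assumption: the axis direction as the dominant direction of the contour points, and the diameter correction as metric scaling by distance. Function names and numbers are illustrative, not from the patent.

```python
import numpy as np

def tool_axis_direction(contour_xy: np.ndarray) -> np.ndarray:
    """Fit the dominant direction of the contour points (via SVD/PCA) as an
    estimate of the tool axis direction on the image plane."""
    centered = contour_xy - contour_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # unit vector of largest variance

def actual_tool_diameter(apparent_diameter_px: float,
                         focal_length_px: float,
                         distance_mm: float) -> float:
    """Pinhole scaling: metric size = pixel size * distance / focal length."""
    return apparent_diameter_px * distance_mm / focal_length_px

contour = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.1], [3.0, 5.9]])
print(tool_axis_direction(contour))
print(actual_tool_diameter(apparent_diameter_px=80, focal_length_px=1200,
                           distance_mm=150))  # -> 10.0 mm
```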
Abstract:
An awakening level estimation device includes: processing circuitry configured to acquire two or more types of occupant state information different from each other, the occupant state information indicating a current state value of an occupant; acquire occupant basic state information indicating a basic state value of the occupant when the occupant is in an awakening state; acquire difference information indicating a difference between the current state value and the basic state value; estimate an awakening level of the occupant on the basis of the two or more types of difference information by inputting the two or more types of difference information to a learned model corresponding to a learning result of machine learning; and generate and output awakening level information indicating the awakening level on the basis of an estimation result output from the learned model.
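A minimal Python sketch of the difference-information pipeline, assuming two hypothetical state values (heart rate and eye openness) and a dummy stand-in for the learned model; the real model and features are not specified by the abstract.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class OccupantState:
    heart_rate_bpm: float   # one hypothetical state value
    eye_openness: float     # a second hypothetical state value

def difference_features(current: OccupantState,
                        baseline: OccupantState) -> list:
    """Two types of difference information: current minus awake-baseline."""
    return [current.heart_rate_bpm - baseline.heart_rate_bpm,
            current.eye_openness - baseline.eye_openness]

class DummyModel:
    """Stand-in for the learned model; any object with predict() would do."""
    def predict(self, rows: Sequence[Sequence[float]]) -> list:
        hr_diff, eye_diff = rows[0]
        return ["drowsy" if hr_diff < -5 and eye_diff < -0.2 else "awake"]

diffs = difference_features(OccupantState(62, 0.5), OccupantState(70, 0.8))
print(DummyModel().predict([diffs])[0])  # -> "drowsy"
```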
Abstract:
A pupil detection device includes an eye area image obtaining unit that obtains image data representing an eye area image in a captured image obtained by a camera; a luminance gradient calculating unit that calculates luminance gradient vectors corresponding to respective individual image units in the eye area image, using the image data; an evaluation value calculating unit that calculates evaluation values corresponding to the respective individual image units, using the luminance gradient vectors; and a pupil location detecting unit that detects a pupil location in the eye area image, using the evaluation values.
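The evaluation step can be pictured as gradient voting: each candidate pixel is scored by how consistently the luminance-gradient vectors point along the direction to it (a dark pupil produces radially consistent gradients). The sketch below is a simplified scheme of that kind, not the patented evaluation function.

```python
import numpy as np

def pupil_center(eye_gray: np.ndarray) -> tuple:
    """Score candidate pixels by the alignment of luminance-gradient vectors
    with the direction toward them, and return the best-scoring location."""
    g_y, g_x = np.gradient(eye_gray.astype(float))
    mag = np.hypot(g_x, g_y)
    strong = mag > mag.mean()                  # keep strong gradients only
    ys, xs = np.nonzero(strong)
    ux, uy = g_x[strong] / mag[strong], g_y[strong] / mag[strong]

    h, w = eye_gray.shape
    best_score, best_cxy = -1.0, (0, 0)
    for cy in range(0, h, 2):                  # coarse grid of candidates
        for cx in range(0, w, 2):
            dx, dy = xs - cx, ys - cy
            d = np.hypot(dx, dy) + 1e-9
            dots = (dx / d) * ux + (dy / d) * uy
            score = float(np.mean(dots ** 2))  # evaluation value per pixel
            if score > best_score:
                best_score, best_cxy = score, (cx, cy)
    return best_cxy

# Synthetic eye patch: bright background, dark pupil disk centered at (20, 14).
yy, xx = np.mgrid[0:28, 0:40]
eye = np.where((xx - 20) ** 2 + (yy - 14) ** 2 < 36, 40, 200).astype(np.uint8)
print(pupil_center(eye))  # close to (20, 14)
```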
Abstract:
A three-dimensional position estimation device includes: a feature point extracting unit for detecting an area corresponding to the face of an occupant in an image captured by a camera for imaging a vehicle interior and extracting a plurality of feature points in the detected area; an inter-feature-point distance calculating unit for calculating a first inter-feature-point distance that is a distance between distance-calculating feature points among the plurality of feature points; a face direction detecting unit for detecting the face direction of the occupant; a head position angle calculating unit for calculating a head position angle indicating the position of the head of the occupant with respect to an imaging axis of the camera; an inter-feature-point distance correcting unit for correcting, using a result detected by the face direction detecting unit and the head position angle, the first inter-feature-point distance to a second inter-feature-point distance that is a distance between the distance-calculating feature points in a state where portions of the head corresponding to the distance-calculating feature points are arranged along a plane parallel to an imaging plane of the camera; and a three-dimensional position estimating unit for estimating the three-dimensional position of the head using the head position angle, the second inter-feature-point distance, and a reference inter-feature-point distance.
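A minimal Python sketch of the correction and estimation steps under a pinhole-camera assumption: foreshortening is undone using the face yaw, then depth follows from similar triangles using a reference inter-feature-point distance. The 63 mm reference (an interocular-like value) and all other numbers are invented.

```python
import math

def correct_foreshortening(first_dist_px: float, face_yaw_rad: float) -> float:
    """Undo foreshortening: when the head turns by yaw, the projected distance
    between two feature points shrinks roughly by cos(yaw)."""
    return first_dist_px / math.cos(face_yaw_rad)

def head_position_3d(second_dist_px: float, reference_dist_mm: float,
                     focal_length_px: float, azimuth_rad: float,
                     elevation_rad: float) -> tuple:
    """Depth from similar triangles (pinhole model), then lateral offsets from
    the head position angles relative to the camera's imaging axis."""
    depth_mm = focal_length_px * reference_dist_mm / second_dist_px
    return (depth_mm * math.tan(azimuth_rad),
            depth_mm * math.tan(elevation_rad),
            depth_mm)

d2 = correct_foreshortening(first_dist_px=55.0, face_yaw_rad=math.radians(20))
print(head_position_3d(d2, reference_dist_mm=63.0, focal_length_px=900.0,
                       azimuth_rad=math.radians(5), elevation_rad=0.0))
```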
Abstract:
A first light projector projects first slit light that spreads in the width direction of a vehicle in a direction other than a direction parallel to a contact ground surface. A second light projector projects second slit light that spreads in the width direction of the vehicle in a direction parallel to the contact ground surface. An obstacle detection unit detects an obstacle using a captured image of an area surrounding the vehicle where the first slit light and the second slit light are projected.
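One way to picture the detection step: on unobstructed ground each projected slit appears at predictable image rows, and an obstacle bends the stripe away from them. A toy Python sketch, with the expected geometry precomputed and the tolerance invented:

```python
import numpy as np

def stripe_deviates(observed_rows: np.ndarray, expected_rows: np.ndarray,
                    tolerance_px: float = 3.0) -> bool:
    """Flag an obstacle when the observed stripe rows (one per image column)
    depart from the flat-ground prediction by more than a tolerance."""
    return bool(np.any(np.abs(observed_rows - expected_rows) > tolerance_px))

expected = np.full(8, 120.0)                     # flat-ground stripe position
observed = np.array([120, 120, 121, 131, 133, 120, 119, 120], dtype=float)
print(stripe_deviates(observed, expected))       # True: bump in columns 3-4
```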
Abstract:
An awakening effort motion estimation device includes processing circuitry configured to: acquire a captured image obtained by imaging an occupant's face in a vehicle; detect, on the basis of the acquired captured image, two reference points for estimating a motion of the occupant's mouth, one of the two reference points being a point on a mask worn by the occupant, and the other being either a point based on a feature point of the occupant's face or a point in the captured image different from the one point on the mask; calculate a reference point distance between the two detected reference points; and estimate whether or not the occupant is performing an awakening effort motion by moving his or her mouth, depending on whether or not the calculated reference point distance satisfies an awakening effort estimating condition.
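A minimal Python sketch of the distance-based estimation, using the spread of the reference point distance over time as a stand-in for the abstract's awakening effort estimating condition; the threshold and coordinates are invented.

```python
import math
from statistics import pstdev

def reference_point_distance(p_mask: tuple, p_other: tuple) -> float:
    """Distance between the mask point and the second reference point."""
    return math.dist(p_mask, p_other)

def is_awakening_effort(distances: list, std_threshold_px: float = 1.5) -> bool:
    """Mouth movement under a mask shows up as fluctuation of the reference
    point distance over time; a simple spread test stands in for the
    estimating condition here."""
    return pstdev(distances) > std_threshold_px

track = [reference_point_distance((100, 200 + d), (100, 160))
         for d in (0, 3, -2, 4, -3, 2)]          # jittering mask point
print(is_awakening_effort(track))                # True
```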
Abstract:
A storage device stores capturing conditions containing a viewpoint position and a shooting direction of a camera, model data containing a shape of a segmented mirror, and feature point data representing a positional relationship between the viewpoint position and feature points. The camera captures the segmented mirror having a reflective surface to obtain a captured image containing at least a part of the reflective surface. A marker object has the feature points, and is fixed at a predetermined position with respect to the camera. A feature-point extracting unit extracts multiple feature points from the captured image when the feature points are reflected in the reflective surface, and determines positions of the feature points within the captured image. A position measuring unit calculates a position of the segmented mirror, based on the capturing conditions, the model data, the feature point data, and the positions of the feature points within the captured image.
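The position measurement can be framed as minimizing a reprojection residual over candidate mirror poses. The sketch below assumes a black-box `project_fn` bundling the capturing conditions, model data, and reflection geometry; the toy projection is purely illustrative, and a real solver (e.g. scipy.optimize.minimize) would search the pose space.

```python
import numpy as np

def reprojection_residual(mirror_pose: np.ndarray,
                          observed_px: np.ndarray,
                          project_fn) -> float:
    """Sum of pixel errors between feature points observed via the mirror
    reflection and those predicted for a candidate mirror pose; minimizing
    this over the pose yields the mirror position."""
    predicted_px = project_fn(mirror_pose)
    return float(np.linalg.norm(predicted_px - observed_px, axis=1).sum())

# Toy stand-in: predicted pixels shift linearly with an (x, y) mirror offset.
def toy_project(pose):
    base = np.array([[310.0, 240.0], [330.0, 240.0], [320.0, 260.0]])
    return base + pose  # broadcast the 2-vector pose onto each point

observed = np.array([[312.0, 241.0], [332.0, 241.0], [322.0, 261.0]])
print(reprojection_residual(np.array([2.0, 1.0]), observed, toy_project))  # 0.0
```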
Abstract:
A biological information acquisition device includes a detection-value acquisition unit to acquire a detection value from a non-contact biometric sensor, a vital measurement unit to measure a vital sign of a target person (TP) using the detection value, an image-data acquisition unit to acquire image data indicating an image captured by a camera, an image processing unit to perform at least one of a state estimation process of estimating a state of the target person, an attribute estimation process of estimating an attribute of the target person, or a personal identification process of identifying the target person by performing image processing on the captured image including the target person, and a parameter setting unit to set a parameter for measuring the vital sign in accordance with a result of the image processing.
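A minimal Python sketch of the parameter-setting step, mapping hypothetical image-processing results (a state and an attribute) to invented measurement parameters:

```python
def vital_parameters(state: str, attribute: str) -> dict:
    """Map image-processing results to measurement parameters; the states,
    attributes, and values below are invented for illustration."""
    params = {"window_s": 10, "heartbeat_band_hz": (0.7, 2.5)}
    if state == "moving":
        params["window_s"] = 20                   # longer averaging under motion
    if attribute == "child":
        params["heartbeat_band_hz"] = (1.0, 3.5)  # higher resting heart rate
    return params

print(vital_parameters(state="moving", attribute="child"))
```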
Abstract:
A passenger state detection device (100) includes: a correction parameter setting unit (30) for setting a correction parameter for a captured image captured by a camera (2) that captures a vehicle interior, for each of multiple detection items in a passenger state detecting process, using at least one of a feature amount in a face part area corresponding to a passenger's face part in the captured image or a feature amount in a structure area corresponding to a structure in the vehicle interior in the captured image; and an image correcting unit (40) for correcting the captured image for each of the detection items in the passenger state detecting process using the correction parameter set by the correction parameter setting unit (30).
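A minimal Python sketch of the per-item correction idea: derive a gain from the mean luminance of a reference region (a face part or a fixed interior structure) and apply a different gain per detection item. Regions, targets, and names are assumptions, not from the patent.

```python
import numpy as np

def correction_gain(image: np.ndarray, region: tuple,
                    target_mean: float) -> float:
    """Derive a brightness-correction parameter from the mean luminance of a
    reference region: a face-part area or an interior-structure area."""
    y0, y1, x0, x1 = region
    return target_mean / (float(image[y0:y1, x0:x1].mean()) + 1e-9)

def correct_for_item(image: np.ndarray, gain: float) -> np.ndarray:
    """Apply the per-item correction; each detection item (eye opening, gaze,
    ...) can use its own region and target, hence its own gain."""
    return np.clip(image.astype(float) * gain, 0, 255).astype(np.uint8)

frame = np.full((4, 4), 60, dtype=np.uint8)      # a dim toy frame
gain = correction_gain(frame, (0, 2, 0, 2), target_mean=120.0)
print(correct_for_item(frame, gain))             # brightened to ~120
```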