Abstract:
A detection device includes an image segmenter and a detector. The image segmenter cuts out a first region image and a second region image from an image of a vehicle interior that is acquired from an imaging device. The first region image shows at least a portion of a first part of a body of an occupant. The second region image shows at least a portion of a region of the vehicle interior around the first part or at least a portion of a second part of the body of the occupant. The detector detects an orientation of the first part of the body of the occupant based on a feature amount of the first region image and a feature amount of the second region image.
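The two-region detection described above could be sketched as follows. All names, the histogram-based feature amount, and the linear scoring model are illustrative assumptions; the abstract does not specify how the feature amounts are computed or combined.

```python
import numpy as np

def crop(image, box):
    """Cut a region image (y0, y1, x0, x1) out of the full interior image."""
    y0, y1, x0, x1 = box
    return image[y0:y1, x0:x1]

def feature_amount(region):
    """Toy feature amount: a normalized intensity histogram of the region."""
    hist, _ = np.histogram(region, bins=8, range=(0, 256))
    return hist / max(hist.sum(), 1)

def detect_orientation(image, first_box, second_box, weights, labels):
    """Concatenate the feature amounts of the first and second region images
    and score each candidate orientation with a linear model."""
    f1 = feature_amount(crop(image, first_box))
    f2 = feature_amount(crop(image, second_box))
    scores = weights @ np.concatenate([f1, f2])
    return labels[int(np.argmax(scores))]

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(120, 160))   # stand-in interior image
label = detect_orientation(frame,
                           (10, 60, 20, 80),    # first region: e.g. the head
                           (0, 120, 0, 160),    # second region: surroundings
                           rng.standard_normal((3, 16)),
                           ["front", "left", "right"])
```

The point of the sketch is only the data flow: two crops, one feature amount each, and a joint decision over both.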
Abstract:
A detection device includes: an image acquirer that acquires an image of an interior of a vehicle including a predetermined space; an action determiner that determines, on the basis of the acquired image, whether a first action of placing or storing an article in the predetermined space or a second action of taking or taking out the article from the predetermined space has been performed and, when one of them has been performed, which of the first and second actions it was; an article manager that manages an existence status of the article on the basis of the determination result of the action determiner; and a left-behind article determiner that determines, on the basis of the existence status of the article, whether an article left behind exists in the predetermined space.
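The article-management logic above amounts to a small state machine over put/take events. A minimal sketch, with illustrative names and event strings not taken from the abstract:

```python
class ArticleManager:
    """Track article existence status from determined put/take actions."""

    def __init__(self):
        self.present = {}  # article id -> True while it is in the space

    def on_action(self, article_id, action):
        """action: 'put' (first action) or 'take' (second action)."""
        if action == "put":
            self.present[article_id] = True
        elif action == "take":
            self.present[article_id] = False

    def left_behind(self):
        """Articles whose existence status says they remain in the space."""
        return [a for a, in_space in self.present.items() if in_space]

mgr = ArticleManager()
mgr.on_action("bag", "put")
mgr.on_action("phone", "put")
mgr.on_action("bag", "take")
remaining = mgr.left_behind()  # → ["phone"]
```

The left-behind determiner would query `left_behind()` at the moment the occupant exits the vehicle.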
Abstract:
An image processing device (10) comprises: an image acquisition unit (1) that acquires an image in which calibration markers are captured; an edge detection unit (2) that detects edges of the markers in the image; a polygon generating unit (3) that estimates a plurality of straight lines on the basis of the edges and generates a virtual polygon region surrounded by the plurality of straight lines, the virtual polygon region being generated in a region of the image that includes regions other than those in which the markers are installed; and a camera parameter calculation unit (4) that calculates camera parameters on the basis of a feature amount of the virtual polygon region with respect to the image and a feature amount of the virtual polygon region with respect to real space.
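One standard way to relate a planar polygon's image-space corners to its real-space corners is a homography estimated by the direct linear transform; the sketch below uses that as a stand-in for the abstract's unspecified camera-parameter calculation, with made-up corner coordinates:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: homography mapping src points to dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)       # null vector of A, reshaped to 3x3
    return H / H[2, 2]

# Virtual polygon corners in the image (pixels) and in real space (metres);
# the values here are illustrative only.
img_quad  = [(100, 200), (500, 210), (520, 400), (80, 390)]
real_quad = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
H = homography(img_quad, real_quad)

def to_real(pt):
    """Map an image point into real-space plane coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With known camera intrinsics, such a plane homography can be decomposed further into extrinsic installation parameters.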
Abstract:
A determination unit determines whether a driving behavior of a vehicle is performed by automatic driving control or by manual driving control. When the determination unit determines that the driving behavior is performed by the automatic driving control, a generation unit generates presentation instructing information for presenting, to the outside of the vehicle, the driving behavior by the automatic driving control; when the determination unit determines that the driving behavior is performed by the manual driving control, the generation unit generates presentation instructing information for presenting, to the outside of the vehicle, the driving behavior by the manual driving control. An output unit outputs the presentation instructing information generated by the generation unit.
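The generation step reduces to selecting presentation content according to the determined control mode. A minimal sketch, with hypothetical field names:

```python
def generate_presentation(behavior, is_automatic):
    """Build presentation instructing information for external presentation.

    behavior: the current driving behavior (e.g. 'lane change').
    is_automatic: the determination unit's result.
    """
    mode = "automatic" if is_automatic else "manual"
    return {"behavior": behavior, "mode": mode, "present_externally": True}

info = generate_presentation("lane change", is_automatic=True)
# → {'behavior': 'lane change', 'mode': 'automatic', 'present_externally': True}
```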
Abstract:
A driver switches to a manual driving mode in a state in which the driver is fit for a driving operation. In a vehicle driven in a plurality of driving modes, including a self-driving mode performed by a self-driving control unit and a manual driving mode in which the driver performs part or all of a driving operation, before the manual driving mode is started, information presenting an operation request to the driver is output to a user interface unit. An input unit receives a signal based on the driver's operation. When a difference between a value obtained from the signal input from the input unit and a reference value corresponding to the operation request is within an allowable range, a notification unit notifies the self-driving control unit of a switching signal instructing switching to the manual driving mode.
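The readiness check above is a simple tolerance comparison. A sketch with illustrative values (the abstract does not say which quantity is measured; a steering-angle response is assumed here):

```python
def should_switch(measured, reference, allowable_range):
    """Permit the switch to manual driving only when the driver's response
    is within the allowable range of the reference for the request."""
    return abs(measured - reference) <= allowable_range

# e.g. a measured steering response of 4.7 deg against a requested 5.0 deg
ok = should_switch(4.7, 5.0, allowable_range=0.5)    # close enough → switch
bad = should_switch(2.0, 5.0, allowable_range=0.5)   # too far off → stay in self-driving
```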
Abstract:
An image processor includes an image converter. The image converter transforms data of an image photographed with a camera for photographing a seat, based on a transformation parameter calculated in accordance with a camera position at which the camera is disposed, and outputs the transformed image data. The transformation parameter is a parameter for transforming the image data such that an appearance of the seat depicted in the image approximates a predetermined appearance of the seat.
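If the transformation parameter is taken to be a 3x3 projective matrix per camera position (an assumption; the abstract leaves the parameter's form open), the conversion is an inverse warp. A minimal nearest-neighbour sketch:

```python
import numpy as np

def warp(image, H, out_shape):
    """Inverse-map each output pixel through H^-1 (nearest neighbour)."""
    h, w = out_shape
    Hinv = np.linalg.inv(H)
    out = np.zeros(out_shape, dtype=image.dtype)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out[ys.ravel()[ok], xs.ravel()[ok]] = image[sy[ok], sx[ok]]
    return out

# Illustrative transformation parameter for one camera position:
# a pure translation by (2, 1) pixels.
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
shifted = warp(img, H, (10, 10))
```

In the device, `H` would differ per camera position so that the seat's depicted appearance lands on the predetermined one.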
Abstract:
An identification device includes an inputter that receives image information of a person photographed by a camera, and a controller that identifies the person and detects parts of the person, at least a head and hands, based on the image information. The controller identifies a motion of the person based on the identified person, the detected parts, and a motion model in which a motion is registered for each person, and outputs the identified motion of the person.
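The per-person motion model can be pictured as a lookup keyed by the identified person. A sketch with entirely illustrative names, attributes, and model contents:

```python
# Per-person motion model: for each registered person, the part
# configurations that correspond to each motion (illustrative data).
motion_models = {
    "alice": {"drinking": {"hand_near_head": True},
              "steering": {"hand_near_head": False}},
}

def identify_motion(person, parts):
    """parts: the detection result, e.g. {'hand_near_head': True}."""
    model = motion_models.get(person, {})
    for motion, template in model.items():
        if all(parts.get(k) == v for k, v in template.items()):
            return motion
    return "unknown"

motion = identify_motion("alice", {"hand_near_head": True})  # → "drinking"
```

Registering the model per person is what lets the same part configuration map to different motions for different people.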
Abstract:
A computer performs a process of determining whether an object is a predetermined object and, when the object is determined to be the predetermined object, a process of controlling a display unit to generate a first image based on a result of recognizing the object at a first timing and a second image based on a result of recognizing the object at a second timing later than the first timing. The first image is formed by a pattern of markers representing a skeleton of the object, and the second image is formed by a pattern of markers corresponding to the pattern of markers in the first image, in which a position of at least one marker differs from the position of the corresponding marker in the first image.
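The relationship between the two marker patterns can be illustrated directly: corresponding markers, at least one of which has moved between the two timings. The marker names and coordinates below are invented for the sketch:

```python
# Skeleton marker patterns at the first and second timings (illustrative).
first_pattern  = {"head": (50, 20), "hand_l": (30, 60), "hand_r": (70, 60)}
second_pattern = {"head": (50, 20), "hand_l": (28, 55), "hand_r": (70, 60)}

def moved_markers(a, b):
    """Markers whose position differs between corresponding patterns."""
    return [name for name in a if a[name] != b.get(name)]

changed = moved_markers(first_pattern, second_pattern)  # → ["hand_l"]
```

A non-empty result is exactly the abstract's condition that at least one marker position differ between the first and second images.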
Abstract:
A calibration device calculates an installation parameter of a camera without storing graphical features of road markings in advance and without requiring any other technique. An acquiring unit acquires images captured by the camera before and after a vehicle moves, and an extracting unit extracts two feature points from each of the images captured before and after the vehicle moves. A calculating unit calculates the camera installation parameter on the basis of a positional relationship between the coordinates of the two feature points in the image before the vehicle moves and the coordinates of the corresponding two feature points in the image after the vehicle moves.
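As one concrete instance of such a calculation (an assumption; the abstract does not name the parameter): for straight vehicle motion, a ground feature point moves along the travel direction in a bird's-eye image, so a camera yaw offset can be read from the direction of the two feature-point displacement vectors.

```python
import math

def camera_yaw(before, after):
    """Estimate camera yaw from feature-point displacements.

    before/after: [(x, y), (x, y)] coordinates of the two feature points
    in the bird's-eye images captured before and after the vehicle moves.
    """
    angles = [math.atan2(b[1] - a[1], b[0] - a[0])
              for a, b in zip(before, after)]
    return sum(angles) / len(angles)  # average the two estimates

# Both points shift by (+10, +10): motion at 45 degrees in image coordinates.
yaw = camera_yaw([(100, 200), (300, 220)], [(110, 210), (310, 230)])
```

Using two points rather than one lets the device average out per-point extraction noise, consistent with the abstract's two-feature-point formulation.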