Abstract:
An image processing device (10) comprises: an image acquisition unit (1) that acquires an image in which calibration markers are captured; an edge detection unit (2) that detects edges of the markers in the image; a polygon generation unit (3) that estimates a plurality of straight lines on the basis of the edges and generates, in a region of the image that includes regions other than those where the markers are installed, a virtual polygon region surrounded by the plurality of straight lines; and a camera parameter calculation unit (4) that calculates camera parameters on the basis of a feature value of the virtual polygon region in the image and a feature value of the virtual polygon region in real space.
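As an illustrative sketch only (the helper names and least-squares approach are assumptions, not from the abstract), the straight-line estimation from edge points and the intersection that yields a virtual polygon vertex could look like the following; the subsequent camera-parameter computation from the polygon's image-space and real-space feature values is omitted.

```python
import numpy as np

def fit_line(points):
    """Fit a 2D line ax + by + c = 0 (with a^2 + b^2 = 1) to edge points
    by least squares, via the principal direction of the point cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                      # dominant direction of the points
    normal = np.array([-direction[1], direction[0]])
    c = -normal.dot(centroid)
    return normal[0], normal[1], c

def intersect(l1, l2):
    """Intersect two lines (a, b, c) with ax + by + c = 0; the intersections
    of the estimated lines are the virtual polygon's vertices."""
    a = np.array([[l1[0], l1[1]], [l2[0], l2[1]]])
    b = -np.array([l1[2], l2[2]])
    return np.linalg.solve(a, b)
```

Intersecting each adjacent pair of estimated lines in this way gives polygon vertices that can lie outside the marker regions themselves, which matches the abstract's point that the virtual polygon covers regions other than those where markers are installed.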
Abstract:
A display control device includes: an acquirer that receives inclination information on an occupant's head in a mobile body from a detector; and a controller that controls a displayer to generate a predetermined image representing a presentation image superimposed on an object as viewed from the occupant, based on recognition results of the object and the inclination information. When the object is recognized and the head is not inclined, the controller causes the displayer to generate a first predetermined image representing a first presentation image including one or more first lines in each of which a line segment connecting both ends is horizontal; when the head is inclined, the controller causes the displayer to generate a second predetermined image representing a second presentation image including a second line obtained by inclining at least one of the first lines by an angle according to the inclination of the occupant's head.
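A minimal sketch of the inclining step, under the assumption that a first line is rotated about its midpoint by the head-inclination angle (the abstract only states that the line is inclined "by an angle according to the inclination of the occupant's head"; the midpoint pivot is a hypothetical choice):

```python
import math

def incline_segment(p0, p1, angle_rad):
    """Rotate a line segment about its midpoint by the head-inclination
    angle, turning a horizontal first line into the inclined second line."""
    mx = (p0[0] + p1[0]) / 2.0
    my = (p0[1] + p1[1]) / 2.0
    c, s = math.cos(angle_rad), math.sin(angle_rad)

    def rot(p):
        dx, dy = p[0] - mx, p[1] - my
        return (mx + c * dx - s * dy, my + s * dx + c * dy)

    return rot(p0), rot(p1)
```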
Abstract:
The driving assistance device acquires, from an autonomous driving controller that determines an action of a vehicle during autonomous driving of the vehicle, action information indicating a first action that the vehicle is caused to execute. The driving assistance device acquires, from a detector that detects a surrounding situation and a travel state of the vehicle, detection information indicating a detection result. The driving assistance device determines a second action which is executable in place of the first action, based on the detection information. The driving assistance device generates a first image representing the first action and a second image representing the second action. The driving assistance device outputs the first image and the second image to a notification device such that the first image and the second image are displayed within a fixed field of view of a driver of the vehicle.
Abstract:
A display control apparatus includes a receiver that receives a recognition result of a change in environment around a vehicle, and a controller that controls an image generation apparatus to generate an image corresponding to a presentation image to be displayed on the display medium. Based on the recognition result, the controller generates and outputs a control signal that causes the image generation apparatus to deform the presentation image radially on the display medium, such that the deformed presentation image moves toward at least one of the sides of the display medium and disappears sequentially to the outside of the display medium across its edges.
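One way the radial deformation could be modeled, as a hedged sketch (the scaling-from-center form and the function names are assumptions, not from the abstract): each vertex of the presentation image is pushed outward from a center point, and vertices that cross the display edges are dropped, so the image disappears sequentially.

```python
def radial_displace(points, center, t):
    """Move vertices radially outward from `center`; t >= 0 scales the
    displacement so the deformed image drifts toward the display edges."""
    return [(center[0] + (x - center[0]) * (1.0 + t),
             center[1] + (y - center[1]) * (1.0 + t))
            for x, y in points]

def visible(points, width, height):
    """Keep only vertices still inside the display; vertices that have
    crossed an edge have 'disappeared' to the outside."""
    return [(x, y) for x, y in points if 0 <= x <= width and 0 <= y <= height]
```

Animating `t` from 0 upward per frame would reproduce the described motion of the deformed presentation image across the edges of the display medium.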
Abstract:
Provided is a technology for improving accuracy in determining the next action. A travel history generator generates, for each driver, a travel history associating an environmental parameter, which indicates a travel environment through which a vehicle has previously traveled, with an action selected by the driver in response to that environmental parameter. An acquisition unit acquires, from among the travel histories generated by the travel history generator, a travel history similar to the travel history of the current driver. A driver model generator generates a driver model based on the travel history acquired by the acquisition unit. A determination unit determines the next action based on the driver model generated by the driver model generator and an environmental parameter indicating the current travel environment of the vehicle.
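A hedged sketch of this pipeline, assuming environmental parameters are numeric vectors and using negative Euclidean distance as the similarity measure (the abstract does not specify either; histories are also assumed to be aligned and equal-length for simplicity):

```python
import math

def similarity(env_a, env_b):
    """Similarity of two environmental-parameter vectors (illustrative
    stand-in: negative Euclidean distance)."""
    return -math.dist(env_a, env_b)

def build_driver_model(histories, current_history):
    """Acquire the stored travel history most similar to the current
    driver's; here the acquired history itself serves as the driver model."""
    def score(history):
        return sum(similarity(e, c)
                   for (e, _), (c, _) in zip(history, current_history))
    return max(histories, key=score)

def next_action(model, current_env):
    """Determine the next action: the action the model driver selected in
    the environment most similar to the current one."""
    _, action = max(model, key=lambda pair: similarity(pair[0], current_env))
    return action
```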
Abstract:
A state determination device includes a calculator and a determiner. The calculator receives multiple eye region images captured at different timings in the interval from when a person with open eyes closes them to when the person next opens them, and calculates a luminance value relating to multiple pixels included in each of the eye region images. The determiner calculates the time interval from a first time point, at which the luminance value relating to the pixels first reaches a predetermined first luminance value, to a second time point, at which the luminance value reaches a second luminance value after the first time point. If the interval from the first time point to the second time point is a first time interval, the determiner determines that the person is in a first state; if it is a second time interval shorter than the first time interval, the determiner determines that the person is in a second state which differs from the first state. The determiner outputs the determination result.
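An illustrative sketch of the two steps, under assumptions the abstract leaves open: that luminance drops below the first threshold as the eyes close and rises above the second as they reopen, and that the boundary between the first and second time intervals is a fixed threshold (the 0.3 s value below is purely hypothetical).

```python
def crossing_times(samples, first_lum, second_lum):
    """Find the first time luminance reaches first_lum (eye closing) and the
    next time it reaches second_lum (eye opening), over (time, lum) samples."""
    t1 = next(t for t, lum in samples if lum <= first_lum)
    t2 = next(t for t, lum in samples if t > t1 and lum >= second_lum)
    return t1, t2

def classify_state(interval_s, boundary_s=0.3):
    """A longer closure (first time interval) maps to the first state; a
    shorter one (second time interval) to the second, differing state."""
    return "first_state" if interval_s >= boundary_s else "second_state"
```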
Abstract:
A drowsiness prevention device includes a psychological state estimator and a controller. The psychological state estimator estimates a psychological state of an occupant, based on a state of the occupant detected by a detection device. The controller causes an output device to output a first warning and a second warning for alerting the occupant. The psychological state estimator estimates a first psychological state of the occupant before the first warning is output and a second psychological state of the occupant after the first warning is output. The controller determines details of the second warning based only on the second psychological state, or based on both the first psychological state and the second psychological state.
Abstract:
A monitoring target management unit specifies a monitoring target based on vehicle peripheral information acquired from a vehicle exterior image sensor mounted on a vehicle. A display controller highlights the monitoring target specified by the monitoring target management unit. An operation signal input unit receives a user input for updating the monitoring target specified by the monitoring target management unit. The monitoring target management unit updates the monitoring target when the operation signal input unit receives such a user input.
Abstract:
An equipment control device includes a receiver and a controller. The receiver receives sensing result information including the position, shape, and movement of a predetermined object and the position of an eye point of a person. When the sensing result information indicates that the eye point, equipment placed at a predetermined position, and the object are aligned, and that the object is in a predetermined shape associated in advance with the equipment, the controller determines command information for operating the equipment in accordance with the movement of the object in the predetermined shape, and outputs the command information to an equipment operating device.
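The alignment condition can be sketched as a line-of-sight check: the eye point, the object (e.g. a pointing hand), and the equipment should lie on approximately one ray from the eye. This 2D version with an angular tolerance is an illustrative assumption; the abstract does not specify the geometry or tolerance, and a real system would work in 3D.

```python
import math

def is_aligned(eye, obj, equipment, tol_rad=0.05):
    """True when the eye point, object, and equipment lie on one line of
    sight from the eye, within an angular tolerance (tolerance is assumed)."""
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    diff = angle(eye, obj) - angle(eye, equipment)
    # Normalize the angular difference to [-pi, pi] before comparing.
    diff = (diff + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= tol_rad
```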
Abstract:
A calibration device calculates an installation parameter of a camera without storing graphical features of road markings in advance and without requiring another technique. An acquiring unit acquires images captured by the camera before and after a vehicle moves, and an extracting unit extracts two feature points from each of the images captured before and after the vehicle moves. A calculating unit calculates the camera installation parameter on the basis of the positional relationship between the coordinates of the two feature points in the image captured before the vehicle moves and the coordinates of the corresponding two feature points in the image captured after the vehicle moves.
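One plausible reading of this scheme, sketched under stated assumptions: if the vehicle moves straight, each feature point traces a motion line in the image, and the two motion lines intersect at the vanishing point of the travel direction, from which pan and tilt of the camera relative to that direction can be recovered given assumed pinhole intrinsics. The abstract does not commit to this method; the function names and the intrinsics are hypothetical.

```python
import math

def motion_vanishing_point(p_before, p_after, q_before, q_after):
    """Intersect the two image-space motion lines, each through one feature
    point's positions before and after the vehicle moves. For straight motion
    this intersection is the vanishing point of the travel direction."""
    def line(a, b):
        # Line through a and b as (A, B, C) with A*x + B*y + C = 0.
        return (a[1] - b[1], b[0] - a[0], a[0] * b[1] - b[0] * a[1])

    a1, b1, c1 = line(p_before, p_after)
    a2, b2, c2 = line(q_before, q_after)
    det = a1 * b2 - a2 * b1
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)

def pan_tilt_from_vp(vp, cx, cy, focal_px):
    """Recover pan and tilt of the camera relative to the travel direction
    from the vanishing point, assuming a pinhole model with known
    principal point (cx, cy) and focal length in pixels."""
    pan = math.atan2(vp[0] - cx, focal_px)
    tilt = math.atan2(vp[1] - cy, focal_px)
    return pan, tilt
```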