Abstract:
Provided is a technology for improving accuracy in determining the next action. A travel history generator generates, for each driver, a travel history associating an environmental parameter indicating a travel environment through which a vehicle has previously traveled with an action selected by the driver in response to the environmental parameter. An acquisition unit acquires a travel history similar to a travel history of a current driver from among the travel histories generated by the travel history generator. A driver model generator generates a driver model based on the travel history acquired by the acquisition unit. A determination unit determines the next action based on the driver model generated by the driver model generator and an environmental parameter indicating a current travel environment of the vehicle.
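As a rough illustration of the flow in this abstract, the sketch below builds a frequency-based driver model from the histories of similar drivers and looks up the most common action for the current environment. Every name here (TravelHistory, history_similarity, build_driver_model, the rounding-based environment buckets) is a hypothetical choice; the abstract does not say how similarity or the driver model are actually computed.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List, Tuple

EnvParams = Tuple[float, ...]  # e.g. (speed, headway, lane offset)

@dataclass
class TravelHistory:
    driver_id: str
    records: List[Tuple[EnvParams, str]]  # (environment, action the driver selected)

def history_similarity(a: TravelHistory, b: TravelHistory) -> float:
    """Hypothetical similarity: overlap of the action distributions of two drivers."""
    actions_a = Counter(action for _, action in a.records)
    actions_b = Counter(action for _, action in b.records)
    shared = sum((actions_a & actions_b).values())
    return shared / max(sum(actions_a.values()), sum(actions_b.values()), 1)

def build_driver_model(similar_histories: List[TravelHistory]) -> Dict[EnvParams, Counter]:
    """Driver model as an action-frequency table keyed by a coarse environment bucket."""
    model: Dict[EnvParams, Counter] = {}
    for history in similar_histories:
        for env, action in history.records:
            bucket = tuple(round(v) for v in env)
            model.setdefault(bucket, Counter())[action] += 1
    return model

def determine_next_action(model: Dict[EnvParams, Counter],
                          current_env: EnvParams,
                          default: str = "keep_lane") -> str:
    """Pick the action drivers with similar histories chose most often in this environment."""
    counts = model.get(tuple(round(v) for v in current_env))
    return counts.most_common(1)[0][0] if counts else default
```

In this reading, the acquisition unit would rank stored histories by history_similarity and pass the best matches to build_driver_model before the determination unit is queried.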
Abstract:
A display control apparatus includes: an input unit that receives state information indicating at least one of a state of a moving object, a state of the inside of the moving object, and a state of the outside of the moving object; and a controller that controls, based on the state information, a displayer that generates a predetermined image and outputs the predetermined image onto a display medium. The predetermined image, when displayed on the display medium, shows a presentation image including text. The controller causes the displayer to generate a first predetermined image showing a first presentation image including first text corresponding to a predetermined event, determines, based on the state information, whether the at least one state has made a predetermined change, and causes the displayer to generate a second predetermined image showing a second presentation image including second text corresponding to the predetermined event.
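A minimal sketch of the controller's behavior, assuming hypothetical state fields and an illustrative "predetermined change" (the headway to a leading vehicle shrinking by more than a fixed margin); the abstract defines neither the fields nor the change.

```python
from dataclasses import dataclass

@dataclass
class StateInfo:
    vehicle_speed_kmh: float    # state of the moving object (hypothetical fields)
    cabin_noise_db: float       # state of the inside of the moving object
    headway_distance_m: float   # state of the outside of the moving object

def has_made_predetermined_change(prev: StateInfo, curr: StateInfo) -> bool:
    # Illustrative rule: the headway has closed by more than 10 m since the first image.
    return prev.headway_distance_m - curr.headway_distance_m > 10.0

def presentation_text(prev: StateInfo, curr: StateInfo, event: str) -> str:
    # First presentation image: first text corresponding to the predetermined event.
    text = f"{event}: check the road ahead"
    # After the predetermined change, the second presentation image carries
    # second text for the same event.
    if has_made_predetermined_change(prev, curr):
        text = f"{event}: closing fast, slow down"
    return text

print(presentation_text(StateInfo(60, 55, 45), StateInfo(60, 55, 30), "Vehicle ahead"))
```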
Abstract:
A method of controlling a display control apparatus in a display system includes: causing a display unit to generate, on the basis of a recognized certain object, a first certain image indicating a first presentation image to be overlapped on the certain object in the display on a display medium; determining a wiping area wiped by a wiper on the display medium on the basis of a detected position of the wiper after the first presentation image is displayed on the display medium; and causing the display unit to generate a second certain image indicating a second presentation image resulting from deletion, in the display on the display medium, of a portion of the first presentation image corresponding to the wiping area.
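A minimal sketch of the deletion step, assuming the presentation image is a boolean pixel mask and the wiped area is a band of columns at the detected wiper position; both representations are illustrative assumptions.

```python
import numpy as np

def first_presentation_image(height: int = 8, width: int = 12) -> np.ndarray:
    """Overlay mask highlighting the recognized object on the display medium."""
    image = np.zeros((height, width), dtype=bool)
    image[2:6, 3:9] = True
    return image

def wiping_area(height: int, width: int, x_start: int, x_end: int) -> np.ndarray:
    """Columns swept by the wiper, derived from its detected position."""
    mask = np.zeros((height, width), dtype=bool)
    mask[:, x_start:x_end] = True
    return mask

def second_presentation_image(first: np.ndarray, wiped: np.ndarray) -> np.ndarray:
    """Delete the portion of the first presentation image inside the wiping area."""
    return first & ~wiped

first = first_presentation_image()
second = second_presentation_image(first, wiping_area(*first.shape, 5, 8))
```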
Abstract:
A driving assistance device to be installed on a vehicle is provided. The driving assistance device receives stop-behavior information of the vehicle from an automatic-driving control device. Inquiry information for inquiring of an occupant whether a possibility of collision between the vehicle and a person is to be excluded from decision-making of the automatic-driving control device is output when a distance between the person and a point at which the person is predicted to end crossing is greater than or equal to a first threshold and a speed of the person is less than or equal to a second threshold. A command to exclude the possibility of collision between the vehicle and the person from the decision-making is output to the automatic-driving control device when a response signal indicating that a response operation has been performed by the occupant is received.
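The two conditions and the exclusion command could be sketched as below; the threshold values, field names, and the command string are illustrative assumptions rather than values from the source.

```python
def should_inquire(distance_to_crossing_end_m: float,
                   person_speed_mps: float,
                   first_threshold_m: float = 5.0,             # illustrative threshold
                   second_threshold_mps: float = 0.3) -> bool:  # illustrative threshold
    # Ask the occupant only when the person is still far from the point where the
    # crossing is predicted to end and is moving very slowly.
    return (distance_to_crossing_end_m >= first_threshold_m
            and person_speed_mps <= second_threshold_mps)

def handle_occupant_response(response_received: bool):
    # On the occupant's response operation, tell the automatic-driving control device
    # to drop the collision possibility from its decision-making.
    return "exclude_collision_possibility" if response_received else None
```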
Abstract:
A sensor detects an obstacle to a vehicle. An alert device provides alert information for inquiring of a passenger whether to continue automatic driving when a distance between the obstacle and an end of the lane is equal to or larger than a width necessary for travel, the width being based on a width of the vehicle. An input device receives a passenger's manipulation to continue automatic driving in response to the inquiry provided from the alert device. A command output unit outputs a command to ease the lane-based restriction on continuation of automatic driving to an automatic driving control device when the manipulation in response is received.
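A sketch of the width check, assuming the "width necessary for travel" is the vehicle width plus a side margin on each side; the margin and all names are illustrative.

```python
def can_continue_past_obstacle(gap_to_lane_end_m: float,
                               vehicle_width_m: float,
                               side_margin_m: float = 0.5) -> bool:  # illustrative margin
    # Continuation is offered when the gap between the obstacle and the lane end
    # is at least the width the vehicle needs to pass.
    required_width_m = vehicle_width_m + 2 * side_margin_m
    return gap_to_lane_end_m >= required_width_m

if can_continue_past_obstacle(gap_to_lane_end_m=3.2, vehicle_width_m=1.8):
    alert_text = "Obstacle ahead. Continue automatic driving past it?"  # shown by the alert device
```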
Abstract:
Behavior information input unit (54) receives stop-behavior information about vehicle (100) from automatic-driving control device (30). Image-and-sound output unit (51) outputs, to notification device (2), inquiry information for inquiring of an occupant whether a possibility of collision between an obstacle and vehicle (100) is to be excluded from a determination object in automatic-driving control device (30), when a distance from one point on a predictive movement route of the obstacle to the obstacle is greater than or equal to a first threshold and a speed of the obstacle is less than or equal to a second threshold. Operation signal input unit (50) receives a response signal for excluding the collision possibility from the determination object. Command output unit (55) outputs a command to exclude the collision possibility from the determination object to automatic-driving control device (30).
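A sketch of how the numbered units might hand signals to one another, assuming the notification device exposes a show() method and the automatic-driving control device a send() method; the method names and thresholds are assumptions, and the numerals appear only in comments.

```python
class DrivingAssistanceDevice:
    def __init__(self, automatic_driving_control_device, notification_device):
        self.control_device = automatic_driving_control_device  # device (30)
        self.notification = notification_device                 # device (2)

    def on_stop_behavior(self, distance_to_route_point_m: float, obstacle_speed_mps: float,
                         first_threshold_m: float = 5.0, second_threshold_mps: float = 0.3):
        # Behavior information input unit (54) has received stop-behavior information.
        if (distance_to_route_point_m >= first_threshold_m
                and obstacle_speed_mps <= second_threshold_mps):
            # Image-and-sound output unit (51): ask whether to exclude the collision
            # possibility from the determination object.
            self.notification.show("Exclude this obstacle from the stop decision?")

    def on_response_signal(self, exclude: bool):
        # Operation signal input unit (50) has received the response signal.
        if exclude:
            # Command output unit (55): forward the exclusion command to device (30).
            self.control_device.send("exclude_collision_possibility")
```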
Abstract:
An image display system includes a first processor, a second processor, and a comparator. The first processor acquires a behavior estimation result of a vehicle and generates, based on the behavior estimation result, future position information on the vehicle after a predetermined time passes. The second processor acquires present information about the vehicle and generates, based on the acquired information, present position information on the vehicle and a peripheral object. The comparator compares the future position information on the vehicle with the present position information on the vehicle and the peripheral object, and generates present image data indicating present positions of the vehicle and the peripheral object and future image data indicating future positions of the vehicle and the peripheral object. Further, the comparator causes a notification device to display a present image based on the present image data and a future image based on the future image data together.
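A sketch of producing present and future image data together, assuming constant-velocity extrapolation over the predetermined time; the kinematics, the dictionary layout of the image data, and all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    vx: float
    vy: float

def predict(pose: Pose, dt: float) -> Pose:
    """Future position after the predetermined time, from the behavior estimation result."""
    return Pose(pose.x + pose.vx * dt, pose.y + pose.vy * dt, pose.vx, pose.vy)

def build_image_data(ego: Pose, peripheral: Pose, dt: float = 3.0):
    present_image_data = {"vehicle": (ego.x, ego.y), "peripheral": (peripheral.x, peripheral.y)}
    future_ego, future_peripheral = predict(ego, dt), predict(peripheral, dt)
    future_image_data = {"vehicle": (future_ego.x, future_ego.y),
                         "peripheral": (future_peripheral.x, future_peripheral.y)}
    # Both sets are handed to the notification device so the present image and the
    # future image can be displayed together.
    return present_image_data, future_image_data
```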
Abstract:
A driver performs switching to a manual driving mode in a state in which the driver is fit for a driving operation. In a vehicle driven in a plurality of driving modes, including a self-driving mode performed by a self-driving control unit and a manual driving mode in which a driver performs a part of or all of a driving operation, before the manual driving mode is started, information for presenting, from a user interface unit to the driver, a request for an operation by the driver is output to the user interface unit. An input unit receives a signal based on the operation by the driver. A notification unit notifies the self-driving control unit of a switching signal for instructing switching to the manual driving mode when a difference between a value obtained from the signal based on the operation by the driver, the signal having been input from the input unit, and a reference value according to the operation request is within an allowable range.
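The switching condition reduces to a tolerance check, sketched below with an illustrative steering-angle request; the reference value, the allowable range, and the signal name are assumptions.

```python
def ready_for_manual_mode(measured_value: float,
                          reference_value: float,
                          allowable_range: float = 0.1) -> bool:  # illustrative tolerance
    # Switch only if the driver's response to the operation request is close enough
    # to the reference value defined for that request.
    return abs(measured_value - reference_value) <= allowable_range

# e.g. the request asks the driver to hold the steering wheel straight, and the
# input unit reports the measured steering angle in radians.
if ready_for_manual_mode(measured_value=0.03, reference_value=0.0):
    switching_signal = "switch_to_manual_driving_mode"  # notified to the self-driving control unit
```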
Abstract:
A visual field calculation apparatus capable of accurately calculating a visual field range of a user without using a complex configuration is provided. The visual field calculation apparatus includes: a saccade detector that detects a saccade on the basis of a first gaze direction detected at a first timing and a second gaze direction detected at a second timing; a saccade speed calculator that calculates a speed of the saccade on the basis of a time difference between the first timing and the second timing, the first gaze direction, and the second gaze direction; and a visual field range calculator that calculates a displacement vector of a saccade whose speed exceeds a first threshold and calculates an area including a final point of the displacement vector as the visual field range of the user.
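A sketch of the saccade-speed and visual-field steps, assuming gaze directions are given as (azimuth, elevation) angles in degrees with timestamps in seconds; the speed threshold and the point-collection form of the visual field are illustrative.

```python
import math
from typing import List, Tuple

GazeSample = Tuple[Tuple[float, float], float]  # ((azimuth_deg, elevation_deg), timestamp_s)

def saccade_speed(gaze1, gaze2, t1: float, t2: float) -> float:
    """Angular displacement between two gaze directions divided by the time difference."""
    dx, dy = gaze2[0] - gaze1[0], gaze2[1] - gaze1[1]
    return math.hypot(dx, dy) / (t2 - t1)

def visual_field_points(samples: List[GazeSample],
                        first_threshold_deg_per_s: float = 100.0) -> List[Tuple[float, float]]:
    """Final points of saccades faster than the first threshold; the visual field
    range is then an area covering these points."""
    points = []
    for (g1, t1), (g2, t2) in zip(samples, samples[1:]):
        if saccade_speed(g1, g2, t1, t2) > first_threshold_deg_per_s:
            points.append(g2)  # final point of the displacement vector
    return points
```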
Abstract:
A computer performs a process of determining whether an object is a predetermined object and, if the object is determined to be the predetermined object, a process of controlling a display unit to generate a first image based on a result of recognizing the object at a first timing and to generate a second image based on a result of recognizing the object at a second timing that is later than the first timing. The first image is an image formed by a pattern of markers representing a skeleton of the object, the second image is an image formed by a pattern of markers corresponding to the pattern of markers in the first image, and the position of at least one marker of the pattern of markers in the first image differs from the position of the corresponding marker in the second image.
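A sketch of the two marker patterns, assuming a hand-picked set of skeleton joints and a simple displacement of one marker between the two timings; the joints, coordinates, and the motion applied are all illustrative.

```python
# Marker pattern representing the object's skeleton at the first timing
# (hypothetical joint names and 2D coordinates in metres).
FIRST_MARKERS = {
    "head": (0.0, 1.7),
    "left_hand": (-0.4, 1.1),
    "right_hand": (0.4, 1.1),
    "left_foot": (-0.2, 0.0),
    "right_foot": (0.2, 0.0),
}

def second_markers(first: dict, dt: float = 0.5) -> dict:
    """Corresponding pattern at the later timing: same markers, with at least one
    marker moved to reflect the object's recognized motion (here, a raised right hand)."""
    moved = dict(first)
    x, y = moved["right_hand"]
    moved["right_hand"] = (x, y + 0.3 * dt)
    return moved

SECOND_MARKERS = second_markers(FIRST_MARKERS)
```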