Abstract:
A driver monitoring camera system for a vehicle includes a camera system and an optical beam shaper. The camera system has an exit field of view central axis oriented more vertical than horizontal with respect to a set of orthogonal x, y and z axes, where the z-axis defines a direction line indicative of a vertical direction. The optical beam shaper is disposed in optical communication with the camera system, optically between the camera system and an operator of the vehicle. The optical beam shaper is structurally configured to direct an image of a face of the operator toward the exit field of view of the camera system, and to compress the image in the vertical direction to define a compressed image having a height-to-width aspect ratio less than an uncompressed height-to-width aspect ratio of the image.
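The abstract's vertical compression describes optical hardware, but its effect on the image geometry can be illustrated digitally. The sketch below is a minimal stand-in, assuming a simple row-subsampling model and a hypothetical compression factor; the `compress_vertically` name and nearest-neighbor approach are illustrative assumptions, not the patented optics.

```python
import numpy as np

def compress_vertically(image: np.ndarray, factor: float = 0.5) -> np.ndarray:
    """Reduce image height by `factor` via nearest-neighbor row subsampling.

    This digitally mimics what the optical beam shaper does in hardware:
    the output keeps the full width but a reduced height, so its
    height-to-width aspect ratio is smaller than the original's.
    """
    if not 0.0 < factor <= 1.0:
        raise ValueError("factor must be in (0, 1]")
    h, w = image.shape[:2]
    new_h = max(1, int(h * factor))
    # Pick evenly spaced source rows spanning the original height.
    rows = np.linspace(0, h - 1, new_h).round().astype(int)
    return image[rows]

# A 480x640 frame at half vertical compression becomes 240x640:
# the height-to-width ratio drops from 0.75 to 0.375.
frame = np.zeros((480, 640), dtype=np.uint8)
compressed = compress_vertically(frame, 0.5)
```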
Abstract:
A method for determining whether an Eyes-Off-The-Road (EOTR) condition exists includes capturing image data corresponding to a driver from a monocular camera device. Whether the driver is wearing eye glasses is detected based on the image data using an eye glasses classifier. When it is detected that the driver is wearing eye glasses, a driver face location is detected from the captured image data and it is determined whether the EOTR condition exists based on the driver face location using an EOTR classifier.
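The two-stage decision above (glasses check, then face-location-based EOTR classification) can be sketched as follows. The three callables are hypothetical stand-ins for the trained classifiers and detector the abstract describes; the abstract only specifies the glasses-wearing branch, so the other branch is left unspecified here.

```python
def detect_eotr(image, glasses_clf, face_locator, eotr_clf):
    """Sketch of the two-stage EOTR decision described in the abstract.

    glasses_clf(image) -> bool    : hypothetical eye-glasses classifier
    face_locator(image) -> object : hypothetical driver-face-location detector
    eotr_clf(location) -> bool    : hypothetical EOTR classifier
    """
    if glasses_clf(image):
        # Driver appears to wear eye glasses: locate the face and
        # classify EOTR from the face location.
        face_location = face_locator(image)
        return eotr_clf(face_location)
    # The abstract covers the glasses branch; the no-glasses path is
    # not specified, so no decision is returned for it here.
    return None
```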
Abstract:
A method for monitoring a vehicle operator can be executed by a controller and includes the following steps: (a) receiving image data of a vehicle operator's head; (b) tracking facial feature points of the vehicle operator based on the image data; (c) creating a 3D model of the vehicle operator's head based on the facial feature points in order to determine a 3D position of the vehicle operator's head; (d) determining a gaze direction of the vehicle operator based on a position of the facial feature points and the 3D model of the vehicle operator's head; (e) determining a gaze vector based on the gaze direction and the 3D position of the vehicle operator's head; and (f) commanding an indicator to activate when the gaze vector is outside a predetermined parameter.
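Steps (a) through (f) above form a pipeline that can be sketched as below. The helper callables (`track_points`, `fit_3d_model`, `estimate_gaze`), the road-facing axis, and the cone-angle threshold are all illustrative assumptions; the abstract does not specify how each stage is implemented or what the "predetermined parameter" is.

```python
import numpy as np

FORWARD = np.array([1.0, 0.0, 0.0])  # assumed road-facing reference axis

def monitor_operator(image, track_points, fit_3d_model, estimate_gaze,
                     max_angle_deg=20.0):
    """Sketch of steps (a)-(f); the three helpers are hypothetical stand-ins."""
    points = track_points(image)                   # (b) track facial features
    model, head_pos = fit_3d_model(points)         # (c) 3D head model + position
    direction = estimate_gaze(points, model)       # (d) gaze direction (unit vector)
    gaze_vector = (head_pos, direction)            # (e) ray anchored at the head
    # (f) flag the indicator when the gaze leaves an assumed forward cone.
    cos_angle = float(np.dot(direction, FORWARD))
    activate_indicator = bool(cos_angle < np.cos(np.radians(max_angle_deg)))
    return gaze_vector, activate_indicator

# Looking straight ahead stays inside the cone; looking sideways does not.
_, alert_ahead = monitor_operator(
    None, lambda i: [], lambda p: (None, np.zeros(3)),
    lambda p, m: np.array([1.0, 0.0, 0.0]))
_, alert_side = monitor_operator(
    None, lambda i: [], lambda p: (None, np.zeros(3)),
    lambda p, m: np.array([0.0, 1.0, 0.0]))
```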
Abstract:
A method for detecting an eyes-off-the-road condition based on an estimated gaze direction of a driver of a vehicle includes monitoring facial feature points of the driver within image input data captured by an in-vehicle camera device. A location for each of a plurality of eye features for an eyeball of the driver is detected based on the monitored facial feature points. A head pose of the driver is estimated based on the monitored facial feature points. The gaze direction of the driver is estimated based on the detected location for each of the plurality of eye features and the estimated head pose.
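The final step above combines head pose with eye-feature locations. A toy version of that combination is sketched below, assuming a simple additive model in which pupil displacement within the eye contributes an angular offset on top of the head's yaw and pitch; both the linear model and the degree scaling are illustrative assumptions, not the method claimed.

```python
def estimate_gaze_direction(head_pose_deg, eye_offset_deg):
    """Toy gaze estimate: head pose plus an in-eye angular offset.

    head_pose_deg : (yaw, pitch) of the head in degrees, e.g. from a
                    landmark-based head-pose fit.
    eye_offset_deg: pupil displacement from the eyeball centre, already
                    scaled to degrees (scaling is an assumption here).
    Returns the combined (yaw, pitch) gaze direction in degrees.
    """
    yaw = head_pose_deg[0] + eye_offset_deg[0]
    pitch = head_pose_deg[1] + eye_offset_deg[1]
    return yaw, pitch
```

Under this additive model, a head turned 10° right with pupils shifted a further 3° yields a 13° gaze yaw.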
Abstract:
A deployable camera system for a vehicle includes a body defining a cavity therein, and a camera including a housing having an exterior surface. The camera is reversibly transitionable between a stowed position in which the camera is recessed into the cavity and the exterior surface is substantially flush with the body, and a deployed position wherein the camera protrudes from the cavity and the exterior surface is not substantially flush with the body. The deployable camera system includes a first shape memory alloy element transitionable between a first state and a second state in response to a first thermal activation signal.
Abstract:
A driver alert system includes a computer processor disposed in a vehicle. The computer processor is configured to receive driver attention data over a vehicle network during a driving event. The computer processor executes logic to process the driver attention data and evaluate the driver attention data for a triggering event. The system also includes a steering wheel unit disposed in the vehicle and lights that are integrated on a front windshield-facing surface of a steering wheel of the steering wheel unit. The lights are positioned at an angle to reflect light off of a front windshield of the vehicle. The system also includes a controller disposed in the steering wheel unit. The controller is communicatively coupled to the lights and the vehicle network. The controller receives a request from the computer processor to activate the lights when the triggering event has occurred.
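The processor-to-controller handshake above can be sketched as below. The triggering rule (a mean attention score below a threshold), the class and method names, and the threshold value are all illustrative assumptions; the abstract does not define what constitutes a triggering event.

```python
def evaluate_attention(samples, threshold=0.4):
    """Hypothetical triggering rule: mean attention score below a threshold."""
    return sum(samples) / len(samples) < threshold

class SteeringWheelAlertController:
    """Sketch of the steering-wheel-unit controller side of the handshake."""

    def __init__(self, lights):
        self.lights = lights   # e.g. identifiers for the windshield-facing LEDs
        self.active = False

    def on_request(self, triggering_event: bool) -> bool:
        """Activate the lights when the processor reports a triggering event.

        In the described system, the lights sit on the front
        windshield-facing surface of the wheel and reflect off the
        windshield toward the driver.
        """
        if triggering_event:
            self.active = True
        return self.active

# Processor side evaluates attention data; controller side reacts.
triggered = evaluate_attention([0.2, 0.3])
controller = SteeringWheelAlertController(lights=["led_0", "led_1"])
state = controller.on_request(triggered)
```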
Abstract:
Systems and methods for optimizing operator-state detection, including tracking the position of an operator-facing camera, are described. Systems and methods include receiving a first image captured by an operator-facing camera, the first image including a calibration object disposed at a predetermined location; detecting a first position of the operator-facing camera with respect to the calibration object; ascertaining the first position with respect to at least one fiducial marker within a passenger compartment of a vehicle; capturing, via a controller, a second image using the operator-facing camera; determining the second image is captured by the operator-facing camera from a second position with respect to the at least one fiducial marker; and analyzing, based on determining the second image is captured by the operator-facing camera from the second position, the second image to identify facial features of the operator. The receiving, detecting, ascertaining, determining, and analyzing are performed via the controller.
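The core of the position-tracking step above is deciding whether the second image was captured from a different camera position relative to the fiducial marker. A minimal sketch of that comparison, assuming positions expressed as 3D coordinates in metres and a Euclidean distance test with an arbitrary tolerance (both assumptions, not the claimed method):

```python
import math

def camera_moved(first_pos, second_pos, tolerance=0.01):
    """Return True when the camera has moved relative to the fiducial marker.

    first_pos, second_pos: (x, y, z) camera positions, in metres, expressed
    with respect to the same fiducial marker. When the positions differ by
    more than `tolerance`, downstream facial-feature analysis can account
    for the shift. The distance metric and tolerance are illustrative.
    """
    return math.dist(first_pos, second_pos) > tolerance
```

For example, a 5 cm shift along the optical axis exceeds a 1 cm tolerance and is reported as movement, while identical positions are not.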