Abstract:
A method for determining whether an Eyes-Off-The-Road (EOTR) condition exists includes capturing image data corresponding to a driver from a monocular camera device. Whether the driver is wearing eye glasses is detected based on the image data using an eye glasses classifier. When it is detected that the driver is wearing eye glasses, a driver face location is detected from the captured image data and it is determined whether the EOTR condition exists based on the driver face location using an EOTR classifier.
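A minimal sketch of the decision flow described in this abstract, assuming hypothetical classifier and detector objects (glasses_clf, face_detector, eotr_clf) that are not named in the source; it illustrates only the glasses-gated branching, not any particular classifier.

```python
# Sketch of the glasses-gated EOTR decision flow (hypothetical helper objects).
import numpy as np

def detect_eotr(frame: np.ndarray, glasses_clf, face_detector, eotr_clf) -> bool:
    """Return True if an Eyes-Off-The-Road (EOTR) condition is detected."""
    # Step 1: decide whether the driver is wearing eye glasses.
    wearing_glasses = glasses_clf.predict(frame)

    if wearing_glasses:
        # Step 2: with glasses detected, fall back to face-location cues.
        face_box = face_detector.locate(frame)   # e.g. (x, y, w, h) in pixels
        # Step 3: classify EOTR from the face location rather than gaze.
        return eotr_clf.predict(face_box)

    # Otherwise a gaze-based check (outside the scope of this abstract) applies.
    return False
```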
Abstract:
A detection system for a host vehicle includes a camera, global positioning system (“GPS”) receiver, compass, and electronic control unit (“ECU”). The camera collects polarimetric image data forming an imaged drive scene inclusive of a road surface illuminated by the Sun. The GPS receiver outputs a present location of the vehicle as a date-and-time-stamped coordinate set. The compass provides a directional heading of the vehicle. The ECU determines the Sun's location relative to the vehicle and camera using an input data set, including the present location and directional heading. The ECU also detects a specular reflecting area or areas on the road surface using the polarimetric image data and Sun's location, with the specular reflecting area(s) forming an output data set. The ECU then executes a control action aboard the host vehicle in response to the output data set.
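A minimal sketch of the specular-area detection step, assuming the polarimetric image has already been demosaicked into per-pixel Stokes parameters S0, S1, S2 and that the Sun's elevation has been computed separately from the GPS fix, timestamp, and compass heading; the threshold values are illustrative, not taken from the abstract.

```python
# Degree-of-linear-polarization thresholding over a road-surface image
# (Stokes-parameter inputs and Sun elevation assumed to be precomputed).
import numpy as np

def degree_of_linear_polarization(s0, s1, s2, eps=1e-6):
    """DoLP = sqrt(S1^2 + S2^2) / S0, computed per pixel."""
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)

def detect_specular_areas(s0, s1, s2, sun_elevation_deg, dolp_threshold=0.4):
    """Flag pixels whose polarization signature suggests a Sun-driven
    specular (mirror-like) reflection on the road surface."""
    dolp = degree_of_linear_polarization(s0, s1, s2)
    # Specular reflections off a roughly horizontal road are strongly
    # polarized when the Sun is low; a simple geometric gate is used here.
    geometry_gate = sun_elevation_deg < 45.0
    mask = (dolp > dolp_threshold) & geometry_gate
    return mask  # boolean map forming the "output data set" of the abstract
```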
Abstract:
A free space estimation and visualization system for a host vehicle includes a camera configured to collect red-green-blue (“RGB”)-polarimetric image data of drive environs of the host vehicle, including a potential driving path. An electronic control unit (“ECU”) receives the RGB-polarimetric image data and estimates free space in the driving path by processing the RGB-polarimetric image data via a run-time neural network. Control actions are taken in response to the estimated free space. A method for use with the visualization system includes collecting RGB and lidar data of target drive scenes and generating, via a first neural network, pseudo-labels of the scenes. The method includes collecting RGB-polarimetric data via a camera and thereafter training a second neural network using the RGB-polarimetric data and pseudo-labels. The second neural network is used in the ECU to estimate free space in the potential driving path.
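A minimal sketch of the two-stage training idea described above, assuming a teacher network trained on RGB and lidar has already produced free-space pseudo-labels; the student network architecture, channel count, and data loader here are placeholders, not the patented design.

```python
# Student (run-time) network trained on RGB-polarimetric frames against
# pseudo-labels produced by a separate teacher network (not shown).
import torch
import torch.nn as nn

class StudentFreeSpaceNet(nn.Module):
    """Small segmentation head over a 7-channel RGB + polarimetric input."""
    def __init__(self, in_channels: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel free-space logit
        )

    def forward(self, x):
        return self.net(x)

def train_student(student, loader, epochs=10, lr=1e-3):
    """loader yields (rgb_polarimetric, pseudo_label) tensor pairs."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for rgb_pol, pseudo_label in loader:
            opt.zero_grad()
            loss = loss_fn(student(rgb_pol), pseudo_label)
            loss.backward()
            opt.step()
    return student
```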
Abstract:
A method for deblurring a blurred image includes dividing the blurred image into overlapping regions, each having a size and an offset from neighboring overlapping regions along a first direction, as determined by a period of a ringing artifact in the blurred image, by obtained blur characteristics relating to the blurred image and/or attributable to the optical system, or by a detected cause capable of producing the blur characteristics; stacking the overlapping regions to produce a stacked output, wherein the overlapping regions are sequentially organized along the first direction; convolving the stacked output through a first convolutional neural network (CNN) to produce a first CNN output having reduced blur as compared to the stacked output; assembling the first CNN output into a re-assembled image; and processing the re-assembled image through a second CNN to produce a deblurred image having reduced residual artifacts as compared to the re-assembled image.
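A minimal sketch of the region-stacking and reassembly steps for a 2-D grayscale image, assuming the strip width and offset have already been chosen from the ringing-artifact period; the two CNNs named in the abstract are represented by placeholder callables, and overlap blending by simple averaging is an illustrative choice.

```python
# Overlapping-strip stacking, reassembly, and the two-CNN pipeline (sketch).
import numpy as np

def stack_overlapping_regions(image: np.ndarray, region_width: int, offset: int):
    """Slice the image into overlapping vertical strips along the first
    (horizontal) direction and stack them along a new leading axis."""
    h, w = image.shape
    starts = list(range(0, w - region_width + 1, offset))
    regions = [image[:, s:s + region_width] for s in starts]
    return np.stack(regions, axis=0), starts

def reassemble(regions: np.ndarray, starts, width: int) -> np.ndarray:
    """Average the (deblurred) overlapping strips back into one image."""
    h, rw = regions.shape[1], regions.shape[2]
    out = np.zeros((h, width))
    counts = np.zeros((1, width))
    for region, s in zip(regions, starts):
        out[:, s:s + rw] += region
        counts[:, s:s + rw] += 1
    return out / np.maximum(counts, 1)

def deblur(image, cnn1, cnn2, region_width, offset):
    stacked, starts = stack_overlapping_regions(image, region_width, offset)
    refined = cnn1(stacked)                         # reduced blur per strip
    merged = reassemble(refined, starts, image.shape[1])
    return cnn2(merged)                             # suppress residual artifacts
```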
Abstract:
A system and method may include capturing a multi-channel polarimetric image and a multi-channel RGB image of a scene by a color polarimetric imaging camera. A multi-channel hyperspectral image may be synthesized from the multi-channel RGB image and concatenated with the multi-channel polarimetric image to create an integrated polarimetric-hyperspectral image. Scene properties within the integrated polarimetric-hyperspectral image may be disentangled.
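A minimal sketch of the concatenation step, assuming the hyperspectral cube is synthesized from the RGB image by a learned spectral-upsampling model (placeholder callable rgb_to_hyperspectral); channel counts are illustrative and the disentanglement step is not shown.

```python
# Build an integrated polarimetric-hyperspectral image by channel concatenation.
import numpy as np

def build_integrated_image(rgb: np.ndarray, polarimetric: np.ndarray,
                           rgb_to_hyperspectral) -> np.ndarray:
    """rgb: (H, W, 3); polarimetric: (H, W, P); returns (H, W, B + P)."""
    hyperspectral = rgb_to_hyperspectral(rgb)       # (H, W, B) synthesized bands
    return np.concatenate([hyperspectral, polarimetric], axis=-1)
```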
Abstract:
The present application relates to a method and apparatus for generating a three-dimensional point cloud using a polarimetric camera in a vehicle equipped with a drive assistance system, including a camera configured to capture a color image of a field of view and polarimetric data of the field of view, a processor configured to perform a neural network function in response to the color image and the polarimetric data to generate a depth map of the field of view, and a vehicle controller configured to perform an advanced driving assistance function and to control a vehicle movement in response to the depth map.
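A minimal sketch of the inference path, assuming a 7-channel input (RGB plus four polarization channels) and a small placeholder network standing in for the depth-estimation neural network named in the abstract; the pinhole back-projection shows one standard way a depth map becomes a point cloud.

```python
# Depth estimation from concatenated color + polarimetric channels, followed
# by pinhole back-projection into a 3-D point cloud (placeholder network).
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    def __init__(self, in_channels: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel depth (after scaling)
        )

    def forward(self, color, polarimetric):
        x = torch.cat([color, polarimetric], dim=1)   # (N, 7, H, W)
        return self.net(x)

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud using a
    pinhole camera model with intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return torch.stack([x, y, z], dim=-1).reshape(-1, 3)
```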
Abstract:
Systems and methods to perform image-based three-dimensional (3D) lane detection involve obtaining known 3D points of one or more lane markings in an image including the one or more lane markings. The method includes overlaying a grid of anchor points on the image. Each of the anchor points is a center of i concentric circles. The method also includes generating an i-length vector and setting an indicator value for each of the anchor points based on the known 3D points as part of a training process of a neural network, and using the neural network to obtain 3D points of one or more lane markings in a second image.
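A minimal sketch of building per-anchor training targets, assuming (purely as an illustration) that each entry of the i-length vector records the height of the nearest known lane point at that circle's radius; the abstract does not fix the exact vector contents, so this encoding and the distance threshold are hypothetical.

```python
# Per-anchor indicator and i-length vector targets from known 3-D lane points.
import numpy as np

def anchor_targets(anchors, lane_points_3d, radii, near_threshold=1.0):
    """anchors: (A, 2) anchor centers; lane_points_3d: (P, 3) known lane points
    with columns 0-1 in the anchor plane and column 2 a height value;
    radii: (i,) concentric-circle radii shared by every anchor."""
    vectors = np.zeros((len(anchors), len(radii)))
    indicators = np.zeros(len(anchors))
    for a, center in enumerate(anchors):
        dists = np.linalg.norm(lane_points_3d[:, :2] - center, axis=1)
        if dists.min() < near_threshold:
            indicators[a] = 1.0      # a lane marking passes near this anchor
            for j, r in enumerate(radii):
                nearest = np.argmin(np.abs(dists - r))
                vectors[a, j] = lane_points_3d[nearest, 2]
    return vectors, indicators
```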
Abstract:
A user-centric driving-support system for implementation at a vehicle of transportation. The system in various embodiments includes one or more vehicle sensors, such as a camera, a RADAR, and a LiDAR, and a hardware-based processing unit. The system further includes a non-transitory computer-readable storage device including an activity unit and an output-structuring unit. The activity unit, when executed by the hardware-based processing unit, determines, based on contextual input information, at least one of an alert-assessment output and a scene-awareness output, wherein the contextual input information includes output of the one or more vehicle sensors. The output-structuring unit, when executed by the hardware-based processing unit, determines an action to be performed at the vehicle based on at least one of the alert-assessment output and the scene-awareness output determined by the activity unit. The technology in various implementations includes the storage device, alone, and user-centric driving-support processes performed using the device and other vehicle components.
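A minimal sketch of the two-unit structure described above, with hypothetical data classes standing in for the contextual input and the units' outputs; the placeholder logic only shows how the activity unit's outputs feed the output-structuring unit, not any particular assessment method.

```python
# Activity unit -> output-structuring unit pipeline (hypothetical types).
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class ContextualInput:
    camera_frame: Any = None
    radar_tracks: Any = None
    lidar_points: Any = None

@dataclass
class ActivityOutput:
    alert_assessment: Optional[Dict[str, Any]] = None
    scene_awareness: Optional[Dict[str, Any]] = None

class ActivityUnit:
    """Derives alert-assessment and scene-awareness outputs from sensor data."""
    def run(self, ctx: ContextualInput) -> ActivityOutput:
        # Placeholder: flag an alert whenever a camera frame is present.
        alert = {"level": "info"} if ctx.camera_frame is not None else None
        return ActivityOutput(alert_assessment=alert, scene_awareness={})

class OutputStructuringUnit:
    """Maps the activity outputs to a concrete action performed at the vehicle."""
    def run(self, activity: ActivityOutput) -> str:
        return "issue_alert" if activity.alert_assessment else "no_action"
```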
Abstract:
Methods and systems for detecting an object are provided. In one embodiment, a method includes: receiving, by a processor, image data from a single camera, the image data representing an image of a scene; determining, by the processor, stixel data from the image data; detecting, by the processor, an object based on the stixel data; and selectively generating, by the processor, an alert signal based on the detected object.
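A minimal sketch of the stixel-then-object step, assuming a per-pixel obstacle-probability map has already been derived from the single-camera image; the probability map, threshold, and grouping rule are placeholders for whatever the claimed method actually uses.

```python
# Column-wise stixel extraction and grouping of adjacent stixels into objects.
import numpy as np

def columns_to_stixels(obstacle_prob: np.ndarray, threshold: float = 0.5):
    """For each image column, keep the run of obstacle pixels as a stixel
    (column index, top row, bottom row)."""
    h, w = obstacle_prob.shape
    stixels = []
    for col in range(w):
        rows = np.where(obstacle_prob[:, col] > threshold)[0]
        if rows.size:
            stixels.append((col, rows.min(), rows.max()))
    return stixels

def group_stixels(stixels, max_gap: int = 2):
    """Group horizontally adjacent stixels into object candidates."""
    objects, current = [], []
    for s in stixels:
        if current and s[0] - current[-1][0] > max_gap:
            objects.append(current)
            current = []
        current.append(s)
    if current:
        objects.append(current)
    return objects  # each group can then selectively trigger an alert signal
```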