Abstract:
A method of creating a shadow-reduced image from a captured image. An image of a scene exterior of a vehicle is captured by a vehicle-based image capture device. A first object profile of an object in the captured image is identified by a processor. A second object profile of the object is detected using a non-vision object detection device. Shadows in the captured image are removed by the processor as a function of the first object profile and the second object profile. The shadow-reduced image is utilized in a vehicle-based application.
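A minimal sketch of the fusion idea described above, assuming the first (camera-derived) and second (non-vision, e.g., radar- or lidar-derived) object profiles are available as binary masks on the same image grid; the function names and brightness gain are illustrative, not taken from the patent:

```python
import numpy as np

def remove_shadows(image, vision_profile, sensor_profile, gain=1.8):
    """Brighten pixels the camera attributes to the object but the
    non-vision sensor does not; treat those pixels as cast shadow."""
    shadow_mask = vision_profile & ~sensor_profile   # vision-only extent = shadow
    corrected = image.astype(np.float32)
    corrected[shadow_mask] *= gain                   # simple illumination lift
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Toy usage: the camera "sees" an object region larger than the radar does;
# the difference between the two profiles is brightened as shadow.
img = np.full((4, 4), 100, dtype=np.uint8)
vision = np.zeros((4, 4), dtype=bool); vision[2:, :2] = True
sensor = np.zeros((4, 4), dtype=bool)
print(remove_shadows(img, vision, sensor))
```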
Abstract:
A method of controlling a color display screen aboard a motor vehicle includes performing, via a host computer, a color calibration test of a user of the motor vehicle in which the user is subjected to a calibrated set of color-coded test information. The method includes receiving a color perception response of the user to the calibrated set of color-coded test information via the host computer. Additionally, the method includes mapping a reduced visual gamut of the user via the host computer using the color perception response, and then commanding adjustment of user-specific color settings of the motor vehicle using the reduced visual gamut to thereby accommodate a color perception deficiency of the user.
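A rough sketch of the calibrate-then-adjust flow, assuming the color calibration test reduces to per-channel sensitivities in [0, 1]; the redistribution rule below is an invented approximation, not the patented mapping:

```python
import numpy as np

def map_reduced_gamut(responses):
    """Interpret test responses (fraction of color-coded plates identified
    per channel) as per-channel sensitivity estimates."""
    return np.clip(np.asarray(responses, dtype=np.float32), 0.0, 1.0)

def adjust_color(rgb, sensitivity):
    """Shift energy away from weakly perceived channels toward the
    well-perceived ones (a crude, illustrative remapping)."""
    rgb = np.asarray(rgb, dtype=np.float32)
    deficit = rgb * (1.0 - sensitivity)       # portion the user may miss
    boost = deficit.sum() / 2.0               # spread over the other channels
    adjusted = rgb * sensitivity + boost * (sensitivity > 0.5)
    return np.clip(adjusted, 0, 255).astype(np.uint8)

sens = map_reduced_gamut([1.0, 0.4, 1.0])     # e.g., reduced green perception
print(adjust_color([40, 200, 30], sens))      # green content re-expressed in R/B
```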
Abstract:
Herein, a technology that facilitates the optimization of vision-language (VL) based classifiers with text embeddings is discussed. The technology includes tuning the VL-based classifier employing a pre-trained image encoder of a vision-language model (VLM) for image embedding of pre-classified images and a pre-trained textual encoder of the VLM for textual embedding of a set of differing textual sentences. The technology further includes determining an optimized set of differing textual sentences of a superset of textual sentences. The optimized set of differing textual sentences has a minimal classification loss of the VL-based classifier when classifying the pre-classified images.
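The selection step can be sketched as a search over candidate prompt sets, scoring each by classification loss; here deterministic random projections stand in for the frozen VLM encoders, and the exhaustive search is an assumption suitable only for small supersets:

```python
import zlib
import numpy as np
from itertools import combinations

def _embed(key, dim=64):
    """Deterministic stand-in for the frozen image/text encoders."""
    return np.random.default_rng(zlib.crc32(key.encode())).standard_normal(dim)

def classification_loss(img_embs, labels, txt_embs):
    """Cross-entropy over similarity logits between images and prompts."""
    logits = img_embs @ txt_embs.T
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-9).mean()

def best_prompt_set(superset, k, images, labels):
    """Exhaustively pick the k-prompt subset with minimal classification loss."""
    img_embs = np.stack([_embed(i) for i in images])
    best, best_loss = None, np.inf
    for cand in combinations(superset, k):
        txt_embs = np.stack([_embed(s) for s in cand])
        loss = classification_loss(img_embs, labels, txt_embs)
        if loss < best_loss:
            best, best_loss = cand, loss
    return best

prompts = ["a photo of a car", "an image of a car",
           "a photo of a truck", "a picture of a truck"]
print(best_prompt_set(prompts, 2, images=["img0", "img1"], labels=[0, 1]))
```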
Abstract:
A method for open-vocabulary query-based dense retrieval is provided. The method includes monitoring camera data including an image related to an object and referencing a set of queries, each of the queries describing a candidate object to be updated by a remote server device. An encoder of an open-vocabulary pre-trained vision-language model system is utilized to initialize a predefined embedding for each query, and a classifier is initialized by mapping the predefined embeddings to weights of the classifier. The method further includes applying a dense open-vocabulary image encoder on the camera data to create a plurality of dense embeddings including a set of spatially-arranged embeddings for the image, each including a matrix of embedding vectors. The classifier is then applied to the plurality of embedding vectors to classify the object within the operating environment as an identified object. The method further includes publishing the identified object.
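A compact sketch of the query-to-classifier mapping and the dense classification step, assuming query and image embeddings share one d-dimensional space; the stand-in encoder, grid size, and confidence threshold are all illustrative:

```python
import numpy as np

D = 32
def embed_query(idx):
    """Stand-in for the pre-trained text encoder (deterministic per query)."""
    return np.random.default_rng(idx).standard_normal(D)

queries = ["traffic cone", "stroller", "ladder on road"]
W = np.stack([embed_query(i) for i, _ in enumerate(queries)])  # weights = embeddings

# Stand-in for the dense open-vocabulary image encoder: an H x W grid of
# spatially-arranged embedding vectors for one camera image.
dense = np.random.default_rng(99).standard_normal((8, 8, D))

scores = dense @ W.T                         # per-location, per-query logits
labels = scores.argmax(axis=-1)              # identified object per location
confident = scores.max(axis=-1) > 5.0        # publish only strong detections
for y, x in zip(*np.nonzero(confident)):
    print(f"({y},{x}): {queries[labels[y, x]]}")   # "publishing" the detection
```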
Abstract:
A detection system for a host vehicle includes a camera, global positioning system (“GPS”) receiver, compass, and electronic control unit (“ECU”). The camera collects polarimetric image data forming an imaged drive scene inclusive of a road surface illuminated by the Sun. The GPS receiver outputs a present location of the vehicle as a date-and-time-stamped coordinate set. The compass provides a directional heading of the vehicle. The ECU determines the Sun's location relative to the vehicle and camera using an input data set, including the present location and directional heading. The ECU also detects a specular reflecting area or areas on the road surface using the polarimetric image data and Sun's location, with the specular reflecting area(s) forming an output data set. The ECU then executes a control action aboard the host vehicle in response to the output data set.
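Assuming the camera yields per-pixel Stokes parameters S0, S1, S2 and that the Sun's relative geometry has already been computed from the GPS and compass inputs, the detection step might look like the following sketch; the degree-of-linear-polarization threshold and the elevation gate are invented for illustration:

```python
import numpy as np

def degree_of_linear_polarization(s0, s1, s2):
    """Standard DoLP from the first three Stokes parameters."""
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)

def specular_mask(s0, s1, s2, sun_elevation_deg, dolp_threshold=0.4):
    """Flag strongly polarized road pixels; additionally require a low Sun,
    where glare-like forward reflection is plausible (illustrative gate)."""
    dolp = degree_of_linear_polarization(s0, s1, s2)
    geometry_ok = sun_elevation_deg < 40.0
    return (dolp > dolp_threshold) & geometry_ok

# Toy Stokes images for a 4x4 patch of road surface.
s0 = np.random.default_rng(2).uniform(0.5, 1.0, (4, 4))
s1 = np.random.default_rng(3).uniform(-0.5, 0.5, (4, 4))
s2 = np.random.default_rng(4).uniform(-0.5, 0.5, (4, 4))
print(specular_mask(s0, s1, s2, sun_elevation_deg=25.0))
```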
Abstract:
A system includes a transmitter of a radar system to transmit transmitted signals, and a receiver of the radar system to receive received signals based on reflection of one or more of the transmitted signals by one or more objects. The system also includes a processor to train a neural network with reference data obtained by simulating a higher resolution radar system than the radar system to obtain a trained neural network. The trained neural network enhances detection of the one or more objects based on obtaining and processing the received signals in a vehicle. One or more operations of the vehicle are controlled based on the detection of the one or more objects.
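A toy version of the training setup, with a placeholder simulator that produces paired low- and high-resolution radar responses; the network size, tensor shapes, and loss are assumptions, not the patented design:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 256))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def simulate_pair():
    """Stand-in simulator: a high-resolution response and its low-resolution
    counterpart obtained by averaging groups of four bins."""
    hi = torch.randn(32, 256)
    lo = hi.reshape(32, 64, 4).mean(dim=-1)   # 4x coarser resolution
    return lo, hi

for step in range(200):                        # toy training loop
    lo, hi = simulate_pair()
    opt.zero_grad()
    loss = loss_fn(net(lo), hi)                # learn low -> high mapping
    loss.backward()
    opt.step()
# At inference, net(received_low_res) yields an enhanced, higher-resolution
# response from which object detection proceeds.
```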
Abstract:
A method, system, and vehicle that repetitively correct angle offsets in a synthetic aperture radar (SAR) image while the vehicle is in motion by utilizing a radar system and a camera to determine an accurate velocity of a measured object, matching angles of the object in the SAR image with angles of the object in the camera image and thereby reducing angle offsets of objects in the SAR image. The method includes obtaining an SAR image of another vehicle via a radar unit of the vehicle, obtaining a camera image of the other vehicle via a camera unit of the vehicle, determining an association between at least one object in the SAR image and a corresponding at least one object in the camera image, correcting a velocity estimation of the vehicle based on the determined association, and adjusting the SAR image based on the corrected velocity estimation.
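The velocity correction can be pictured as inverting a linear angle-versus-velocity model fit to the SAR-versus-camera angle residuals of the matched objects; the sensitivity constant below is a placeholder for the real radar model:

```python
import numpy as np

def corrected_velocity(v_est, sar_angles, cam_angles, dtheta_dv=0.05):
    """Velocity update from the mean SAR-vs-camera angle residual,
    assuming angle offset ~ dtheta_dv * velocity error (radians per m/s)."""
    residual = np.asarray(cam_angles) - np.asarray(sar_angles)
    dv = residual.mean() / dtheta_dv          # invert the linear model
    return v_est + dv

def adjust_sar_angles(sar_angles, v_old, v_new, dtheta_dv=0.05):
    """Re-map object angles in the SAR image to the corrected velocity."""
    return np.asarray(sar_angles) + dtheta_dv * (v_new - v_old)

v = corrected_velocity(20.0, sar_angles=[0.10, 0.12], cam_angles=[0.15, 0.16])
print(v, adjust_sar_angles([0.10, 0.12], 20.0, v))
```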
Abstract:
The present application relates to a method and apparatus for generating a three-dimensional point cloud using a polarimetric camera in a vehicle equipped with a drive assistance system, the vehicle including a camera configured to capture a color image of a field of view and polarimetric data of the field of view, a processor configured to perform a neural network function in response to the color image and the polarimetric data to generate a depth map of the field of view, and a vehicle controller configured to perform an advanced driving assistance function and to control a vehicle movement in response to the depth map.
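A sketch of the depth-then-point-cloud flow, assuming the network consumes stacked color and polarimetric channels and that pinhole intrinsics are known; the small convolutional net stands in for the trained model:

```python
import torch
import torch.nn as nn

depth_net = nn.Sequential(                     # stand-in for the trained network
    nn.Conv2d(3 + 4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())  # Softplus keeps depth > 0

def to_point_cloud(depth, fx=500.0, fy=500.0, cx=32.0, cy=24.0):
    """Back-project a depth map to 3D points with a pinhole camera model."""
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return torch.stack([x, y, z], dim=-1).reshape(-1, 3)

color = torch.rand(1, 3, 48, 64)               # color image of the field of view
polar = torch.rand(1, 4, 48, 64)               # e.g., four polarizer-angle channels
depth = depth_net(torch.cat([color, polar], dim=1))[0, 0]
cloud = to_point_cloud(depth)
print(cloud.shape)                             # (48*64, 3) points
```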
Abstract:
A vehicle, system for operating a vehicle, and method of navigating a vehicle. The system includes a sensor and a multi-layer convolutional neural network. The sensor generates an image indicative of a road scene of the vehicle. The multi-layer convolutional neural network generates a plurality of feature maps from the image via a first processing pathway, projects at least one of the plurality of feature maps onto a defined plane relative to a defined coordinate system of the road scene to obtain at least one projected feature map, applies a convolution to the at least one projected feature map in a second processing pathway to obtain a final feature map, and determines lane information from the final feature map. A control system adjusts operation of the vehicle using the lane information.
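The two-pathway structure can be sketched as below, with an identity-like grid_sample standing in for the calibrated projection onto the ground plane; the layer sizes and lane head are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

first_pathway  = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
second_pathway = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 1, 1))   # lane-evidence map

def project_to_plane(feat, out_hw=(32, 32)):
    """Placeholder ground-plane projection: grid_sample with an identity-like
    grid; a calibrated homography would supply the real sampling grid."""
    n, c, h, w = feat.shape
    ys = torch.linspace(-1, 1, out_hw[0])
    xs = torch.linspace(-1, 1, out_hw[1])
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1).expand(n, -1, -1, -1)
    return F.grid_sample(feat, grid, align_corners=False)

img = torch.rand(1, 3, 64, 128)                # road-scene image
feat = first_pathway(img)                      # image-space feature maps
bev = project_to_plane(feat)                   # projected feature map
lanes = second_pathway(bev)                    # final feature map
print(lanes.shape)                             # (1, 1, 32, 32)
```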
Abstract:
A user-centric driving-support system for implementation at a vehicle of transportation. The system in various embodiments includes one or more vehicle sensors, such as a camera, a RADAR, and a LiDAR, and a hardware-based processing unit. The system further includes a non-transitory computer-readable storage device including an activity unit and an output-structuring unit. The activity unit, when executed by the hardware-based processing unit, determines, based on contextual input information, at least one of an alert-assessment output and a scene-awareness output, wherein the contextual input information includes output of the vehicle sensors. The output-structuring unit, when executed by the hardware-based processing unit, determines an action to be performed at the vehicle based on at least one of the alert-assessment output and the scene-awareness output determined by the activity unit. The technology in various implementations includes the storage device alone, as well as user-centric driving-support processes performed using the device and other vehicle components.
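A structural sketch of the two units, assuming contextual input arrives as a dictionary of sensor outputs; the thresholds, field names, and actions are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class ActivityOutput:
    alert_level: float          # alert-assessment output
    scene_summary: str          # scene-awareness output

def activity_unit(context):
    """Derive both outputs from contextual input (sensor readings)."""
    gap = context["radar_gap_m"]
    alert = max(0.0, 1.0 - gap / 50.0)    # closer lead vehicle -> higher alert
    scene = "dense traffic" if context["camera_objects"] > 8 else "light traffic"
    return ActivityOutput(alert, scene)

def output_structuring_unit(out):
    """Map the activity outputs to an action performed at the vehicle."""
    if out.alert_level > 0.7:
        return "issue forward-collision alert"
    if out.scene_summary == "dense traffic":
        return "suggest adaptive-cruise activation"
    return "no action"

ctx = {"radar_gap_m": 12.0, "camera_objects": 10}   # example contextual input
print(output_structuring_unit(activity_unit(ctx)))
```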