Abstract:
Systems and methods for depth estimation of images from a mono-camera using radar data by: receiving, by a processing unit, a plurality of input 2-D images from the mono-camera; generating, by the processing unit, an estimated depth image by supervised training of an image estimation model; generating, by the processing unit, a synthetic image from a first input image and a second input image from the mono-camera by applying an estimated transform pose; comparing, by the processing unit, an estimated three-dimensional (3-D) point cloud to radar data by applying another estimated transform pose to the 3-D point cloud, wherein the 3-D point cloud is estimated from a depth image by supervised training of the image estimation model against radar distance and radar Doppler measurements; and correcting a depth estimation of the estimated depth image by losses derived from differences: between the synthetic image and the original images; between the estimated depth image and a measured radar distance; and between estimated Doppler information and measured radar Doppler information.
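A minimal sketch, assuming hypothetical NumPy arrays for the synthesized view, the depth and Doppler predictions sampled at the radar return locations, and the radar measurements, of how the three correction losses named above could be combined; the weights and the use of an L1 difference are illustrative choices, not the patented formulation.

import numpy as np

def depth_correction_loss(synth_img, orig_img,
                          est_depth_at_radar, radar_range,
                          est_doppler, radar_doppler,
                          w_photo=1.0, w_range=1.0, w_doppler=1.0):
    """Combine the three loss terms named in the abstract (illustrative only)."""
    photometric = np.abs(synth_img - orig_img).mean()                # synthetic vs. original images
    range_loss = np.abs(est_depth_at_radar - radar_range).mean()     # estimated depth vs. radar distance
    doppler_loss = np.abs(est_doppler - radar_doppler).mean()        # estimated vs. measured Doppler
    return w_photo * photometric + w_range * range_loss + w_doppler * doppler_loss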
Abstract:
A method and system including a central processing unit (CPU), an accelerator, a communication bus and a system memory device for dynamically processing an image file are described. The accelerator includes a local memory buffer, a data transfer scheduler, and a plurality of processing engines. The data transfer scheduler is arranged to manage data transfer between the system memory device and the local memory buffer, wherein the data transfer includes data associated with the image file. The local memory buffer is configured as a circular line buffer, and the data transfer scheduler includes a ping-pong buffer for transferring output data from one of the processing engines to the system memory device. The local memory buffer is configured to execute cross-layer usage of data associated with the image file.
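The circular line buffer and ping-pong transfer can be illustrated with a short sketch; the class names and the list-based "system memory" below are hypothetical stand-ins, not the accelerator's actual interfaces.

from collections import deque

class CircularLineBuffer:
    """Illustrative circular buffer holding the most recent image lines so that
    successive processing layers can reuse (cross-layer) the same local data."""
    def __init__(self, num_lines, line_width):
        self.lines = deque(maxlen=num_lines)   # the oldest line is evicted automatically
        self.line_width = line_width

    def push_line(self, line):
        assert len(line) == self.line_width
        self.lines.append(line)

    def window(self):
        return list(self.lines)                # lines currently available to the processing engines

class PingPongOutput:
    """Two alternating buffers: one is filled by a processing engine while the
    other is drained to system memory, emulating the scheduler's ping-pong transfer."""
    def __init__(self):
        self.buffers = [[], []]
        self.fill_idx = 0

    def write(self, data):
        self.buffers[self.fill_idx].append(data)

    def swap_and_drain(self, system_memory):
        drain_idx = self.fill_idx
        self.fill_idx ^= 1                     # the engine now fills the other buffer
        system_memory.extend(self.buffers[drain_idx])
        self.buffers[drain_idx] = []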
Abstract:
A method of on-line diagnostic and prognostic assessment of an autonomous vehicle perception system includes detecting, via a sensor, a physical parameter of an object external to the vehicle. The method also includes communicating data representing the physical parameter via the sensor to an electronic controller. The method additionally includes comparing the data from the sensor to data representing the physical parameter generated by a geo-source model. The method also includes comparing results generated by a perception software during analysis of the data from the sensor to labels representing the physical parameter from the geo-source model. Furthermore, the method includes generating a prognostic assessment of a ground truth for the physical parameter of the object using the comparisons of the sensor data to the geo-source model data and of the software results to the geo-source model labels. A system for on-line assessment of the vehicle perception system is also disclosed.
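A simplified sketch of the two comparisons and the resulting assessment; the tolerance value and the verdict strings are illustrative placeholders rather than the method's actual diagnostic outputs.

def assess_perception(sensor_value, geo_model_value,
                      perception_label, geo_model_label,
                      tolerance=0.5):
    """Compare raw sensor data against the geo-source model and the perception
    software's label against the geo-source label, then combine both checks
    into a simple ground-truth assessment (placeholder verdicts)."""
    sensor_consistent = abs(sensor_value - geo_model_value) <= tolerance
    label_consistent = (perception_label == geo_model_label)
    if sensor_consistent and label_consistent:
        return "ground truth corroborated"
    if sensor_consistent:
        return "suspect perception software"
    if label_consistent:
        return "suspect sensor"
    return "suspect sensor and perception software"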
Abstract:
Described herein are systems, methods, and computer-readable media for generating and training a high precision low bit convolutional neural network (CNN). A filter of each convolutional layer of the CNN is approximated using one or more binary filters and a real-valued activation function is approximated using a linear combination of binary activations. More specifically, a non-1×1 filter (e.g., a k×k filter, where k>1) is approximated using a scaled binary filter and a 1×1 filter is approximated using a linear combination of binary filters. Thus, a different strategy is employed for approximating different weights (e.g., 1×1 filter vs. a non-1×1 filter). In this manner, convolutions performed in convolutional layer(s) of the high precision low bit CNN become binary convolutions that yield a lower computational cost while still maintaining a high performance (e.g., a high accuracy).
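One common way to realize the two approximation strategies (a single scaled binary filter for k×k weights, a greedy residual expansion into several binary filters for 1×1 weights) is sketched below with NumPy; the scaling rule and the number of binary bases are assumptions of this example, not necessarily the patent's exact construction.

import numpy as np

def approximate_kxk_filter(weights):
    """Approximate a non-1x1 filter W with a single scaled binary filter,
    W ~ alpha * sign(W); using the mean absolute weight for alpha is a common
    (XNOR-Net-style) choice."""
    alpha = np.mean(np.abs(weights))
    return alpha, np.sign(weights)

def approximate_1x1_filter(weights, num_bases=3):
    """Approximate a 1x1 filter with a linear combination of binary filters,
    W ~ sum_i beta_i * B_i, built greedily on the residual (illustrative only)."""
    residual = np.array(weights, dtype=float)
    betas, bases = [], []
    for _ in range(num_bases):
        basis = np.sign(residual)
        basis[basis == 0] = 1.0
        beta = np.mean(np.abs(residual))   # least-squares optimal scale for this basis
        betas.append(beta)
        bases.append(basis)
        residual = residual - beta * basis
    return betas, bases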
Abstract:
An adaptive parallel image processing system in a vehicle is provided. The system may include, but is not limited to, a plurality of processors and a resource management system including, but not limited to, an execution monitor and a service scheduler. The execution monitor is configured to calculate an average utilization of each of the plurality of processors over a moving window. The service scheduler controls a request queue for each of the plurality of processors and schedules image processing tasks in the respective request queue based upon the average utilization of each of the plurality of processors, the capabilities of each of the plurality of processors, and a priority associated with each image processing task. An autonomous vehicle control system is configured to generate instructions to control at least one vehicle system based upon the processed image processing tasks.
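A compact sketch of the scheduling rule, assuming a dictionary of processors with capability sets and moving-window utilization samples; treating lower numeric values as higher priority is an assumption of this example.

import heapq
import itertools

class ServiceScheduler:
    """Illustrative scheduler: tasks are queued per processor by priority and a
    new task is routed to the least-utilized processor capable of handling it."""
    def __init__(self, processors):
        # processors: name -> {"capabilities": set of strings,
        #                      "utilization": list of samples over the moving window}
        self.processors = processors
        self.queues = {name: [] for name in processors}  # heaps of (priority, tiebreak, task)
        self._tiebreak = itertools.count()

    def avg_utilization(self, name):
        samples = self.processors[name]["utilization"]
        return sum(samples) / len(samples) if samples else 0.0

    def schedule(self, task, required_capability, priority):
        capable = [name for name, proc in self.processors.items()
                   if required_capability in proc["capabilities"]]
        if not capable:
            raise ValueError("no processor supports the requested capability")
        target = min(capable, key=self.avg_utilization)   # least-loaded capable processor
        heapq.heappush(self.queues[target], (priority, next(self._tiebreak), task))
        return target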
Abstract:
Technical solutions are described for controlling an automated driving system of a vehicle. An example method includes computing a complexity metric of an upcoming region along a route that the vehicle is traveling. The method further includes, in response to the complexity metric being below a predetermined low-complexity threshold, determining a trajectory for the vehicle to travel in the upcoming region using a computing system of the vehicle. Further, the method includes, in response to the complexity metric being above a predetermined high-complexity threshold, instructing an external computing system to determine the trajectory for the vehicle to travel in the upcoming region. If the trajectory cannot be determined by the external computing system, a minimal risk condition maneuver of the vehicle is performed.
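The threshold logic can be sketched as a small routing function; the callables and the behavior between the two thresholds (defaulting to onboard planning) are assumptions of this example, not part of the described method.

def plan_trajectory(complexity, low_threshold, high_threshold,
                    plan_onboard, plan_external, minimal_risk_maneuver):
    """Route the planning decision by the complexity metric; the three callables
    are hypothetical placeholders, and plan_external is assumed to return None
    when the external system cannot determine a trajectory."""
    if complexity < low_threshold:
        return plan_onboard()                   # low complexity: plan on the vehicle
    if complexity > high_threshold:
        trajectory = plan_external()            # high complexity: defer to external system
        if trajectory is None:
            return minimal_risk_maneuver()      # fall back to a minimal risk condition maneuver
        return trajectory
    return plan_onboard()                       # between thresholds: assumed onboard planning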
Abstract:
A vehicle, system, and method for driving an autonomous vehicle. The vehicle includes a camera for obtaining an image of a surrounding region of the vehicle, an actuation device for controlling a parameter of motion of the vehicle, and a processor. The processor selects a context region within the image, the context region including a detection region therein. The processor further estimates a confidence level indicative of the presence of at least a portion of a target object in the detection region and a bounding box associated with the target object, determines a proposal region from the bounding box when the confidence level is greater than a selected threshold, determines a parameter of the target object within the proposal region, and controls the actuation device to alter a parameter of motion of the vehicle based on the parameter of the target object.
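A sketch of the detection flow, with every processing step passed in as a hypothetical callable since the abstract does not define them; only the gating on the confidence threshold is taken from the text.

def detect_and_measure(image, select_context, crop_detection, run_detector,
                       expand_to_proposal, estimate_parameter,
                       confidence_threshold=0.5):
    """Sketch of the detection flow; all callables are placeholders supplied by the caller."""
    context_region = select_context(image)               # context region within the image
    detection_region = crop_detection(context_region)    # detection region inside the context region
    confidence, bounding_box = run_detector(detection_region)
    if confidence <= confidence_threshold:
        return None                                      # no target confident enough to act on
    proposal_region = expand_to_proposal(bounding_box)   # proposal region from the bounding box
    return estimate_parameter(proposal_region)           # target parameter used to adjust vehicle motion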
Abstract:
A system and method are provided for detecting remote vehicles relative to a host vehicle using wheel detection. The system and method include tracking wheel candidates based on wheel detection data received from a plurality of object detection devices, comparing select parameters relating to the wheel detection data for each of the tracked wheel candidates, and identifying a remote vehicle by determining if a threshold correlation exists between any of the tracked wheel candidates based on the comparison of select parameters.
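A minimal pairing sketch, assuming a hypothetical `correlation` scoring callable over tracked wheel candidates and an illustrative threshold value.

from itertools import combinations

def find_remote_vehicles(wheel_candidates, correlation, threshold=0.8):
    """Pair tracked wheel candidates whose parameters correlate strongly enough
    (e.g., consistent spacing, speed, and heading); `correlation` is a
    hypothetical scoring callable returning a value in [0, 1]."""
    vehicles = []
    for first, second in combinations(wheel_candidates, 2):
        if correlation(first, second) >= threshold:
            vehicles.append((first, second))   # the correlated pair is reported as one remote vehicle
    return vehicles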
Abstract:
A method and system are disclosed for tracking objects which are crossing behind a host vehicle. Target data from a vision system and two radar sensors are provided to an object detection fusion system. Salient points on the target object are identified and tracked using the vision system data. The salient vision points are associated with corresponding radar points, where the radar points provide Doppler radial velocity data. A fusion calculation is performed on the salient vision points and the radar points, yielding an accurate estimate of the velocity of the target object, including its lateral component which is difficult to obtain using radar points only or traditional vision system methods. The position and velocity of the target object are used to trigger warnings or automatic braking in a Rear Cross Traffic Avoidance (RCTA) system.
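A simplified least-squares fusion sketch, not necessarily the disclosed calculation: each radar point constrains only the radial velocity component via its Doppler measurement, while each tracked salient vision point contributes a full 2-D velocity observation, so the stacked system recovers the lateral component as well.

import numpy as np

def fuse_target_velocity(radar_dirs, radar_dopplers, vision_velocities):
    """Estimate the target's 2-D velocity from radar radial constraints and
    vision-point velocity observations (illustrative formulation)."""
    rows, rhs = [], []
    for (ux, uy), doppler in zip(radar_dirs, radar_dopplers):
        rows.append([ux, uy]); rhs.append(doppler)   # radial constraint: dir . v = Doppler speed
    for vx, vy in vision_velocities:
        rows.append([1.0, 0.0]); rhs.append(vx)      # vision constraint on the x component
        rows.append([0.0, 1.0]); rhs.append(vy)      # vision constraint on the y component
    velocity, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return velocity                                  # estimated (vx, vy) of the target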
Abstract:
A system for responding to adversarial behavior toward an autonomous vehicle includes a pedestrian detection system in communication with a vehicle controller and an adversarial intent algorithm adapted to determine a risk level of adversarial behavior from at least one pedestrian within proximity of the autonomous vehicle. The vehicle controller is adapted to provide proactive and reactive actions to be performed by the autonomous vehicle in response to the adversarial behavior based on the risk level of the adversarial behavior.
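A sketch of the mapping from an assessed risk level to proactive and reactive actions; the specific levels and action lists are illustrative placeholders, not the system's defined responses.

def respond_to_adversarial_behavior(risk_level):
    """Map a risk level to a list of proactive/reactive actions (placeholders)."""
    if risk_level == "low":
        return ["continue route", "increase monitoring of the pedestrian"]        # proactive
    if risk_level == "medium":
        return ["reduce speed", "increase standoff distance", "alert remote operator"]
    if risk_level == "high":
        return ["stop or reroute", "secure the cabin", "notify authorities"]      # reactive
    return ["continue nominal operation"]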