Abstract:
An advanced driver assistance system (ADAS) and method for a vehicle utilize a light detection and ranging (LIDAR) system configured to emit laser light pulses and capture reflected laser light pulses collectively forming three-dimensional (3D) LIDAR point cloud data and a controller configured to receive the 3D LIDAR point cloud data, convert the 3D LIDAR point cloud data to a two-dimensional (2D) birdview projection, detect a set of lines in the 2D birdview projection, filter the detected set of lines to remove lines having features that are not indicative of traffic signs to obtain a filtered set of lines, and detect one or more traffic signs using the filtered set of lines.
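The line-detection pipeline described above can be sketched as follows: flatten the point cloud onto a ground-plane grid, run a Hough transform on the resulting image, and discard lines whose extent is implausible for a traffic sign. This is a minimal illustration in Python using OpenCV and NumPy; the grid resolution, Hough parameters, and the 0.5-3.0 m length gate are assumptions for illustration, not values from the abstract.

    import numpy as np
    import cv2

    def birdview_projection(points, res=0.1, x_range=(0.0, 50.0), y_range=(-25.0, 25.0)):
        # Flatten 3D LIDAR points of shape (N, 3) into a 2D occupancy image.
        w = int((x_range[1] - x_range[0]) / res)
        h = int((y_range[1] - y_range[0]) / res)
        img = np.zeros((h, w), dtype=np.uint8)
        xs = ((points[:, 0] - x_range[0]) / res).astype(int)
        ys = ((points[:, 1] - y_range[0]) / res).astype(int)
        keep = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
        img[ys[keep], xs[keep]] = 255
        return img

    def detect_sign_candidate_lines(img, res=0.1):
        # Hough line detection followed by a length filter; lines that are too
        # short or too long are treated as not indicative of traffic signs.
        lines = cv2.HoughLinesP(img, 1, np.pi / 180, threshold=20,
                                minLineLength=int(0.5 / res), maxLineGap=2)
        if lines is None:
            return []
        out = []
        for x1, y1, x2, y2 in lines[:, 0]:
            length_m = np.hypot(x2 - x1, y2 - y1) * res
            if 0.5 <= length_m <= 3.0:  # plausible sign extent in meters (assumption)
                out.append((x1, y1, x2, y2))
        return out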
Abstract:
An embodiment provides a method for compression of a real-time surveillance signal. The method includes receiving a signal from a monitoring device and analyzing the signal to compute its spectral content. The method also includes computing the information content of the signal and determining a count of the number of coefficients to be used to monitor the signal. The method then deploys a strategy for computing a plurality of coefficients based on the spectral content of the signal and the count of the number of coefficients to be used for monitoring the signal. The method further includes monitoring the signal and resetting the system in the case of above-threshold changes in a selected portion of the plurality of coefficients.
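A hedged sketch of this coefficient-based monitoring loop is shown below in Python; the FFT basis, the 95% energy criterion for choosing the coefficient count, and the reset threshold are illustrative assumptions rather than details from the abstract.

    import numpy as np

    def choose_coefficient_count(signal, energy_fraction=0.95):
        # Pick the smallest number of spectral coefficients that captures the
        # requested fraction of the signal's energy (assumed selection rule).
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        order = np.argsort(spectrum)[::-1]
        cumulative = np.cumsum(spectrum[order]) / spectrum.sum()
        k = int(np.searchsorted(cumulative, energy_fraction)) + 1
        return k, order[:k]

    def monitor(frames, watched_bins, baseline, threshold=0.5):
        # Yield True (keep monitoring) or False (reset) per incoming frame,
        # based on relative change in the selected (watched) coefficients.
        for frame in frames:
            coeffs = np.fft.rfft(frame)[watched_bins]
            change = np.abs(coeffs - baseline) / (np.abs(baseline) + 1e-9)
            if np.any(change > threshold):
                yield False  # above-threshold change: reset the system
                baseline = coeffs
            else:
                yield True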
Abstract:
Vehicle perception techniques include obtaining a training dataset represented by N training histograms, in an image feature space, corresponding to N training images, K-means clustering the N training histograms to determine K clusters with K respective cluster centers, wherein K and N are integers greater than or equal to one and K is less than or equal to N, comparing the N training histograms to their respective K cluster centers to determine maximum in-class distances for each of the K clusters, applying a deep neural network (DNN) to input images of a set of inputs to output detected/classified objects with respective confidence scores, and obtaining adjusted confidence scores by adjusting the confidence scores output by the DNN based on distance ratios of (i) minimal distances of input histograms representing the input images to the K cluster centers and (ii) the respective maximum in-class distances.
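The clustering and confidence-adjustment steps can be illustrated with scikit-learn and NumPy as below; the Euclidean distance metric and the simple divide-by-ratio adjustment are assumptions for illustration.

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_clusters(train_hists, k):
        # Cluster N training histograms into K clusters, then record the
        # maximum in-class distance for each cluster.
        km = KMeans(n_clusters=k, n_init=10).fit(train_hists)
        d = np.linalg.norm(train_hists - km.cluster_centers_[km.labels_], axis=1)
        max_in_class = np.array([d[km.labels_ == c].max() for c in range(k)])
        return km, max_in_class

    def adjust_confidence(conf, input_hist, km, max_in_class):
        # Find the nearest cluster center and scale the DNN confidence by the
        # ratio of that distance to the cluster's maximum in-class distance.
        dists = np.linalg.norm(km.cluster_centers_ - input_hist, axis=1)
        c = int(np.argmin(dists))
        ratio = dists[c] / (max_in_class[c] + 1e-9)
        # Inputs far outside the training distribution get down-weighted.
        return conf / max(1.0, ratio)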
Abstract:
An autonomous driving technique comprises determining an image quality metric for each image frame of a series of image frames of a scene outside of a vehicle captured by a camera system and determining an image quality threshold based on the image quality metrics for the series of image frames. The technique then determines whether the image quality metric for a current image frame satisfies the image quality threshold. When the image quality metric for the current image frame satisfies the image quality threshold, object detection is performed by at least utilizing a first deep neural network (DNN) with the current image frame. When the image quality metric for the current image frame fails to satisfy the image quality threshold, object detection is performed by utilizing a second, different DNN with the information captured by another sensor system and without utilizing the first DNN or the current image frame.
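One way to realize this quality gating is sketched below; the Laplacian-variance sharpness metric, the rolling 30-frame window, and the 25th-percentile threshold are illustrative assumptions, as are the camera_dnn and fallback_dnn callables.

    import collections
    import numpy as np
    import cv2

    class QualityGatedDetector:
        # Route frames to a camera DNN or to a second DNN fed by another
        # sensor, based on a rolling image-quality threshold.
        def __init__(self, camera_dnn, fallback_dnn, window=30, pct=25):
            self.camera_dnn = camera_dnn
            self.fallback_dnn = fallback_dnn
            self.history = collections.deque(maxlen=window)
            self.pct = pct

        def metric(self, frame):
            # Variance of the Laplacian as a simple sharpness/quality proxy.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            return cv2.Laplacian(gray, cv2.CV_64F).var()

        def detect(self, frame, other_sensor_data):
            q = self.metric(frame)
            self.history.append(q)
            threshold = np.percentile(self.history, self.pct)
            if q >= threshold:
                return self.camera_dnn(frame)
            # Quality threshold not satisfied: bypass the camera path entirely.
            return self.fallback_dnn(other_sensor_data)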
Abstract:
An advanced driver assistance system (ADAS) and method for a vehicle utilize a light detection and ranging (LIDAR) system configured to emit laser light pulses and capture reflected laser light pulses collectively forming three-dimensional (3D) LIDAR point cloud data and a controller configured to receive the 3D LIDAR point cloud data, convert the 3D LIDAR point cloud data to a two-dimensional (2D) birdview projection, obtain a template image for object detection, the template image being representative of a specific object, blur the 2D birdview projection and the template image to obtain a blurred 2D birdview projection and a blurred template image, and detect the specific object by matching a portion of the blurred 2D birdview projection to the blurred template image.
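The blur-then-match step maps naturally onto normalized cross-correlation template matching in OpenCV; the Gaussian sigma and the 0.6 score cutoff below are assumptions for illustration.

    import cv2

    def match_blurred_template(birdview, template, sigma=2.0, min_score=0.6):
        # Blur both the projection and the template to tolerate point-cloud
        # sparsity, then run normalized cross-correlation matching.
        b = cv2.GaussianBlur(birdview, (0, 0), sigma)
        t = cv2.GaussianBlur(template, (0, 0), sigma)
        scores = cv2.matchTemplate(b, t, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(scores)
        # Return the top-left corner of the best match, or None if too weak.
        return (loc, best) if best >= min_score else (None, best)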
Abstract:
An advanced driver assistance system (ADAS) and method for a vehicle utilize a light detection and ranging (LIDAR) system configured to emit laser light pulses and capture reflected laser light pulses collectively forming three-dimensional (3D) LIDAR point cloud data and a controller configured to receive the 3D LIDAR point cloud data, divide the 3D LIDAR point cloud data into a plurality of cells corresponding to distinct regions surrounding the vehicle, generate a histogram comprising a calculated height difference between a maximum height and a minimum height in the 3D LIDAR point cloud data for each cell of the plurality of cells, and using the histogram, perform at least one of adaptive ground removal from the 3D LIDAR point cloud data and traffic level recognition.
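A minimal sketch of the per-cell height-spread histogram and an adaptive ground threshold derived from it follows; the 1 m cell size, 50 m extent, and valley-after-peak heuristic are illustrative assumptions.

    import numpy as np

    def cell_height_spread(points, cell=1.0, extent=50.0):
        # Bin points of shape (N, 3) into a square grid around the vehicle and
        # compute the max-min height difference per occupied cell.
        n = int(2 * extent / cell)
        ix = ((points[:, 0] + extent) / cell).astype(int).clip(0, n - 1)
        iy = ((points[:, 1] + extent) / cell).astype(int).clip(0, n - 1)
        key = ix * n + iy
        spread = {}
        for k, z in zip(key, points[:, 2]):
            lo, hi = spread.get(k, (z, z))
            spread[k] = (min(lo, z), max(hi, z))
        return np.array([hi - lo for lo, hi in spread.values()])

    def ground_threshold(spreads, bins=50):
        # Pick an adaptive ground-removal threshold at the first valley after
        # the dominant low-spread (ground) peak of the histogram.
        hist, edges = np.histogram(spreads, bins=bins)
        peak = int(np.argmax(hist))
        for i in range(peak + 1, bins - 1):
            if hist[i] < hist[i + 1]:  # first local minimum after the peak
                return edges[i]
        return edges[peak + 1]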
Abstract:
Vehicle perception techniques include applying a 3D DNN to a set of inputs to generate 3D detection results including a set of 3D objects, transforming the set of 3D objects onto a set of images as a first set of 2D bounding boxes, applying a 2D DNN to the set of images to generate 2D detection results including a second set of 2D bounding boxes, calculating mean average precision (mAP) values based on a comparison between the first and second sets of 2D bounding boxes, identifying a set of corner cases based on the calculated mAP values, and re-training or updating the 3D DNN using the identified set of corner cases, wherein the performance of the 3D DNN is thereby increased without the expense of additional manually and/or automatically annotated training datasets.
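The corner-case mining can be illustrated with a simplified per-frame agreement score standing in for full mAP (which would require confidence-ranked precision-recall integration); the IoU and agreement thresholds are assumptions.

    def iou(a, b):
        # Intersection-over-union of axis-aligned boxes given as (x1, y1, x2, y2).
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def frame_agreement(proj_boxes, det_boxes, iou_thresh=0.5):
        # Fraction of projected 3D boxes matched by a 2D detection; a proxy
        # for the per-frame mAP comparison described in the abstract.
        if not proj_boxes:
            return 1.0
        hits = sum(any(iou(p, d) >= iou_thresh for d in det_boxes)
                   for p in proj_boxes)
        return hits / len(proj_boxes)

    def mine_corner_cases(frames, min_agreement=0.5):
        # Frames where the 2D DNN disagrees with the projected 3D results are
        # candidate corner cases for re-training the 3D DNN.
        return [f for f in frames
                if frame_agreement(f["projected_3d"], f["detected_2d"]) < min_agreement]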
Abstract:
Vehicle center of gravity (CoG) height and mass estimation techniques utilize a light detection and ranging (LIDAR) sensor configured to emit light pulses and capture reflected light pulses that collectively form LIDAR point cloud data and a controller configured to estimate the CoG height and the mass of the vehicle during a steady-state operating condition of the vehicle by processing the LIDAR point cloud data to identify a ground plane, identifying a height difference between (i) a nominal distance from the LIDAR sensor to the ground plane and (ii) an estimated distance from the LIDAR sensor to the ground plane using the processed LIDAR point cloud data, estimating the vehicle CoG height as a difference between (i) a nominal vehicle CoG height and the height difference, and estimating the vehicle mass based on one of (i) vehicle CoG metrics and (ii) damping metrics of a suspension of the vehicle.
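The plane fit and the ride-height-to-mass conversion can be sketched as follows; the least-squares fit (in place of a robust RANSAC fit), the lumped spring rate, and the 20th-percentile ground pre-filter are illustrative assumptions.

    import numpy as np

    def estimate_ground_distance(points):
        # Least-squares plane fit z = ax + by + c on the lowest points;
        # |c| approximates the sensor-to-ground distance beneath the sensor.
        near = points[points[:, 2] < np.percentile(points[:, 2], 20)]
        A = np.c_[near[:, 0], near[:, 1], np.ones(len(near))]
        coef, *_ = np.linalg.lstsq(A, near[:, 2], rcond=None)
        return abs(coef[2])

    def estimate_cog_and_mass(points, nominal_dist, nominal_cog,
                              nominal_mass, spring_rate_n_per_m, g=9.81):
        # A loaded vehicle sits lower: CoG height drops by the measured
        # ride-height change, and added mass is inferred from the suspension
        # compression via an assumed lumped spring rate for all corners.
        delta = nominal_dist - estimate_ground_distance(points)
        cog_height = nominal_cog - delta
        mass = nominal_mass + (delta * spring_rate_n_per_m) / g
        return cog_height, mass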
Abstract:
An advanced driver assistance system (ADAS) and corresponding method for a vehicle utilize a camera system configured to capture an image and a controller configured to receive the captured image, detect an object in the captured image using a simple neural network model, track the detected object using a tracking technique to obtain a tracked position, project a trajectory of the detected object using a trajectory projection technique to obtain a predicted position, determine a most likely position of the detected object based on at least one of the tracked and predicted positions, generate a two-dimensional (2D) birdview projection illustrating the detected object according to its determined most likely position, and control at least one ADAS feature of the vehicle using the generated 2D birdview projection.
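The fusion of tracked and predicted positions can be illustrated with a constant-velocity trajectory projection and a fixed blend weight; the weight stands in for whatever likelihood model the controller uses and is an assumption.

    import numpy as np

    class PositionFuser:
        # Fuse a tracker's measured position with a constant-velocity
        # trajectory prediction to obtain a most likely position for the
        # 2D birdview projection.
        def __init__(self, blend=0.7):
            self.blend = blend
            self.prev = None
            self.velocity = np.zeros(2)

        def update(self, tracked_xy, dt):
            tracked = np.asarray(tracked_xy, dtype=float)
            if self.prev is None:
                self.prev = tracked
                return tracked
            predicted = self.prev + self.velocity * dt  # trajectory projection
            fused = self.blend * tracked + (1 - self.blend) * predicted
            self.velocity = (fused - self.prev) / dt
            self.prev = fused
            return fused  # most likely position of the detected object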