Abstract:
A system and method for registering range images from objects detected by multiple LiDAR sensors on a vehicle. The method includes aligning frames of data from at least two LiDAR sensors having overlapping fields of view in a sensor signal fusion operation so as to track objects detected by the sensors. The method defines a transformation value for at least one of the LiDAR sensors that identifies the orientation angle and position of the sensor, and provides target scan points from the objects detected by the sensors, where the target scan points for each sensor provide a separate target point map. The method projects the target point map from the at least one sensor to another one of the LiDAR sensors using a current transformation value to overlap the target scan points from the sensors.
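The projection step described above can be sketched as applying a rigid transform (orientation angle plus position offset) to one sensor's target point map to bring it into the other sensor's frame. This is a minimal 2-D illustration; the function name, the 2-D simplification, and the parameterization are assumptions, not the patent's method.

```python
import math

def project_points(points, angle, tx, ty):
    """Project target scan points from one LiDAR frame into another
    using a rigid transform defined by an orientation angle (rad) and
    a position offset (tx, ty). Names are illustrative only."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# A point at (1, 0) in sensor A's frame, projected into sensor B's frame
# with a 90-degree rotation and a 2 m offset along x:
projected = project_points([(1.0, 0.0)], math.pi / 2, 2.0, 0.0)
```

In an iterative registration scheme, the transform would be re-estimated until the projected points overlap the other sensor's scan points.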
Abstract:
A system for resolving discrepancies in map data includes one or more central computers in wireless communication with one or more vehicles. The one or more central computers are programmed to receive a first map dataset and a second map dataset, each representing a predefined geographical area. The one or more central computers are further programmed to receive a plurality of crowdsourced map datasets. Each of the plurality of crowdsourced map datasets represents the same predefined geographical area. The one or more central computers are further programmed to compare each of the plurality of crowdsourced map datasets with the first map dataset and the second map dataset to determine one or more common lane lines. The one or more central computers are further programmed to determine a fused map dataset based on the first map dataset, the second map dataset, the plurality of crowdsourced map datasets, and the one or more common lane lines.
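The common-lane-line comparison can be sketched as follows, with lane lines reduced to lateral offsets for brevity. A lane line is treated as "common" when a line in one dataset lies within a tolerance of a line in the other; the tolerance value, the averaging step, and the 1-D simplification are illustrative assumptions, not the patent's comparison logic.

```python
def common_lane_lines(map_a, map_b, tol=0.5):
    """Return lane lines present in both datasets.
    Lane lines are simplified to lateral offsets (metres); a pair within
    `tol` of each other is considered the same line and fused by
    averaging. All names and values are assumptions for illustration."""
    common = []
    for a in map_a:
        for b in map_b:
            if abs(a - b) <= tol:
                common.append((a + b) / 2.0)  # fuse the matched pair
                break
    return common

# Two datasets agree on two of three lane lines:
common = common_lane_lines([0.0, 3.5, 7.2], [0.1, 3.6, 10.0])
```

A full implementation would compare polylines rather than scalar offsets and would weight the crowdsourced datasets against the first and second map datasets when forming the fused result.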
Abstract:
A system for fusing two or more versions of map data together includes one or more central computers that receive road network data representing a road network for a predefined geofenced area. The road network data includes a discrete random curve that represents lane markings. The discrete random curve includes a plurality of state vectors that are each defined by a respective location and tangent angle. The central computers estimate the position of each state vector of the discrete random curve based on a signed distance and the tangent angle by minimizing a spatial Kalman filter cost function, and execute a Kalman smoothing function to estimate the position and the tangent angle for the state vectors that are part of the discrete random curve, where the state vectors each represent a map point of the fused map data.
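The filter-then-smooth structure can be sketched with a minimal scalar Kalman filter followed by a Rauch-Tung-Striebel backward pass over the curve's signed-distance measurements. This is a sketch under strong simplifications: the patent's state vector (position plus tangent angle) is reduced to a 1-D position, and the noise parameters and identity motion model are assumptions.

```python
def kalman_smooth(measurements, q=0.01, r=0.1):
    """Scalar Kalman filter + RTS smoother over signed-distance
    measurements along a lane-marking curve. Illustrative only."""
    n = len(measurements)
    x, p = measurements[0], 1.0
    x_f, p_f, x_p, p_p = [], [], [], []
    # Forward (spatial) filtering pass
    for z in measurements:
        xp, pp = x, p + q              # predict (identity motion model)
        k = pp / (pp + r)              # Kalman gain
        x = xp + k * (z - xp)          # update with signed distance z
        p = (1.0 - k) * pp
        x_p.append(xp); p_p.append(pp)
        x_f.append(x); p_f.append(p)
    # Backward Rauch-Tung-Striebel smoothing pass
    xs = list(x_f)
    for i in range(n - 2, -1, -1):
        g = p_f[i] / p_p[i + 1]        # smoother gain
        xs[i] = x_f[i] + g * (xs[i + 1] - x_p[i + 1])
    return xs

smoothed = kalman_smooth([0.0, 0.1, 0.0, -0.1, 0.0])
```

Each smoothed value plays the role of a fused map point; the patent's version additionally carries the tangent angle in the state and minimizes a spatial cost function rather than a fixed-gain recursion.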
Abstract:
A method for merging lane line maps includes determining a road topology of a road segment. The method also includes identifying a plurality of fused points based at least in part on the road topology and based at least in part on a first lane line map of the road segment and a second lane line map of the road segment. The method also includes forming a fused lane line map based at least in part on the plurality of fused points. The method also includes performing a first action based at least in part on the fused lane line map.
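The fused-point step can be sketched as a weighted per-point combination of the two lane line maps, assuming the road-topology step has already associated the points one-to-one. The function name, the equal default weights, and the one-to-one association are assumptions for illustration, not the patent's procedure.

```python
def fuse_lane_points(map1, map2, w1=0.5, w2=0.5):
    """Fuse two lane line maps point-by-point with a weighted average.
    map1 and map2 are equal-length lists of (x, y) points that the
    topology step is assumed to have matched one-to-one."""
    assert abs(w1 + w2 - 1.0) < 1e-9, "weights must sum to 1"
    return [(w1 * x1 + w2 * x2, w1 * y1 + w2 * y2)
            for (x1, y1), (x2, y2) in zip(map1, map2)]

# Two maps of the same lane line, offset by 0.2 m in x:
fused = fuse_lane_points([(0.0, 0.0), (1.0, 1.0)],
                         [(0.2, 0.0), (1.2, 1.0)])
```

In practice the weights could reflect the relative confidence of each source map, and the fused lane line map would then drive the "first action" the abstract refers to.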
Abstract:
A method includes receiving sensed vehicle-state data, actuation-command data, and surface-coefficient data from a plurality of remote vehicles, inputting the sensed vehicle-state data, the actuation-command data, and the surface-coefficient data into a self-supervised recurrent neural network (RNN) to predict vehicle states of a host vehicle in a plurality of driving scenarios, and commanding the host vehicle to move autonomously according to a trajectory determined from the vehicle states predicted by the self-supervised RNN.
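The recurrent prediction step can be sketched with a minimal recurrent cell rolled over a sequence of per-timestep inputs (sensed state, actuation command, surface coefficient). The scalar hidden state, fixed placeholder weights, and function names are illustrative assumptions; they stand in for the patent's trained self-supervised RNN, not reproduce it.

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=0.5, b=0.0):
    """One step of a minimal recurrent cell: the hidden state h is
    updated from the summed inputs x. Weights are placeholders."""
    return math.tanh(w_h * h + w_x * sum(x) + b)

def predict_states(inputs, h0=0.0):
    """Roll the cell over per-timestep input tuples and return the
    predicted state trajectory, one hidden state per step."""
    h, traj = h0, []
    for x in inputs:
        h = rnn_step(h, x)
        traj.append(h)
    return traj

# Two timesteps of (vehicle state, actuation command, surface coefficient):
traj = predict_states([(0.1, 0.0, 0.9), (0.2, 0.1, 0.9)])
```

In the abstract's pipeline, the predicted states would then feed a trajectory planner whose output is issued as autonomous motion commands to the host vehicle.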
Abstract:
An automobile vehicle continuous validation system includes a backend that collects data from a vehicle fleet and wirelessly communicates with the vehicle fleet. The backend is in wireless communication with at least one client. A vehicle module is provided on board individual ones of multiple automobile vehicles of the vehicle fleet and performs an on-board vehicle validation analysis. A fleet-based validation module, provided either at the backend or cloud-based, manages data defining a configuration and a capability of the multiple automobile vehicles of the vehicle fleet. A validation manager generates validation tasks based on a user's definition or a desired production of the validation tasks of the validation analysis, and on fleet vehicle availability. A client-side module remote from the multiple automobile vehicles of the vehicle fleet has interface items applied by the at least one client seeking to perform the validation analysis.
Abstract:
A method for validating the performance of an autonomous host vehicle using nearby traffic patterns includes receiving remote vehicle data. The remote vehicle data includes at least one remote-vehicle motion parameter about a movement of a plurality of remote vehicles during a predetermined time interval. The method further includes determining a traffic pattern of the plurality of remote vehicles using the at least one remote-vehicle motion parameter. The method includes determining a similarity between the traffic pattern of the plurality of remote vehicles and the movements of the host vehicle. Further, the method includes determining whether that similarity is less than a predetermined threshold. Also, when the similarity is less than the predetermined threshold, the method includes commanding the host vehicle to adjust its movements to match the traffic pattern of the plurality of remote vehicles.
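The similarity test can be sketched by comparing the host vehicle's motion against the mean remote-vehicle pattern and flagging when the similarity falls below a threshold. The abstract does not specify a similarity metric; the normalized inverse mean absolute deviation, the threshold value, and the speed-only representation used here are all assumptions for illustration.

```python
def matches_traffic(host_speeds, remote_speeds, threshold=0.8):
    """Return True when the host should adjust its motion, i.e. when
    the similarity between the host's speeds and the mean remote
    traffic pattern is below `threshold`. Metric is illustrative."""
    # Average the remote vehicles' speed traces per timestep
    mean_remote = [sum(col) / len(col) for col in zip(*remote_speeds)]
    dev = sum(abs(h, ) if False else abs(h - m)
              for h, m in zip(host_speeds, mean_remote)) / len(host_speeds)
    similarity = 1.0 / (1.0 + dev)   # 1.0 = identical motion
    return similarity < threshold    # True => command adjustment

# Host matches traffic exactly -> no adjustment needed:
no_adjust = matches_traffic([10.0, 10.0], [[10.0, 10.0], [10.0, 10.0]])
# Host is much slower than surrounding traffic -> adjust:
adjust = matches_traffic([5.0, 5.0], [[10.0, 10.0], [10.0, 10.0]])
```

Real implementations would compare richer motion parameters (heading, acceleration, lane position) rather than scalar speed traces.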
Abstract:
A perception processing system includes a memory and a main controller. The main controller includes modules and implements a data processing pipeline including algorithm stages, which are executed in parallel relative to sets of data and are executed sequentially relative to each of the sets of data. The algorithm stages share resources of the modules and the memory to process the sets of data and generate perception information. One of the modules executes global and local controllers. The global controller sets a processing rate for the local controllers. The local controllers monitor current processing rates of the algorithm stages. When one of the current processing rates is less than the set processing rate, the corresponding local controller sends a first signal to the global controller and, in response, the global controller sends a broadcast signal to the local controllers to adjust the current processing rates.
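The report-and-broadcast loop can be sketched as one control iteration: local controllers report stages whose processing rate has dropped below the global set rate, and the resulting broadcast adjusts every stage's rate toward the set point. The additive adjustment step and all names are illustrative assumptions, not the patent's control law.

```python
def adjust_rates(current_rates, set_rate, step=1.0):
    """One iteration of the global/local rate control loop.
    Returns (new_rates, signaled): `signaled` is True when at least one
    local controller sent the 'first signal' to the global controller,
    triggering a broadcast adjustment. Illustrative only."""
    lagging = [i for i, r in enumerate(current_rates) if r < set_rate]
    if not lagging:
        return current_rates, False   # no stage below the set rate
    # Broadcast: every stage nudges its rate toward the set point
    return [min(r + step, set_rate) for r in current_rates], True

# Stage 1 lags behind the 10 Hz set rate, so a broadcast is issued:
rates, signaled = adjust_rates([10.0, 8.0, 10.0], 10.0)
```

Running the iteration repeatedly converges the lagging stages to the set rate, which keeps the parallel pipeline stages in step with one another.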
Abstract:
Methods and systems to implement sensor fusion to determine collision potential for a vehicle include identifying a specific intersection that the vehicle is approaching, and identifying collision potential scenarios associated with one or more paths through the specific intersection. Each collision potential scenario defines a risk of a collision between the vehicle and an object in a specified area. A weight with which one or more information sources of the vehicle are considered is adjusted for each collision potential scenario such that the highest weight is given to the one or more information sources that provide the most relevant and reliable information about the specified area. Sensor fusion is implemented based on the adjusted weights of the one or more information sources, detection is performed based on the sensor fusion, and an alert is provided or actions are implemented according to the detection.
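The per-scenario weighting can be sketched as a weighted combination of per-source detection scores, where the scenario assigns the highest weight to the source most relevant to the specified area. The score values, the weight values, and the 0.5 alert threshold are assumptions for illustration, not the patent's fusion rule.

```python
def fuse_detections(scores, weights):
    """Fuse per-source detection scores (0..1) using per-scenario
    weights. Returns (fused score, alert flag). Illustrative only."""
    total = sum(weights.values())
    fused = sum(scores[s] * w for s, w in weights.items()) / total
    return fused, fused > 0.5  # threshold is an assumed value

# Cross-traffic scenario: radar covers the specified area best,
# so it receives the highest weight:
fused, alert = fuse_detections(
    {"radar": 0.9, "camera": 0.2},
    {"radar": 3.0, "camera": 1.0},
)
```

Because the weights are re-derived per collision potential scenario, the same raw sensor scores can trigger an alert at one intersection path and not at another.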