Abstract:
Perception sensors of a vehicle can be used for various operating functions of the vehicle. A computing device may receive sensor data from the perception sensors, and may calibrate the perception sensors using the sensor data to enable effective operation of the vehicle. To calibrate the sensors, the computing device may project the sensor data into a voxel space and determine a voxel score comprising an occupancy score and a residual value for each voxel. The computing device may then adjust an estimated position and/or orientation of at least one perception sensor, and the associated sensor data, to minimize the voxel score. The computing device may calibrate the sensors using the adjustments corresponding to the minimized voxel score. Additionally, the computing device may be configured to calculate an error in a position associated with the vehicle by calibrating data corresponding to a same point captured at different times.
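A minimal sketch of the scoring idea in this abstract, under stated assumptions: points are binned into voxels, each occupied voxel contributes a fixed occupancy cost plus the residual of its points to a best-fit plane, and a candidate pose perturbation is chosen to minimize the total score. The function names, the unit voxel size, and the restriction of the search to a yaw angle are illustrative choices, not the patented method.

```python
import numpy as np

def rotate_yaw(pts, yaw):
    """Rotate an (N, 3) point cloud about the z axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return pts @ np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]).T

def voxel_score(points, voxel_size=1.0):
    """Total score over occupied voxels: each voxel contributes a fixed
    occupancy cost plus the residual of its points to a best-fit plane."""
    keys = np.floor(points / voxel_size).astype(int)
    score = 0.0
    for key in np.unique(keys, axis=0):
        pts = points[(keys == key).all(axis=1)]
        score += 1.0  # occupancy cost: one unit per occupied voxel
        if len(pts) >= 3:
            centered = pts - pts.mean(axis=0)
            # Smallest singular value ~ residual to the best-fit plane.
            score += np.linalg.svd(centered, compute_uv=False)[-1]
    return score

def calibrate_yaw(fixed, adjustable, yaw_candidates, voxel_size=1.0):
    """Grid search over yaw perturbations of the adjustable sensor's data;
    keep the perturbation that minimizes the combined voxel score."""
    scores = [voxel_score(np.vstack([fixed, rotate_yaw(adjustable, y)]),
                          voxel_size)
              for y in yaw_candidates]
    return yaw_candidates[int(np.argmin(scores))]
```

When the two clouds are well aligned, shared surfaces occupy fewer voxels and fit planes more tightly, so both score terms drop; a misaligned sensor "smears" the same wall across extra voxels and the score rises.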
Abstract:
Techniques for generating maps without shadows are discussed herein. A plurality of images can be captured by a vehicle traversing an environment representing various perspectives and/or lighting conditions in the environment. A shadow within an image can be identified by a machine learning algorithm trained to detect shadows in images and/or by projecting the image onto a three-dimensional (3D) map of the environment and identifying candidate shadow regions based on the geometry of the 3D map and the location of the light source. Shadows can be removed or minimized by utilizing blending or duplicating techniques. Color information and reflectance information can be added to the 3D map to generate a textured 3D map. A textured 3D map without shadows can be used to simulate the environment under different lighting conditions.
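The geometric half of the shadow-identification step can be sketched as follows, under simplifying assumptions: the 3D map is reduced to a 2D heightmap, and a cell is a candidate shadow region if marching from it toward the light source hits terrain that rises above the sun ray. All names and parameters here are illustrative, not from the patent.

```python
import numpy as np

def shadow_mask(height, light_dir_xy, light_elev, cell=1.0):
    """Mark heightmap cells that lie in shadow: march from each cell toward
    the light; if terrain rises above the sun ray, the cell is shadowed."""
    h, w = height.shape
    mask = np.zeros((h, w), dtype=bool)
    d = np.array(light_dir_xy, float)
    d /= np.linalg.norm(d)
    slope = np.tan(light_elev)  # rise of the sun ray per unit of travel
    max_steps = int(np.hypot(h, w))
    for i in range(h):
        for j in range(w):
            for step in range(1, max_steps):
                x = i + d[0] * step
                y = j + d[1] * step
                if not (0 <= x < h and 0 <= y < w):
                    break  # ray left the map without being blocked
                ray_z = height[i, j] + slope * step * cell
                if height[int(x), int(y)] > ray_z:
                    mask[i, j] = True  # occluder between cell and sun
                    break
    return mask
```

Cells flagged this way would then be candidates for the blending or duplicating techniques the abstract mentions, e.g. replacing shadowed texture with color sampled from an unshadowed image of the same surface.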
Abstract:
A method for operating a driverless vehicle may include receiving, at the driverless vehicle, sensor signals related to operation of the driverless vehicle, and road network data from a road network data store. The method may also include determining a driving corridor within which the driverless vehicle travels according to a trajectory, and causing the driverless vehicle to traverse a road network autonomously according to a path from a first geographic location to a second geographic location. The method may also include determining that an event associated with the path has occurred, and sending communication signals to a teleoperations system including a request for guidance and one or more of sensor data and the road network data. The method may include receiving, at the driverless vehicle, teleoperations signals from the teleoperations system, such that the vehicle controller determines a revised trajectory based at least in part on the teleoperations signals.
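The request/response flow described above can be sketched with a few hypothetical message types; the class and field names are illustrative assumptions, not the patented interface.

```python
from dataclasses import dataclass

@dataclass
class GuidanceRequest:
    """Sent to teleoperations when the vehicle detects an event on its path."""
    event: str
    sensor_data: dict
    road_network_data: dict

@dataclass
class TeleopResponse:
    """Guidance from a teleoperator, e.g. proposed waypoints or a corridor."""
    waypoints: list

class VehicleController:
    def __init__(self, trajectory):
        self.trajectory = list(trajectory)

    def handle_event(self, event, sensor_data, road_data, teleop):
        """Request guidance and revise the trajectory based at least in part
        on the teleoperations signals, as the abstract describes."""
        request = GuidanceRequest(event, sensor_data, road_data)
        response = teleop(request)  # teleoperations round trip
        self.trajectory = response.waypoints
        return self.trajectory
```

The key design point is that the teleoperator supplies guidance, not direct control: the vehicle controller remains responsible for turning the response into a drivable trajectory.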
Abstract:
An autonomous delivery vehicle including locking storage containers may be used for item deliveries, rejections, returns, and/or third-party fulfillment. A delivery vehicle or robot may include a number of locking storage containers, an authorization interface, and one or more sensors to receive delivery requests, detect and authorize users, and control locker access at various delivery locations to allow users to receive delivered items, and reject or return items. The vehicle may also include a passenger compartment to transport one or more passengers. The vehicle may be reconfigurable to accommodate different combinations of lockers and/or passenger seats. An item delivery system may receive delivery requests and determine routes for delivery vehicles, including centralized delivery locations and/or direct deliveries to recipients.
Abstract:
Systems, methods, and apparatus may be configured to implement automatic semantic classification of a detected object(s) disposed in a region of an environment external to an autonomous vehicle. The automatic semantic classification may include analyzing, over a time period, patterns in a predicted behavior of the detected object(s) to infer a semantic classification of the detected object(s). Analysis may include processing of sensor data from the autonomous vehicle to generate heat maps indicative of a location of the detected object(s) in the region during the time period. Probabilistic statistical analysis may be applied to the sensor data to determine a confidence level in the inferred semantic classification. The inferred semantic classification may be applied to the detected object(s) when the confidence level exceeds a predetermined threshold value (e.g., greater than 50%).
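One minimal way to sketch the heat-map-and-threshold idea: accumulate detection locations over the time period into a normalized grid, then apply a label when the probability mass inside a candidate region (e.g. a crosswalk) exceeds the threshold. Function names, the grid parameters, and the region formulation are assumptions for illustration.

```python
import numpy as np

def heat_map(detections, grid_shape, cell=1.0):
    """Accumulate (x, y) detections over a time period into a heat map,
    normalized so cell values sum to 1 (an empirical location distribution)."""
    grid = np.zeros(grid_shape)
    for x, y in detections:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            grid[i, j] += 1
    return grid / max(grid.sum(), 1)

def infer_label(grid, region_mask, label, threshold=0.5):
    """Infer a semantic label when the confidence (probability mass inside
    the candidate region) exceeds a predetermined threshold."""
    confidence = grid[region_mask].sum()
    return (label if confidence > threshold else None), confidence
```

Here the "confidence level" is simply the fraction of observed behavior consistent with the candidate classification; a real system would combine many such behavioral features.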
Abstract:
A method and system for determining whether a stationary vehicle is a blocking vehicle to improve control of an autonomous vehicle. A perception engine may detect a stationary vehicle in an environment of the autonomous vehicle from sensor data received by the autonomous vehicle. Responsive to this detection, the perception engine may determine feature values of the environment of the vehicle from sensor data (e.g., features of the stationary vehicle, other object(s), the environment itself). The autonomous vehicle may input these feature values into a machine-learning model to determine a probability that the stationary vehicle is a blocking vehicle and use the probability to generate a trajectory to control motion of the autonomous vehicle.
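The feature-values-to-probability step can be sketched with a simple logistic model. The feature names and weights below are hypothetical stand-ins; in a real system the weights would come from training on labeled examples of stationary vehicles, and the model need not be logistic at all.

```python
import math

# Hypothetical weights a trained model might assign to environment features;
# positive weights push toward "blocking", negative toward "not blocking".
WEIGHTS = {"hazard_lights_on": 2.0, "distance_to_junction_m": -0.1,
           "seconds_stationary": 0.05, "in_travel_lane": 1.5}
BIAS = -2.0

def blocking_probability(features):
    """Map feature values of the environment to P(stationary vehicle is
    blocking) via a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

A planner can then consume the probability directly, e.g. preparing a lane-change trajectory when it is high and a patient wait when it is low.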
Abstract:
Systems, methods, and apparatuses described herein are directed to performing segmentation on voxels representing three-dimensional data to identify static and dynamic objects. LIDAR data may be captured by a perception system for an autonomous vehicle and represented in a voxel space. Operations may include determining a drivable surface by parsing individual voxels to determine an orientation of a surface normal of a planar approximation of the voxelized data relative to a reference direction. Clustering techniques can be used to grow a ground plane including a plurality of locally flat voxels. Ground plane data can be set aside from the voxel space, and the remaining voxels can be clustered to determine objects. Voxel data can be analyzed over time to determine dynamic objects. Segmentation information associated with ground voxels, static objects, and dynamic objects can be provided to a tracker and/or planner in conjunction with operating the autonomous vehicle.
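The surface-normal test for locally flat voxels can be sketched as below, assuming a plane is fit per voxel via SVD and its normal compared against a reference up direction. The function names, voxel size, and angle threshold are illustrative assumptions.

```python
import numpy as np

def voxel_normals(points, voxel_size=0.5):
    """Fit a plane to the points in each occupied voxel and return its
    unit normal, keyed by integer voxel coordinates."""
    keys = np.floor(points / voxel_size).astype(int)
    normals = {}
    for key in np.unique(keys, axis=0):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) < 3:
            continue  # too few points for a planar approximation
        centered = pts - pts.mean(axis=0)
        # The right singular vector for the smallest singular value is the
        # normal of the best-fit plane.
        _, _, vt = np.linalg.svd(centered)
        normals[tuple(key)] = vt[-1]
    return normals

def ground_voxels(normals, up=(0.0, 0.0, 1.0), max_angle_deg=15.0):
    """A voxel is 'locally flat' if its surface normal is within a small
    angle of the reference up direction."""
    up = np.asarray(up)
    cos_thresh = np.cos(np.radians(max_angle_deg))
    return {k for k, n in normals.items() if abs(n @ up) >= cos_thresh}
```

The locally flat voxels found this way seed the ground-plane growing step; the voxels that remain after the ground is set aside would then be clustered into object candidates.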
Abstract:
A system, an apparatus, or a process may be configured to implement an application that applies artificial intelligence and/or machine-learning techniques to predict an optimal course of action (or a subset of courses of action) for an autonomous vehicle system (e.g., one or more of a planner of an autonomous vehicle, a simulator, or a teleoperator) to undertake based on suboptimal autonomous vehicle performance and/or changes in detected sensor data (e.g., new buildings, landmarks, potholes, etc.). The application may determine a subset of trajectories based on a number of decisions and interactions when resolving an anomaly due to an event or condition. The application may use aggregated sensor data from multiple autonomous vehicles to assist in identifying events or conditions that might affect travel (e.g., using semantic scene classification). An optimal subset of trajectories may be formed based on recommendations responsive to semantic changes (e.g., road construction).