Abstract:
Methods and systems for object detection using multiple sensors are described herein. In an example embodiment, a vehicle's computing device may receive sensor data frames indicative of an environment at different rates from multiple sensors. Based on a first frame from a first sensor indicative of the environment at a first time period and a portion of a first frame from a second sensor that corresponds to the first time period, the computing device may estimate parameters of objects in the vehicle's environment. The computing device may modify the parameters in response to receiving subsequent frames or subsequent portions of frames of sensor data from the sensors, even if the frames arrive at the computing device out of order. The computing device may provide the parameters of the objects to systems of the vehicle for object detection and obstacle avoidance.
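The out-of-order handling described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and field names (`FrameFuser`, `ObjectEstimate`) are hypothetical, and it assumes each frame carries the capture time of the period it depicts, so a late-arriving frame can still be matched to the correct time period.

```python
from dataclasses import dataclass


@dataclass
class ObjectEstimate:
    """Hypothetical parameter estimate for one detected object."""
    position: tuple
    last_update_time: float


class FrameFuser:
    """Buffers sensor frames keyed by the time period they depict, so
    object estimates can be updated even when frames arrive out of order."""

    def __init__(self):
        self.frames = {}     # capture_time -> list of (sensor_id, detections)
        self.estimates = {}  # object_id -> ObjectEstimate

    def receive(self, sensor_id, capture_time, detections):
        # Index the frame by the time period it depicts, not its arrival
        # time, so a late frame still associates with the right period.
        self.frames.setdefault(capture_time, []).append((sensor_id, detections))
        for obj_id, position in detections.items():
            est = self.estimates.get(obj_id)
            if est is None or capture_time >= est.last_update_time:
                # First or newer measurement: replace the current estimate.
                self.estimates[obj_id] = ObjectEstimate(position, capture_time)
            # A late frame older than the current estimate is retained in
            # self.frames, where it could be used to re-smooth the track.
```

Keying the buffer by capture time rather than arrival order is what lets the estimates stay consistent when a slower sensor's frame for an earlier time period shows up after a faster sensor's frame for a later one.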
Abstract:
A computing device may identify an object in an environment of a vehicle and receive a first three-dimensional (3D) point cloud depicting a first view of the object. The computing device may determine a reference point on the object in the first 3D point cloud, and receive a second 3D point cloud depicting a second view of the object. The computing device may determine a transformation between the first view and the second view, and estimate a projection of the reference point from the first view relative to the second view based on the transformation so as to trace the reference point from the first view to the second view. The computing device may determine one or more motion characteristics of the object based on the projection of the reference point.
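The projection step described above might be sketched as follows, assuming the transformation between the two views has already been estimated as a rigid rotation and translation. The function names and the centroid-displacement velocity estimate are illustrative assumptions, not details from the source.

```python
import numpy as np


def project_reference_point(ref_point, rotation, translation):
    """Projects a reference point observed in the first view into the
    second view, using the rigid transformation (R, t) estimated between
    the two 3D point clouds, thereby tracing the point across views."""
    return rotation @ np.asarray(ref_point, dtype=float) + np.asarray(translation, dtype=float)


def motion_from_projection(ref_point, projected_point, dt):
    """Estimates a velocity vector from how far the traced reference
    point moved between the two views, over the elapsed time dt."""
    return (np.asarray(projected_point, dtype=float) - np.asarray(ref_point, dtype=float)) / dt
```

Tracing a single reference point this way sidesteps the need to match every point between the two clouds: once the view-to-view transformation is known, the motion of the whole object can be summarized by the displacement of that one point.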
Abstract:
An example method may include receiving a first set of points based on detection of an environment of an autonomous vehicle during a first time period, selecting a plurality of points from the first set of points that form a first point cloud representing an object in the environment, receiving a second set of points based on detection of the environment during a second time period which is after the first time period, and selecting a plurality of points from the second set of points that form a second point cloud representing the object in the environment. The method may further include determining a transformation between the selected points from the first set of points and the selected points from the second set of points, using the transformation to determine a velocity of the object, and providing instructions to control the autonomous vehicle based at least in part on the velocity of the object.
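The transformation-to-velocity step above could be sketched as below. The abstract does not specify how the transformation is computed; this sketch assumes a least-squares rigid alignment (the Kabsch algorithm) over point clouds in one-to-one correspondence, and approximates velocity as centroid displacement over the elapsed time. Both choices are illustrative assumptions.

```python
import numpy as np


def estimate_rigid_transform(points_a, points_b):
    """Least-squares rigid transform (Kabsch algorithm) mapping points_a
    onto points_b: returns (R, t) with points_b ~= R @ points_a + t.
    Assumes the two clouds are in one-to-one correspondence."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    # Cross-covariance of the centered clouds.
    h = (a - ca).T @ (b - cb)
    u, _, vt = np.linalg.svd(h)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cb - r @ ca
    return r, t


def object_velocity(points_a, points_b, dt):
    """Approximates the object's velocity as the displacement of the
    point-cloud centroid between the two time periods, divided by dt."""
    disp = np.asarray(points_b, dtype=float).mean(axis=0) - \
        np.asarray(points_a, dtype=float).mean(axis=0)
    return disp / dt
```

The rotation component of the recovered transform would additionally indicate whether the object is turning, which the centroid-based velocity alone does not capture.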