Abstract:
Methods and systems for detecting hand signals of a cyclist by an autonomous vehicle are described. An example method may involve a computing device receiving a plurality of data points corresponding to an environment of an autonomous vehicle. The computing device may then determine one or more subsets of data points from the plurality of data points indicative of at least a body region of a cyclist. Further, based on an output of a comparison of the one or more subsets with one or more predetermined sets of cycling signals, the computing device may determine an expected adjustment of one or more of a speed of the cyclist and a direction of movement of the cyclist. Still further, based on the expected adjustment, the computing device may provide instructions to adjust one or more of a speed of the autonomous vehicle and a direction of movement of the autonomous vehicle.
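The signal-comparison step described above might be sketched as follows. The templates, angle features, and distance threshold here are illustrative assumptions, not the patented implementation: detected arm-pose features are matched against predetermined cycling-signal templates, and the matched signal's expected adjustment drives a vehicle response.

```python
import math

# Hypothetical templates: (shoulder, elbow) arm angles in degrees for
# standard cycling hand signals, mapped to the adjustment each implies.
SIGNAL_TEMPLATES = {
    "left_turn":  ((90.0, 180.0), {"direction": "left"}),
    "right_turn": ((90.0, 90.0),  {"direction": "right"}),
    "stop":       ((90.0, -90.0), {"speed": "decrease"}),
}

def classify_signal(arm_features, max_dist=30.0):
    """Match extracted arm-pose features against the signal templates;
    return (name, adjustment) for the nearest template, or None."""
    best, best_d = None, float("inf")
    for name, (template, adjustment) in SIGNAL_TEMPLATES.items():
        d = math.dist(arm_features, template)
        if d < best_d:
            best, best_d = (name, adjustment), d
    return best if best_d <= max_dist else None

def plan_vehicle_adjustment(signal):
    """Translate the cyclist's expected adjustment into a vehicle action."""
    if signal is None:
        return {"action": "maintain"}
    _, adj = signal
    if adj.get("speed") == "decrease":
        return {"action": "slow_down"}
    return {"action": "yield", "cyclist_direction": adj["direction"]}
```

A noisy left-turn pose such as `(92.0, 178.0)` still matches the `left_turn` template, while an arm pose far from every template yields a `maintain` action.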
Abstract:
Example methods and systems for detecting weather conditions using vehicle onboard sensors are provided. An example method includes receiving laser data collected for an environment of a vehicle, where the laser data includes a plurality of laser data points. The method also includes associating, by a computing device, laser data points of the plurality of laser data points with one or more objects in the environment, and determining given laser data points of the plurality of laser data points that are unassociated with the one or more objects in the environment as being representative of an untracked object. The method also includes, based on one or more untracked objects being determined, identifying, by the computing device, an indication of a weather condition of the environment.
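The association-and-counting idea above can be sketched in a few lines; the association radius and count threshold are illustrative assumptions. Laser points that fall within a radius of a tracked object are associated with it, and a large residue of unassociated points is taken as an indication of a weather condition (for example, returns from rain or fog):

```python
import math

def detect_weather_indication(laser_points, tracked_objects,
                              assoc_radius=0.5, untracked_threshold=2):
    """Count laser points not associated with any tracked object; flag a
    possible weather condition when the untracked count is large."""
    untracked = 0
    for p in laser_points:
        if not any(math.dist(p, obj) <= assoc_radius
                   for obj in tracked_objects):
            untracked += 1
    return untracked >= untracked_threshold, untracked
```

With one tracked object at the origin, points far from it are counted as untracked and, past the threshold, trigger the weather indication.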
Abstract:
A computing device may identify an object in an environment of a vehicle and receive a first three-dimensional (3D) point cloud depicting a first view of the object. The computing device may determine a reference point on the object in the first 3D point cloud, and receive a second 3D point cloud depicting a second view of the object. The computing device may determine a transformation between the first view and the second view, and estimate a projection of the reference point from the first view relative to the second view based on the transformation so as to trace the reference point from the first view to the second view. The computing device may determine one or more motion characteristics of the object based on the projection of the reference point.
Abstract:
A computing device may be configured to receive sensor information indicative of respective characteristics of vehicles on a road of travel of a first vehicle. The computing device may be configured to identify, based on the respective characteristics, a second vehicle that exhibits an aggressive driving behavior manifested as an unsafe or unlawful driving action. Also, based on the respective characteristics, the computing device may be configured to determine a type of the second vehicle. The computing device may be configured to estimate a distance between the first vehicle and the second vehicle. The computing device may be configured to modify a control strategy of the first vehicle, based on the aggressive driving behavior of the second vehicle, the type of the second vehicle, and the distance between the first vehicle and the second vehicle; and control the first vehicle based on the modified control strategy.
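The strategy-modification logic described above might be sketched as follows. The aggressiveness indicators, per-type buffer distances, and thresholds are illustrative assumptions: an unsafe or unlawful driving action flags the second vehicle as aggressive, and the control strategy then widens the buffer based on the vehicle's type and falls back when the current distance is inside it:

```python
# Hypothetical per-type baseline buffer distances in meters.
TYPE_BUFFER_M = {"motorcycle": 4.0, "car": 6.0, "truck": 10.0}

def is_aggressive(speed_over_limit_mps, lane_changes_per_min):
    """Label driving as aggressive on unsafe/unlawful indicators."""
    return speed_over_limit_mps > 5.0 or lane_changes_per_min > 3

def modified_strategy(vehicle_type, distance_m, aggressive):
    """Modify the control strategy based on behavior, type, and distance."""
    base_buffer = TYPE_BUFFER_M.get(vehicle_type, 6.0)
    if not aggressive:
        return {"buffer_m": base_buffer, "action": "maintain"}
    buffer_m = base_buffer * 2  # widen the buffer for aggressive drivers
    action = "fall_back" if distance_m < buffer_m else "monitor"
    return {"buffer_m": buffer_m, "action": action}
```

An aggressively driven truck 15 m ahead would be inside the widened 20 m buffer, so the first vehicle falls back.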
Abstract:
Methods and systems for object detection using multiple sensors are described herein. In an example embodiment, a vehicle's computing device may receive sensor data frames indicative of an environment at different rates from multiple sensors. Based on a first frame from a first sensor indicative of the environment at a first time period and a portion of a first frame that corresponds to the first time period from a second sensor, the computing device may estimate parameters of objects in the vehicle's environment. The computing device may modify the parameters in response to receiving subsequent frames or subsequent portions of frames of sensor data from the sensors even if the frames arrive at the computing device out of order. The computing device may provide the parameters of the objects to systems of the vehicle for object detection and obstacle avoidance.
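The out-of-order handling described above can be sketched with a per-object estimator that keys measurements by capture time rather than arrival time; this data structure is an illustrative assumption, not the patented implementation. A frame that arrives late but was captured earlier is inserted in order and does not overwrite a newer estimate:

```python
import bisect

class ObjectEstimator:
    """Keep a per-object estimate consistent when sensor frames arrive
    out of capture-time order: measurements are kept sorted by capture
    timestamp and the estimate reflects the latest capture time."""

    def __init__(self):
        self.measurements = []  # (capture_time, position), sorted

    def add_frame(self, capture_time, position):
        # Insert by capture time regardless of arrival order.
        bisect.insort(self.measurements, (capture_time, position))

    def current_estimate(self):
        return self.measurements[-1][1] if self.measurements else None
```

If a frame captured at t=2 arrives before one captured at t=1, the estimate still reflects the t=2 measurement after both arrive.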
Abstract:
Methods and systems are disclosed for determining sensor degradation by actively controlling an autonomous vehicle. Determining sensor degradation may include obtaining sensor readings from a sensor of an autonomous vehicle, and determining baseline state information from the obtained sensor readings. A movement characteristic of the autonomous vehicle, such as speed or position, may then be changed. The sensor may then obtain additional sensor readings, and second state information may be determined from these additional sensor readings. Expected state information may be determined from the baseline state information and the change in the movement characteristic of the autonomous vehicle. A comparison of the expected state information and the second state information may then be performed. Based on this comparison, a determination may be made as to whether the sensor has degraded.
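The compare-expected-to-actual step above can be made concrete with a single range sensor and a forward maneuver; the static-target assumption and the tolerance value are illustrative. The baseline range plus the known change in position gives the expected reading, and a large mismatch with the second reading suggests degradation:

```python
def expected_range(baseline_range_m, forward_motion_m):
    """Moving toward a static target by d meters should shorten
    the measured range by d meters."""
    return baseline_range_m - forward_motion_m

def sensor_degraded(baseline_range_m, forward_motion_m,
                    second_range_m, tolerance_m=0.2):
    """Compare the expected post-maneuver reading with the actual
    second reading; flag degradation beyond the tolerance."""
    expected = expected_range(baseline_range_m, forward_motion_m)
    return abs(expected - second_range_m) > tolerance_m
```

A baseline of 10 m followed by 2 m of forward motion predicts an 8 m reading; a second reading of 8.05 m is within tolerance, while 9.5 m indicates degradation.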
Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
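The store-and-retrieve portion of the scheme above might be sketched as follows; the projection of the selecting object onto the 3D model is omitted, and the in-memory store and radius query are illustrative assumptions. Content is associated with the projected 3D location, and annotations near a location can later be retrieved for display with other images showing that location:

```python
import math

class AnnotationStore:
    """Associate user content with 3D model locations and retrieve
    annotations near a location visible in another image."""

    def __init__(self):
        self._annotations = []  # (location_xyz, content)

    def add(self, location, content):
        """Store content at the location projected onto the 3D model."""
        self._annotations.append((location, content))

    def near(self, location, radius=1.0):
        """Retrieve annotation content within radius of a location."""
        return [content for loc, content in self._annotations
                if math.dist(loc, location) <= radius]
```

An annotation stored while viewing one image is returned for a nearby location queried from a second image, and not for a distant one.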
Abstract:
A vehicle configured to operate in an autonomous mode may engage in an obstacle evaluation technique that includes employing a sensor system to collect data relating to a plurality of obstacles, identifying from the plurality of obstacles an obstacle pair including a first obstacle and a second obstacle, engaging in an evaluation process by comparing the data collected for the first obstacle to the data collected for the second obstacle, and in response to engaging in the evaluation process, making a determination of whether the first obstacle and the second obstacle are two separate obstacles.
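The pairwise evaluation above can be sketched with a simple comparison of collected position and velocity data; the gap and velocity-difference thresholds are illustrative assumptions. Two tracked obstacles that are closely spaced and moving in lockstep are likely one object split by the sensor, while a large gap or velocity mismatch indicates two separate obstacles:

```python
import math

def are_separate_obstacles(obs_a, obs_b, gap_m=1.5, dv_mps=0.5):
    """Compare data collected for an obstacle pair and determine
    whether they are two separate obstacles."""
    gap = math.dist(obs_a["position"], obs_b["position"])
    dv = math.dist(obs_a["velocity"], obs_b["velocity"])
    return gap > gap_m or dv > dv_mps
```

Two detections 0.5 m apart with matched velocities are treated as one obstacle; a detection 5 m away is treated as separate.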
Abstract:
Methods and systems for object detection using laser point clouds are described herein. In an example implementation, a computing device may receive laser data indicative of a vehicle's environment from a sensor and generate a two dimensional (2D) range image that includes pixels indicative of respective positions of objects in the environment based on the laser data. The computing device may modify the 2D range image to provide values to given pixels that map to portions of objects in the environment lacking laser data, which may involve providing values to the given pixels based on the average value of the neighboring pixels adjacent to the given pixels. Additionally, the computing device may determine normal vectors of sets of pixels that correspond to surfaces of objects in the environment based on the modified 2D range image and may use the normal vectors to provide object recognition information to systems of the vehicle.
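The neighbor-averaging fill step above can be sketched as a single pass over a 2D range image represented as a list of lists; the missing-value sentinel and the 4-neighbor choice are illustrative assumptions. Pixels lacking a laser return receive the mean of their valid neighbors:

```python
def fill_missing(range_image, missing=0.0):
    """Fill pixels lacking a laser return with the mean of their
    valid 4-neighbors, in one pass over the 2D range image."""
    h, w = len(range_image), len(range_image[0])
    out = [row[:] for row in range_image]
    for i in range(h):
        for j in range(w):
            if range_image[i][j] != missing:
                continue  # pixel already has a laser-derived value
            vals = [range_image[x][y]
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < h and 0 <= y < w
                    and range_image[x][y] != missing]
            if vals:
                out[i][j] = sum(vals) / len(vals)
    return out
```

A missing center pixel surrounded by valid ranges 1, 2, 3, and 4 is filled with their mean, 2.5, while valid pixels are left untouched; the filled image can then feed the normal-vector computation.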