Abstract:
A vehicle configured to operate in an autonomous mode could determine a current state of the vehicle and the current state of the environment of the vehicle. The environment of the vehicle includes at least one other vehicle. A predicted behavior of the at least one other vehicle could be determined based on the current state of the vehicle and the current state of the environment of the vehicle. A confidence level could also be determined based on the predicted behavior, the current state of the vehicle, and the current state of the environment of the vehicle. In some embodiments, the confidence level may be related to the likelihood that the at least one other vehicle will perform the predicted behavior. The vehicle in the autonomous mode could be controlled based on the predicted behavior, the confidence level, and the current state of the vehicle and its environment.
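The abstract describes the predict-then-control loop only at a high level. The following is a minimal Python sketch of that flow under stated assumptions: the Prediction type, the predict_behavior stub, and the 0.7 confidence threshold are all illustrative and not part of the disclosure.

# Illustrative sketch of predicting another vehicle's behavior with a
# confidence level and controlling the autonomous vehicle accordingly.
from dataclasses import dataclass

@dataclass
class Prediction:
    behavior: str        # e.g. "decelerate", "maintain_speed"
    confidence: float    # likelihood the other vehicle performs the behavior

def predict_behavior(own_state, env_state, other_vehicle):
    """Toy predictor: assume a strongly braking lead vehicle keeps decelerating."""
    if other_vehicle["acceleration"] < -1.0:
        return Prediction("decelerate", 0.85)
    return Prediction("maintain_speed", 0.6)

def control_step(own_state, env_state):
    for other in env_state["other_vehicles"]:
        pred = predict_behavior(own_state, env_state, other)
        # Only act on predictions the system is sufficiently confident in.
        if pred.behavior == "decelerate" and pred.confidence > 0.7:
            return {"command": "brake", "target_gap_m": 30.0}
    return {"command": "maintain"}

print(control_step(
    {"speed_mps": 25.0},
    {"other_vehicles": [{"acceleration": -2.0}]},
))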
Abstract:
Disclosed herein are systems and methods for providing supplemental identification abilities to an autonomous vehicle system. A sensor unit of the vehicle may be configured to receive data indicating an environment of the vehicle, while a control system may be configured to operate the vehicle. The vehicle may also include a processing unit configured to analyze the data indicating the environment to determine at least one object having a detection confidence below a threshold. Based on the at least one object having a detection confidence below a threshold, the processor may communicate at least a subset of the data indicating the environment for further processing. The vehicle may also be configured to receive an indication of an object confirmation for the subset of the data. Based on the object confirmation of the subset of the data, the processor may alter the control of the vehicle by the control system.
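As a hedged illustration of the confidence-gated hand-off described above, the sketch below uses hypothetical helpers (detect_objects, request_confirmation) and an invented 0.5 threshold; it is an assumed instantiation, not the patented implementation.

# Sketch of confidence-gated escalation: detections below a threshold are
# sent (as a data subset) for further processing, and the confirmation is
# used to alter vehicle control. All names and values are illustrative.
DETECTION_THRESHOLD = 0.5

def detect_objects(sensor_frame):
    # Stand-in for the onboard perception stack.
    return [{"label": "debris", "confidence": 0.3, "bbox": (120, 80, 40, 30)}]

def request_confirmation(data_subset):
    # Stand-in for the supplemental identification service.
    return {"confirmed_label": "plastic_bag", "is_obstacle": False}

def process_frame(sensor_frame, current_plan):
    for obj in detect_objects(sensor_frame):
        if obj["confidence"] < DETECTION_THRESHOLD:
            subset = {"bbox": obj["bbox"], "frame_id": id(sensor_frame)}
            confirmation = request_confirmation(subset)
            if not confirmation["is_obstacle"]:
                # Confirmation lets the vehicle keep its nominal plan.
                return current_plan
            return {"command": "stop"}
    return current_plan

print(process_frame(object(), {"command": "cruise"}))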
Abstract:
Aspects of the disclosure relate to classifying the status of objects. For example, one or more computing devices detect an object from an image of a vehicle's environment. The object is associated with a location. The one or more computing devices receive data corresponding to the surfaces of objects in the vehicle's environment and identify data within a region around the location of the object. The one or more computing devices also determine whether the data within the region corresponds to a planar surface extending away from an edge of the object. Based on this determination, the one or more computing devices classify the status of the object.
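One way to read the planar-surface test is as a plane fit over surface data in a region beside the detected object's edge. The sketch below fits a plane by least squares with numpy; the tolerance, the fitting method, and the status labels are assumptions, not the disclosed method.

# Illustrative check: does the surface data in a region beside the object's
# edge lie (roughly) on a single plane extending away from that edge?
import numpy as np

def fits_plane(points, tol=0.05):
    """Fit z = a*x + b*y + c by least squares; True if residuals stay small."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return np.max(np.abs(residuals)) < tol

def classify_status(region_points):
    # Planar data next to the object's edge drives one status classification,
    # non-planar data the other.
    return "planar_surface" if fits_plane(region_points) else "no_planar_surface"

# Example: points lying on the plane z = 0.1*x + 0.02*y
xy = np.random.rand(50, 2) * 10
z = 0.1 * xy[:, 0] + 0.02 * xy[:, 1]
print(classify_status(np.c_[xy, z]))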
Abstract:
Aspects of the disclosure relate to detecting and responding to stop signs. An object detected in a vehicle's environment and having location coordinates may be identified as a stop sign, and it may be determined whether the location coordinates of the identified stop sign correspond to a location of a stop sign in detailed map information. Then, whether the identified stop sign applies to the vehicle may be determined based on the detailed map information or on a number of factors. If the identified stop sign is determined to apply to the vehicle, responses of the vehicle to the stop sign may be determined, and the vehicle may be controlled based on the determined responses.
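The sketch below is one assumed way to implement the match-and-apply logic: a simple distance match against map stop signs and a heading test for applicability. The map schema, the 5 m match radius, and the heading tolerance are illustrative, not taken from the disclosure.

# Matching a detected stop sign against detailed map information and
# deciding whether it applies to the vehicle.
import math

MAP_STOP_SIGNS = [
    {"x": 100.0, "y": 200.0, "controls_heading_deg": 90.0},
]

def match_map_stop_sign(x, y, radius=5.0):
    for sign in MAP_STOP_SIGNS:
        if math.hypot(sign["x"] - x, sign["y"] - y) <= radius:
            return sign
    return None

def applies_to_vehicle(sign, vehicle_heading_deg, tol=45.0):
    # A sign applies if it faces traffic travelling in the vehicle's direction.
    diff = abs((sign["controls_heading_deg"] - vehicle_heading_deg + 180) % 360 - 180)
    return diff <= tol

def respond_to_stop_sign(detected_xy, vehicle_heading_deg):
    sign = match_map_stop_sign(*detected_xy)
    if sign and applies_to_vehicle(sign, vehicle_heading_deg):
        return {"command": "decelerate_and_stop", "stop_at": (sign["x"], sign["y"])}
    return {"command": "proceed"}

print(respond_to_stop_sign((101.0, 199.0), 92.0))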
Abstract:
The present disclosure is directed to an autonomous vehicle having a vehicle control system. The vehicle control system includes an image processing system. The image processing system receives an image that includes a light indicator. The light indicator includes an illuminated component. The image processing system determines a color of the illuminated component of the light indicator and an associated confidence level of the determination of the color of the illuminated component. The image processing system also determines a shape of the illuminated component of the light indicator and an associated confidence level of the determination of the shape of the illuminated component. The determined confidence levels represent an estimated accuracy of the determinations of the shape and color. Additionally, the image processing system provides instructions executable by a computing device to control the autonomous vehicle based on at least one of the determined confidence levels exceeding a threshold value.
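Below is a minimal sketch of the confidence-gated decision described above. The classifier stubs, the single 0.9 threshold, and the specific commands are assumptions; the disclosure only says that control is based on at least one confidence exceeding a threshold.

# Color and shape of the illuminated component are each estimated with a
# confidence; a control instruction is issued from estimates whose
# confidence exceeds the threshold.
THRESHOLD = 0.9

def classify_color(image_region):
    return "red", 0.95        # (color, confidence) - stub

def classify_shape(image_region):
    return "arrow_left", 0.6  # (shape, confidence) - stub

def light_indicator_instruction(image_region):
    color, color_conf = classify_color(image_region)
    shape, shape_conf = classify_shape(image_region)
    if max(color_conf, shape_conf) < THRESHOLD:
        return {"command": "proceed_with_caution"}
    if color_conf >= THRESHOLD and color == "red":
        return {"command": "stop"}
    return {"command": "proceed"}

print(light_indicator_instruction(None))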
Abstract:
A vehicle is provided that may distinguish between dynamic obstacles and static obstacles. Given a detector for a class of static obstacles or objects, the vehicle may receive sensor data indicative of an environment of the vehicle. When a possible object is detected in a single frame, a location of the object and a time of observation of the object may be compared to previous observations. Based on the object being observed a threshold number of times, in substantially the same location, and within some window of time, the vehicle may accurately detect the presence of the object and reduce false detections.
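A minimal sketch of this persistence test follows: a single-frame detection is promoted to a confirmed object only after enough consistent sightings. The constants (three hits, 1 m radius, 2 s window) are illustrative assumptions.

# Compare each new single-frame detection against previous observations of
# location and time; confirm the object once it has been seen often enough,
# close enough, and recently enough.
import math

MIN_HITS = 3
MATCH_RADIUS = 1.0   # metres
WINDOW_S = 2.0       # seconds

observations = []    # (x, y, t) for past single-frame detections

def observe(x, y, t):
    observations.append((x, y, t))
    recent_nearby = [
        (ox, oy, ot) for ox, oy, ot in observations
        if t - ot <= WINDOW_S and math.hypot(ox - x, oy - y) <= MATCH_RADIUS
    ]
    return len(recent_nearby) >= MIN_HITS   # True => confirmed detection

print(observe(10.0, 5.0, 0.0))   # False, first sighting
print(observe(10.1, 5.0, 0.5))   # False
print(observe(10.0, 5.1, 1.0))   # True, third consistent sighting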
Abstract:
An autonomous vehicle is configured to detect an active turn signal indicator on another vehicle. An image-capture device of the autonomous vehicle captures an image of a field of view of the autonomous vehicle. The autonomous vehicle captures the image with a short exposure to emphasize objects having brightness above a threshold. Additionally, a bounding area for a second vehicle located within the image is determined. The autonomous vehicle identifies a group of pixels within the bounding area based on a first color of the group of pixels. The autonomous vehicle also calculates an oscillation of an intensity of the group of pixels. Based on the oscillation of the intensity, the autonomous vehicle determines a likelihood that the second vehicle has a first active turn signal. Additionally, the autonomous vehicle is controlled based at least on the likelihood that the second vehicle has a first active turn signal.
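As an assumed illustration of the oscillation step, the sketch below tracks the mean intensity of the amber pixel group across several frames and measures how much of the signal's energy falls in a typical blink band. The frame rate, the 1 to 2 Hz band, and the FFT-based measure are assumptions rather than the disclosed computation.

# Likelihood of an active turn signal from the intensity time series of a
# pixel group inside the other vehicle's bounding area.
import numpy as np

FRAME_RATE_HZ = 10.0

def turn_signal_likelihood(intensity_series):
    x = np.asarray(intensity_series, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FRAME_RATE_HZ)
    blink_band = (freqs >= 1.0) & (freqs <= 2.0)
    if spectrum.sum() == 0:
        return 0.0
    # Fraction of signal energy in the blink band as a crude likelihood.
    return float(spectrum[blink_band].sum() / spectrum.sum())

# A roughly 1.5 Hz blink sampled at 10 Hz for 3 seconds.
t = np.arange(0, 3, 1.0 / FRAME_RATE_HZ)
blinking = (np.sin(2 * np.pi * 1.5 * t) > 0).astype(float)
print(turn_signal_likelihood(blinking))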
Abstract:
Disclosed herein are methods and apparatus for controlling autonomous vehicles utilizing maps that include visibility information. A map is stored at a computing device associated with a vehicle. The vehicle is configured to operate in an autonomous mode that supports a plurality of driving behaviors. The map includes information about a plurality of roads, a plurality of features, and visibility information for at least a first feature in the plurality of features. The computing device queries the map for visibility information for the first feature at a first position. The computing device, in response to querying the map, receives the visibility information for the first feature at the first position. The computing device selects a driving behavior for the vehicle based on the visibility information. The computing device controls the vehicle in accordance with the selected driving behavior.
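The sketch below shows one assumed shape for such a map and for the query-then-select step: features carry visibility entries keyed by position, and a driving behavior is chosen from the returned value. The schema, the distances, and the behavior names are invented for illustration.

# A map with per-feature visibility information, queried at a position to
# select a driving behavior.
MAP = {
    "features": {
        "crosswalk_17": {
            "type": "crosswalk",
            # Visibility of this feature from sampled positions along the road.
            "visibility": {("segment_3", 40.0): 20.0,    # visible from 20 m away
                           ("segment_3", 80.0): 120.0},  # visible from 120 m away
        }
    }
}

def query_visibility(feature_id, position):
    return MAP["features"][feature_id]["visibility"].get(position)

def select_behavior(feature_id, position):
    visible_from_m = query_visibility(feature_id, position)
    # Approach slowly if the feature only becomes visible at short range.
    if visible_from_m is not None and visible_from_m < 50.0:
        return "reduce_speed_and_cover_brake"
    return "maintain_speed"

print(select_behavior("crosswalk_17", ("segment_3", 40.0)))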
Abstract:
A vehicle is provided that may combine multiple estimates of an environment into a consolidated estimate. The vehicle may receive, from a sensor of the vehicle, first data indicative of a region of interest in the environment. The first data may include a first accuracy value and a first estimate of the region of interest. The vehicle may also receive second data indicative of the region of interest in the environment, and the second data may include a second accuracy value and a second estimate of the region of interest. Based on the first data and the second data, the vehicle may combine the first estimate of the region of interest and the second estimate of the region of interest.
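One common way to combine two estimates that each carry an accuracy value is an accuracy-weighted average; the abstract does not say this is the method used, so the sketch below is only an assumed instantiation.

# Accuracy-weighted combination of two estimates of the same region of
# interest. Treating the accuracy values as relative weights is an
# assumption; the disclosure only says the two estimates are combined.
def combine_estimates(estimate_a, accuracy_a, estimate_b, accuracy_b):
    total = accuracy_a + accuracy_b
    if total == 0:
        raise ValueError("at least one estimate must have non-zero accuracy")
    weight_a = accuracy_a / total
    weight_b = accuracy_b / total
    return [weight_a * a + weight_b * b for a, b in zip(estimate_a, estimate_b)]

# Two estimates of the same boundary point, the second sensor more accurate.
print(combine_estimates([10.0, 2.0], 0.6, [10.4, 2.2], 0.9))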
Abstract:
Aspects of the disclosure relate to controlling autonomous vehicles or vehicles having an autonomous driving mode. More particularly, these vehicles may identify and respond to other vehicles engaged in a parallel parking maneuver by receiving sensor data corresponding to objects in an autonomous vehicle's environment, including location information for the objects over time. An object corresponding to another vehicle in a lane in front of the autonomous vehicle may be identified from the sensor data. A pattern of actions exhibited by the other vehicle, identified from the sensor data, is used to determine that the other vehicle is engaged in a parallel parking maneuver. The determination is then used to control the autonomous vehicle.
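The sketch below shows one assumed way to recognize such a pattern of actions: check whether a characteristic subsequence of actions appears in the time-ordered observations of the other vehicle. The action labels and the required subsequence are illustrative, not taken from the disclosure.

# Recognize a parallel-parking pattern from a time series of observed
# actions and adjust the autonomous vehicle's control accordingly.
PARKING_PATTERN = ["slow_near_gap", "stop_alongside_gap", "reverse_turning"]

def is_parallel_parking(observed_actions):
    """True if the parking pattern appears as a subsequence of the actions."""
    it = iter(observed_actions)
    return all(step in it for step in PARKING_PATTERN)

actions = ["cruise", "slow_near_gap", "signal_right",
           "stop_alongside_gap", "reverse_turning"]
if is_parallel_parking(actions):
    control = {"command": "wait_or_pass_with_care"}
else:
    control = {"command": "follow"}
print(control)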