Abstract:
The present invention relates to using image content to facilitate navigation in panoramic image data. In an embodiment, a computer-implemented method for navigating in panoramic image data includes: (1) determining an intersection of a ray and a virtual model, wherein the ray extends from a camera viewport of an image and the virtual model comprises a plurality of facade planes; (2) retrieving a panoramic image; (3) orienting the panoramic image to the intersection; and (4) displaying the oriented panoramic image.
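Step (1) above, intersecting a ray from the camera viewport with the facade planes of the virtual model, can be sketched as a nearest ray–plane intersection test. This is a minimal illustrative sketch; the function names, plane representation (point plus normal), and tolerances are assumptions, not details from the disclosure.

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the intersection point of a ray with a plane, or None
    if the ray is parallel to the plane or the hit is behind the camera."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the facade plane
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    if t < 0:
        return None  # intersection behind the camera viewport
    return [o + t * d for o, d in zip(origin, direction)]

def nearest_facade_intersection(origin, direction, facades):
    """Among (point, normal) facade planes, return the closest
    intersection along the ray -- the point the panorama is oriented to."""
    best, best_t = None, float("inf")
    for plane_point, plane_normal in facades:
        hit = ray_plane_intersection(origin, direction, plane_point, plane_normal)
        if hit is not None:
            t = sum((h - o) * d for h, o, d in zip(hit, origin, direction))
            if t < best_t:
                best, best_t = hit, t
    return best
```

The retrieved panorama would then be rotated so its center faces the returned point; that orientation step is omitted here.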
Abstract:
Aspects of the disclosure relate generally to detecting the edges of lane lines. Specifically, a vehicle driving on a roadway may use a laser to collect data for the roadway. A computer may process the data received from the laser in order to extract the points that potentially reside on the two lane lines defining a lane. The extracted points are used by the computer to determine a model of a left lane edge and a right lane edge for the lane. The model may be used to estimate a centerline between the two lane lines. All or some of the model and centerline estimates may be used to maneuver a vehicle in real time, as well as to update or generate map information used to maneuver vehicles.
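The modeling step above can be sketched as a least-squares line fit to the extracted left and right edge points, with the centerline taken as the average of the two edge models. The linear model and function names are simplifying assumptions for illustration; the disclosure's actual model may be higher-order.

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to a list of (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def lane_centerline(left_points, right_points):
    """Fit left and right edge models, then average them to
    estimate the centerline between the two lane lines."""
    la, lb = fit_line(left_points)
    ra, rb = fit_line(right_points)
    return (la + ra) / 2, (lb + rb) / 2
```

For edge points lying on y = 0 and y = 3, for example, the estimated centerline is y = 1.5.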
Abstract:
Aspects of the invention relate generally to autonomous vehicles. Specifically, the features described may be used alone or in combination in order to improve the safety, use, driver experience, and performance of these vehicles.
Abstract:
Methods and apparatus are disclosed related to autonomous vehicle applications for selecting destinations. A control system of an autonomous vehicle can determine a status of the autonomous vehicle. The control system can determine a possible destination of the autonomous vehicle. The control system can generate and provide a hint related to the possible destination based on the status of the autonomous vehicle. The control system can receive input related to the hint. Based on the input, the control system can determine whether to navigate the autonomous vehicle to the possible destination. After determining to navigate the autonomous vehicle to the possible destination, the control system can direct the autonomous vehicle to travel to the possible destination.
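The hint-and-decision flow above can be sketched as two small functions: one that generates a hint from the vehicle's status and a possible destination, and one that decides on navigation from the user's response. The status fields, destination types, and accepted responses are illustrative assumptions, not the patent's actual interface.

```python
def generate_hint(status, possible_destination):
    """Generate a hint about a possible destination, conditioned on
    vehicle status (e.g. low fuel suggests a nearby gas station)."""
    if (status.get("fuel_level", 1.0) < 0.2
            and possible_destination["type"] == "gas_station"):
        return f"Fuel is low. Navigate to {possible_destination['name']}?"
    if possible_destination.get("frequent", False):
        return f"You often go to {possible_destination['name']}. Go there now?"
    return None  # no hint warranted for this status/destination pair

def decide_navigation(hint, user_input):
    """Based on the input received in response to the hint, decide
    whether to direct the vehicle to the possible destination."""
    return hint is not None and user_input.strip().lower() in ("yes", "y")
```

If `decide_navigation` returns `True`, the control system would then route the vehicle to the destination; routing itself is outside this sketch.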
Abstract:
A vehicle configured to operate in an autonomous mode may engage in an obstacle evaluation technique that includes employing a sensor system to collect data relating to a plurality of obstacles, identifying from the plurality of obstacles an obstacle pair including a first obstacle and a second obstacle, engaging in an evaluation process by comparing the data collected for the first obstacle to the data collected for the second obstacle, and in response to engaging in the evaluation process, making a determination of whether the first obstacle and the second obstacle are two separate obstacles.
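The pairwise evaluation above can be sketched as a comparison of the data collected for the two obstacles, here reduced to position and speed. The thresholds and field names are assumptions chosen for illustration; the disclosure does not specify them.

```python
import math

def are_separate_obstacles(obs_a, obs_b,
                           distance_threshold=1.5,
                           velocity_threshold=0.5):
    """Compare data collected for an obstacle pair and determine
    whether they are likely two separate obstacles rather than one
    fragmented detection."""
    dx = obs_a["x"] - obs_b["x"]
    dy = obs_a["y"] - obs_b["y"]
    distance = math.hypot(dx, dy)
    speed_diff = abs(obs_a["speed"] - obs_b["speed"])
    # Far apart, or moving at clearly different speeds -> separate obstacles.
    return distance > distance_threshold or speed_diff > velocity_threshold
```

Two nearby detections with matching velocities would be treated as a single obstacle under this rule.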
Abstract:
Aspects of the disclosure relate generally to detecting road weather conditions. Vehicle sensors including a laser, precipitation sensors, and/or a camera may be used to detect information such as the brightness of the road, variations in the brightness of the road, brightness of the world, current precipitation, as well as the detected height of the road. Information received from other sources, such as network-based weather information (forecasts, radar, precipitation reports, etc.), may also be considered. The combination of the received and detected information may be used to estimate the probability of precipitation such as water, snow, or ice on the roadway. This information may then be used to maneuver an autonomous vehicle (for steering, accelerating, or braking) or to identify dangerous situations.
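The combination step above can be sketched as a weighted fusion of the detected and received cues into a single precipitation probability. The specific features, weights, and normalization are assumptions made for this sketch, not values from the disclosure.

```python
def precipitation_probability(road_brightness_variation,
                              precipitation_sensor,
                              forecast_prob,
                              road_height_change):
    """Combine detected and received cues (each normalized to [0, 1])
    into a precipitation probability via a weighted average."""
    weights = {
        "brightness": 0.3,   # wet or icy roads reflect laser/camera light differently
        "sensor": 0.4,       # onboard precipitation sensor reading
        "forecast": 0.2,     # network-based weather information
        "height": 0.1,       # e.g. snow raising the detected road surface
    }
    score = (weights["brightness"] * road_brightness_variation
             + weights["sensor"] * precipitation_sensor
             + weights["forecast"] * forecast_prob
             + weights["height"] * road_height_change)
    return min(max(score, 0.0), 1.0)  # clamp to a valid probability
```

A learned model could replace the fixed weights; the clamped weighted average is just the simplest fusion that matches the description.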
Abstract:
Methods and systems are disclosed for cross-validating a second sensor with a first sensor. Cross-validating the second sensor may include obtaining sensor readings from the first sensor and comparing the sensor readings from the first sensor with sensor readings obtained from the second sensor. In particular, the comparison of the sensor readings may include comparing state information about a vehicle detected by the first sensor and the second sensor. In addition, comparing the sensor readings may include obtaining a first image from the first sensor, obtaining a second image from the second sensor, and then comparing various characteristics of the images. One characteristic that may be compared is the object label applied to the vehicle detected by the first and second sensors. The first and second sensors may be different types of sensors.
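The comparison step above can be sketched as a consistency check between the two sensors' readings for the same detected vehicle: state information (here position and speed) must agree within tolerances, and the applied object labels must match. The field names and tolerances are illustrative assumptions.

```python
def cross_validate(first_reading, second_reading,
                   position_tolerance=0.5, speed_tolerance=0.3):
    """Return True if the second sensor's reading is consistent with
    the first sensor's reading for the same detected vehicle."""
    position_ok = (abs(first_reading["position"] - second_reading["position"])
                   <= position_tolerance)
    speed_ok = (abs(first_reading["speed"] - second_reading["speed"])
                <= speed_tolerance)
    # Compare the object labels each sensor applied to the detection.
    label_ok = first_reading["label"] == second_reading["label"]
    return position_ok and speed_ok and label_ok
```

A failed check would flag the second sensor (e.g. a camera validated against a laser) as potentially unreliable.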
Abstract:
Aspects of the disclosure relate generally to methods and systems for improving object detection and classification. An example system may include a perception system and a feedback system. The perception system may be configured to receive data indicative of a surrounding environment of a vehicle, and to classify one or more portions of the data as representative of a type of object based on parameters associated with a machine learning classifier. The feedback system may be configured to request feedback regarding a classification of an object by the perception system based on a confidence level associated with the classification being below a threshold, and to cause the parameters associated with the machine learning classifier to be modified based on information provided in response to the request.
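The feedback trigger above can be sketched as a threshold check on the classifier's confidence: low-confidence classifications are queued for feedback, high-confidence ones pass through. The class interface and threshold value are stand-ins for illustration, not the disclosure's actual machine learning components.

```python
class FeedbackSystem:
    """Queues feedback requests for classifications whose confidence
    falls below a threshold."""

    def __init__(self, confidence_threshold=0.7):
        self.confidence_threshold = confidence_threshold
        self.pending_requests = []

    def review(self, object_id, label, confidence):
        """Return True if feedback was requested for this classification."""
        if confidence < self.confidence_threshold:
            self.pending_requests.append((object_id, label, confidence))
            return True   # feedback requested; parameters may be updated later
        return False      # classification accepted as-is
```

Responses to the queued requests would then drive the parameter updates described above; the update itself is outside this sketch.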
Abstract:
Methods and systems are disclosed for determining sensor degradation by actively controlling an autonomous vehicle. Determining sensor degradation may include obtaining sensor readings from a sensor of an autonomous vehicle, and determining baseline state information from the obtained sensor readings. A movement characteristic of the autonomous vehicle, such as speed or position, may then be changed. The sensor may then obtain additional sensor readings, and second state information may be determined from these additional sensor readings. Expected state information may be determined from the baseline state information and the change in the movement characteristic of the autonomous vehicle. A comparison of the expected state information and the second state information may then be performed. Based on this comparison, a determination may be made as to whether the sensor has degraded.
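The comparison described above can be sketched in one dimension: predict the expected reading from the baseline state plus the commanded change in movement, then compare against the sensor's actual second reading. Reducing state to a scalar position and using a fixed tolerance are simplifying assumptions for this sketch.

```python
def sensor_degraded(baseline_position, commanded_speed, elapsed_time,
                    observed_position, tolerance=0.5):
    """Determine whether a sensor has degraded by comparing its
    observed reading against the reading expected after actively
    changing the vehicle's movement."""
    # Expected state = baseline state + effect of the commanded movement.
    expected_position = baseline_position + commanded_speed * elapsed_time
    return abs(expected_position - observed_position) > tolerance
```

A healthy sensor tracks the commanded motion closely; a reading far from the expectation suggests degradation.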