Abstract:
Aspects of the disclosure relate generally to detecting road weather conditions. Vehicle sensors, including a laser, precipitation sensors, and/or a camera, may be used to detect information such as the brightness of the road, variations in the brightness of the road, the brightness of the world, current precipitation, and the detected height of the road. Information received from other sources, such as network-based weather information (forecasts, radar, precipitation reports, etc.), may also be considered. The combination of the received and detected information may be used to estimate the probability of precipitation such as water, snow, or ice on the roadway. This information may then be used to maneuver an autonomous vehicle (by steering, accelerating, or braking) or to identify dangerous situations.
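One way the combination of detected and received indicators could work is a simple weighted fusion into a single probability-like score. This is a minimal illustrative sketch, not the patented method; the feature names, weights, and normalization are all assumptions.

```python
# Hypothetical sketch: combine road-weather indicators (each normalized
# to [0, 1]) into a wet-road score via a weighted average. The weights
# are illustrative assumptions.

def wet_road_probability(road_brightness, brightness_variation,
                         precipitation_detected, forecast_rain_prob):
    """Fuse onboard and network-based indicators into one score."""
    weights = {
        "brightness": 0.25,     # wet asphalt reflects more light
        "variation": 0.25,      # puddles cause brightness variation
        "precipitation": 0.30,  # onboard precipitation sensor
        "forecast": 0.20,       # network-based weather report
    }
    return (weights["brightness"] * road_brightness
            + weights["variation"] * brightness_variation
            + weights["precipitation"] * (1.0 if precipitation_detected else 0.0)
            + weights["forecast"] * forecast_rain_prob)

# Bright, variable road surface with rain detected and a 90% forecast
p = wet_road_probability(0.8, 0.7, True, 0.9)
```

A high score could then feed the vehicle's planner to slow down or flag a dangerous situation.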
Abstract:
The present invention relates to using image content to facilitate navigation in panoramic image data. In an embodiment, a computer-implemented method for navigating in panoramic image data includes: (1) determining an intersection of a ray and a virtual model, wherein the ray extends from a camera viewport of an image and the virtual model comprises a plurality of facade planes; (2) retrieving a panoramic image; (3) orienting the panoramic image to the intersection; and (4) displaying the oriented panoramic image.
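Step (1) above, intersecting a ray from the camera viewport with a facade plane of the virtual model, can be sketched with standard ray-plane geometry. The plane representation (point plus normal) and function names are illustrative assumptions.

```python
# Hypothetical sketch: intersect a viewing ray with a facade plane.
# The plane is given by a point on it and its normal vector.

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the ray/plane intersection point, or None when the ray
    is parallel to the plane or the hit lies behind the camera."""
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(dot) < 1e-9:
        return None  # ray parallel to plane
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, origin, plane_normal)) / dot
    if t < 0:
        return None  # intersection behind the camera viewport
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin looking down +z toward a facade plane at z = 10
hit = ray_plane_intersection((0, 0, 0), (0, 0, 1), (0, 0, 10), (0, 0, 1))
```

The resulting point would then serve as the target toward which the retrieved panoramic image is oriented.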
Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
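The storage-and-retrieval side of this idea can be sketched as a small store that associates user content with a 3D location and returns annotations near a queried location. The class, its methods, and the radius query are illustrative assumptions; the projection onto the 3D model is assumed to have already produced the location.

```python
# Hypothetical sketch: store annotations as (3D location, content) pairs
# and retrieve those near a queried location for display with other
# images of the same place.

import math

class AnnotationStore:
    def __init__(self):
        self._annotations = []  # list of (location, content) pairs

    def add(self, location, content):
        """Associate user-entered content with a 3D model location."""
        self._annotations.append((location, content))

    def near(self, location, radius):
        """Return the content of annotations within `radius` of a point."""
        return [content for loc, content in self._annotations
                if math.dist(loc, location) <= radius]

store = AnnotationStore()
store.add((10.0, 2.0, 5.0), "Blue awning over the cafe entrance")
matches = store.near((10.5, 2.0, 5.0), radius=1.0)
```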
Abstract:
A method and apparatus are provided for controlling the operation of an autonomous vehicle. According to one aspect, the autonomous vehicle may track the trajectories of other vehicles on a road. Based on the other vehicles' trajectories, the autonomous vehicle may generate a pool of combined trajectories. Subsequently, the autonomous vehicle may select one of the combined trajectories as a representative trajectory. The representative trajectory may be used to change at least one of the speed or direction of the autonomous vehicle.
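One plausible selection rule for a representative trajectory is a medoid-style choice: pick the pool member closest to the pointwise mean of the pool. This is an illustrative assumption, not the patented selection criterion; trajectories are modeled as equal-length lists of (x, y) points.

```python
# Hypothetical sketch: select a representative trajectory from a pool
# as the member with the smallest summed squared distance to the
# pool's pointwise mean.

def mean_trajectory(pool):
    """Pointwise mean of equal-length (x, y) trajectories."""
    n = len(pool)
    return [(sum(t[i][0] for t in pool) / n, sum(t[i][1] for t in pool) / n)
            for i in range(len(pool[0]))]

def representative_trajectory(pool):
    mean = mean_trajectory(pool)
    def cost(traj):
        return sum((x - mx) ** 2 + (y - my) ** 2
                   for (x, y), (mx, my) in zip(traj, mean))
    return min(pool, key=cost)

pool = [
    [(0, 0), (1, 0.0), (2, 0.0)],  # straight ahead
    [(0, 0), (1, 0.1), (2, 0.2)],  # slight drift
    [(0, 0), (1, 1.0), (2, 2.0)],  # veering off
]
rep = representative_trajectory(pool)
```

The chosen trajectory could then drive the speed or steering change mentioned in the abstract.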
Abstract:
A light detection and ranging (LIDAR) device with dynamically adjustable angular resolution, for use as a sensor providing environmental information for navigating an autonomous vehicle, is disclosed. A first region of a scanning zone is scanned while emitting light pulses at a first pulse rate, and a second region of the scanning zone is scanned while emitting light pulses at a second pulse rate different from the first pulse rate. Information from the LIDAR device indicative of the time delays between the emission of the light pulses and the reception of the corresponding returning light pulses is received. A three-dimensional point map is generated in which the resolution in the first region is based on the first pulse rate and the resolution in the second region is based on the second pulse rate.
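The link between pulse rate and point-map resolution can be made concrete: with the scanner sweeping at a fixed angular speed, consecutive samples are separated by (angular speed / pulse rate), so a higher pulse rate yields a denser region. The numbers below are illustrative assumptions, not specifications from the disclosure.

```python
# Hypothetical sketch: angular spacing between consecutive LIDAR range
# samples as a function of pulse rate, at a fixed sweep speed.

def angular_resolution_deg(angular_speed_deg_per_s, pulse_rate_hz):
    """Angle between consecutive range samples, in degrees."""
    return angular_speed_deg_per_s / pulse_rate_hz

# Assumed 10 Hz rotation (3600 deg/s): baseline vs. boosted pulse rate
coarse = angular_resolution_deg(3600.0, 20_000)  # first region
fine = angular_resolution_deg(3600.0, 80_000)    # second region
```

Quadrupling the pulse rate in the second region quarters the angular spacing, giving that part of the point map four times the density.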
Abstract:
Methods and systems are disclosed for cross-validating a second sensor with a first sensor. Cross-validating the second sensor may include obtaining sensor readings from the first sensor and comparing the sensor readings from the first sensor with sensor readings obtained from the second sensor. In particular, the comparison of the sensor readings may include comparing state information about a vehicle detected by the first sensor and the second sensor. In addition, comparing the sensor readings may include obtaining a first image from the first sensor, obtaining a second image from the second sensor, and then comparing various characteristics of the images. One characteristic that may be compared is the object label applied to the vehicle detected by the first and second sensors. The first and second sensors may be different types of sensors.
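The state-information comparison could look like the sketch below: two sensors' readings of the same vehicle agree when position and speed match within tolerance and the object labels coincide. The field names and tolerances are illustrative assumptions.

```python
# Hypothetical sketch: cross-validate one sensor against another by
# comparing detected vehicle state (position, speed) and object label.

def cross_validate(reading_a, reading_b, pos_tol=0.5, speed_tol=1.0):
    """Return True when two sensors' readings of the same vehicle agree
    within tolerance and carry the same object label."""
    dx = reading_a["x"] - reading_b["x"]
    dy = reading_a["y"] - reading_b["y"]
    position_ok = (dx * dx + dy * dy) ** 0.5 <= pos_tol
    speed_ok = abs(reading_a["speed"] - reading_b["speed"]) <= speed_tol
    label_ok = reading_a["label"] == reading_b["label"]
    return position_ok and speed_ok and label_ok

# Two different sensor types observing the same vehicle
lidar = {"x": 12.0, "y": 3.0, "speed": 8.2, "label": "car"}
radar = {"x": 12.3, "y": 3.1, "speed": 8.6, "label": "car"}
agree = cross_validate(lidar, radar)
```

A persistent disagreement would suggest the second sensor has drifted or failed.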
Abstract:
A method is provided for processing an image in which only the parts of the image that appear above a point on a horizon line are analyzed to identify an object. In one embodiment, the distance between the object and a vehicle is determined, and at least one of the speed and direction of the vehicle is changed when it is determined that the distance is less than the range of a sensor. The method is not limited to vehicular applications and may be used in any application where computer vision is used to identify objects in an image.
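Restricting analysis to the region above the horizon can be as simple as slicing the image at the horizon row. The row-of-pixels representation below is an illustrative assumption (row 0 is the top of the image).

```python
# Hypothetical sketch: keep only the image rows above a horizon row,
# so object identification runs on a smaller region.

def above_horizon(image_rows, horizon_row):
    """Return the rows above the horizon line (rows are top-down)."""
    return image_rows[:horizon_row]

image = [[1, 1, 1],   # sky
         [2, 2, 2],   # sky
         [3, 3, 3],   # road
         [4, 4, 4]]   # road
region = above_horizon(image, horizon_row=2)
```

Cutting the search region this way reduces the pixels a detector must examine, which is the efficiency the abstract is after.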
Abstract:
Methods and systems for object detection using laser point clouds are described herein. In an example implementation, a computing device may receive laser data indicative of a vehicle's environment from a sensor and generate a two-dimensional (2D) range image that includes pixels indicative of respective positions of objects in the environment based on the laser data. The computing device may modify the 2D range image to provide values to given pixels that map to portions of objects in the environment lacking laser data, which may involve providing values to the given pixels based on the average value of the pixels neighboring the given pixels. Additionally, the computing device may determine normal vectors of sets of pixels that correspond to surfaces of objects in the environment based on the modified 2D range image and may use the normal vectors to provide object recognition information to systems of the vehicle.
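The gap-filling step described above can be sketched directly: each missing pixel receives the mean of its valid neighbors. Representing the range image as a nested list with `None` for missing returns, and using a 4-connected neighborhood, are illustrative assumptions.

```python
# Hypothetical sketch: fill gaps in a 2D range image by averaging the
# valid 4-connected neighbors of each missing pixel (`None` = no
# laser return).

def fill_missing(range_image):
    """Replace each missing pixel with the mean of its valid neighbors,
    leaving it missing if no neighbor has a value."""
    rows, cols = len(range_image), len(range_image[0])
    filled = [row[:] for row in range_image]
    for r in range(rows):
        for c in range(cols):
            if range_image[r][c] is not None:
                continue
            neighbors = [range_image[nr][nc]
                         for nr, nc in ((r - 1, c), (r + 1, c),
                                        (r, c - 1), (r, c + 1))
                         if 0 <= nr < rows and 0 <= nc < cols
                         and range_image[nr][nc] is not None]
            if neighbors:
                filled[r][c] = sum(neighbors) / len(neighbors)
    return filled

image = [[5.0, None, 5.0],
         [5.0, 6.0, 5.0]]
result = fill_missing(image)
```

With the gaps filled, neighboring pixels form consistent surfaces from which the normal vectors mentioned in the abstract can be estimated.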
Abstract:
Example methods and systems for detecting reflective markers at long range are provided. An example method includes receiving laser data collected from successive scans of an environment of a vehicle, the laser data being indicative of one or more objects. The method also includes determining a respective size of the one or more objects based on the laser data collected from the respective successive scans. The method may further include determining, by a computing device and based at least in part on the respective size of the one or more objects for the respective successive scans, an object that exhibits a change in size as a function of distance from the vehicle. The method may also include determining that the object is representative of a reflective marker. In one example, a computing device may use the detection of one reflective marker to help detect subsequent reflective markers in similar positions.
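The size-versus-distance test might be sketched as follows: across successive scans, a reflective marker's measured size grows consistently as the vehicle closes the distance. The pair representation and thresholds are illustrative assumptions, not the claimed criteria.

```python
# Hypothetical sketch: flag an object whose measured size grows as the
# vehicle approaches across every consecutive pair of scans.

def exhibits_size_change(observations, min_growth=0.1):
    """`observations` is a list of (distance_m, size_m) pairs from
    successive scans, in scan order. Return True when the size grows by
    at least `min_growth` each time the distance shrinks."""
    for (d0, s0), (d1, s1) in zip(observations, observations[1:]):
        if not (d1 < d0 and s1 >= s0 + min_growth):
            return False
    return True

# Measured size grows as the vehicle approaches: marker candidate
scans = [(80.0, 0.2), (60.0, 0.5), (40.0, 0.9)]
is_marker_candidate = exhibits_size_change(scans)
```

An object that passes this test could then seed the search for further markers at similar lateral positions, as the abstract suggests.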
Abstract:
Aspects of the disclosure relate generally to detecting discrete actions by traveling vehicles. The features described improve the safety, use, driver experience, and performance of autonomously controlled vehicles by performing a behavior analysis on mobile objects in the vicinity of an autonomous vehicle. Specifically, an autonomous vehicle can detect and track nearby vehicles and determine when those vehicles have performed actions of interest by comparing their tracked movements with map data.
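One concrete action of interest is a lane change, which can be detected by comparing a tracked vehicle's lateral positions against lane boundaries taken from map data. The flat, fixed-width lane model below is an illustrative assumption, not the disclosed map representation.

```python
# Hypothetical sketch: detect a lane change by mapping tracked lateral
# offsets (meters from the road edge) to lane indices from map data.

def lane_index(y, lane_width=3.5):
    """Map a lateral offset to a lane index under a fixed-width model."""
    return int(y // lane_width)

def detect_lane_change(track_ys, lane_width=3.5):
    """Return True when the tracked positions cross a lane boundary."""
    lanes = [lane_index(y, lane_width) for y in track_ys]
    return any(a != b for a, b in zip(lanes, lanes[1:]))

# Tracked vehicle drifts from lane 0 (y < 3.5 m) into lane 1
changed = detect_lane_change([1.2, 1.8, 2.9, 4.0, 4.2])
```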