Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
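A minimal sketch of this projection step, assuming a pinhole camera with intrinsics K and pose (R, t) (so a world point X projects as x ~ K(R·X + t)) and approximating the model surface near the selection with a plane; the plane approximation and all names are illustrative, not the patent's implementation:

```python
import numpy as np

def pixel_to_ray(u, v, K, R, t):
    """Back-project pixel (u, v) into a world-space ray (origin, direction)."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction in camera frame
    d_world = R.T @ d_cam                             # rotate into world frame
    origin = -R.T @ t                                 # camera center in world frame
    return origin, d_world / np.linalg.norm(d_world)

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Intersect the ray with a plane; None if parallel or behind the camera."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:
        return None
    s = ((plane_point - origin) @ plane_normal) / denom
    return origin + s * direction if s > 0 else None

annotations = {}  # 3D location (tuple) -> user-entered content

def annotate(u, v, content, K, R, t, plane_point, plane_normal):
    """Project the user's 2D selection onto the model and store the annotation."""
    origin, direction = pixel_to_ray(u, v, K, R, t)
    location = intersect_plane(origin, direction, plane_point, plane_normal)
    if location is not None:
        annotations[tuple(np.round(location, 3))] = content
    return location
```

Retrieval for another image then reduces to looking up stored locations that fall within that image's view.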
Abstract:
The technology uses image content to facilitate navigation in panoramic image data. Aspects include providing a first image including a plurality of avatars, in which each avatar corresponds to an object within the first image, and determining an orientation of at least one of the plurality of avatars relative to a point of interest within the first image. A viewport is determined for a first avatar in accordance with its orientation relative to the point of interest, which is included within the first avatar's viewport. In response to received user input, a second image is selected that includes at least a second avatar and the point of interest from the first image. A viewport of the second avatar in the second image is determined, and the second image is oriented to align the second avatar's viewport with the point of interest, providing navigation between the first and second images.
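A hedged sketch of the orientation step, assuming avatars and the point of interest have known 2D map positions and that each panorama stores the compass heading of its center column; the coordinate convention (north = +y) and all names are assumptions:

```python
import math

def heading_to_poi(avatar_xy, poi_xy):
    """Compass heading (degrees, north = +y) from an avatar to the point of interest."""
    dx, dy = poi_xy[0] - avatar_xy[0], poi_xy[1] - avatar_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def viewport_yaw(pano_heading, avatar_xy, poi_xy):
    """Yaw to apply to a panorama so its viewport centers on the point of interest."""
    return (heading_to_poi(avatar_xy, poi_xy) - pano_heading) % 360.0

# Orient both images on the same POI so the transition between them stays aligned.
yaw_first = viewport_yaw(pano_heading=90.0, avatar_xy=(0.0, 0.0), poi_xy=(10.0, 10.0))
yaw_second = viewport_yaw(pano_heading=270.0, avatar_xy=(5.0, 0.0), poi_xy=(10.0, 10.0))
```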
Abstract:
Example methods and systems for detecting weather conditions using vehicle onboard sensors are provided. An example method includes receiving laser data collected for an environment of a vehicle, where the laser data includes a plurality of laser data points. The method also includes associating, by a computing device, laser data points of the plurality of laser data points with one or more objects in the environment, and determining that given laser data points of the plurality that are unassociated with the one or more objects in the environment are representative of an untracked object. The method also includes, based on one or more untracked objects being determined, identifying, by the computing device, an indication of a weather condition of the environment.
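An illustrative sketch of the association test, with assumed thresholds and names (the disclosure does not specify these): returns that fall outside every tracked object's neighborhood are treated as an untracked object, and a large number of them suggests airborne precipitation:

```python
import numpy as np

def unassociated_points(points, object_centers, radius=1.0):
    """Return laser points farther than `radius` from every tracked object center."""
    mask = np.ones(len(points), dtype=bool)
    for center in object_centers:
        mask &= np.linalg.norm(points - center, axis=1) > radius
    return points[mask]

def weather_indicated(points, object_centers, min_untracked=50):
    """Flag a possible weather condition when untracked returns are numerous."""
    return len(unassociated_points(points, object_centers)) >= min_untracked
```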
Abstract:
Aspects of the disclosure relate generally to detecting road weather conditions. Vehicle sensors including a laser, precipitation sensors, and/or a camera may be used to detect information such as the brightness of the road, variations in the brightness of the road, the brightness of the world, current precipitation, and the detected height of the road. Information received from other sources, such as network-based weather information (forecasts, radar, precipitation reports, etc.), may also be considered. The combination of the received and detected information may be used to estimate the probability of precipitation such as water, snow, or ice on the roadway. This information may then be used to maneuver an autonomous vehicle (for steering, accelerating, or braking) or to identify dangerous situations.
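A toy fusion sketch, not the disclosed estimator: the listed cues are combined into a single precipitation probability. The features, weights, and threshold below are invented for illustration:

```python
def precipitation_probability(road_brightness_var, world_brightness,
                              precip_sensor_active, forecast_prob, road_height_change):
    """Blend onboard cues with network-based weather reports (all inputs in [0, 1])."""
    score = 0.25 * road_brightness_var                       # wet roads reflect unevenly
    score += 0.15 * (1.0 - world_brightness)                 # dark skies
    score += 0.25 * (1.0 if precip_sensor_active else 0.0)   # onboard precipitation sensor
    score += 0.25 * forecast_prob                            # forecasts, radar, reports
    score += 0.10 * road_height_change                       # snow raises the detected road surface
    return min(score, 1.0)

if precipitation_probability(0.8, 0.3, True, 0.9, 0.2) > 0.5:
    pass  # e.g., slow down, increase following distance, or flag a dangerous situation
```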
Abstract:
Methods and systems for detecting weather conditions including fog using vehicle onboard sensors are provided. An example method includes receiving laser data collected from scans of an environment of a vehicle, and associating, by a computing device, laser data points of the laser data with one or more objects in the environment. The method also includes comparing laser data points that are unassociated with the one or more objects in the environment with stored laser data points representative of a pattern due to fog, and, based on the comparison, identifying by the computing device an indication that a weather condition of the environment of the vehicle includes fog.
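A minimal sketch of the comparison, assuming fog shows up as a characteristic distribution of short-range unassociated returns; the stored signature, bin layout, and threshold are assumptions:

```python
import numpy as np

FOG_SIGNATURE = np.array([0.35, 0.30, 0.20, 0.10, 0.05])  # assumed stored pattern

def range_histogram(ranges, bins=5, max_range=50.0):
    """Normalized histogram of return ranges for the unassociated points."""
    hist, _ = np.histogram(ranges, bins=bins, range=(0.0, max_range))
    return hist / max(hist.sum(), 1)

def indicates_fog(unassociated_ranges, threshold=0.15):
    """True when the observed pattern is close (L1 distance) to the fog pattern."""
    return np.abs(range_histogram(unassociated_ranges) - FOG_SIGNATURE).sum() < threshold
```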
Abstract:
An autonomous vehicle configured to detect and avoid pedestrians may use information from LIDAR or other range-based sensors. An example method involves: (a) receiving, at a computing device, range data corresponding to a plurality of objects in an environment of a vehicle, wherein the range data comprises a plurality of first data points; (b) generating a spherical data set comprising a plurality of second data points, wherein spherical coordinates for each second data point are generated based on a corresponding one of the plurality of first data points; and (c) determining a two-dimensional map based on the spherical data set comprising the plurality of second data points, wherein the two-dimensional map comprises a plurality of pixels, wherein each pixel of the plurality of pixels is indicative of a plurality of parameters corresponding to the plurality of objects in the environment.
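One common way to realize such a map is a range image; the sketch below converts Cartesian range data to spherical coordinates and bins azimuth and elevation into pixels holding two illustrative channels (range and point count). The field-of-view values and names are assumptions:

```python
import numpy as np

def to_spherical(points):
    """Convert (x, y, z) range data to (range, azimuth, elevation) per point."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)
    elevation = np.arcsin(z / np.maximum(r, 1e-9))
    return r, azimuth, elevation

def range_image(points, width=512, height=64, fov_up=0.04, fov_down=-0.44):
    """Bin spherical coordinates into a 2D map; each pixel holds several parameters."""
    r, az, el = to_spherical(points)
    u = ((az / np.pi + 1.0) / 2.0 * width).astype(int) % width
    v = ((fov_up - el) / (fov_up - fov_down) * height).clip(0, height - 1).astype(int)
    img = np.zeros((height, width, 2))
    img[v, u, 0] = r                   # range channel (last point wins on collisions)
    np.add.at(img, (v, u, 1), 1.0)     # point-count channel
    return img
```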
Abstract:
Aspects of the disclosure relate generally to maneuvering autonomous vehicles. Specifically, the vehicle may use a laser to collect scan data for a section of roadway. The vehicle may access a detailed map including the section of the roadway. A disturbance indicative of an object and including a set of data points may be identified from the scan data based on the detailed map. The detailed map may also be used to estimate a heading of the disturbance. A bounding box for the disturbance may be estimated using the set of data points as well as the estimated heading. The parameters of the bounding box may then be adjusted in order to increase or maximize the average density of data points of the disturbance along the edges of the bounding box visible to the laser. This adjusted bounding box may then be used to maneuver the vehicle.
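An assumed-form sketch of the fitting step: candidate boxes at the estimated heading are scored by the density of disturbance points near their edges, and the densest is kept. For simplicity the score uses all four edges rather than only the laser-visible ones, and the search ranges, margin, and names are illustrative:

```python
import numpy as np

def edge_density(points_2d, center, heading, length, width, margin=0.1):
    """Points per meter of box perimeter lying within `margin` of an edge."""
    d = points_2d - center
    c, s = np.cos(heading), np.sin(heading)
    local_x = d[:, 0] * c + d[:, 1] * s       # rotate points into the box frame
    local_y = -d[:, 0] * s + d[:, 1] * c
    near_end = np.abs(np.abs(local_x) - length / 2) < margin
    near_side = np.abs(np.abs(local_y) - width / 2) < margin
    return np.count_nonzero(near_end | near_side) / (2 * (length + width))

def fit_box(points_2d, heading, center):
    """Search box dimensions that maximize average edge density."""
    best = None
    for length in np.arange(1.0, 6.0, 0.25):
        for width in np.arange(0.5, 3.0, 0.25):
            d = edge_density(points_2d, center, heading, length, width)
            if best is None or d > best[0]:
                best = (d, length, width)
    return best  # (score, length, width)
```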
Abstract:
A light detection and ranging device associated with an autonomous vehicle scans through a scanning zone while emitting light pulses and receives reflected signals corresponding to the light pulses. The reflected signals indicate a three-dimensional point map of the distribution of reflective points in the scanning zone. A hyperspectral sensor images a region of the scanning zone corresponding to a reflective feature indicated by the three-dimensional point map. The output from the hyperspectral sensor includes spectral information characterizing a spectral distribution of radiation received from the reflective feature. The spectral characteristics of the reflective feature allow for distinguishing solid objects from non-solid reflective features, and a map of solid objects is provided to inform real time navigation decisions.
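A hedged sketch of the classification step, with invented reference spectra: the feature's spectral distribution is compared against solid and non-solid signatures by cosine similarity, and only solids are kept for the obstacle map:

```python
import numpy as np

SOLID_REF = np.array([0.2, 0.5, 0.8, 0.6, 0.3])      # assumed solid-surface spectrum
NON_SOLID_REF = np.array([0.7, 0.6, 0.5, 0.5, 0.4])  # e.g., exhaust plume or steam

def _unit(v):
    return v / np.linalg.norm(v)

def is_solid(spectrum):
    """Nearest-reference classification of a feature's spectral distribution."""
    s = _unit(np.asarray(spectrum, dtype=float))
    return float(s @ _unit(SOLID_REF)) > float(s @ _unit(NON_SOLID_REF))

def solid_object_map(features):
    """features: iterable of (point, spectrum); keep only solid obstacles."""
    return [point for point, spectrum in features if is_solid(spectrum)]
```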
Abstract:
An example method may include receiving a first set of points based on detection of an environment of an autonomous vehicle during a first time period, selecting a plurality of points from the first set of points that form a first point cloud representing an object in the environment, receiving a second set of points based on detection of the environment during a second time period that is after the first time period, selecting a plurality of points from the second set of points that form a second point cloud representing the object in the environment, determining a transformation between the selected points from the first set of points and the selected points from the second set of points, using the transformation to determine a velocity of the object, and providing instructions to control the autonomous vehicle based at least in part on the velocity of the object.
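Assuming known one-to-one correspondences between the two selected point sets (the hard data-association step is skipped here), the transformation can be recovered with the standard Kabsch/SVD method and the velocity read off the translation; names are illustrative:

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares rotation R and translation t mapping point set p onto q."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def object_velocity(cloud_t0, cloud_t1, dt):
    """Velocity vector of the object from its translation between scans."""
    _, t = rigid_transform(cloud_t0, cloud_t1)
    return t / dt                      # e.g., meters per second along each axis
```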