Abstract:
Methods and systems for real-time road flare detection using templates and appropriate color spaces are described. A computing device of a vehicle may be configured to receive an image of an environment of the vehicle, where the image comprises a plurality of pixels. The computing device may be configured to identify given pixels in the plurality of pixels having one or more of: (i) a red color value greater than a green color value, and (ii) the red color value greater than a blue color value. Further, the computing device may be configured to make a comparison between one or more characteristics of a shape of an object represented by the given pixels in the image and corresponding one or more characteristics of a predetermined shape of a road flare, and determine, based on the comparison, a likelihood that the object represents the road flare.
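As an illustration of the pixel test and shape comparison described above, the following Python sketch filters pixels whose red channel exceeds the green or blue channel and scores the resulting candidate region against a flare template. The bounding-box aspect-ratio comparison, the template value, the tolerance, and the likelihood formula are illustrative assumptions, not details from the abstract.

```python
import numpy as np


def flare_candidate_mask(image):
    """Return a boolean mask of pixels whose red channel exceeds the green
    or blue channel (a rough proxy for flare-colored pixels).

    `image` is assumed to be an H x W x 3 uint8 array in RGB order.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return (r > g) | (r > b)


def flare_likelihood(candidate_mask, template_aspect_ratio=1.0, tolerance=0.3):
    """Compare the bounding-box aspect ratio of the candidate pixels to a
    predetermined flare shape and return a crude likelihood in [0, 1].
    """
    ys, xs = np.nonzero(candidate_mask)
    if len(xs) == 0:
        return 0.0
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    aspect = width / height
    # Relative deviation from the template shape, mapped to a score.
    error = abs(aspect - template_aspect_ratio) / template_aspect_ratio
    return max(0.0, 1.0 - error / tolerance)
```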
Abstract:
Disclosed herein are systems and methods for providing supplemental identification abilities to an autonomous vehicle system. The vehicle may include a sensor unit configured to receive data indicating an environment of the vehicle and a control system configured to operate the vehicle. The vehicle may also include a processing unit configured to analyze the data indicating the environment to determine at least one object having a detection confidence below a threshold. Based on the at least one object having a detection confidence below the threshold, the processor may communicate at least a subset of the data indicating the environment for further processing. The vehicle may also be configured to receive an indication of an object confirmation of the subset of the data. Based on the object confirmation of the subset of the data, the processor may alter the control of the vehicle by the control system.
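A minimal sketch of the confidence-gated escalation described above might look like the following; the threshold value, the `Detection` fields, and the `request_confirmation` and `control_system.adjust_for` hooks are hypothetical placeholders rather than details taken from the abstract.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.6  # assumed value; the abstract does not specify one


@dataclass
class Detection:
    label: str
    confidence: float
    data_subset: bytes  # the portion of environment data covering the object


def request_confirmation(detection):
    """Stub for handing the data subset off to a remote assistant or a
    heavier perception pipeline; a real system would be asynchronous."""
    return True  # hypothetical: assume the object is confirmed


def handle_detections(detections, control_system):
    for det in detections:
        if det.confidence < CONFIDENCE_THRESHOLD:
            # Uncertain detection: forward only the relevant data subset
            # for further processing, then act on the confirmation.
            if request_confirmation(det):
                control_system.adjust_for(det.label)  # hypothetical hook
```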
Abstract:
Disclosed herein are methods and systems for determining a location of an object within an environment. An example method may include determining a three-dimensional (3D) location of a plurality of reference points in an environment, receiving a two-dimensional (2D) image of a portion of the environment that contains an object, selecting, from the plurality of reference points, certain reference points that, when projected into the 2D image, form a polygon containing at least a portion of the object, determining an intersection point of a ray directed toward the object and a 3D polygon formed by the selected reference points, and based on the intersection point of the ray directed toward the object and the 3D polygon formed by the selected reference points, determining a 3D location of the object in the environment.
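The intersection of a ray directed toward the object with the 3D polygon formed by the selected reference points can be computed, for the simplest case of a triangle, with the standard Möller–Trumbore test. The sketch below assumes NumPy arrays for the ray origin, ray direction, and the three reference points; larger polygons could be handled by triangulating them first.

```python
import numpy as np


def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection.

    Returns the 3D intersection point, or None if the ray misses the
    triangle formed by reference points v0, v1, v2.
    """
    edge1 = v1 - v0
    edge2 = v2 - v0
    h = np.cross(direction, edge2)
    a = np.dot(edge1, h)
    if abs(a) < eps:
        return None  # ray is parallel to the triangle plane
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, edge1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(edge2, q)
    if t <= eps:
        return None  # intersection lies behind the ray origin
    return origin + t * direction  # estimated 3D location of the object
```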
Abstract:
Methods and systems are provided that may allow an autonomous vehicle to discern a school bus from image data. An example method may include receiving image data indicative of vehicles operating in an environment. The image data may depict sizes of the vehicles. The method may also include, based on relative sizes of the vehicles, determining a vehicle that is larger in size as compared to the other vehicles. The method may additionally include comparing a size of the determined vehicle to a size of a school bus and, based on the size of the vehicle being within a threshold size of the school bus, comparing a color of the vehicle to a color of the school bus. The method may further include, based on the vehicle being substantially the same color as the school bus, determining that the vehicle is representative of the school bus.
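A rough sketch of the size-then-color check described above is shown below; the bus length, size tolerance, reference yellow, and color-distance threshold are assumed values, since the abstract does not specify them.

```python
import numpy as np

# Illustrative constants; actual thresholds and the reference color are
# not given in the abstract.
SCHOOL_BUS_LENGTH_M = 12.0
SIZE_TOLERANCE_M = 2.0
SCHOOL_BUS_RGB = np.array([255.0, 216.0, 0.0])  # approximate school bus yellow
COLOR_DISTANCE_THRESHOLD = 60.0


def find_school_bus(vehicle_lengths_m, vehicle_mean_colors_rgb):
    """Return the index of the vehicle judged to be a school bus, or None.

    `vehicle_lengths_m` holds estimated vehicle lengths in meters and
    `vehicle_mean_colors_rgb` the corresponding mean RGB colors taken
    from the image data.
    """
    # Step 1: pick the largest vehicle among those observed.
    largest = int(np.argmax(vehicle_lengths_m))
    # Step 2: compare its size to the expected school bus size.
    if abs(vehicle_lengths_m[largest] - SCHOOL_BUS_LENGTH_M) > SIZE_TOLERANCE_M:
        return None
    # Step 3: compare its color to school bus yellow.
    color = np.asarray(vehicle_mean_colors_rgb[largest], dtype=float)
    if np.linalg.norm(color - SCHOOL_BUS_RGB) > COLOR_DISTANCE_THRESHOLD:
        return None
    return largest
```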
Abstract:
An autonomous vehicle may be configured to use environmental information for image processing. The vehicle may be configured to operate in an autonomous mode in an environment and may be operating substantially in a lane of travel of the environment. The vehicle may include a sensor configured to receive image data indicative of the environment. The vehicle may also include a computer system configured to compare environmental information indicative of the lane of travel to the image data so as to determine a portion of the image data that corresponds to the lane of travel of the environment. Based on the portion of the image data that corresponds to the lane of travel of the environment and by disregarding a remaining portion of the image data, the vehicle may determine whether an object is present in the lane, and based on the determination, provide instructions to control the vehicle in the autonomous mode in the environment.
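One way the lane-restricted processing described above could be sketched is shown below; the `lane_polygon_px` input, the `detector` callable, and the bounding-box approximation of the lane region are assumptions for illustration, and a real system would rasterize the projected lane boundary properly (for example with cv2.fillPoly).

```python
import numpy as np


def lane_region_mask(image_shape, lane_polygon_px):
    """Build a boolean mask that is True only inside the lane of travel.

    `lane_polygon_px` is assumed to be the lane boundary, from prior
    environmental information, projected into integer image coordinates.
    A simple bounding-box approximation keeps the sketch dependency-free.
    """
    mask = np.zeros(image_shape[:2], dtype=bool)
    xs = [p[0] for p in lane_polygon_px]
    ys = [p[1] for p in lane_polygon_px]
    mask[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = True
    return mask


def object_in_lane(image, lane_polygon_px, detector):
    """Run the detector only on in-lane pixels, disregarding the rest."""
    mask = lane_region_mask(image.shape, lane_polygon_px)
    lane_pixels = np.where(mask[..., None], image, 0)  # zero out non-lane pixels
    return detector(lane_pixels)
```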