Abstract:
Systems and methods are provided for vehicle navigation. The systems and methods may detect traffic lights. For example, one or more traffic lights may be detected using redundant camera detection paths, using a fusion of information from a traffic light transmitter and one or more cameras, using contrast enhancement for night images, or using low-resolution traffic light candidate identification followed by high-resolution candidate analysis. Additionally, the systems and methods may navigate based on a worst time to red estimation.
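As an illustration of the last of these detection paths, the sketch below finds bright candidate regions on a downscaled copy of a frame and then re-examines each candidate at full resolution. It is a minimal sketch only, assuming OpenCV 4 with illustrative HSV thresholds and scale factor; it is not the claimed detection method.

```python
# Minimal sketch (not the claimed method): two-stage traffic-light search in which
# candidates are found on a downscaled image and re-examined at full resolution.
# The color thresholds and scale factor are illustrative assumptions.
import cv2
import numpy as np

def find_candidates_low_res(image_bgr, scale=0.25):
    """Locate bright, saturated blobs in a downscaled copy of the frame."""
    small = cv2.resize(image_bgr, None, fx=scale, fy=scale)
    hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)
    # Bright, saturated pixels are treated as possible lamp emissions.
    mask = cv2.inRange(hsv, (0, 100, 180), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Map the box back to full-resolution coordinates.
        boxes.append((int(x / scale), int(y / scale), int(w / scale), int(h / scale)))
    return boxes

def analyze_candidate_high_res(image_bgr, box):
    """Re-examine one candidate in the original image and report its dominant hue."""
    x, y, w, h = box
    patch = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hue = float(np.median(patch[..., 0]))
    if hue < 15 or hue > 165:
        return "red"
    if 35 < hue < 90:
        return "green"
    return "unknown"
```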
Abstract:
A system for navigating an autonomous vehicle along a road segment is disclosed. The system may have at least one processor. The processor may be programmed to receive, from an image capture device, a plurality of images representative of an environment of the autonomous vehicle. The processor may also be programmed to determine a traveled trajectory along the road segment based on analysis of the images. Further, the processor may be programmed to determine a current location of the autonomous vehicle along a predetermined road model trajectory based on analysis of one or more of the plurality of images. The processor may also be programmed to determine a heading direction based on the determined traveled trajectory. In addition, the processor may be programmed to determine a steering direction, relative to the heading direction, by comparing the traveled trajectory to the predetermined road model trajectory at the current location of the autonomous vehicle.
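One illustrative reading of the steering-direction step is sketched below: the heading is taken from the most recent segment of the traveled trajectory, and the steering direction is the signed angle from that heading to a look-ahead point on the predetermined road model trajectory. The planar (N, 2) trajectory representation and the look-ahead distance are assumptions, not details from the disclosure.

```python
# Minimal sketch, assuming trajectories are (N, 2) NumPy arrays in a common planar frame
# and that the traveled trajectory has at least two points.
import numpy as np

def steering_direction(traveled, model, lookahead=5.0):
    heading_vec = traveled[-1] - traveled[-2]            # heading from the traveled path
    heading = np.arctan2(heading_vec[1], heading_vec[0])

    current = traveled[-1]                               # current location estimate
    dists = np.linalg.norm(model - current, axis=1)
    idx = int(np.argmin(dists))                          # nearest model-trajectory point

    # Walk forward along the model trajectory by the look-ahead distance.
    target = model[min(idx + 1, len(model) - 1)]
    acc = 0.0
    for j in range(idx, len(model) - 1):
        acc += np.linalg.norm(model[j + 1] - model[j])
        target = model[j + 1]
        if acc >= lookahead:
            break

    to_target = target - current
    desired = np.arctan2(to_target[1], to_target[0])
    # Signed steering angle relative to the current heading, wrapped to [-pi, pi].
    return (desired - heading + np.pi) % (2 * np.pi) - np.pi
```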
Abstract:
Systems and methods for navigating a host vehicle are disclosed. In one implementation, at least one processor is programmed to receive at least one image captured by a camera from an environment of the host vehicle; analyze the at least one image to identify a representation of a lane of travel of the host vehicle along a road segment and a representation of at least one additional lane of travel along the road segment; analyze the at least one image to identify an attribute associated with the at least one additional lane of travel; determine, based on the attribute, information indicative of a characterization of the at least one additional lane of travel; and send the information indicative of the characterization of the at least one additional lane of travel to a server for use in updating a road navigation model.
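A minimal sketch of the reporting step follows, assuming a JSON payload and a server endpoint invented for illustration (the disclosure specifies neither); it only shows how a lane characterization might be packaged and sent for use in updating a road navigation model.

```python
# Minimal sketch with a hypothetical report schema and endpoint URL.
import json
import urllib.request
from dataclasses import dataclass, asdict

@dataclass
class LaneCharacterization:
    road_segment_id: str
    lane_offset: int           # e.g. +1 = one lane to the left of the host lane
    attribute: str             # observed attribute, e.g. "hov_marking"
    characterization: str      # e.g. "hov_lane", "bike_lane", "turn_only"

def send_characterization(report: LaneCharacterization,
                          url: str = "https://example.com/road-model/updates"):
    # Hypothetical endpoint; shown only to illustrate packaging and transmission.
    payload = json.dumps(asdict(report)).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)
```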
Abstract:
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a traffic light detection system is provided for a vehicle. One or more processing devices associated with the system receive at least one image of an area forward of the vehicle via a data interface, with the area including at least one traffic lamp fixture having at least one traffic light. The processing device(s) determine, based on at least one indicator of vehicle position, whether the vehicle is in a turn lane. Also, the processing device(s) process the received image(s) to determine the status of the traffic light, including whether the traffic light includes an arrow. Further, the system may cause a system response based on the determination of the status of the traffic light, whether the traffic light includes an arrow, and whether the vehicle is in a turn lane.
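The sketch below illustrates only the final decision step, assuming the traffic light status, arrow presence, and turn-lane determination are supplied by upstream modules; the mapping from those three inputs to a system response is an illustrative assumption rather than the claimed logic.

```python
# Minimal sketch: combine light status, arrow presence, and turn-lane membership
# into a single system response. The response labels are illustrative.
from enum import Enum

class LightStatus(Enum):
    RED = "red"
    YELLOW = "yellow"
    GREEN = "green"

def system_response(status: LightStatus, has_arrow: bool, in_turn_lane: bool) -> str:
    if in_turn_lane and has_arrow:
        # In a turn lane, the arrow signal governs rather than the circular lamp.
        return "proceed_with_turn" if status is LightStatus.GREEN else "stop"
    if status is LightStatus.GREEN:
        return "proceed"
    if status is LightStatus.YELLOW:
        return "prepare_to_stop"
    return "stop"
```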
Abstract:
A navigation system may include a processor programmed to analyze a first image to identify a non-semantic road feature; identify a first image location, in the first image, of one point associated with the non-semantic road feature; analyze a second image to identify a representation of the non-semantic road feature in the second image; identify a second image location, in the second image, of the one point associated with the non-semantic road feature; determine, based on a difference between the first and second image locations and based on motion information for a vehicle between a capture of the first image and a capture of the second image, three-dimensional coordinates for the one point associated with the non-semantic road feature; and send the three-dimensional coordinates for the one point associated with the non-semantic road feature to a server for use in updating a road navigation model.
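A standard way to realize the three-dimensional coordinate step is linear (DLT) two-view triangulation, sketched below under the assumption of known pinhole intrinsics K and a relative rotation R and translation t derived from the vehicle's motion between the two captures; the disclosure does not specify this particular formulation.

```python
# Minimal sketch: linear (DLT) triangulation of one tracked point from two views,
# assuming pinhole intrinsics K and relative pose (R, t) from vehicle motion.
import numpy as np

def triangulate_point(pt1, pt2, K, R, t):
    """pt1, pt2: pixel coordinates (u, v) of the same point in images 1 and 2."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at the origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])            # camera 2 from vehicle motion

    u1, v1 = pt1
    u2, v2 = pt2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                                  # 3-D point in camera-1 frame
```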
Abstract:
Systems and methods are provided for interacting with a plurality of autonomous vehicles. In one implementation, a system may include at least one processor. The at least one processor may be programmed to receive, from each of the plurality of autonomous vehicles, navigational situation information associated with an occurrence of an adjustment to a determined navigational maneuver; analyze the navigational situation information; determine, based on the analysis of the navigational situation information, whether the adjustment to the determined navigational maneuver was due to a transient condition; and update a predetermined model representative of at least one road segment if the adjustment to the determined navigational maneuver was not due to a transient condition.
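The sketch below shows one possible server-side treatment, in which adjustments attributed to an illustrative set of transient causes are discarded and the model is updated only when several vehicles report agreeing adjustments at roughly the same location; the cause labels and agreement threshold are assumptions, not details from the disclosure.

```python
# Minimal sketch: filter out adjustments due to transient conditions and fold the
# rest into the road model once enough vehicles agree. Cause labels are illustrative.
from collections import defaultdict

TRANSIENT_CAUSES = {"pedestrian", "parked_car", "animal", "debris_cleared"}

class ModelUpdater:
    def __init__(self, min_reports=3):
        self.min_reports = min_reports
        self.reports = defaultdict(list)   # keyed by (segment_id, rounded location)

    def receive(self, segment_id, location, cause, adjustment):
        if cause in TRANSIENT_CAUSES:
            return False                   # transient condition: do not update the model
        key = (segment_id, round(location, 1))
        self.reports[key].append(adjustment)
        if len(self.reports[key]) >= self.min_reports:
            self.update_model(segment_id, location, self.reports[key])
            return True
        return False

    def update_model(self, segment_id, location, adjustments):
        # Placeholder: merge the agreeing adjustments into the stored road model.
        pass
```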