Abstract:
Systems and methods of processing crowdsourced navigation information for use in autonomous vehicle navigation are disclosed. A method may include processing, by a mapping server, crowdsourced navigation information obtained by sensors coupled to a plurality of vehicles, wherein the navigation information describes road lanes of a road segment; collecting data about landmarks identified proximate to the road segment, the landmarks including a traffic sign; generating, by the mapping server, an autonomous vehicle map for the road segment, wherein the autonomous vehicle map includes a spline corresponding to a lane in the road segment and the landmarks identified proximate to the road segment; and distributing, by the mapping server, the autonomous vehicle map to an autonomous vehicle for use in autonomous navigation over the road segment.
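The map-generation step above can be sketched in Python as follows. This is a minimal illustration, not the claimed implementation: it assumes each vehicle reports a lane trace sampled at common arc positions, averages the traces into one centerline, and represents the lane as a Catmull-Rom spline (one common smooth-curve choice; the patent does not specify the spline type). All function names are illustrative.

```python
# Sketch (assumption-laden): fuse crowdsourced lane traces into a centerline
# and evaluate a Catmull-Rom spline through it.

def fuse_traces(traces):
    """Average per-vehicle lane traces sampled at the same arc positions."""
    n = len(traces)
    return [
        (sum(t[i][0] for t in traces) / n, sum(t[i][1] for t in traces) / n)
        for i in range(len(traces[0]))
    ]

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom spline segment between p1 and p2 at t in [0, 1]."""
    def axis(a, b, c, d):
        return 0.5 * (2 * b + (c - a) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (3 * b - a - 3 * c + d) * t ** 3)
    return (axis(p0[0], p1[0], p2[0], p3[0]),
            axis(p0[1], p1[1], p2[1], p3[1]))
```

For example, two traces `[(0, 0), (1, 1)]` and `[(0, 2), (1, 3)]` fuse to the centerline `[(0.0, 1.0), (1.0, 2.0)]`; the spline then interpolates smoothly between consecutive centerline points.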
Abstract:
The present disclosure relates to systems and methods for host vehicle navigation. In one implementation, a navigation system for a host vehicle may include at least one processing device programmed to receive, from a camera, a plurality of images representative of an environment of the host vehicle; analyze the images to identify a target vehicle in the environment of the host vehicle; cause a navigational change of the host vehicle to signal to the target vehicle an intent of the host vehicle to make a subsequent navigational maneuver; analyze the images to detect a change in a navigational state of the target vehicle; determine a navigational action for the host vehicle; and cause an adjustment of a navigational actuator of the host vehicle in response to the determined navigational action for the host vehicle.
Abstract:
A system for navigating a host vehicle may: receive, from an image capture device, an image representative of an environment of the host vehicle; determine a navigational action for accomplishing a navigational goal of the host vehicle; analyze the image to identify a target vehicle in the environment of the host vehicle; determine a next-state distance between the host vehicle and the target vehicle that would result if the navigational action was taken; determine a maximum braking capability of the host vehicle, a maximum acceleration capability of the host vehicle, and a speed of the host vehicle; determine a stopping distance for the host vehicle; determine a speed of the target vehicle and assume a maximum braking capability of the target vehicle; and implement the navigational action if the stopping distance for the host vehicle is less than the next-state distance summed together with a target vehicle travel distance.
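The braking-distance comparison described above reduces to simple constant-deceleration kinematics. The sketch below is an illustration of that comparison only, not the claimed system: it assumes a stopping distance of v²/(2·b) for a braking rate b, and the function names are ours.

```python
# Sketch of the safe-distance test: implement the action only if the host
# can stop short of the next-state distance plus the distance the target
# covers while braking at its (assumed) maximum rate.

def stopping_distance(speed, max_brake):
    """Distance (m) to stop from `speed` (m/s) at constant deceleration
    `max_brake` (m/s^2), i.e. v^2 / (2 b)."""
    return speed ** 2 / (2.0 * max_brake)

def action_is_safe(next_state_distance, host_speed, host_max_brake,
                   target_speed, target_max_brake):
    """True if the host's stopping distance is less than the next-state
    distance summed with the target's maximum-braking travel distance."""
    host_stop = stopping_distance(host_speed, host_max_brake)
    target_travel = stopping_distance(target_speed, target_max_brake)
    return host_stop < next_state_distance + target_travel
```

For instance, a host at 10 m/s with 5 m/s² braking needs 10 m to stop; if the next-state gap is 5 m and the target (also 10 m/s, 5 m/s²) travels 10 m while braking, the action passes the test (10 < 15).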
Abstract:
An autonomous system includes a processing device programmed to receive, from an image capture device, an image of an environment of a host vehicle; detect an obstacle in the environment, based on an analysis of the image; monitor a driver input to at least one of a throttle control, a brake control, or a steering control associated with the host vehicle; determine whether the driver input would result in the host vehicle navigating within a proximity buffer relative to the obstacle; allow the driver input to cause a corresponding change in one or more host vehicle motion control systems, if the processing device determines that the driver input would not result in the host vehicle navigating within the proximity buffer relative to the obstacle; and prevent the driver input from causing the change if the driver input would result in the host vehicle navigating within the proximity buffer relative to the obstacle.
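The allow/prevent decision above can be sketched as a gate on the driver input. This is an illustration under stated assumptions, not the patented mechanism: `predict_distance` stands in for whatever motion model predicts the closest approach to the obstacle that a candidate input would produce, and returning `None` stands in for suppressing the input.

```python
# Sketch: pass the driver's throttle/brake/steering input through to the
# motion control systems only when it keeps the host vehicle outside the
# proximity buffer around the detected obstacle.

def apply_driver_input(driver_input, predict_distance, proximity_buffer):
    """`predict_distance` (an assumed stand-in for a motion model) maps the
    candidate input to the predicted closest distance to the obstacle."""
    if predict_distance(driver_input) > proximity_buffer:
        return driver_input   # allow: stays outside the buffer
    return None               # prevent: would enter the buffer
```

With a toy predictor where more throttle closes the gap, e.g. `predict = lambda inp: 10.0 - 8.0 * inp["throttle"]` and a 2 m buffer, half throttle (predicted 6 m) is allowed while full throttle (predicted 2 m) is suppressed.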
Abstract:
A system for navigating a host vehicle may receive an image representative of an environment of the host vehicle and determine a planned navigational action for accomplishing a navigational goal of the host vehicle. The system may identify a target vehicle, determine a current speed of the target vehicle, and assume a maximum braking rate capability of the target vehicle. The system may determine a next-state distance between the host vehicle and the target vehicle that would result if the planned navigational action was taken. The system may implement the planned navigational action if the host vehicle can be stopped using a predetermined sub-maximal braking rate within a distance that is less than the determined next-state distance summed together with a target vehicle travel distance determined based on the current speed of the target vehicle and the maximum braking rate capability of the target vehicle.
Abstract:
A method of estimating a time to collision (TTC) of a vehicle with an object, comprising: acquiring a plurality of images of the object; and determining, from the images, a TTC that is responsive to a relative velocity and a relative acceleration between the vehicle and the object.
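A TTC that accounts for both relative velocity and relative acceleration is the smallest positive root of the range equation d(t) = d₀ + v·t + ½·a·t². The sketch below solves that quadratic; it illustrates only this kinematic step, not how the patent extracts v and a from the images, and the sign convention (v < 0 when closing) is our assumption.

```python
import math

def time_to_collision(distance, rel_velocity, rel_accel):
    """Smallest positive t with distance + rel_velocity*t + 0.5*rel_accel*t^2 = 0.
    Sign convention (an assumption): rel_velocity < 0 when the vehicle is
    closing on the object. Returns None if no collision is predicted."""
    a, b, c = 0.5 * rel_accel, rel_velocity, distance
    if abs(a) < 1e-12:                      # constant relative velocity
        return -c / b if b < 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                          # ranges never reach zero
    roots = ((-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a))
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else None
```

For example, a 20 m gap closing at a constant 10 m/s gives a TTC of 2 s; the same gap with zero relative velocity but a closing acceleration of 4 m/s² gives √10 ≈ 3.16 s.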
Abstract:
Systems and methods are provided for autonomous navigation based on user intervention. In one implementation, a navigation system for a vehicle may include at least one processor. The at least one processor may be programmed to receive, from a camera, at least one environmental image associated with the vehicle, determine a navigational maneuver for the vehicle based on analysis of the at least one environmental image, cause the vehicle to initiate the navigational maneuver, receive a user input associated with a user's navigational response different from the initiated navigational maneuver, determine navigational situation information relating to the vehicle based on the received user input, and store the navigational situation information in association with information relating to the user input.
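The final storage step can be sketched as a record that ties the situation information to the user's overriding input. This is a minimal illustration; the field names and the in-memory list are our assumptions, not the patent's data model.

```python
import time
from dataclasses import dataclass, field

@dataclass
class InterventionRecord:
    """Navigational situation stored in association with the user input
    (illustrative fields, not from the patent)."""
    planned_maneuver: str   # maneuver the system initiated
    user_input: str         # the user's different navigational response
    situation: dict         # e.g. speed, detected objects, lane geometry
    timestamp: float = field(default_factory=time.time)

def log_intervention(store, planned_maneuver, user_input, situation):
    """Append the situation information together with the user-input info."""
    record = InterventionRecord(planned_maneuver, user_input, situation)
    store.append(record)
    return record
```

A system might later mine such records to learn where its planned maneuvers diverge from driver expectations.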
Abstract:
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a traffic light detection system is provided for a vehicle. One or more processing devices associated with the system receive at least one image of an area forward of the vehicle via a data interface, with the area including at least one traffic lamp fixture having at least one traffic light. The processing device(s) determine, based on at least one indicator of vehicle position, whether the vehicle is in a turn lane. Also, the processing device(s) process the received image(s) to determine the status of the traffic light, including whether the traffic light includes an arrow. Further, the system may cause a system response based on the determination of the status of the traffic light, whether the traffic light includes an arrow, and whether the vehicle is in a turn lane.
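The system response depends on three determinations made above: the light's status, whether it includes an arrow, and whether the vehicle is in a turn lane. The sketch below is an illustrative policy only; the abstract does not fix the exact rules, so the conservative handling of mismatched cases is our assumption.

```python
def traffic_light_response(status, has_arrow, in_turn_lane):
    """Illustrative decision logic (an assumption, not the claimed policy):
    an arrow governs the vehicle when it is in the turn lane, a circular
    light governs it otherwise, and any mismatched pairing stops."""
    if in_turn_lane and has_arrow:
        return "proceed_with_turn" if status == "green" else "stop"
    if not in_turn_lane and not has_arrow:
        return "proceed" if status == "green" else "stop"
    return "stop"   # light does not govern this lane: be conservative
```

For example, a green arrow seen from the turn lane yields `"proceed_with_turn"`, while the same green arrow seen from a through lane yields `"stop"`.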