Abstract:
System and method to send map data to a vehicle based on the potential travel envelope of the vehicle. The shape of the envelope is determined based on the speed, location, and direction of travel of the vehicle.
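As a rough illustration of the envelope idea, the sketch below (Python) computes a speed-scaled bounding box extended along the direction of travel; the horizon time, lateral padding, and the function name compute_travel_envelope are illustrative assumptions, not taken from the abstract.

```python
import math

def compute_travel_envelope(lat, lon, speed_mps, heading_deg,
                            horizon_s=60.0, half_width_m=2000.0):
    """Sketch: approximate the vehicle's potential travel envelope as a
    bounding box stretched along the direction of travel. The reach of
    the box grows with speed (distance reachable within horizon_s)."""
    reach_m = speed_mps * horizon_s  # farthest point plausibly reachable
    heading = math.radians(heading_deg)
    # Project the reach point ahead of the vehicle (flat-earth approximation).
    dx = reach_m * math.sin(heading)  # east offset in meters
    dy = reach_m * math.cos(heading)  # north offset in meters
    meters_per_deg_lat = 111_320.0
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(lat))
    far_lat = lat + dy / meters_per_deg_lat
    far_lon = lon + dx / meters_per_deg_lon
    # Pad the box sideways so lateral deviations are still covered.
    pad_lat = half_width_m / meters_per_deg_lat
    pad_lon = half_width_m / meters_per_deg_lon
    return (min(lat, far_lat) - pad_lat, min(lon, far_lon) - pad_lon,
            max(lat, far_lat) + pad_lat, max(lon, far_lon) + pad_lon)

# A map server could then return only the map tiles intersecting this box.
print(compute_travel_envelope(37.4, -122.1, speed_mps=30.0, heading_deg=45.0))
```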
Abstract:
Systems and methods are provided for vehicle navigation. In one implementation, a navigation system for a host vehicle includes at least one processor programmed to: receive, from a camera of the host vehicle, one or more images captured from an environment of the host vehicle; analyze the one or more images to detect an indicator of an intersection; determine, based on output received from at least one sensor of the host vehicle, a stopping location of the host vehicle relative to the detected intersection; analyze the one or more images to determine an indicator of whether one or more other vehicles are in front of the host vehicle; and send the stopping location of the host vehicle and the indicator of whether one or more other vehicles are in front of the host vehicle to a server for use in updating a road navigation model.
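The reporting step could look roughly like the following sketch; the StopReport fields, the detections and odometry dictionaries, and the intersection identifier are hypothetical stand-ins for the image-analysis and sensor outputs described above.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class StopReport:
    """Hypothetical payload sent to the server for updating the road model."""
    intersection_id: str
    stop_distance_m: float        # stopping location relative to the intersection
    vehicles_ahead: bool          # indicator derived from image analysis

def build_stop_report(detections, odometry, intersection_id):
    """Sketch: combine image-derived detections with sensor output.
    `detections` and `odometry` are hypothetical dicts produced elsewhere."""
    report = StopReport(
        intersection_id=intersection_id,
        stop_distance_m=odometry["distance_to_stop_line_m"],
        vehicles_ahead=bool(detections.get("vehicles_in_front", [])),
    )
    return json.dumps(asdict(report))

# Example: the host stopped 7.5 m before the stop line with no vehicle ahead.
payload = build_stop_report({"vehicles_in_front": []},
                            {"distance_to_stop_line_m": 7.5},
                            intersection_id="node_1234")
print(payload)  # could be sent to the crowdsourcing server
```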
Abstract:
Systems and methods are disclosed for navigating a host vehicle. In one implementation, at least one processing device may be programmed to receive an image representative of an environment of the host vehicle, determine a planned navigational action for the host vehicle, analyze the image to identify a target vehicle with a direction of travel toward the host vehicle, and determine a next-state distance between the host vehicle and the target vehicle that would result if the planned navigational action were taken. The at least one processing device may further determine a stopping distance for the host vehicle based on a braking rate, a maximum acceleration capability, and a current speed of the host vehicle, determine a stopping distance for the target vehicle based on a braking rate, a maximum acceleration capability, and a current speed of the target vehicle, and implement the planned navigational action if the determined next-state distance is greater than a sum of the stopping distances for the host vehicle and the target vehicle.
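The safety comparison lends itself to a compact sketch. The version below folds in a response time during which the vehicle may still accelerate at its maximum rate before braking; that response time, the parameter values, and the function names are assumptions added for illustration, not terms from the abstract.

```python
def stopping_distance(speed_mps, max_accel, brake_rate, response_s=0.5):
    """Sketch of a worst-case stopping distance: the vehicle may keep
    accelerating at max_accel for response_s seconds before braking at
    brake_rate. The response time is an added assumption."""
    v_after = speed_mps + max_accel * response_s
    travel_during_response = speed_mps * response_s + 0.5 * max_accel * response_s ** 2
    braking_distance = v_after ** 2 / (2.0 * brake_rate)
    return travel_during_response + braking_distance

def action_is_safe(next_state_distance_m, host, target):
    """Implement the planned action only if the next-state gap exceeds the
    sum of both stopping distances (host and oncoming target)."""
    return next_state_distance_m > (stopping_distance(**host) +
                                    stopping_distance(**target))

host = {"speed_mps": 15.0, "max_accel": 2.0, "brake_rate": 5.0}
target = {"speed_mps": 20.0, "max_accel": 2.0, "brake_rate": 5.0}
print(action_is_safe(120.0, host, target))  # True -> proceed with the planned action
```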
Abstract:
Systems and techniques for vehicle environment modeling with a camera are described herein. A time-ordered sequence of images representative of a road surface may be obtained. An image from this sequence is a current image. A data set may then be provided to an artificial neural network (ANN) to produce a three-dimensional structure of a scene. Here, the data set includes a portion of the sequence of images that includes the current image, motion of the sensor from which the images were obtained, and an epipole. The road surface is then modeled using the three-dimensional structure of the scene.
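A sketch of how such a data set might be assembled and fed to a network is shown below (PyTorch); the tiny RoadStructureNet, the way sensor motion and the epipole are encoded as extra channels, and all shapes are illustrative assumptions, not the described architecture.

```python
import torch
import torch.nn as nn

class RoadStructureNet(nn.Module):
    """Hypothetical stand-in for the ANN in the abstract: it takes a stack of
    images plus channels encoding camera motion and the epipole, and outputs a
    per-pixel structure value (e.g. height relative to the road plane)."""
    def __init__(self, num_frames=3):
        super().__init__()
        in_ch = num_frames + 2  # grayscale frames + motion map + epipole map
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def build_input(frames, ego_motion, epipole, h=128, w=256):
    """Sketch: stack the current and previous frames with two extra channels
    broadcasting the forward motion magnitude and the epipole x-coordinate."""
    motion_map = torch.full((1, h, w), ego_motion["forward_m"])
    epipole_map = torch.full((1, h, w), epipole[0] / w)  # normalized epipole x
    return torch.cat([frames, motion_map, epipole_map], dim=0).unsqueeze(0)

frames = torch.rand(3, 128, 256)                     # time-ordered grayscale frames
x = build_input(frames, {"forward_m": 0.8}, epipole=(130.0, 60.0))
structure = RoadStructureNet()(x)                    # three-dimensional structure map
print(structure.shape)                               # torch.Size([1, 1, 128, 256])
```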
Abstract:
A system for navigating a host vehicle may include at least one processing device. The at least one processing device may be programmed to receive, from an image capture device, at least one image representative of an environment of the host vehicle. The at least one processing device may also be programmed to analyze the at least one image to identify an object in the environment of the host vehicle. The at least one processing device may also be programmed to determine a location of the host vehicle. The at least one processing device may also be programmed to receive map information associated with the determined location of the host vehicle, wherein the map information includes elevation information associated with the environment of the host vehicle. The at least one processing device may also be programmed to determine a distance from the host vehicle to the object based on at least the elevation information. The at least one processing device may further be programmed to determine a navigational action for the host vehicle based on the determined distance.
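A simplified version of an elevation-aware range estimate is sketched below, assuming a pinhole camera and an object whose base rests on the road; the parameter names and the use of a single elevation delta taken from map data are illustrative assumptions.

```python
def distance_to_object(y_pixel, focal_px, cam_height_m,
                       horizon_y_px, road_elevation_delta_m=0.0):
    """Sketch: estimate range to an object whose base touches the road at
    image row y_pixel, using a pinhole model and the camera's height above
    the road. The elevation delta (object's road patch minus host's road
    patch, from map elevation data) corrects the effective camera height."""
    effective_height = cam_height_m - road_elevation_delta_m
    row_offset = y_pixel - horizon_y_px
    if row_offset <= 0:
        raise ValueError("object base must lie below the horizon row")
    return focal_px * effective_height / row_offset

# Flat road vs. a road that rises 0.5 m under the object: the corrected
# estimate is shorter, which matters when choosing a navigational action.
print(distance_to_object(460, focal_px=1000, cam_height_m=1.5, horizon_y_px=400))
print(distance_to_object(460, focal_px=1000, cam_height_m=1.5, horizon_y_px=400,
                         road_elevation_delta_m=0.5))
```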
Abstract:
An imaging system is provided for a vehicle. In one implementation, the imaging system includes an imaging module, a first camera coupled to the imaging module, a second camera coupled to the imaging module, and a mounting assembly configured to attach the imaging module to the vehicle such that the first and second camera face outward with respect to the vehicle. The first camera has a first field of view and a first optical axis, and the second camera has a second field of view and a second optical axis. The first optical axis crosses the second optical axis in at least one crossing point of a crossing plane. The first camera is focused a first horizontal distance beyond the crossing point of the crossing plane and the second camera is focused a second horizontal distance beyond the crossing point of the crossing plane.
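The crossing-point geometry can be illustrated with a short sketch; the toe-in angle, baseline, and focus margins used here are example values, not figures from the abstract.

```python
import math

def axis_crossing_distance(baseline_m, toe_in_deg):
    """Sketch: for two cameras separated by baseline_m and each toed in by
    toe_in_deg toward the other, the optical axes cross at this forward
    distance from the camera plane (simple planar geometry)."""
    return (baseline_m / 2.0) / math.tan(math.radians(toe_in_deg))

def focus_distances(baseline_m, toe_in_deg, margin1_m, margin2_m):
    """Each camera is focused a given horizontal distance beyond the crossing
    point; the margin values here are illustrative."""
    cross = axis_crossing_distance(baseline_m, toe_in_deg)
    return cross, cross + margin1_m, cross + margin2_m

cross, f1, f2 = focus_distances(baseline_m=0.3, toe_in_deg=2.0,
                                margin1_m=5.0, margin2_m=10.0)
print(f"axes cross at {cross:.1f} m; focus at {f1:.1f} m and {f2:.1f} m")
```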
Abstract:
In one embodiment, a navigation system (100) for a host vehicle (200) may comprise at least one processing device (110). The processing device (110) may be programmed to receive a plurality of images representative of an environment of the host vehicle (200). The processing device (110) may also be programmed to analyze the plurality of images to identify at least one navigational state of the host vehicle (200). The processing device (110) may also be programmed to identify a jurisdiction based on at least one indicator of a location of the host vehicle (200), the at least one indicator based at least in part on an analysis of the plurality of images. The processing device (110) may also be programmed to determine at least one navigational rule specific to the identified jurisdiction. The processing device (110) may also be programmed to cause a navigational change based on the identified navigational state and based on the determined at least one navigational rule.
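A toy version of the jurisdiction-specific rule lookup might look like the sketch below; the rule table, jurisdiction codes, and navigational-state labels are invented for illustration.

```python
# Sketch: a hypothetical rule table keyed by jurisdiction. The jurisdiction
# would be inferred from image cues (e.g. sign conventions) combined with a
# location indicator; here it is simply passed in.
NAV_RULES = {
    "DE": {"right_turn_on_red": False, "min_overtake_gap_m": 1.5},
    "US-CA": {"right_turn_on_red": True, "min_overtake_gap_m": 0.9},
}

def plan_change(nav_state, jurisdiction):
    """Pick a navigational change consistent with the jurisdiction's rules."""
    rules = NAV_RULES.get(jurisdiction, NAV_RULES["DE"])  # conservative default
    if nav_state == "stopped_at_red_wants_right_turn":
        return "turn_right" if rules["right_turn_on_red"] else "hold"
    return "hold"

print(plan_change("stopped_at_red_wants_right_turn", "US-CA"))  # turn_right
print(plan_change("stopped_at_red_wants_right_turn", "DE"))     # hold
```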
Abstract:
Systems and methods are provided for navigating an autonomous vehicle. In one implementation, a system includes at least one processing device programmed to receive a plurality of images representative of an environment of the host vehicle. The environment includes a road on which the host vehicle is traveling. The at least one processing device is further programmed to analyze the images to identify a target vehicle traveling in a lane of the road different from a lane in which the host vehicle is traveling; analyze the images to identify a lane mark associated with the lane in which the target vehicle is traveling; detect lane mark characteristics of the identified lane mark; use the detected lane mark characteristics to determine a type of the identified lane mark; determine a characteristic of the target vehicle; and determine a navigational action for the host vehicle based on the determined lane mark type and the determined characteristic of the target vehicle.
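One way the determined lane-mark type and target-vehicle characteristic could be combined is sketched below; the classification rules, the 0.3 m/s lateral-drift threshold, and the action labels are illustrative assumptions.

```python
def classify_lane_mark(dashed, color):
    """Sketch: map detected lane-mark characteristics to a mark type."""
    if color == "yellow":
        return "crossing_allowed" if dashed else "no_crossing"
    return "dashed_white" if dashed else "solid_white"

def choose_action(mark_type, target_signaling_toward_host, target_lateral_speed):
    """If the mark allows the target vehicle to cross into the host lane and
    the target looks like it intends to (signal or lateral drift), yield."""
    may_cross = mark_type in ("dashed_white", "crossing_allowed")
    intends_to_cross = target_signaling_toward_host or target_lateral_speed > 0.3
    if may_cross and intends_to_cross:
        return "decelerate_to_open_gap"
    return "maintain_speed"

mark = classify_lane_mark(dashed=True, color="white")
print(choose_action(mark, target_signaling_toward_host=True,
                    target_lateral_speed=0.1))  # decelerate_to_open_gap
```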
Abstract:
Systems and methods are provided for determining a road profile along a predicted path. In one implementation, a system includes at least one image capture device configured to acquire a plurality of images of an area in a vicinity of a user vehicle; a data interface; and at least one processing device configured to receive the plurality of images captured by the image capture device through the data interface; and compute a profile of a road along one or more predicted paths of the user vehicle. At least one of the one or more predicted paths is predicted based on image data.
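A minimal sketch of sampling a road profile along a predicted path is shown below, assuming the road surface is available as a height grid in vehicle coordinates; the grid, resolution, and path used here are illustrative.

```python
import numpy as np

def road_profile_along_path(height_map, path_xy, resolution_m=0.5):
    """Sketch: sample a road-surface height model (a hypothetical 2-D grid in
    vehicle coordinates, resolution_m per cell) along a predicted path given
    as (x, y) points ahead of the vehicle, returning (distance, height) pairs."""
    profile = []
    dist = 0.0
    prev = path_xy[0]
    for x, y in path_xy:
        dist += float(np.hypot(x - prev[0], y - prev[1]))
        row = int(round(y / resolution_m))
        col = int(round(x / resolution_m))
        profile.append((dist, float(height_map[row, col])))
        prev = (x, y)
    return profile

# A gentle 5 cm bump 10 m ahead on a straight predicted path.
grid = np.zeros((100, 40))
grid[20, 0] = 0.05
path = [(0.0, y) for y in np.arange(0.0, 20.0, 1.0)]
print(road_profile_along_path(grid, path)[:3])
```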