Abstract:
A system mountable in a vehicle to provide object detection in the vicinity of the vehicle. The system includes a camera operatively attached to a processor. The camera is mounted externally at the rear of the vehicle, with its field of view directed substantially in the forward direction of travel of the vehicle, along the side of the vehicle. Multiple image frames are captured from the camera. The yaw of the vehicle may be provided as an input, or it may be computed from the image frames. Respective portions of the image frames are selected responsive to the yaw of the vehicle, and the image frames are processed to detect an object in the selected portions.
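The abstract does not specify how the yaw-dependent portion is chosen; the following minimal sketch assumes a pinhole camera and a simple linear mapping from yaw to a horizontal pixel offset, with all names and parameters (`select_roi`, `horizontal_fov_rad`, the ROI width) being illustrative rather than taken from the patent.

```python
# Minimal sketch (not the patented method): shift a fixed-size region of
# interest horizontally within each frame in proportion to vehicle yaw,
# so the processed window tracks the area alongside the vehicle.
# All names and parameters below are illustrative assumptions.
import numpy as np

def select_roi(frame: np.ndarray, yaw_rad: float,
               roi_width: int = 640, horizontal_fov_rad: float = 1.0) -> np.ndarray:
    """Return the portion of `frame` to search, chosen from the yaw angle."""
    h, w = frame.shape[:2]
    # Map yaw to a pixel offset: pixels_per_radian is an assumed linear model.
    pixels_per_radian = w / horizontal_fov_rad
    center = int(w / 2 + yaw_rad * pixels_per_radian)
    left = int(np.clip(center - roi_width // 2, 0, w - roi_width))
    return frame[:, left:left + roi_width]

# Example: a synthetic 720x1280 frame and a small yaw to the left.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
roi = select_roi(frame, yaw_rad=-0.05)
print(roi.shape)  # (720, 640, 3)
```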
Abstract:
Systems and methods are provided for navigating an autonomous vehicle. In one implementation, a system for detecting whether a road on which a host vehicle travels is a one-way road may include at least one processing device. The processing device may be programmed to receive at least one image associated with an environment of the host vehicle, identify a first plurality of vehicles on a first side of the road, identify a second plurality of vehicles on a second side of the road, determine a first facing direction associated with the first plurality of vehicles, determine a second facing direction associated with the second plurality of vehicles, and cause at least one navigational change of the host vehicle when the first facing direction and the second facing direction are both opposite to a heading direction of the host vehicle.
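As a rough illustration of the decision step only (not the claimed implementation), the sketch below assumes the facing directions and host heading have already been estimated as angles in radians; `opposes` and its tolerance are hypothetical names and values.

```python
# Minimal decision sketch: given the facing directions inferred for vehicles
# on each side of the road, flag a navigational change when both oppose the
# host heading. The detector that produces the angles is assumed, not shown.
import math

def opposes(heading_a: float, heading_b: float, tol_rad: float = math.pi / 4) -> bool:
    """True if the two headings point in roughly opposite directions."""
    diff = abs((heading_a - heading_b + math.pi) % (2 * math.pi) - math.pi)
    return diff > math.pi - tol_rad

def likely_one_way_against_us(first_side_facing: float,
                              second_side_facing: float,
                              host_heading: float) -> bool:
    return (opposes(first_side_facing, host_heading)
            and opposes(second_side_facing, host_heading))

# Example: host heads east (0 rad), vehicles on both sides face west (pi rad).
print(likely_one_way_against_us(math.pi, math.pi, 0.0))  # True
```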
Abstract:
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a driver-assist object detection system is provided for a vehicle. One or more processing devices associated with the system receive at least a first image and a second image from a plurality of captured images via a data interface. The device(s) analyze the first image and the second image to determine a reference plane corresponding to the roadway on which the vehicle is traveling. The processing device(s) locate a target object in both images and determine a difference in the size of at least one dimension of the target object between the two images. The system may use the difference in size to determine a height of the object. Further, the system may cause a change in at least a directional course of the vehicle if the determined height exceeds a predetermined threshold.
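The height determination can be pictured with an assumed pinhole model: apparent size scales inversely with range, so the size change between two frames plus the distance the vehicle advanced yields range, and range plus image height yields physical height. The function and parameter names below are illustrative, not from the patent.

```python
# Illustrative geometry only, under an assumed pinhole model.
def object_height_from_scale_change(h_img1_px: float, h_img2_px: float,
                                    ego_advance_m: float,
                                    focal_px: float) -> float:
    """Estimate object height (meters) from its apparent growth between frames.

    h_img1_px, h_img2_px: object image heights in the earlier/later frame.
    ego_advance_m: forward distance traveled between the two frames.
    focal_px: camera focal length in pixels.
    """
    ratio = h_img2_px / h_img1_px                    # = Z1 / Z2 for a static object
    range_later_m = ego_advance_m / (ratio - 1.0)    # Z2
    return h_img2_px * range_later_m / focal_px      # H = h_img2 * Z2 / f

# Example: an object grows from 20 px to 25 px while the car moves 4 m
# with a 1000 px focal length -> it stands ~0.4 m tall at ~16 m range.
print(object_height_from_scale_change(20, 25, 4.0, 1000.0))
```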
Abstract:
In a system mounted on a vehicle for detecting an obstruction on a surface of a window of the vehicle, a primary camera is mounted inside the vehicle behind the window. The primary camera is configured to acquire images of the environment through the window. A secondary camera is focused on an external surface of the window and operates to image the obstruction. A portion of the window, i.e., a window region, is subtended respectively by the field of view of the primary camera and the field of view of the secondary camera. A processor processes respective sequences of image data from both the primary camera and the secondary camera.
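One way to picture the secondary-camera processing (a sketch only, not the patented method) is to treat anything that images sharply and persistently on the glass as an obstruction and threshold its area within the shared window region; the crude intensity-based occlusion mask below is an assumption.

```python
# Minimal sketch: the secondary camera is focused on the glass, so an
# obstruction (dirt, a sticker) appears as a distinct blob there while the
# scene behind it stays defocused. We merely threshold how much of the shared
# window region looks occluded. The mask criterion is assumed.
import numpy as np

def obstruction_detected(secondary_region: np.ndarray,
                         intensity_threshold: int = 40,
                         area_fraction_threshold: float = 0.02) -> bool:
    """Flag an obstruction if enough of the window region images as a dark blob."""
    occluded = secondary_region < intensity_threshold     # crude occlusion mask
    return occluded.mean() > area_fraction_threshold

# Example with a synthetic 100x100 grayscale patch containing a dark spot.
patch = np.full((100, 100), 200, dtype=np.uint8)
patch[40:60, 40:60] = 10                                  # simulated smudge
print(obstruction_detected(patch))                        # True
```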
Abstract:
A computerized system mountable on a vehicle is operable to detect an object by processing first image frames from a first camera and second image frames from a second camera. A first range to the detected object is determined using the first image frames. An image location of the detected object in the first image frames is projected onto an image location in the second image frames. A second range to the detected object is determined based on both the first and second image frames. The detected object is tracked in both the first and second image frames. When the detected object leaves a field of view of the first camera, a third range is determined responsive to the second range and the second image frames.
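The handoff step can be sketched under an assumed pinhole model: once the first camera loses the object, the last jointly computed range is scaled by the apparent-size change the second camera has observed since then. The helper name and inputs below are illustrative.

```python
# Sketch of the handoff logic only, with assumed inputs.
def range_after_handoff(second_range_m: float,
                        size_at_handoff_px: float,
                        size_now_px: float) -> float:
    """Range from the second camera after the first camera loses the object.

    Uses the pinhole relation size ~ 1/range: the last range computed from
    both cameras is scaled by the size change observed since then.
    """
    return second_range_m * size_at_handoff_px / size_now_px

# Example: object was 12 m away at handoff and has since grown from 50 px
# to 60 px in the second camera -> roughly 10 m now.
print(range_after_handoff(12.0, 50.0, 60.0))  # 10.0
```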
Abstract:
A system for navigating a host vehicle may include at least one processing device. The at least one processing device may be programmed to receive, from an image capture device, at least one image representative of an environment of the host vehicle. The at least one processing device may also be programmed to analyze the at least one image to identify an object in the environment of the host vehicle. The at least one processing device may also be programmed to determine a location of the host vehicle. The at least one processing device may also be programmed to receive map information associated with the determined location of the host vehicle, wherein the map information includes elevation information associated with the environment of the host vehicle. The at least one processing device may also be programmed to determine a distance from the host vehicle to the object based on at least the elevation information. The at least one processing device may further be programmed to determine a navigational action for the host vehicle based on the determined distance.
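A minimal sketch of how elevation information might enter the distance estimate, assuming a pinhole camera and that the map supplies the ground elevation at the object's location: the elevation difference replaces the usual flat-road camera height in the classic range-from-image-row relation. All names and numbers are hypothetical.

```python
# Illustrative only: map elevation tells us how far the object's ground
# contact point sits below the camera, which replaces the flat-road
# assumption in the range-from-image-row formula.
def range_with_elevation(focal_px: float,
                         camera_mount_height_m: float,
                         host_ground_elevation_m: float,
                         object_ground_elevation_m: float,
                         pixel_rows_below_horizon: float) -> float:
    """Estimate range to an object from its image row and map elevation."""
    # Vertical drop from the camera to the object's ground contact point.
    effective_height_m = (host_ground_elevation_m + camera_mount_height_m
                          - object_ground_elevation_m)
    return focal_px * effective_height_m / pixel_rows_below_horizon

# Example: camera 1.4 m above a road at 100 m elevation, object ground at
# 98 m, object base imaged 50 px below the horizon, focal length 1000 px.
print(range_with_elevation(1000.0, 1.4, 100.0, 98.0, 50.0))  # 68.0 m
```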
Abstract:
A method is provided using a system mounted in a vehicle. The system includes a rear-viewing camera and a processor attached to the rear-viewing camera. When the driver shifts the vehicle into reverse gear, and while the vehicle is still stationary, image frames of the immediate vicinity behind the vehicle are captured. The immediate vicinity behind the vehicle is in a field of view of the rear-viewing camera. The image frames are processed to detect an object which, if present in the immediate vicinity behind the vehicle, would obstruct the motion of the vehicle. The processing is preferably performed in parallel for a plurality of classes of obstructing objects using a single image frame of the image frames.
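The parallel per-class processing of a single frame can be sketched as follows, with placeholder detectors standing in for the actual obstruction classifiers (which the abstract does not specify):

```python
# A minimal concurrency sketch, not the patented detector set: when reverse
# gear is engaged and the vehicle is still stationary, run one detector per
# obstruction class on the same single frame in parallel and report any hit.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def detect_pedestrian(frame): return False   # placeholder detectors, one
def detect_pole(frame): return False         # per class of obstructing object
def detect_vehicle(frame): return False

CLASS_DETECTORS = {"pedestrian": detect_pedestrian,
                   "pole": detect_pole,
                   "vehicle": detect_vehicle}

def scan_behind(frame: np.ndarray) -> dict:
    """Run all class detectors on the same frame concurrently."""
    with ThreadPoolExecutor(max_workers=len(CLASS_DETECTORS)) as pool:
        futures = {name: pool.submit(fn, frame)
                   for name, fn in CLASS_DETECTORS.items()}
        return {name: fut.result() for name, fut in futures.items()}

frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(scan_behind(frame))  # {'pedestrian': False, 'pole': False, 'vehicle': False}
```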
Abstract:
Systems and methods are provided for navigating an autonomous vehicle. In one implementation, a system for navigating a host vehicle based on movement of a target vehicle toward a lane being traveled by the host vehicle may include at least one processing device. The processing device may be programmed to receive a plurality of images associated with an environment of the host vehicle, analyze at least one of the plurality of images to identify the target vehicle and at least one wheel component on a side of the target vehicle, analyze a region including the at least one wheel component of the target vehicle to identify motion associated with the at least one wheel component of the target vehicle, and cause at least one navigational change of the host vehicle based on the identified motion associated with the at least one wheel component.
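As a crude stand-in for the wheel-motion analysis (the abstract does not say how motion is identified), the sketch below simply frame-differences the wheel region; real systems would more likely use optical flow or learned features, and the thresholds here are assumptions.

```python
# Sketch only: compare the wheel region across two frames and call it
# "moving" if enough pixels changed, as a cue that the target may cut in.
import numpy as np

def wheel_region_moving(wheel_patch_prev: np.ndarray,
                        wheel_patch_curr: np.ndarray,
                        diff_threshold: int = 25,
                        changed_fraction: float = 0.10) -> bool:
    """True if the wheel region changed enough between frames to imply rotation."""
    diff = np.abs(wheel_patch_curr.astype(np.int16)
                  - wheel_patch_prev.astype(np.int16))
    return (diff > diff_threshold).mean() > changed_fraction

# Example with synthetic patches where a spoke-like pattern has rotated.
prev = np.zeros((64, 64), dtype=np.uint8); prev[::8, :] = 255
curr = np.zeros((64, 64), dtype=np.uint8); curr[4::8, :] = 255
print(wheel_region_moving(prev, curr))  # True
```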
Abstract:
Systems and methods are provided for interacting with a plurality of autonomous vehicles. In one implementation, a navigation system for a vehicle may include a memory including a predetermined road model representative of at least one road segment and at least one processor. The at least one processor may be programmed to selectively receive, from the plurality of autonomous vehicles, road environment information based on navigation by the plurality of autonomous vehicles through their respective road environments, determine whether one or more updates to the predetermined road model are required based on the road environment information, and update the predetermined road model to include the one or more updates.
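One possible shape for the update decision, sketched under assumed data (a single stored model value and a list of per-vehicle reports); the tolerance and report count are illustrative, not from the patent:

```python
# Sketch of the aggregation step only: reports from many vehicles traversing
# the same road segment are compared to the stored model, and the model is
# updated only when enough independent reports disagree by more than a
# tolerance. Data shape and thresholds are assumptions.
def model_needs_update(stored_value: float,
                       reported_values: list[float],
                       tolerance: float = 0.5,
                       min_disagreeing_reports: int = 5) -> bool:
    """Decide whether crowd reports justify updating one road-model value."""
    disagreeing = [v for v in reported_values if abs(v - stored_value) > tolerance]
    return len(disagreeing) >= min_disagreeing_reports

# Example: a stored lane-marking offset of 3.0 m contradicted by six reports.
print(model_needs_update(3.0, [4.1, 4.0, 4.2, 3.9, 4.1, 4.0]))  # True
```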
Abstract:
Systems and methods are provided for determining a road profile along a predicted path. In one implementation, a system includes at least one image capture device configured to acquire a plurality of images of an area in a vicinity of a user vehicle; a data interface; and at least one processing device configured to receive, through the data interface, the plurality of images captured by the image capture device and to compute a profile of a road along one or more predicted paths of the user vehicle. At least one of the one or more predicted paths is predicted based on image data.
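Assuming a dense road-height grid has already been reconstructed from the images (the abstract leaves the reconstruction method open), the profile itself reduces to sampling that grid along the predicted path, as in this illustrative sketch:

```python
# Minimal sketch: the profile is the road height sampled at successive points
# along the predicted path. Inputs, units, and grid layout are assumed.
import numpy as np

def road_profile(height_map: np.ndarray, path_xy: np.ndarray,
                 cell_size_m: float = 0.1) -> np.ndarray:
    """Sample road height (meters) along a predicted path.

    height_map: 2D grid of road heights, indexed [row, col].
    path_xy: N x 2 array of path points in meters, in grid coordinates.
    """
    rows = np.clip((path_xy[:, 1] / cell_size_m).astype(int), 0, height_map.shape[0] - 1)
    cols = np.clip((path_xy[:, 0] / cell_size_m).astype(int), 0, height_map.shape[1] - 1)
    return height_map[rows, cols]

# Example: a gently ramped synthetic height map and a straight-ahead path.
hm = np.linspace(0, 0.5, 100)[:, None] * np.ones((100, 100))
path = np.stack([np.full(10, 5.0), np.linspace(0, 9.9, 10)], axis=1)
print(road_profile(hm, path).round(3))
```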