Abstract:
A method is provided using a system mounted in a vehicle. The system includes a rear-viewing camera and a processor attached to the rear-viewing camera. When the driver shifts the vehicle into reverse gear, and while the vehicle is still stationary, image frames of the immediate vicinity behind the vehicle are captured. The immediate vicinity behind the vehicle is in a field of view of the rear-viewing camera. The image frames are processed to detect an object which, if present in the immediate vicinity behind the vehicle, would obstruct the motion of the vehicle. The processing is preferably performed in parallel for a plurality of classes of obstructing objects using a single image frame of the image frames.
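A minimal sketch of the parallel per-class processing described above, assuming hypothetical per-class detector functions and a toy frame representation (a list of labeled objects); a real system would run trained detectors on pixel data:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-class detectors; each scans a single image frame
# (here a toy list of labeled objects) for its obstruction class.
def detect_pedestrians(frame):
    return [obj for obj in frame if obj["class"] == "pedestrian"]

def detect_poles(frame):
    return [obj for obj in frame if obj["class"] == "pole"]

def detect_vehicles(frame):
    return [obj for obj in frame if obj["class"] == "vehicle"]

def detect_obstructions(frame):
    """Run all per-class detectors in parallel on one image frame and
    return the combined list of detected obstructing objects."""
    detectors = [detect_pedestrians, detect_poles, detect_vehicles]
    with ThreadPoolExecutor(max_workers=len(detectors)) as pool:
        results = pool.map(lambda d: d(frame), detectors)
    # Flatten the per-class result lists into one list of detections.
    return [det for class_hits in results for det in class_hits]
```

Each detector operates on the same single frame, matching the abstract's preference for parallel processing of a plurality of obstruction classes over one image.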
Abstract:
Systems and methods are provided for determining a road profile along a predicted path. In one implementation, a system includes at least one image capture device configured to acquire a plurality of images of an area in a vicinity of a user vehicle; a data interface; and at least one processing device configured to receive the plurality of images captured by the image capture device through the data interface; and compute a profile of a road along one or more predicted paths of the user vehicle. At least one of the one or more predicted paths is predicted based on image data.
Abstract:
Computerized methods are performable by a driver assistance system while the host vehicle is moving. The driver assistance system includes a camera connectible to a processor. First and second image frames are captured from the field of view of the camera. Corresponding image points of the road are tracked from the first image frame to the second image frame. Image motion between the corresponding image points of the road is processed to detect a hazard in the road. Either the corresponding image points are determined to be of a moving shadow cast on the road, avoiding a false positive detection of a hazard in the road, or the corresponding image points are determined not to be of a moving shadow cast on the road, verifying detection of a hazard in the road.
Abstract:
A method of estimating a time to collision (TTC) of a vehicle with an object, comprising: acquiring a plurality of images of the object; and determining a TTC from the images that is responsive to a relative velocity and a relative acceleration between the vehicle and the object.
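Under a constant-acceleration kinematic model, the TTC responsive to both relative velocity and relative acceleration is the smallest positive root of the gap equation. A hedged sketch (the function name and sign conventions are assumptions, not the patented method; distance is the current gap, closing speed and acceleration are positive toward the object):

```python
import math

def time_to_collision(distance, closing_speed, closing_accel):
    """Smallest positive time t at which the gap closes to zero under
    constant relative acceleration:
        distance - closing_speed*t - 0.5*closing_accel*t**2 = 0
    Returns math.inf if the object is never reached.
    """
    if abs(closing_accel) < 1e-9:  # constant-velocity special case
        if closing_speed <= 0:
            return math.inf
        return distance / closing_speed
    # Solve 0.5*a*t**2 + v*t - d = 0 via the quadratic formula.
    a, v, d = closing_accel, closing_speed, distance
    disc = v * v + 2.0 * a * d
    if disc < 0:
        return math.inf  # decelerating: gap never fully closes
    roots = [(-v + math.sqrt(disc)) / a, (-v - math.sqrt(disc)) / a]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf
```

For example, a 10 m gap closing at 5 m/s with no relative acceleration yields a TTC of 2 s, while a braking target (negative closing acceleration) can yield an infinite TTC.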
Abstract:
Systems and methods are provided for detecting and responding to cut in vehicles, and for navigating while taking into consideration an altruistic behavior parameter. In one implementation, a vehicle cut in detection and response system for a host vehicle system may include a data interface and at least one processing device. The at least one processing device may be programmed to receive, via the data interface, a plurality of images from at least one image capture device associated with the host vehicle; identify, in the plurality of images, a representation of a target vehicle traveling in a first lane different from a second lane in which the host vehicle is traveling; identify, based on analysis of the plurality of images, at least one indicator that the target vehicle will change from the first lane to the second lane; detect whether at least one predetermined cut in sensitivity change factor is present in an environment of the host vehicle; cause a first navigational response in the host vehicle based on the identification of the at least one indicator and based on a value associated with a first cut in sensitivity parameter where no predetermined cut in sensitivity change factor is detected; and cause a second navigational response in the host vehicle based on the identification of the at least one indicator and based on a value associated with a second cut in sensitivity parameter where the at least one predetermined cut in sensitivity change factor is detected, the second cut in sensitivity parameter being different from the first cut in sensitivity parameter.
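The selection between the first and second navigational responses can be illustrated with a small decision function. This is a simplified sketch: the function name, the numeric sensitivity values, and the dictionary return shape are all assumptions for illustration, not the claimed implementation:

```python
def choose_cut_in_response(indicator_present, change_factor_detected,
                           first_sensitivity=0.5, second_sensitivity=0.8):
    """Select a navigational response to a detected cut-in indicator.
    When a predetermined sensitivity change factor is present in the
    host vehicle's environment, a different (here higher) cut-in
    sensitivity parameter governs the response."""
    if not indicator_present:
        return None  # no cut-in indicator, no navigational response
    if change_factor_detected:
        return {"response": "second", "sensitivity": second_sensitivity}
    return {"response": "first", "sensitivity": first_sensitivity}
```

The two sensitivity parameters differ, so the same cut-in indicator produces different responses depending on the detected environment, as the abstract describes.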
Abstract:
Systems and methods are provided for detecting and responding to cut in vehicles, and for navigating while taking into consideration an altruistic behavior parameter. In one implementation, a navigation system for a host vehicle may include a data interface and at least one processing device. The at least one processing device may be programmed to receive, via the data interface, a plurality of images from at least one image capture device associated with the host vehicle; identify, based on analysis of the plurality of images, at least one target vehicle in an environment of the host vehicle; determine, based on analysis of the plurality of images, one or more situational characteristics associated with the target vehicle; determine a current value associated with an altruistic behavior parameter; and determine based on the one or more situational characteristics associated with the target vehicle that no change in a navigation state of the host vehicle is required, but cause at least one navigational change in the host vehicle based on the current value associated with the altruistic behavior parameter and based on the one or more situational characteristics associated with the target vehicle.
Abstract:
An imaging system for a vehicle may include a first image capture device having a first field of view and configured to acquire a first image relative to a scene associated with the vehicle, the first image being acquired as a first series of image scan lines captured using a rolling shutter. The imaging system may also include a second image capture device having a second field of view different from the first field of view and that at least partially overlaps the first field of view, the second image capture device being configured to acquire a second image relative to the scene associated with the vehicle, the second image being acquired as a second series of image scan lines captured using a rolling shutter. As a result of overlap between the first field of view and the second field of view, a first overlap portion of the first image corresponds with a second overlap portion of the second image. The first image capture device has a first scan rate associated with acquisition of the first series of image scan lines that is different from a second scan rate associated with acquisition of the second series of image scan lines, such that the first image capture device acquires the first overlap portion of the first image over a period of time during which the second overlap portion of the second image is acquired.
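The timing relationship between the two rolling-shutter devices can be sketched with simple line-rate arithmetic. A hedged sketch, assuming a hypothetical midpoint-alignment criterion (the patent only requires the overlap acquisition periods to coincide; function names and the midpoint choice are illustrative assumptions):

```python
def overlap_window(start_line, end_line, scan_rate_lines_per_s, start_time=0.0):
    """Time interval (t0, t1) over which a rolling-shutter device
    acquires scan lines start_line..end_line (inclusive), given its
    scan rate in lines per second and its frame start time."""
    t0 = start_time + start_line / scan_rate_lines_per_s
    t1 = start_time + (end_line + 1) / scan_rate_lines_per_s
    return (t0, t1)

def trigger_delay_for_alignment(overlap1, rate1, overlap2, rate2):
    """Delay (seconds) to apply to device 1's frame trigger so the
    midpoints of the two overlap acquisition windows coincide,
    with device 2 starting at t = 0. Negative means start earlier."""
    t0a, t1a = overlap_window(*overlap1, rate1)
    t0b, t1b = overlap_window(*overlap2, rate2)
    return ((t0b + t1b) - (t0a + t1a)) / 2.0
```

For instance, if device 1 scans its overlap lines twice as fast as device 2, a suitable trigger offset centers the faster acquisition within the slower one so both overlap portions are captured over the same period.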
Abstract:
The present disclosure relates to navigational systems for vehicles. In one implementation, such a navigational system may receive a plurality of images captured by an image capture device onboard the host vehicle, the plurality of images being associated with an environment of the host vehicle; analyze at least one of the plurality of images to identify a target vehicle and at least one wheel component on a side of the target vehicle; determine, based on an analysis of at least two of the plurality of images, a rotation of the at least one wheel component of the target vehicle; and cause at least one navigational change of the host vehicle based on the rotation of the at least one wheel component of the target vehicle.
Abstract:
Systems and techniques for vehicle environment modeling with a camera are described herein. A device for modeling an environment comprises: a hardware sensor interface to obtain a sequence of unrectified images representative of a road environment, the sequence of unrectified images including a first unrectified image, a previous unrectified image, and a previous-previous unrectified image; and processing circuitry to: provide the first unrectified image, the previous unrectified image, and the previous-previous unrectified image to an artificial neural network (ANN) to produce a three-dimensional structure of a scene; determine a selected homography; and apply the selected homography to the three-dimensional structure of the scene to create a model of the road environment.
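The core operation of applying a selected homography can be illustrated on 2D points in homogeneous coordinates. A simplified sketch of this single step (the abstract applies the homography to a full 3D scene structure; this pure-Python function and its name are illustrative assumptions):

```python
def apply_homography(H, points):
    """Map 2D points through a 3x3 homography H (row-major nested
    lists) using homogeneous coordinates:
        [x', y', w]^T = H [x, y, 1]^T, then divide by w.
    """
    out = []
    for x, y in points:
        xp = H[0][0] * x + H[0][1] * y + H[0][2]
        yp = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xp / w, yp / w))  # perspective divide
    return out
```

With the identity matrix the points are unchanged; a homography whose last row is not (0, 0, 1) introduces the perspective division characteristic of mapping between camera views of a plane.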