Abstract:
A system mounted in a vehicle for classifying light sources. The system includes a lens and a spatial image sensor. The lens is adapted to provide an image of a light source on the spatial image sensor. A diffraction grating is disposed between the lens and the light source. The diffraction grating is adapted for providing a spectrum of the light source. A processor is configured for classifying the light source as belonging to a class selected from a plurality of classes of light sources expected to be found in the vicinity of the vehicle, wherein the spectrum is used for the classifying of the light source. Both the image and the spectrum may be used for classifying the light source, or the spectrum may be used for classifying the light source while the image is used for another driver assistance application.
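The spectral matching described above can be illustrated with a minimal sketch, assuming the diffraction grating yields a sampled intensity-versus-wavelength vector and that reference spectra exist for the classes of light sources expected near the vehicle. The class names, synthetic reference shapes, and the normalized-correlation matcher below are illustrative assumptions, not the method claimed by the abstract.

```python
import numpy as np

# Minimal sketch: classify a light source by matching its measured spectrum
# (from the diffraction grating) against reference spectra of classes expected
# near a vehicle. Class names and reference shapes are illustrative assumptions.

WAVELENGTHS = np.linspace(400, 700, 61)  # nm, sampling grid for the spectrum

def _peak(center_nm, width_nm):
    """Synthetic reference spectrum: a single Gaussian emission peak."""
    return np.exp(-0.5 * ((WAVELENGTHS - center_nm) / width_nm) ** 2)

REFERENCE_SPECTRA = {
    "incandescent_headlamp": _peak(620, 120),  # broad, red-weighted emission
    "led_taillight":         _peak(630, 15),   # narrow red peak
    "sodium_streetlight":    _peak(589, 5),    # narrow yellow line
    "green_traffic_signal":  _peak(530, 20),   # narrow green peak
}

def classify_spectrum(measured):
    """Return the class whose reference best matches the measured spectrum
    (normalized correlation), together with the match score."""
    m = measured / (np.linalg.norm(measured) + 1e-12)
    scores = {
        name: float(np.dot(m, ref / (np.linalg.norm(ref) + 1e-12)))
        for name, ref in REFERENCE_SPECTRA.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

# Example: a noisy measured spectrum resembling a sodium streetlight
label, score = classify_spectrum(_peak(589, 6) + 0.05 * np.random.rand(len(WAVELENGTHS)))
print(label, round(score, 3))
```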
Abstract:
A navigation system may include at least one processing device configured to determine, based on an output of one or more position sensors associated with the navigation system, a current location of at least one component associated with the navigation system, and to determine a destination location different from the current location. The navigation system may also acquire, from one or more image acquisition devices, a plurality of images representative of an environment of a user of the navigation system and derive, from the plurality of images, visual information associated with at least one object in the environment. The system may also determine one or more instructions for navigating from the current location to the destination location, wherein the one or more instructions include at least one reference to the visual information derived from the plurality of images. The system may also deliver the one or more instructions to the user.
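As a rough illustration of instructions that reference visual information derived from the images, the sketch below composes a turn instruction around an object recognized near the maneuver point. The data fields, object attributes, and phrasing are hypothetical assumptions, not the system's actual output format.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch: build a navigation instruction that references visual
# information derived from images of the environment (e.g., a recognized
# landmark near the maneuver point). Fields and phrasing are assumptions.

@dataclass
class VisualObject:
    label: str         # e.g., "gas station", "brick building"
    color: str         # dominant color derived from the image
    distance_m: float  # estimated distance from the user to the object

def build_instruction(maneuver: str, distance_m: float,
                      landmark: Optional[VisualObject]) -> str:
    base = f"In {int(distance_m)} meters, {maneuver}"
    if landmark is None:
        return base + "."
    return f"{base}, just after the {landmark.color} {landmark.label}."

# Example: an instruction referencing an object recognized in the acquired images
landmark = VisualObject(label="gas station", color="blue", distance_m=95.0)
print(build_instruction("turn left", 100.0, landmark))
# -> "In 100 meters, turn left, just after the blue gas station."
```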
Abstract:
A vehicle navigation system may comprise a memory including instructions and circuitry configured by the instructions to identify a target vehicle in an environment of a vehicle that includes the vehicle navigation system. The circuitry may receive image data of the target vehicle from an image capture device of the vehicle; identify, based on analysis of the image data, a situational characteristic of the target vehicle including an indication that the target vehicle is traveling behind an additional vehicle traveling slower than the target vehicle; and change a navigational state of the vehicle to allow an action of the target vehicle. The vehicle may be configured to cause the change in the navigational state based on a determination that the situational characteristic indicates that the target vehicle would benefit from the change in the navigational state.
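A minimal sketch of the yielding logic, assuming the perception stack already reports each tracked vehicle's speed, lane, and the speed of the vehicle directly ahead of it. The situational test, the thresholds, and the speed set-point adjustment are illustrative assumptions rather than the claimed system's control policy.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch: change the host vehicle's navigational state (here, its speed
# set-point) when a target vehicle is detected traveling behind a slower
# additional vehicle and would benefit from merging or passing.

@dataclass
class TrackedVehicle:
    speed_mps: float
    lane: int
    lead_vehicle_speed_mps: Optional[float]  # speed of the vehicle directly ahead, if any

def is_blocked(target: TrackedVehicle) -> bool:
    """Situational characteristic: the target trails an additional, slower vehicle."""
    return (target.lead_vehicle_speed_mps is not None
            and target.lead_vehicle_speed_mps < target.speed_mps)

def plan_host_speed(host_speed_mps: float, target: TrackedVehicle,
                    yield_decel_mps: float = 2.0) -> float:
    """Adjust the host's speed set-point to open a gap and allow the target's action
    (e.g., a lane change in front of the host) when the target would benefit."""
    if is_blocked(target):
        return max(host_speed_mps - yield_decel_mps, 0.0)
    return host_speed_mps

# Example: target doing 25 m/s behind a vehicle doing 18 m/s; host yields slightly.
target = TrackedVehicle(speed_mps=25.0, lane=2, lead_vehicle_speed_mps=18.0)
print(plan_host_speed(host_speed_mps=27.0, target=target))  # -> 25.0
```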
Abstract:
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a driver-assist object detection system is provided for a vehicle. One or more processing devices associated with the system receive at least two images from a plurality of captured images via a data interface. The device(s) analyze a first image and at least a second image to determine a reference plane corresponding to the roadway on which the vehicle is traveling. The processing device(s) locate a target object in the first and second images and determine a difference in a size of at least one dimension of the target object between the two images. The system may use the difference in size to determine a height of the object. Further, the system may cause a change in at least a directional course of the vehicle if the determined height exceeds a predetermined threshold.
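The size-difference-to-height step can be sketched under a pinhole-camera assumption: if the host vehicle advances a known distance between the two frames, the object's change in apparent size yields its range, and the range together with its pixel height yields a physical height to compare against a threshold. The parameter names, the threshold value, and the formulas below are illustrative assumptions; the reference-plane estimation from the road surface is not shown.

```python
# Minimal sketch of the range/height geometry under a pinhole-camera assumption.
# With a pinhole camera, apparent size is inversely proportional to range, so
# size2 / size1 = Z1 / Z2 and Z1 = Z2 + travel_m when the host moves toward the object.

def range_from_scale_change(size_px_1: float, size_px_2: float, travel_m: float) -> float:
    """Range to the object at the second frame, from its change in apparent size."""
    ratio = size_px_2 / size_px_1
    if ratio <= 1.0:
        raise ValueError("object must appear larger in the second image")
    return travel_m / (ratio - 1.0)

def object_height_m(height_px_2: float, range_m: float, focal_px: float) -> float:
    """Physical height above the reference (road) plane from pixel height and range."""
    return height_px_2 * range_m / focal_px

HEIGHT_THRESHOLD_M = 0.20  # illustrative: treat taller objects as obstacles

def needs_course_change(size_px_1: float, size_px_2: float, height_px_2: float,
                        travel_m: float, focal_px: float) -> bool:
    z = range_from_scale_change(size_px_1, size_px_2, travel_m)
    return object_height_m(height_px_2, z, focal_px) > HEIGHT_THRESHOLD_M

# Example: object grows from 40 px to 50 px while the vehicle advances 5 m
# (range 20 m); a 30 px tall object at f = 1000 px is then 0.6 m tall.
print(needs_course_change(40.0, 50.0, 30.0, travel_m=5.0, focal_px=1000.0))  # -> True
```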