Abstract:
In an embodiment, focusing an image-capture device, such as a camera including an optical system displaceable in opposite directions (A, B) via a focusing actuator, is controlled by evaluating a scale factor for the images acquired by the device. An accumulated value of the variations of the scale factor over a time interval (e.g., over a number of frames) is produced and its absolute value is compared against a threshold. If the threshold is reached, which may be indicative of a zoom movement resulting in image de-focusing, a refocusing action is activated by displacing the optical system via the focusing actuator in one or the other of the opposite focusing directions (A or B) as a function of whether the accumulated value exhibits an increase or a decrease (i.e., whether the accumulated value is positive or negative).
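The accumulate-and-compare logic described above can be summarized in a short sketch. The following Python fragment is a minimal illustration under assumed conventions: the scale factors are supplied as a per-frame list, and the function name, the threshold value, and the mapping of a positive accumulated value to direction A (negative to B) are illustrative choices, not taken from the source.

def refocus_direction(scale_factors, threshold):
    # Accumulate the frame-to-frame variations of the scale factor
    # over the time interval (e.g., over a number of frames).
    accumulated = sum(b - a for a, b in zip(scale_factors, scale_factors[1:]))
    # Compare the absolute accumulated value against the threshold.
    if abs(accumulated) < threshold:
        return None  # no zoom-induced de-focusing detected
    # Displace the optics in one direction or the other depending on
    # whether the accumulated value is positive or negative.
    return "A" if accumulated > 0 else "B"

# Example: the scale factor grows over a few frames (the scene appears larger),
# the accumulated variation 0.06 exceeds the threshold, so direction "A" is returned.
print(refocus_direction([1.00, 1.01, 1.03, 1.06], threshold=0.04))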
Abstract:
A sequence of images is processed to generate optical flow data including a list of motion vectors. The motion vectors are grouped based on orientation into a first set of moving-away motion vectors and a second set of moving-towards motion vectors. A vanishing point is determined as a function of the first set of motion vectors and a center position of the images is determined. Pan and tilt information is computed from the distance difference between the vanishing point and the center position. Approaching objects are identified from the second set as a function of position, length, and orientation, thereby identifying overtaking vehicles. Distances to the approaching objects are determined from object position, camera focal length, and pan and tilt information. A warning signal is issued as a function of the distances.
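Two of the steps above, grouping the motion vectors by orientation and deriving pan and tilt from the vanishing-point offset, are sketched below in Python under assumed conventions: motion vectors are (x, y, dx, dy) tuples, the vanishing point and image center are pixel coordinates, and the focal length is expressed in pixels. The dot-product grouping rule and the pinhole-model arctangent formulas are illustrative, not stated in the abstract.

import math

def split_motion_vectors(vectors, vp):
    # Group motion vectors by orientation with respect to the vanishing point:
    # vectors pointing away from it form the first set (moving away),
    # vectors pointing towards it form the second set (moving towards,
    # e.g., approaching or overtaking objects).
    away, towards = [], []
    for (x, y, dx, dy) in vectors:
        radial = (x - vp[0], y - vp[1])
        outward = radial[0] * dx + radial[1] * dy > 0
        (away if outward else towards).append((x, y, dx, dy))
    return away, towards

def pan_tilt(vp, center, focal_length_px):
    # Pan and tilt angles from the offset between the vanishing point and
    # the image center, assuming a pinhole camera with focal length in pixels.
    pan = math.atan2(vp[0] - center[0], focal_length_px)
    tilt = math.atan2(vp[1] - center[1], focal_length_px)
    return pan, tilt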
Abstract:
According to an embodiment, a sequence of video frames as produced in a video-capture apparatus such as a video camera is stabilized against hand shaking or vibration by: subjecting a pair of frames in the sequence to feature extraction and matching to produce a set of matched features; subjecting the set of matched features to an outlier-removal step; and generating stabilized frames via motion-model estimation based on the features resulting from outlier removal. Motion-model estimation is performed based on matched features having passed a zone-of-interest test confirming that the matched features passing the test are distributed over a plurality of zones across the frames.
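One way to read the zone-of-interest test is sketched below in Python; the 3x3 grid, the minimum number of occupied zones, and the function name are assumed parameters, not taken from the source. In the pipeline above, such a check would sit between outlier removal and motion-model estimation, so that a motion model is only fitted when the surviving matches cover enough of the frame.

def passes_zone_of_interest_test(points, frame_height, frame_width,
                                 rows=3, cols=3, min_zones=4):
    # Split the frame into a grid of zones and record which zones
    # contain at least one matched feature (points are (x, y) pixels).
    occupied = set()
    for (x, y) in points:
        row = min(int(y * rows / frame_height), rows - 1)
        col = min(int(x * cols / frame_width), cols - 1)
        occupied.add((row, col))
    # The test passes only if the matched features are distributed
    # over a plurality of zones across the frame.
    return len(occupied) >= min_zones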
Abstract:
A method and system for filtering spurious motion from an image frame of a video sequence, comprising the steps of: dividing the image frame and a preceding image frame of the video sequence into blocks of pixels; determining motion vectors for the blocks of the image frame; determining inter-frame transformation parameters for the image frame based on the determined motion vectors; and generating a filtered image frame based on the determined inter-frame transformation parameters; wherein the image frame is divided into overlapping blocks.
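The division into overlapping blocks can be sketched as follows; the 16-pixel block size and 8-pixel overlap are assumed example values, and NumPy array indexing is used for brevity, neither being specified in the abstract.

import numpy as np

def overlapping_blocks(frame, block_size=16, overlap=8):
    # Divide the image frame into overlapping blocks of pixels.
    # Consecutive blocks are shifted by (block_size - overlap) pixels,
    # so neighbouring blocks share `overlap` rows or columns.
    step = block_size - overlap
    h, w = frame.shape[:2]
    blocks = []
    for top in range(0, h - block_size + 1, step):
        for left in range(0, w - block_size + 1, step):
            blocks.append((top, left,
                           frame[top:top + block_size, left:left + block_size]))
    return blocks

# Example: a 64x64 frame yields a grid of half-overlapping 16x16 blocks,
# each of which can then be assigned its own motion vector.
print(len(overlapping_blocks(np.zeros((64, 64), dtype=np.uint8))))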
Abstract:
An embodiment relates to a method for the detection of texture of a digital image, including providing a raw data image of the image by means of a Bayer image sensor, determining noise in at least a region of the raw data image, and determining the texture based on the determined noise without using a high-pass or low-pass filter.
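A minimal sketch of the last two steps, under assumptions not stated in the abstract: the sensor noise level is supplied as a known standard deviation, the region is a rectangle on the raw Bayer mosaic, and a simple spread-versus-noise comparison (a statistic rather than a high-pass or low-pass filter) stands in for the texture decision; the function name and the factor of 2 are illustrative.

import numpy as np

def is_textured(raw, region, noise_sigma, factor=2.0):
    # `region` is (top, left, height, width) on the raw data image.
    top, left, height, width = region
    patch = raw[top:top + height, left:left + width].astype(float)
    # Restrict to a single colour-filter site (every other row and column)
    # so the measured spread is not dominated by the Bayer pattern itself.
    plane = patch[0::2, 0::2]
    measured = plane.std()
    # Attribute a spread well above the expected sensor noise to texture.
    return measured > factor * noise_sigma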