Abstract:
Defects in digital images are easily detected by the human eye but may be difficult to detect in a computer-implemented fashion. In an embodiment of a digital-image-acquisition device, defects are removed in the CFA domain before color interpolation takes place. To allow cancellation of couplets of defective pixels, a two-pass embodiment is presented. This embodiment provides methods and systems that can remove both couplets and singlets without damaging the image. The system includes a ring corrector that detects a defect in the ring of pixels surrounding a central pixel, a singlet corrector that detects and corrects the central pixel and removes a couplet if the ring corrector is activated (whereas, if the ring corrector is switched off, the singlet corrector removes only singlets), and a peak-and-valley detector that prevents overcorrection by leaving genuine signal peaks or valleys uncorrected in the case of spikes or drops in the signal.
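A minimal sketch of such a two-pass correction, assuming a Bayer CFA and illustrative thresholds (RING_THRESHOLD, RING_SPREAD_THRESHOLD, and CENTER_THRESHOLD are placeholders, not values from the embodiment; the neighbor offsets shown suit red/blue sites, and green sites would use different offsets), might look as follows:

```python
import numpy as np

# Assumed thresholds; in practice these would be tuned per sensor.
RING_THRESHOLD = 40          # deviation marking a ring pixel as suspect
RING_SPREAD_THRESHOLD = 120  # ring spread above which texture is assumed
CENTER_THRESHOLD = 40        # deviation marking the central pixel as defective

def same_color_ring(cfa, y, x):
    """Return 8 same-color neighbors of (y, x); offsets suit red/blue Bayer sites."""
    offs = [(-2, -2), (-2, 0), (-2, 2), (0, -2),
            (0, 2), (2, -2), (2, 0), (2, 2)]
    return np.array([cfa[y + dy, x + dx] for dy, dx in offs], dtype=np.int32)

def correct_pixel(cfa, y, x):
    ring = same_color_ring(cfa, y, x)
    center = int(cfa[y, x])
    med = int(np.median(ring))

    # Peak-and-valley detector: in a strongly textured neighborhood an
    # extreme central value is likely a genuine signal peak or valley
    # (a spike or drop), so it is left untouched to avoid overcorrection.
    if ring.max() - ring.min() > RING_SPREAD_THRESHOLD:
        return center

    # Ring corrector: flag a suspect pixel inside the ring (the possible
    # partner of a couplet); exclude it from the estimate used below and
    # let the second pass correct it when it becomes the central pixel.
    if np.abs(ring - med).max() > RING_THRESHOLD:
        ring = np.delete(ring, np.abs(ring - med).argmax())
        med = int(np.median(ring))

    # Singlet corrector: replace the central pixel if it deviates from the
    # (cleaned) ring median; with the ring corrector off, only singlets are
    # removed.
    return med if abs(center - med) > CENTER_THRESHOLD else center
```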
Abstract:
An embodiment relates to a method for color processing of an input image, the method including the steps of low-pass filtering of the input image to obtain a low-pass component, high-pass filtering of the input image to obtain a high-pass component, processing the input image for edge detection to obtain edginess parameters, and performing a color-space transformation of the input image based on the low-pass component, the high-pass component, and the edginess parameters.
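A compact sketch of this pipeline, assuming Gaussian low-pass filtering, Sobel-based edginess, and an RGB-to-YCbCr matrix as the color-space transformation (all of these specific choices are made here for illustration and are not taken from the embodiment):

```python
import cv2
import numpy as np

def edge_aware_rgb_to_ycbcr(rgb):
    """Illustrative edge-aware color-space transform; expects RGB channel order."""
    rgb = rgb.astype(np.float32)

    # Low-pass component (Gaussian blur) and high-pass component (residual).
    low = cv2.GaussianBlur(rgb, (5, 5), sigmaX=1.5)
    high = rgb - low

    # Edginess parameters from the gradient magnitude of the green channel.
    gx = cv2.Sobel(rgb[..., 1], cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(rgb[..., 1], cv2.CV_32F, 0, 1, ksize=3)
    edginess = np.clip(np.sqrt(gx * gx + gy * gy) / 255.0, 0.0, 1.0)

    # Recombine: keep more high-pass detail where edginess is high, more
    # low-pass smoothing elsewhere, then apply a standard RGB->YCbCr matrix.
    recombined = low + edginess[..., None] * high
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.1687, -0.3313, 0.5],
                  [0.5, -0.4187, -0.0813]], dtype=np.float32)
    ycbcr = recombined @ m.T
    ycbcr[..., 1:] += 128.0
    return ycbcr
```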
Abstract:
An embodiment includes a method and an apparatus for the generation of a visual story board in real time in an image-capturing device including a photo sensor and a buffer, wherein the method includes the consecutively performed steps of: starting the recording of a video, receiving information on an image frame of the video, comparing the information on the received image frame with information on at least one of a plurality of image frames, wherein the information on the plurality of image frames has previously been stored in the buffer, storing the information on the received image frame in the buffer depending on the result of the comparison, and finishing the recording of the video.
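The buffer-update step could be sketched as below, assuming a coarse color histogram as the per-frame information and a Bhattacharyya-distance novelty test; the descriptor, threshold, and eviction policy are all illustrative assumptions rather than details of the embodiment:

```python
import cv2

# Assumed parameters for the sketch.
NOVELTY_THRESHOLD = 0.35
BUFFER_SIZE = 16

def frame_descriptor(frame_bgr):
    """Coarse 8x8x8 color histogram used as the per-frame information."""
    hist = cv2.calcHist([frame_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def maybe_store(frame_bgr, buffer):
    """Store the frame's descriptor only if it differs enough from all
    descriptors already kept in the buffer."""
    desc = frame_descriptor(frame_bgr)
    for stored in buffer:
        # Bhattacharyya distance: 0 = identical histograms, 1 = disjoint.
        if cv2.compareHist(stored, desc, cv2.HISTCMP_BHATTACHARYYA) < NOVELTY_THRESHOLD:
            return buffer  # too similar to an already-kept frame
    buffer.append(desc)
    if len(buffer) > BUFFER_SIZE:
        buffer.pop(0)  # simple eviction policy, assumed for illustration
    return buffer
```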
Abstract:
According to an embodiment, a sequence of video frames as produced in a video-capture apparatus such as a video camera is stabilized against hand shaking or vibration by: subjecting a pair of frames in the sequence to feature extraction and matching to produce a set of matched features; subjecting the set of matched features to an outlier-removal step; and generating stabilized frames via motion-model estimation based on the features resulting from outlier removal. Motion-model estimation is performed based on matched features that have passed a zone-of-interest test confirming that they are distributed over a plurality of zones across the frames.
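A rough sketch of this pipeline, assuming ORB features, brute-force matching, RANSAC for outlier removal, a partial-affine motion model, and a 3x3 grid for the zone-of-interest test (none of these specific choices are prescribed by the embodiment):

```python
import cv2
import numpy as np

MIN_ZONES = 4  # assumed: matches must cover at least this many grid zones

def estimate_motion(prev_gray, curr_gray):
    # Feature extraction and matching between a pair of frames.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Outlier removal via RANSAC while fitting a partial-affine model.
    model, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    inlier_pts = src[inliers.ravel() == 1]

    # Zone-of-interest test: require the surviving matches to be spread
    # over several zones of a 3x3 grid, not concentrated in one region.
    h, w = prev_gray.shape
    zones = {(int(3 * y / h), int(3 * x / w)) for x, y in inlier_pts}
    if len(zones) < MIN_ZONES:
        return None  # unreliable estimate; caller may reuse the previous model
    return model
```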
Abstract:
An image processing system has one or more memories and image processing circuitry coupled to the one or more memories. The image processing circuitry, in operation, compares a first image to feature data in a comparison image space using a matching model. The comparing includes: unwarping keypoints in keypoint data of the first image; and comparing the unwarped keypoints and descriptor data associated with the first image to the feature data of the comparison image. The image processing circuitry determines whether the first image matches the comparison image based on the comparing.
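One way to sketch this flow, assuming the warp is approximated by a 3x3 homography H mapping first-image coordinates into the comparison image space, and using a simple good-match count as a stand-in for the matching model (both assumptions for illustration):

```python
import cv2
import numpy as np

MATCH_THRESHOLD = 25  # assumed minimum number of consistent matches

def matches_comparison_image(kp_first, des_first, H, kp_cmp, des_cmp):
    # Unwarp the first image's keypoints into the comparison image space.
    pts = np.float32([kp.pt for kp in kp_first]).reshape(-1, 1, 2)
    unwarped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)

    # Compare descriptors of the first image to the comparison image's
    # feature data.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_first, des_cmp)

    # Keep only matches whose unwarped location lands near the matched
    # comparison-image keypoint (a crude geometric consistency check).
    good = 0
    for m in matches:
        ref = np.asarray(kp_cmp[m.trainIdx].pt, dtype=np.float32)
        dx, dy = unwarped[m.queryIdx] - ref
        if dx * dx + dy * dy < 15.0 ** 2:
            good += 1
    return good >= MATCH_THRESHOLD
```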
Abstract:
Image processing circuitry processes image frames in a sequence of image frames, for example, to identify objects of interest. The processing includes filtering motion vectors associated with a current image frame, grouping the filtered motion vectors associated with the current image frame into a set of clusters associated with the current image frame, and selectively merging clusters in the set of clusters associated with the current image frame. At least one of the filtering, the grouping and the merging may be based on one or more clusters associated with one or more previous image frames in the sequence of image frames. Motion vectors included in merged clusters associated with a previous frame may be added to filtered motion vectors before grouping the motion vectors in the current frame.
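A simplified sketch of the per-frame processing, assuming motion vectors given as (x, y, dx, dy) rows of a NumPy array, magnitude-based filtering, DBSCAN grouping, and a centroid-distance merge rule, with vectors from the previous frame's merged clusters carried over before grouping (all illustrative choices):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Assumed parameters for the sketch.
MIN_MAG, MAX_MAG = 0.5, 50.0
MERGE_DIST = 40.0

def process_frame(vectors, prev_merged_vectors=None):
    # Filtering: drop near-static and implausibly large motion vectors.
    mag = np.hypot(vectors[:, 2], vectors[:, 3])
    kept = vectors[(mag > MIN_MAG) & (mag < MAX_MAG)]

    # Vectors from the previous frame's merged clusters may be added before
    # grouping, which stabilizes clusters across the sequence.
    if prev_merged_vectors is not None and len(prev_merged_vectors):
        kept = np.vstack([kept, prev_merged_vectors])

    # Grouping: cluster on position and motion jointly.
    labels = DBSCAN(eps=20.0, min_samples=4).fit_predict(kept)
    clusters = [kept[labels == k] for k in set(labels) if k != -1]

    # Merging: fuse clusters whose spatial centroids are close.
    merged = []
    for c in clusters:
        fused = False
        for i, m in enumerate(merged):
            if np.linalg.norm(c[:, :2].mean(0) - m[:, :2].mean(0)) < MERGE_DIST:
                merged[i] = np.vstack([m, c])
                fused = True
                break
        if not fused:
            merged.append(c)
    return merged
```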
Abstract:
An embodiment is a method for detecting image features, the method including extracting a stripe from a digital image, the stripe including a plurality of blocks; processing the plurality of blocks for localizing one or more keypoints; and detecting one or more image features based on the one or more localized keypoints.
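A minimal sketch, assuming horizontal stripes of fixed height, FAST as a stand-in keypoint detector, and block-by-block processing so only a small working window is resident at a time (stripe height and block width are assumed tuning parameters, not taken from the embodiment):

```python
import cv2

# Assumed geometry for the sketch.
STRIPE_HEIGHT = 64
BLOCK_WIDTH = 64

def detect_features_in_stripe(gray, stripe_index):
    # Extract one horizontal stripe of the image.
    y0 = stripe_index * STRIPE_HEIGHT
    stripe = gray[y0:y0 + STRIPE_HEIGHT, :]

    detector = cv2.FastFeatureDetector_create(threshold=20)
    keypoints = []
    # Process the stripe block by block, which keeps memory usage small
    # on constrained hardware.
    for x0 in range(0, stripe.shape[1], BLOCK_WIDTH):
        block = stripe[:, x0:x0 + BLOCK_WIDTH]
        for kp in detector.detect(block, None):
            # Localize the keypoint in full-image coordinates.
            keypoints.append(cv2.KeyPoint(kp.pt[0] + x0, kp.pt[1] + y0, kp.size))
    return keypoints
```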