Abstract:
An image acquisition device having a wide field of view includes a lens and image sensor configured to capture an original wide field of view (WFoV) image with a field of view of more than 90°. The device has an object detection engine that includes one or more cascades of object classifiers, e.g., face classifiers. A WFoV correction engine may apply rectilinear and/or cylindrical projections to pixels of the WFoV image, and/or non-linear, rectilinear and/or cylindrical lens elements or lens portions serve to prevent and/or correct distortion within the original WFoV image. One or more objects located within the original and/or distortion-corrected WFoV image is/are detectable by the object detection engine upon application of the one or more cascades of object classifiers.
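The rectilinear projection step can be sketched in isolation. The following is a minimal illustration assuming an equidistant fisheye model (r = f·θ) for the wide-angle lens; the abstract does not specify the actual lens model, and the classifier cascades are omitted:

```python
import math

def rectilinear_to_fisheye(x, y, f):
    """Map a pixel (x, y) on an ideal rectilinear (pinhole) image plane,
    centered on the optical axis, to the corresponding point on an
    equidistant fisheye sensor with focal length f.

    Pinhole model:     r_rect = f * tan(theta)
    Equidistant model: r_fish = f * theta
    """
    r_rect = math.hypot(x, y)
    theta = math.atan2(r_rect, f)   # angle off the optical axis
    r_fish = f * theta              # equidistant projection radius
    if r_rect == 0:
        return (0.0, 0.0)
    scale = r_fish / r_rect
    return (x * scale, y * scale)
```

A distortion-correction engine would evaluate the inverse of this mapping for every output pixel and resample the source image accordingly.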
Abstract:
A method of combining image data from multiple frames to enhance one or more parameters of digital image quality, e.g., video or still images, includes acquiring a first image at a first exposure duration, as well as acquiring a second image at a second exposure duration shorter than the first exposure duration and at a time just before, just after or overlapping in time with acquiring the first image, such that the first and second images include approximately a same first scene. In this way, the second image is relatively sharp and under-exposed, while the first image is relatively well-exposed and less sharp than the second image. Brightness and/or color information are extracted from the first image and applied to the second image to generate an enhanced version of the second image.
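The brightness/color transfer can be sketched as follows, assuming floating-point RGB images in [0, 1] and a simple luminance/chroma split; the actual color space and transfer method used by the device are not specified in the abstract:

```python
import numpy as np

def enhance_short_exposure(long_img, short_img):
    """Hypothetical sketch: transfer global brightness and per-pixel
    chrominance from the well-exposed (but blurry) long exposure to the
    sharp (but under-exposed) short exposure. Both inputs are float
    arrays in [0, 1] with shape (H, W, 3)."""
    w = np.array([0.299, 0.587, 0.114])    # Rec. 601 luminance weights
    y_long = long_img @ w                  # luminance of long exposure
    y_short = short_img @ w                # luminance of short exposure
    # Gain up the sharp image so its mean brightness matches the long one.
    gain = y_long.mean() / max(y_short.mean(), 1e-6)
    y_new = np.clip(y_short * gain, 0.0, 1.0)
    # Chrominance (color offsets around luminance) comes from the long image.
    chroma = long_img - y_long[..., None]
    return np.clip(y_new[..., None] + chroma, 0.0, 1.0)
```

The sharp luminance detail of the second image is preserved, while overall exposure and color follow the first image.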
Abstract:
Classifier chains are used to determine quickly and accurately whether a window or sub-window of an image contains a right face, a left face, a full face, or no face at all. After acquiring a digital image, an integral image is calculated based on the acquired digital image. Left-face classifiers are applied to the integral image to determine the probability that the window contains a left face. Right-face classifiers are applied to the integral image to determine the probability that the window contains a right face. If the probabilities of the window containing a right face and a left face both exceed threshold values, then it is determined that the window contains a full face. Alternatively, if only one of the probabilities exceeds its threshold value, then it may be determined that the window contains only a left face or a right face.
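The integral-image step is standard and can be shown concretely; the classifier chains themselves are not reproduced here. Once the table is built, any rectangular pixel sum costs only four lookups, which is what makes evaluating many classifiers per window fast:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of pixels in img[top:bottom, left:right] from four table lookups."""
    return (ii[bottom, right] - ii[top, right]
            - ii[bottom, left] + ii[top, left])
```

Each rectangular feature in a left-face or right-face classifier reduces to a few `box_sum` calls regardless of the rectangle's size.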
Abstract:
A method and device are provided for adjusting the white balance of a digital image by adjusting the values assigned to the red, green, and blue subpixels of a pixel in the image. The adjustment to the subpixels is determined by identifying pixels in the image that have an RGB product greater than a threshold value, wherein the threshold value is based at least in part on an average of the RGB products of each pixel in the image and a variance of the RGB products of the pixels about that average.
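A sketch of the pixel-selection and gain-derivation steps follows, assuming the threshold takes the form mean + k·std of the RGB products (the abstract says only that the threshold depends on the average and the variance) and assuming gray-world-style gains are derived from the selected pixels:

```python
import numpy as np

def near_white_mask(img, k=1.0):
    """Flag pixels whose R*G*B product exceeds mean(product) + k*std(product).
    k is an assumed tuning constant."""
    prod = img[..., 0].astype(np.float64) * img[..., 1] * img[..., 2]
    threshold = prod.mean() + k * prod.std()
    return prod > threshold

def gray_world_gains(img, mask):
    """Derive per-channel gains from the selected (assumed near-white)
    pixels so that their average color becomes neutral gray."""
    ref = img[mask].mean(axis=0)   # mean R, G, B of the selected pixels
    return ref.mean() / ref        # gains that neutralize each channel
```

Pixels with a large RGB product are bright in all three channels at once, making them plausible references for the scene illuminant.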
Abstract:
A technique of generating a panoramic image involves acquiring a set of at least two main image frames of overlapping portions of a scene. A map or other information is stored relating to the joining of the image frames. A main panorama image is formed by joining the main image frames based on the stored map or on information gained from a similar joining process performed on low-resolution versions of the frames.
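One way to realize the low-res joining idea is sketched below: the offset between frames is estimated on low-resolution copies, then scaled up to join the full-resolution frames. The exhaustive-search registration here is an assumed stand-in for whatever registration the actual technique uses:

```python
import numpy as np

def low_res_offset(a, b):
    """Estimate the horizontal shift between two overlapping low-res
    grayscale frames by testing candidate offsets and minimizing the
    mean absolute difference over the overlap."""
    h, w = a.shape
    best = (float("inf"), 0)
    for dx in range(1, w):
        err = np.abs(a[:, dx:] - b[:, : w - dx]).mean()
        best = min(best, (err, dx))
    return best[1]

def join_main_frames(main_a, main_b, lowres_a, lowres_b):
    """Join full-res frames using an offset measured on low-res copies,
    scaled up by the resolution ratio (assumed to be an integer here)."""
    scale = main_a.shape[1] // lowres_a.shape[1]
    dx = low_res_offset(lowres_a, lowres_b) * scale
    return np.hstack([main_a[:, :dx], main_b])
```

Registering on low-resolution copies keeps the search cheap; only the final joining touches full-resolution pixels.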
Abstract:
A face is detected and identified within an acquired digital image. One or more features of the face is/are extracted from the digital image, including two independent eyes or subsets of features of each of the two eyes, or lips or partial lips or one or more other mouth features and one or both eyes, or both. A model including multiple shape parameters is applied to the two independent eyes or subsets of features of each of the two eyes, and/or to the lips or partial lips or one or more other mouth features and one or both eyes. One or more similarities between the one or more features of the face and a library of reference feature sets is/are determined. A probable facial expression is identified based on the determining of the one or more similarities.
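The similarity-matching step might be sketched as below, with cosine similarity over shape-parameter vectors standing in for whatever metric the method actually uses, and a plain dictionary standing in for the library of reference feature sets:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two nonzero shape-parameter vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify_expression(shape_params, library):
    """library maps an expression name to a reference shape-parameter
    vector; return the name of the most similar reference set."""
    return max(library,
               key=lambda name: cosine_similarity(shape_params, library[name]))
```

The fitted model's shape parameters form a compact vector, so the library comparison reduces to a nearest-neighbor search.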
Abstract:
A fixed-focus optical system exhibiting extended depth of field is provided herein. By way of example, a compact and fast optical system that yields an asymmetric modulation transfer function (MTF) is disclosed. In some aspects, the asymmetric MTF results in extended depth of field for near-field objects. Such a response can be particularly beneficial for small handheld cameras or camera modules having high resolution. According to some disclosed aspects, the resolution can be about 8 megapixels. Additionally, the optical system can comprise four lenses in one aspect and five lenses in another, while remaining below about 5.3 mm total track length (TTL) for the respective systems. In at least one application, the disclosed optical systems can be employed in a high-resolution compact camera, for instance in conjunction with an electronic computing device, communication device, display device, surveillance equipment, or the like.
Abstract:
A technique is provided for recognizing faces in an image stream using a digital image acquisition device. A first acquired image is received from an image stream. A first face region is detected within the first acquired image having a given size and a respective location within the first acquired image. First faceprint data uniquely identifying the first face region are extracted along with first peripheral region data around the first face region. The first faceprint and peripheral region data are stored, and the first peripheral region data are associated with the first face region. The first face region is tracked until a face lock is lost. A second face region is detected within a second acquired image from the image stream. Second peripheral region data around the second face region are extracted. The second face region is identified upon matching the first and second peripheral region data.
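The peripheral-region data could take many forms; the sketch below assumes a normalized intensity histogram of a band of pixels around the face box, matched by histogram intersection. The function names, the margin, and the match threshold are all illustrative assumptions:

```python
import numpy as np

def peripheral_histogram(img, box, margin=8, bins=16):
    """Hypothetical peripheral-region descriptor: a normalized intensity
    histogram of the band of pixels surrounding (but excluding) the face
    box. Assumes the box plus margin lies inside the grayscale image.
    box = (top, left, bottom, right)."""
    top, left, bottom, right = box
    outer = img[top - margin : bottom + margin,
                left - margin : right + margin].astype(float)
    outer[margin:-margin, margin:-margin] = np.nan   # blank the face itself
    vals = outer[~np.isnan(outer)]
    hist, _ = np.histogram(vals, bins=bins, range=(0, 256))
    return hist / hist.sum()

def peripheral_match(h1, h2, threshold=0.25):
    """Histogram-intersection distance; a small distance suggests the two
    face regions share the same surroundings (e.g. clothing, backdrop)."""
    return 1.0 - np.minimum(h1, h2).sum() < threshold
```

Because clothing and backdrop change less than facial appearance between frames, matching the surroundings can re-identify a face after a face lock is lost.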
Abstract:
Foreground and background regions of a digital image of a scene are distinguished from each other automatically. Foreground objects are identified in a binary image map that distinguishes between foreground pixels and background pixels. From the foreground objects, a primary foreground object is identified. Within the identified primary foreground object, a head region of the primary foreground object is located. Within the head region, patterns of foreground pixels and background pixels that are indicative of a head crown region are identified. Within the head crown region, pixels identified as background pixels that actually show portions of the primary foreground object are converted to foreground pixels, thus improving the accuracy of the binary image map.
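The crown-repair step can be illustrated with a simple row-wise hole fill inside the head region; the pattern analysis in the actual method is presumably more involved than this sketch:

```python
import numpy as np

def fill_crown_holes(mask, head_box):
    """Hypothetical sketch of the crown repair: within the head region of
    a binary foreground map (1 = foreground), any background pixel lying
    between foreground pixels on the same row is assumed to show part of
    the subject (e.g. light through thin hair) and is flipped to
    foreground. head_box = (top, left, bottom, right)."""
    fixed = mask.copy()
    top, left, bottom, right = head_box
    for r in range(top, bottom):
        fg = np.flatnonzero(mask[r, left:right])
        if fg.size >= 2:
            # Fill the span between the outermost foreground pixels.
            fixed[r, left + fg[0] : left + fg[-1] + 1] = 1
    return fixed
```

Restricting the fill to the located head region keeps genuine background gaps elsewhere in the map untouched.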