Abstract:
Image upscaling techniques are described. These techniques may include use of iterative and adjustment upscaling techniques to upscale an input image. A variety of functionality may be incorporated as part of these techniques, examples of which include content-adaptive patch finding techniques that may be employed to give preference to an in-place patch to minimize structure distortion. In another example, content metric techniques may be employed to assign weights for combining patches. In a further example, algorithm parameters may be adapted with respect to algorithm iterations, which may be performed to increase efficiency of computing device resource utilization and speed of performance. For instance, algorithm parameters may be adapted to enforce a minimum and/or maximum number of iterations, cease iterations for image sizes over a threshold amount, set sampling step sizes for patches, employ techniques based on color channels (which may include independent and joint processing techniques), and so on.
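For illustration, the content-adaptive preference for an in-place patch can be sketched as a biased nearest-patch search. The squared-error distance, the search radius, and the `inplace_bias` constant below are assumptions for the sketch; the abstract does not specify them.

```python
import numpy as np

def find_patch(ref, y, x, patch, size=5, radius=2, inplace_bias=0.1):
    """Search a small window around (y, x) in `ref` for the candidate most
    similar to `patch`, biasing the cost of the in-place candidate downward
    to minimize structure distortion (hypothetical parameterization)."""
    best_cost, best = np.inf, (y, x)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cy, cx = y + dy, x + dx
            if cy < 0 or cx < 0:
                continue
            cand = ref[cy:cy + size, cx:cx + size]
            if cand.shape != patch.shape:
                continue
            cost = np.sum((cand - patch) ** 2)
            if dy == 0 and dx == 0:
                cost -= inplace_bias  # prefer the in-place patch on near-ties
            if cost < best_cost:
                best_cost, best = cost, (cy, cx)
    return best
```

Subtracting a small bias from the in-place candidate's cost means a displaced patch wins only when it is meaningfully better, which is one way to keep structures from drifting across iterations.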
Abstract:
Patch partition and image processing techniques are described. In one or more implementations, a system includes one or more modules implemented at least partially in hardware. The one or more modules are configured to perform operations including grouping a plurality of patches taken from a plurality of training samples of images into respective ones of a plurality of partitions, calculating an image processing operator for each of the partitions, determining distances between the plurality of partitions that describe image similarity of patches of the plurality of partitions, one to another, and configuring a database to provide the determined distances and the image processing operator to process an image in response to identification of a respective partition that corresponds to a patch taken from the image.
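A minimal sketch of the partitioning and distance computation follows, using plain k-means for the grouping and the partition mean as a stand-in image processing operator; the abstract does not specify either choice, and the learned operators in practice would be richer.

```python
import numpy as np

def build_partition_database(patches, k=8, iters=10, seed=0):
    """Group flattened training patches (N x D) into k partitions,
    take each partition mean as an illustrative processing operator,
    and record pairwise inter-partition distances for lookup."""
    rng = np.random.default_rng(seed)
    patches = np.asarray(patches, dtype=float)
    centers = patches[rng.choice(len(patches), k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(patches[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = patches[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # Distances between partitions describe how similar their patches
    # are, one to another, and are stored alongside the operators.
    dists = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    return centers, dists
```

At query time, a patch taken from an input image would be matched to its nearest partition, and the stored operator and inter-partition distances retrieved from the database.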
Abstract:
Techniques for facial expression capture for character animation are described. In one or more implementations, facial key points are identified in a series of images. Each image in the series is normalized based on the identified facial key points. Facial features are determined from each of the normalized images. Then a facial expression is classified, based on the determined facial features, for each of the normalized images. In additional implementations, a series of images are captured that include performances of one or more facial expressions. The facial expressions in each image of the series of images are classified by a facial expression classifier. Then the facial expression classifications are used by a character animator system to produce a series of animated images of an animated character that include animated facial expressions associated with the facial expression classification of the corresponding image in the series of images.
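A sketch of the per-image pipeline follows: normalization from key points, a simple geometric feature, and nearest-centroid classification. The pairwise-distance feature and the classifier choice are assumptions; the abstract does not name the feature set or classifier.

```python
import numpy as np

def normalize_keypoints(kpts):
    """Translate key points (K x 2) to zero mean and unit scale so the
    classifier is invariant to face position and size in the frame."""
    centered = kpts - kpts.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-8)

def expression_features(kpts):
    """Pairwise distances between normalized key points; one plausible
    geometric feature, not necessarily the feature set described."""
    diffs = kpts[:, None, :] - kpts[None, :, :]
    dist = np.linalg.norm(diffs, axis=2)
    return dist[np.triu_indices(len(kpts), k=1)]

def classify_expression(features, centroids, labels):
    """Nearest-centroid classification over expression classes."""
    d = np.linalg.norm(centroids - features, axis=1)
    return labels[int(np.argmin(d))]
```

Per-frame classifications produced this way could then be handed to a character animator system to drive the corresponding animated expressions.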
Abstract:
Systems and methods are provided for image enhancement using self-examples in combination with external examples. In one embodiment, an image manipulation application receives an input image patch of an input image. The image manipulation application determines a first weight for an enhancement operation using self-examples and a second weight for an enhancement operation using external examples. The image manipulation application generates a first interim output image patch by applying the enhancement operation using self-examples to the input image patch and a second interim output image patch by applying the enhancement operation using external examples to the input image patch. The image manipulation application generates an output image patch by combining the first and second interim output image patches as modified using the first and second weights.
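The weighted combination itself is straightforward; a sketch follows, with placeholder enhancement operations standing in for the self-example and external-example operations, which the abstract does not define in detail.

```python
import numpy as np

def combine_patches(patch, enhance_self, enhance_ext, w_self, w_ext):
    """Apply both enhancement operations to the input patch and blend
    the interim output patches with the two weights."""
    out_self = enhance_self(patch)
    out_ext = enhance_ext(patch)
    return (w_self * out_self + w_ext * out_ext) / (w_self + w_ext)

# Placeholder operations, for illustration only:
patch = np.random.rand(5, 5)
sharpen = lambda p: np.clip(2.0 * p - p.mean(), 0.0, 1.0)
out = combine_patches(patch, sharpen, lambda p: p, w_self=0.7, w_ext=0.3)
```

How the two weights are determined per patch is the substantive part of the method and is not reproduced here.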
Abstract:
Techniques are disclosed for removing blur from a single image by accumulating a blur kernel estimation across several scale levels of the image and balancing the contributions of the different scales to the estimation depending on the noise level in each observation. In particular, a set of observations can be obtained by applying a set of variable scale filters to a single blurry image at different scale levels. A single blur kernel can be estimated across all scales from the set of observations and used to obtain a single latent sharp image. The estimation at a large scale level is refined using the observations at successively smaller scale levels. The filtered observations may be weighted during the estimation to balance the contributions of each scale to the estimation of the blur kernel. A deblurred digital image is recovered by deconvolving the blurry digital image using the estimated blur kernel.
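One plausible form of the balancing step is a noise-weighted average of per-scale kernel estimates, sketched below; the inverse-variance weighting is an assumption, as the abstract does not give the exact rule.

```python
import numpy as np

def accumulate_kernel(kernel_estimates, noise_levels, eps=1e-8):
    """Combine same-sized per-scale blur-kernel estimates into a single
    kernel, down-weighting the noisier observations."""
    w = 1.0 / (np.asarray(noise_levels, dtype=float) ** 2 + eps)
    w /= w.sum()
    k = sum(wi * ki for wi, ki in zip(w, kernel_estimates))
    k = np.clip(k, 0.0, None)     # blur kernels are nonnegative
    return k / k.sum()            # and sum to one
```

With the accumulated kernel in hand, the deblurred image is recovered by deconvolving the blurry input, e.g., with any standard non-blind deconvolution method.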
Abstract:
Techniques for detecting and recognizing text may be provided. For example, an image may be analyzed to detect and recognize text therein. The analysis may involve detecting text components in the image. For example, multiple color spaces and multiple-stage filtering may be applied to detect the text components. Further, the analysis may involve extracting text lines based on the text components. For example, global information about the text components can be analyzed to generate best-fitting text lines. The analysis may also involve pruning and splitting the text lines to generate bounding boxes around groups of text components. Text recognition may be applied to the bounding boxes to recognize text therein.
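A sketch of two filtering stages and the line extraction follows, over components given as bounding boxes; the area and aspect-ratio tests and the least-squares line are illustrative stand-ins for the multi-stage filtering and best-fitting lines described.

```python
import numpy as np

def filter_components(components, min_area=10, max_aspect=10.0):
    """Keep candidate text components (x, y, w, h) that pass two
    illustrative stages: an area test and an aspect-ratio test."""
    kept = []
    for (x, y, w, h) in components:
        if w * h < min_area:          # stage 1: discard tiny regions
            continue
        if max(w, h) / max(1, min(w, h)) > max_aspect:
            continue                  # stage 2: discard elongated noise
        kept.append((x, y, w, h))
    return kept

def fit_text_line(components):
    """Least-squares line through component centers, a simple stand-in
    for globally extracting a best-fitting text line."""
    cx = np.array([x + w / 2.0 for (x, y, w, h) in components])
    cy = np.array([y + h / 2.0 for (x, y, w, h) in components])
    slope, intercept = np.polyfit(cx, cy, 1)
    return slope, intercept
```

Components grouped along a fitted line would then be pruned, split where the line breaks, and enclosed in bounding boxes passed to the recognizer.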
Abstract:
Multi-feature image haze removal is described. In one or more implementations, feature maps are extracted from a hazy image of a scene. The feature maps convey information about visual characteristics of the scene captured in the hazy image. Based on the feature maps, the portions of light that are not scattered by the atmosphere and that are captured to produce the hazy image are computed. Additionally, airlight of the hazy image is ascertained based on at least one of the feature maps. The ascertained airlight represents the constant atmospheric light of the scene. Using the computed portions of light and the ascertained airlight, a dehazed image is generated from the hazy image.
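The generation step can be illustrated with the standard haze formation model I = J·t + A·(1 − t), where t is the per-pixel transmission (the computed unscattered portion of light) and A is the airlight; estimating t and A from feature maps is the substance of the technique and is not reproduced here.

```python
import numpy as np

def dehaze(image, transmission, airlight, t_min=0.1):
    """Invert I = J * t + A * (1 - t) to recover scene radiance J.
    `image` is H x W x 3, `transmission` is an H x W map in [0, 1],
    and `airlight` is a length-3 vector."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # avoid division blow-up
    return (image - airlight) / t + airlight
```

Clamping the transmission below by t_min is a common safeguard against amplifying noise in regions where almost no scene light reaches the camera.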
Abstract:
In techniques for adaptive denoising with internal and external patches, example image patches taken from example images are grouped into partitions of similar patches, and a partition center patch is determined for each of the partitions. An image denoising technique is applied to image patches of a noisy image to generate modified image patches, and a closest partition center patch to each of the modified image patches is determined. The image patches of the noisy image are then classified as either a common patch or a complex patch of the noisy image, where an image patch is classified based on a distance between the corresponding modified image patch and the closest partition center patch. A denoising operator can be applied to an image patch based on the classification, such as applying respective denoising operators to denoise the image patches that are classified as the common patches of the noisy image.
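The classification test can be sketched as a thresholded distance to the closest partition center; the threshold below is a hypothetical tuning parameter, and the per-class denoising operators are not reproduced.

```python
import numpy as np

def classify_patch(modified_patch, centers, threshold):
    """Label a denoised patch 'common' if it lies near some partition
    center learned from external example patches, else 'complex'."""
    d = np.linalg.norm(centers - modified_patch.ravel(), axis=1)
    return "common" if d.min() < threshold else "complex"
```

Patches labeled common can then be routed to the denoising operators associated with their matched partitions, while complex patches fall back to a different treatment.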
Abstract:
Image classification techniques using images with separate grayscale and color channels are described. In one or more implementations, an image classification network includes grayscale filters and color filters which are separate from the grayscale filters. The grayscale filters are configured to extract grayscale features from a grayscale channel of an image, and the color filters are configured to extract color features from a color channel of the image. The extracted grayscale features and color features are used to identify an object in the image, and the image is classified based on the identified object.
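A sketch of the two-stream feature extraction follows; the mean-of-RGB grayscale channel, the chroma residual, and the assumption that filters within each bank share one size are all illustrative choices, and a real network would learn its filters across many layers.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Minimal single-channel valid cross-correlation."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(image_rgb, gray_filters, color_filters):
    """Apply separate filter banks to a grayscale channel and to the
    residual color channels, returning both response stacks."""
    gray = image_rgb.mean(axis=2)            # illustrative grayscale channel
    chroma = image_rgb - gray[..., None]     # illustrative color channel
    gray_feats = np.stack([conv2d_valid(gray, f) for f in gray_filters])
    color_feats = np.stack([conv2d_valid(chroma[..., c], f)
                            for f in color_filters for c in range(3)])
    return gray_feats, color_feats
```

Keeping the two filter banks separate lets grayscale structure and color cues be extracted independently before being combined to identify objects in the image.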