Abstract:
Systems and methods are provided for content-based selection of style examples used in image stylization operations. For example, training images can be used to identify example stylized images that will generate high-quality stylized images when stylizing input images having certain types of semantic content. In one example, a processing device determines which example stylized images are more suitable for use with certain types of semantic content represented by training images. In response to receiving or otherwise accessing an input image, the processing device analyzes the semantic content of the input image, matches the input image to at least one training image with similar semantic content, and selects at least one example stylized image that has been previously matched to one or more training images having that type of semantic content. The processing device modifies color or contrast information for the input image using the selected example stylized image.
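As a rough illustration of the selection flow described above, the following Python sketch matches an input image to the training image with the most similar semantic descriptor and then transfers the color statistics of the associated example stylized image. The semantic_features() descriptor (a coarse color histogram here), the style_lookup mapping, and the mean/std color transfer are simplifying assumptions for illustration, not the specific techniques of this disclosure.

import numpy as np

def semantic_features(image):
    # Placeholder semantic descriptor: a coarse RGB color histogram.
    # A real system would use richer semantic features.
    hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(8, 8, 8), range=((0, 256),) * 3
    )
    return hist.ravel() / hist.sum()

def select_style_example(input_image, training_features, style_lookup):
    # Match the input image to the training image with the most similar
    # semantic content (here, smallest L2 distance between descriptors).
    query = semantic_features(input_image)
    distances = [np.linalg.norm(query - f) for f in training_features]
    best_training = int(np.argmin(distances))
    # Return the example stylized image previously matched to that
    # training image's type of semantic content.
    return style_lookup[best_training]

def transfer_color_statistics(input_image, style_image):
    # Modify color/contrast of the input image by matching the per-channel
    # mean and standard deviation of the selected style example.
    src = input_image.astype(np.float64)
    ref = style_image.astype(np.float64)
    out = (src - src.mean(axis=(0, 1))) / (src.std(axis=(0, 1)) + 1e-6)
    out = out * ref.std(axis=(0, 1)) + ref.mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)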
Abstract:
Systems and methods are disclosed herein for 3-Dimensional portrait reconstruction from a single photo. A face portion of a person depicted in a portrait photo is detected and a 3-Dimensional model of the person depicted in the portrait photo is constructed. In one embodiment, constructing the 3-Dimensional model involves fitting hair portions of the portrait photo to one or more helices. In another embodiment, constructing the 3-Dimensional model involves applying positional and normal boundary conditions determined based on one or more relationships between face portion shape and hair portion shape. In yet another embodiment, constructing the 3-Dimensional model involves using shape from shading to capture fine-scale details in the form of surface normals, the shape from shading being based on an adaptive albedo model and/or a lighting condition estimated based on shape fitting the face portion.
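To make the helix-fitting idea concrete, here is a hedged Python sketch that recovers the radius, pitch, and phase of a single circular helix from sampled 3D points along a hair strand. The axis-aligned parametrization and the linear fit against the unwrapped angle are illustrative assumptions, not the fitting procedure of the disclosure.

import numpy as np

def fit_helix(points):
    # Fit radius, pitch, and phase of a z-aligned helix through the centroid
    # of the strand samples (an assumed, simplified parametrization).
    pts = points - points.mean(axis=0)
    radius = np.hypot(pts[:, 0], pts[:, 1]).mean()
    angles = np.unwrap(np.arctan2(pts[:, 1], pts[:, 0]))
    # Pitch: rise in z per full turn, from a linear fit of z against angle.
    slope, intercept = np.polyfit(angles, pts[:, 2], deg=1)
    pitch = slope * 2.0 * np.pi
    phase = angles[0]
    return radius, pitch, phase

# Example: recover parameters from a noiseless synthetic strand.
t = np.linspace(0.0, 4.0 * np.pi, 200)
strand = np.column_stack([2.0 * np.cos(t), 2.0 * np.sin(t), 0.5 * t])
print(fit_helix(strand))  # radius ~2.0, pitch ~pi (0.5 * 2*pi), phase ~0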
Abstract:
Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example, in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene model from a single image by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, as well as various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective.
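The following Python sketch illustrates only the insertion step, under strong simplifications: given an already-estimated scene model (a depth map for the inserted object, its diffuse albedo, and a single inferred directional light), it computes Lambertian shading from depth-derived normals and alpha-composites the object over the input image. The helper names and the single-light Lambertian model are assumptions, not the full physically based pipeline described above.

import numpy as np

def normals_from_depth(depth):
    # Approximate per-pixel surface normals from depth gradients.
    dz_dy, dz_dx = np.gradient(depth)
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def shade_and_composite(background, object_albedo, object_depth, object_mask,
                        light_dir):
    # Shade the inserted object with the inferred directional light, then
    # composite it over the background image using its mask.
    light = np.asarray(light_dir, dtype=np.float64)
    light /= np.linalg.norm(light)
    normals = normals_from_depth(object_depth)
    shading = np.clip(normals @ light, 0.0, 1.0)[..., None]
    rendered = object_albedo * shading
    alpha = object_mask[..., None].astype(np.float64)
    return alpha * rendered + (1.0 - alpha) * background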
Abstract:
The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate, but is more approximate than a full similarity heuristic. The systems and methods use the oracle to quickly guide the search toward areas of the search space that are more likely to contain a match. Once such areas are identified, the systems and methods apply a more accurate similarity function to identify patch matches.
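A minimal Python sketch of the oracle-guided search is shown below, assuming grayscale images, a mean-intensity oracle standing in for the (unspecified) oracle, and SSD as the more accurate similarity function; the acceptance rule, temperature parameter, and random proposal scheme are illustrative assumptions.

import numpy as np

def patch(img, y, x, size):
    return img[y:y + size, x:x + size]

def find_match(source, target_patch, size, n_proposals=500, temperature=10.0,
               rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    target_mean = target_patch.mean()
    best, best_ssd = None, np.inf
    h, w = source.shape
    for _ in range(n_proposals):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        cand = patch(source, y, x, size)
        # Cheap oracle: probabilistically accept proposals whose mean
        # intensity is close to the target's. This steers the search toward
        # promising regions without evaluating the full similarity.
        oracle_gap = abs(cand.mean() - target_mean)
        if rng.random() > np.exp(-oracle_gap / temperature):
            continue
        # Accurate (but more expensive) similarity, only on accepted proposals.
        ssd = np.sum((cand.astype(np.float64) - target_patch) ** 2)
        if ssd < best_ssd:
            best, best_ssd = (y, x), ssd
    return best, best_ssd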
Abstract:
Certain embodiments involve using labels to track high-frequency offsets for patch-matching. For example, a processor identifies an offset between a first source image patch and a first target image patch. If the first source image patch and the first target image patch are sufficiently similar, the processor updates a data structure to include a label specifying the offset. The processor associates, via the data structure, the first source image patch with the label. The processor subsequently selects certain high-frequency offsets, including the identified offset, from frequently occurring offsets in the data structure. The processor uses these offsets to identify a second target image patch, which is located at the identified offset from a second source image patch. The processor associates, via the data structure, the second source image patch with the identified offset based on a sufficient similarity between the second source image patch and the second target image patch.
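The sketch below illustrates the label/data-structure idea in Python under simplifying assumptions: labels are the offset tuples themselves, the data structure is an ordinary dict plus a Counter of offset frequencies, similarity is SSD against a threshold tau, and the source and target images have the same size. None of these choices are mandated by the embodiments described above.

import numpy as np
from collections import Counter

def ssd(a, b):
    return np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def record_offset(labels, counts, src_xy, offset, src_patch, tgt_patch, tau):
    # If the source and target patches are sufficiently similar, associate the
    # source patch with a label specifying this offset and count the offset.
    if ssd(src_patch, tgt_patch) <= tau:
        labels[src_xy] = offset
        counts[offset] += 1

def propagate_frequent_offsets(source, target, size, labels, counts, tau,
                               top_k=4):
    # Try the most frequently occurring offsets on every unlabeled source
    # patch; adopt an offset when the target patch at that offset is similar
    # enough to the source patch.
    h, w = source.shape
    frequent = [off for off, _ in counts.most_common(top_k)]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            if (y, x) in labels:
                continue
            src_patch = source[y:y + size, x:x + size]
            for dy, dx in frequent:
                ty, tx = y + dy, x + dx
                if 0 <= ty <= h - size and 0 <= tx <= w - size:
                    tgt_patch = target[ty:ty + size, tx:tx + size]
                    if ssd(src_patch, tgt_patch) <= tau:
                        labels[(y, x)] = (dy, dx)
                        counts[(dy, dx)] += 1
                        break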
Abstract:
This disclosure relates to generating a bump map and/or a normal map from an image. For example, a method for generating a bump map includes receiving a texture image and a plurality of user-specified weights. The method further includes deriving a plurality of images from the texture image, the plurality of images varying from one another with respect to resolution or sharpness. The method further includes weighting individual images of the plurality of images according to the user-specified weights. The method further includes generating a bump map using the weighted individual images. The method further includes providing an image for display with texture added to a surface of an object in the image based on the bump map.
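As a hedged illustration of this flow, the Python sketch below derives a set of progressively blurred images from a grayscale texture (a stand-in for derived images that vary in resolution or sharpness), combines them with user-specified weights into a bump map, and optionally converts the result into a tangent-space normal map. The Gaussian-blur decomposition and the gradient-based normal computation are assumptions, not the specific method claimed.

import numpy as np
from scipy.ndimage import gaussian_filter

def bump_map(texture, weights, sigmas=(1.0, 2.0, 4.0, 8.0)):
    # Derive a plurality of images from the texture that differ in sharpness.
    derived = [gaussian_filter(texture, sigma=s) for s in sigmas]
    # Weight the individual images by the user-specified weights and combine
    # them into a single height (bump) map.
    bump = sum(w * img for w, img in zip(weights, derived))
    # Normalize so the bump map spans [0, 1].
    bump -= bump.min()
    return bump / (np.ptp(bump) + 1e-8)

def normal_map(bump, strength=1.0):
    # Optional: convert the bump (height) map into a tangent-space normal map.
    dz_dy, dz_dx = np.gradient(bump)
    n = np.stack([-strength * dz_dx, -strength * dz_dy, np.ones_like(bump)],
                 axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return 0.5 * (n + 1.0)  # pack [-1, 1] into [0, 1] for display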