Abstract:
This disclosure describes techniques for performing automatic synthetic shallow depth of field (SDOF) or so-called “portrait mode” suggestions for digital image capture. In particular, these techniques aim to solve the problem of letting users of digital image capture devices know when to turn on portrait image capture modes to capture aesthetically pleasing images. If, based on the application of certain criteria, a currently-composed scene is detected to be “portrait worthy,” an icon or other indicator may be provided, e.g., on a user interface of the digital image capture device. According to some embodiments, once a scene has been determined to be portrait worthy, the digital image capture device may capture a subsequent image(s) with at least one additional data asset needed to render a captured image in a synthetic SDOF or portrait mode. This allows users to convert their digital images into portrait mode images via digital image post-processing operations.
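A minimal sketch of how such a portrait-worthiness check and suggestion might be structured is shown below. The `SceneStats` fields, the thresholds, and the `ui` object are illustrative assumptions, not the criteria disclosed in the abstract.

```python
from dataclasses import dataclass

# Hypothetical scene statistics; field names and thresholds are illustrative
# assumptions, not the disclosed criteria.
@dataclass
class SceneStats:
    face_detected: bool               # a face/subject was found in the preview
    subject_distance_m: float         # estimated distance to the subject
    subject_background_sep_m: float   # estimated subject-to-background separation
    scene_lux: float                  # ambient light level

def is_portrait_worthy(stats: SceneStats) -> bool:
    """Return True when the currently composed scene looks suitable for
    synthetic shallow depth of field (portrait) rendering."""
    return (
        stats.face_detected
        and 0.3 <= stats.subject_distance_m <= 2.5   # subject within portrait range
        and stats.subject_background_sep_m >= 0.5    # enough separation to blur
        and stats.scene_lux >= 20                    # enough light for depth estimation
    )

def on_preview_frame(stats: SceneStats, ui) -> None:
    # Surface a portrait-mode suggestion and tag subsequent captures so that
    # extra data (e.g., a depth map) is stored for later SDOF rendering.
    if is_portrait_worthy(stats):
        ui.show_portrait_suggestion()
        ui.capture_pipeline.request_auxiliary_depth_data = True
    else:
        ui.hide_portrait_suggestion()
```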
Abstract:
In one embodiment, a method includes: obtaining a first image of a scene while an illumination component is set to an inactive state; obtaining a second image of the scene while the illumination component is set to a pre-flash state; determining one or more illumination control parameters for the illumination component for a third image of the scene that satisfy a foreground-background balance criterion based on a function of the first and second images in order to discriminate foreground data from background data within the scene; and obtaining the third image of the scene while the illumination component is set to an active state in accordance with the one or more illumination control parameters.
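One way such a pre-flash measurement could be used is sketched below: the per-pixel brightening between the no-flash and pre-flash captures separates foreground from background (flash falloff leaves distant content nearly unchanged), and the main-flash power is then scaled toward a desired foreground-background balance. The threshold and the balance formula are assumptions for illustration, not the disclosed method.

```python
import numpy as np

def estimate_flash_parameters(no_flash: np.ndarray,
                              pre_flash: np.ndarray,
                              target_ratio: float = 1.0,
                              fg_threshold: float = 1.2) -> dict:
    """Sketch of choosing a main-flash power from a no-flash / pre-flash pair.
    Inputs are aligned linear-luminance images of the same scene."""
    eps = 1e-6
    gain = (pre_flash + eps) / (no_flash + eps)

    # Pixels that brighten markedly under the pre-flash are treated as
    # foreground; background pixels see little added illumination.
    foreground = gain > fg_threshold
    background = ~foreground

    fg_mean = pre_flash[foreground].mean()
    bg_mean = pre_flash[background].mean()

    # Scale flash power so the foreground/background luminance ratio in the
    # final (third) capture approaches the desired balance.
    flash_scale = target_ratio * bg_mean / (fg_mean + eps)
    return {"flash_power_scale": float(flash_scale), "foreground_mask": foreground}
```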
Abstract:
Techniques are disclosed to provide user control over the manipulation of a digital image. The disclosed techniques enable a user to apply various textures that mimic traditional artistic media to a selected image. User selection of a texture level results in the blending of texturized versions of the selected image in accordance with the selected texture level. User selection of a color level results in the adjustment of color properties of the selected image that are included in the output image. Control of the image selection, texture type selection, texture level selection, and color level selection may be provided through an intuitive graphical user interface.
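The selected texture level and color level could plausibly drive simple per-pixel blends, as in the sketch below. The linear blend weights and the grayscale reduction used for the color level are assumptions; the abstract does not specify the blending formulas.

```python
import numpy as np

def apply_texture_and_color(image: np.ndarray,
                            textured: np.ndarray,
                            texture_level: float,
                            color_level: float) -> np.ndarray:
    """Illustrative blend only: `textured` is a pre-rendered texturized version
    of `image` (e.g., a pencil or charcoal rendering); both images are float
    RGB in [0, 1], and the two levels are in [0, 1]."""
    # Texture level: linearly blend the original with its texturized version.
    out = (1.0 - texture_level) * image + texture_level * textured

    # Color level: blend between a grayscale version and the full-color result,
    # so 0 yields a monochrome rendering and 1 keeps the original colors.
    gray = out.mean(axis=2, keepdims=True)
    out = (1.0 - color_level) * gray + color_level * out
    return np.clip(out, 0.0, 1.0)
```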
Abstract:
A method and system provide a dynamic grain effect tool for a media-editing application that generates a grain effect and applies it to a digital image. The application first generates a random pixel field for the image based on a seed value. The application then generates a film grain pattern for the image by consecutively applying a blurring function and an unsharp masking function, based on an ISO value, to the randomly generated pixel field. The application then blends this grain field with the original image by adjusting each pixel based on the value of the corresponding pixel location in the grain field. The application then adjusts the grain amount in the previously generated full-grain image by receiving a grain amount value from a user and applying this value to the full-grain image.
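The four steps described above map naturally onto a short image-processing pipeline. The sketch below follows that order; the ISO-to-blur-radius mapping, the unsharp-mask strength, and the blend weights are assumptions, not the disclosed parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def film_grain(image: np.ndarray, seed: int, iso: float, amount: float) -> np.ndarray:
    """Rough sketch of the described grain pipeline for a float RGB image in [0, 1]."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(seed)

    # 1. Random pixel field generated from a seed value.
    noise = rng.random((h, w)).astype(np.float32)

    # 2. Blur, then unsharp-mask, with strength tied to the ISO value,
    #    to shape the noise into a film-like grain pattern.
    sigma = 0.5 + iso / 3200.0                    # assumed ISO-to-radius mapping
    blurred = gaussian_filter(noise, sigma)
    grain = blurred + 1.5 * (blurred - gaussian_filter(blurred, 2 * sigma))
    grain = grain - grain.mean()                  # zero-mean grain field

    # 3. Blend the grain field with the original image, pixel by pixel.
    full_grain = np.clip(image + grain[..., None] * 0.15, 0.0, 1.0)

    # 4. Apply the user-selected grain amount to the full-grain image.
    return (1.0 - amount) * image + amount * full_grain
```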
Abstract:
Some embodiments provide several on-image tools of an image-editing application for applying effects to an image. Some on-image tools are visible to the user and are overlaid on the image; some on-image tools are not visible. The tools receive a selection of a location on the image and apply effects to at least an area of the image that does not include the selected location.
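A minimal sketch of applying an effect only outside the selected location is shown below. The circular mask, the radius, and the example darkening effect are illustrative assumptions; the abstract does not specify how the excluded area is determined.

```python
import numpy as np

def apply_effect_outside_selection(image: np.ndarray,
                                   tap_xy: tuple[int, int],
                                   radius: int,
                                   effect) -> np.ndarray:
    """Apply `effect` (a function over pixel values) only to pixels outside a
    circle around the tapped location; `image` is a float RGB array."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = tap_xy
    outside = (xs - cx) ** 2 + (ys - cy) ** 2 > radius ** 2

    out = image.copy()
    out[outside] = effect(image[outside])
    return out

# Hypothetical usage: darken everything except the region around the tap.
# edited = apply_effect_outside_selection(img, (320, 240), 120, lambda p: p * 0.5)
```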