Abstract:
Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
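As an illustration of the second example above, the sketch below shows how a learned embedding function could be used to rank fonts by similarity. The network layers, the FontEmbeddingNet name, and the cosine-similarity ranking are illustrative assumptions for this sketch, not the claimed model.

```python
# Minimal sketch (PyTorch) of ranking font similarity with a learned embedding.
# Architecture and weights here are placeholders, not the patented model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FontEmbeddingNet(nn.Module):
    """Toy convolutional embedder: maps a grayscale text image to a feature vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)   # unit-length embedding

def rank_similar_fonts(model, query_image, font_gallery):
    """Rank gallery font images by cosine similarity of embeddings to the query image."""
    with torch.no_grad():
        q = model(query_image.unsqueeze(0))      # (1, D)
        g = model(font_gallery)                  # (N, D)
        scores = (g @ q.t()).squeeze(1)          # cosine similarity (unit vectors)
    return torch.argsort(scores, descending=True)
```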
Abstract:
Techniques for image captioning with word vector representations are described. In implementations, instead of outputting results of caption analysis directly, the framework is adapted to output points in a semantic word vector space. These word vector representations reflect distance values in the context of the semantic word vector space. In this approach, words are mapped into a vector space and the results of caption analysis are expressed as points in the vector space that capture semantics between words. In the vector space, similar concepts have small distance values. The word vectors are not tied to particular words or a single dictionary. A post-processing step is employed to map the points to words and convert the word vector representations to captions. Accordingly, conversion is delayed to a later stage in the process.
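A minimal sketch of the post-processing step follows, assuming the caption analysis has already produced points in a word vector space. The vectors_to_caption helper, the toy vocabulary, and the cosine-based nearest-neighbor lookup are illustrative assumptions, not the described system's actual mapping.

```python
# Minimal sketch: map predicted points in a word vector space back to vocabulary
# words by nearest-neighbor lookup, then join them into a caption string.
import numpy as np

def vectors_to_caption(predicted_vectors, vocab, vocab_embeddings):
    """predicted_vectors: (T, D) points output for T caption positions.
       vocab: list of N words; vocab_embeddings: (N, D) word vectors."""
    # Normalize so that cosine similarity reduces to a dot product.
    V = vocab_embeddings / np.linalg.norm(vocab_embeddings, axis=1, keepdims=True)
    P = predicted_vectors / np.linalg.norm(predicted_vectors, axis=1, keepdims=True)
    nearest = np.argmax(P @ V.T, axis=1)   # closest word for each predicted point
    return " ".join(vocab[i] for i in nearest)

# Toy usage: points that land near known word vectors map back to those words.
vocab = ["a", "dog", "cat", "runs", "sleeps"]
emb = np.random.randn(len(vocab), 50)
points = emb[[0, 1, 3]] + 0.01 * np.random.randn(3, 50)  # noisy points near known words
print(vectors_to_caption(points, vocab, emb))             # likely "a dog runs"
```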
Abstract:
Certain embodiments involve learning features of content items (e.g., images) based on web data and user behavior data. For example, a system determines latent factors from the content items based on data including a user's text query or keyword query for a content item and the user's interaction with the content items based on the query (e.g., a user's click on a content item resulting from a search using the text query). The system uses the latent factors to learn features of the content items. The system then uses previously learned features of the content items to iterate the learning process and learn additional features, which improves the accuracy with which the system learns further features of the content items.
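The sketch below illustrates one way the query-and-click signal could yield latent factors, assuming the interactions are summarized as a keyword-by-image matrix. The plain truncated SVD used here stands in for whatever factorization the system actually applies, and all names are hypothetical.

```python
# Minimal sketch: derive per-image latent factors from a keyword-by-image matrix
# of click counts using a truncated SVD (an illustrative stand-in factorization).
import numpy as np

def latent_image_factors(interactions, rank=16):
    """interactions: (num_keywords, num_images) matrix of click counts.
       Returns a (num_images, rank) matrix of latent factors, one row per image."""
    U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
    return Vt[:rank].T * s[:rank]   # image factors weighted by singular values

# Toy usage: 4 keyword queries, 6 images, random click counts.
clicks = np.random.poisson(1.0, size=(4, 6)).astype(float)
factors = latent_image_factors(clicks, rank=2)
print(factors.shape)  # (6, 2): one latent factor vector per image
```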
Abstract:
Techniques for image captioning with weak supervision are described herein. In implementations, weak supervision data regarding a target image is obtained and utilized to provide detail information that supplements global image concepts derived for image captioning. Weak supervision data refers to noisy data that is not closely curated and may include errors. Given a target image, weak supervision data for visually similar images may be collected from sources of weakly annotated images, such as online social networks. Generally, images posted online include “weak” annotations in the form of tags, titles, labels, and short descriptions added by users. Weak supervision data for the target image is generated by extracting keywords for visually similar images discovered in the different sources. The keywords included in the weak supervision data are then employed to modulate weights applied for probabilistic classifications during image captioning analysis.
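As a rough illustration of the final step, the sketch below modulates caption-word probabilities with keywords gathered from visually similar, weakly annotated images. The boost factor, the renormalization, and the toy vocabulary are assumptions made for this sketch rather than the described system's weighting scheme.

```python
# Minimal sketch: boost the probabilities of vocabulary words that appear among
# the weak supervision keywords, then renormalize to a distribution.
import numpy as np

def modulate_with_keywords(word_probs, vocab, weak_keywords, boost=2.0):
    """word_probs: (N,) classifier probabilities over the caption vocabulary.
       weak_keywords: set of keywords extracted from similar images' tags/titles."""
    weights = np.array([boost if w in weak_keywords else 1.0 for w in vocab])
    scores = word_probs * weights
    return scores / scores.sum()   # renormalize to a probability distribution

# Toy usage: words supported by weak supervision gain probability mass.
vocab = ["beach", "dog", "snow", "sunset"]
probs = np.array([0.30, 0.30, 0.25, 0.15])
print(modulate_with_keywords(probs, vocab, {"beach", "sunset"}))
```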
Abstract:
Content creation and sharing integration techniques and systems are described. In one or more implementations, techniques are described in which modifiable versions of content (e.g., images) are created and shared via a content sharing service such that the image creation functionality used to create the images is preserved, permitting continued creation using that functionality. In one or more additional implementations, image creation functionality employed by a creative professional to create content is leveraged to locate similar images from a content sharing service.
Abstract:
A convolutional neural network (CNN) is trained for font recognition and font similarity learning. In a training phase, text images with font labels are synthesized by introducing variances to minimize the gap between the training images and real-world text images. Training images are generated and input into the CNN. The output is fed into an N-way softmax function, where N is the number of fonts the CNN is being trained on, producing a distribution of classified text images over N class labels. In a testing phase, each test image is normalized in height and squeezed in aspect ratio, resulting in a plurality of test patches. The CNN averages the probabilities of each test patch belonging to a set of fonts to obtain a classification. Feature representations may be extracted and utilized to define font similarity between fonts, which may be utilized in font suggestion, font browsing, or font recognition applications.
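The testing-phase idea of averaging per-patch probabilities can be sketched as follows, assuming the input image has already been normalized in height and squeezed in aspect ratio. The patch width, the overlapping cropping, and the classify_font helper are placeholders for illustration, not the trained network or its exact patch sampling.

```python
# Minimal sketch: cut a height-normalized text image into patches and average the
# per-patch softmax outputs to classify the font.
import torch

def classify_font(cnn, text_image, patch_width=105):
    """text_image: (1, H, W) tensor, already normalized in height and aspect ratio.
       cnn: a model returning (num_patches, N) logits over N font classes."""
    _, h, w = text_image.shape
    starts = range(0, max(w - patch_width, 1), patch_width // 2)  # overlapping crops
    patches = torch.stack([text_image[:, :, s:s + patch_width] for s in starts])
    with torch.no_grad():
        probs = torch.softmax(cnn(patches), dim=1)   # per-patch class distributions
    return probs.mean(dim=0).argmax().item()         # average, then pick the font
```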
Abstract:
Image editing techniques are disclosed that support a number of physically-based image editing tasks, including object insertion and relighting. The techniques can be implemented, for example, in an image editing application that is executable on a computing system. In one such embodiment, the editing application is configured to compute a scene from a single image, by automatically estimating dense depth and diffuse reflectance, which respectively form the geometry and surface materials of the scene. Sources of illumination are then inferred, conditioned on the estimated scene geometry and surface materials and without any user input, to form a complete 3D physical scene model corresponding to the image. The scene model may include estimates of the geometry, illumination, and material properties represented in the scene, and various camera parameters. Using this scene model, objects can be readily inserted and composited into the input image with realistic lighting, shadowing, and perspective.
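Once such a scene model exists, one standard way to composite an inserted object with its lighting and shadowing effects is differential rendering; the abstract does not specify this exact compositing step, so the sketch below is a general illustration under that assumption.

```python
# Minimal sketch of a differential-rendering composite: pixels covered by the
# inserted object come from the full render, and elsewhere the photograph is
# adjusted by the change the object causes (shadows, bounced light).
import numpy as np

def composite_insertion(photo, render_with_obj, render_without_obj, obj_mask):
    """photo, render_with_obj, render_without_obj: (H, W, 3) float arrays.
       obj_mask: (H, W, 1) values in [0, 1] marking the inserted object's pixels."""
    delta = render_with_obj - render_without_obj      # object's effect on the scene
    return obj_mask * render_with_obj + (1 - obj_mask) * (photo + delta)
```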
Abstract:
In techniques for video denoising using optical flow, image frames of video content include noise that corrupts the video content. A reference frame is selected, and matching patches to an image patch in the reference frame are determined from within the reference frame. A noise estimate is computed for previous and subsequent image frames relative to the reference frame. The noise estimate for an image frame is computed based on optical flow, and is usable to determine a contribution of similar motion patches to denoise the image patch in the reference frame. The similar motion patches from the previous and subsequent image frames that correspond to the image patch in the reference frame are determined based on the optical flow computations. The image patch is denoised based on an average of the matching patches from the reference frame and the similar motion patches determined from the previous and subsequent image frames.
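A minimal sketch of that final averaging step is shown below, assuming the patch matching and optical-flow alignment have already been performed. The denoise_patch helper and the optional per-frame weights (standing in for contributions derived from the noise estimates) are illustrative assumptions.

```python
# Minimal sketch: denoise a reference patch by averaging matched patches from the
# reference frame with motion-compensated patches from neighboring frames.
import numpy as np

def denoise_patch(ref_patch, matched_patches, motion_patches, motion_weights=None):
    """ref_patch: (h, w) noisy patch; matched_patches: list of (h, w) patches from the
       reference frame; motion_patches: list of (h, w) patches aligned via optical flow
       from previous/subsequent frames; motion_weights: optional per-patch weights
       derived from the per-frame noise estimates."""
    stack = [ref_patch] + list(matched_patches) + list(motion_patches)
    if motion_weights is None:
        return np.mean(stack, axis=0)                  # plain average of all patches
    weights = np.concatenate([np.ones(1 + len(matched_patches)),
                              np.asarray(motion_weights, dtype=float)])
    return np.average(stack, axis=0, weights=weights)  # weighted by noise estimates
```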
Abstract:
A non-keyframe reconstruction technique is described for selecting and reconstructing non-keyframes that have not yet been included in a reconstruction of an input image sequence to provide a better reconstruction in a structure from motion (SFM) technique. The technique may, for example, be used in an adaptive reconstruction algorithm implemented by a general SFM technique. This technique may add and reconstruct non-keyframes to a set of keyframes already generated by an initialization technique and reconstructed by adaptive and optimization techniques that iteratively select and reconstruct additional keyframes. Camera motion and intrinsic parameters may be computed for non-keyframes by optimizing a cost function. Output of the non-keyframe reconstruction technique may include at least camera intrinsic parameters and Euclidean motion parameters for the images in the input image sequence.
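The abstract does not spell out the cost function, so the sketch below shows a generic version of the idea: registering one non-keyframe by minimizing reprojection error over its camera pose, with the already-reconstructed 3D points and the focal length held fixed for brevity. The helper names and the pinhole model are assumptions made for this sketch.

```python
# Minimal sketch: estimate a non-keyframe's camera pose by minimizing reprojection
# error against 3D points from the existing reconstruction (focal length fixed).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(pose, points_3d, points_2d, focal):
    """pose: 6-vector (rotation vector, translation); points_3d: (N, 3); points_2d: (N, 2)."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = points_3d @ R.T + pose[3:]             # world -> camera coordinates
    proj = focal * cam[:, :2] / cam[:, 2:3]      # pinhole projection
    return (proj - points_2d).ravel()

def register_non_keyframe(points_3d, points_2d, focal, init_pose=np.zeros(6)):
    """Optimize the pose cost function for one non-keyframe; returns rotation vector
       and translation minimizing the reprojection residuals."""
    result = least_squares(reprojection_residuals, init_pose,
                           args=(points_3d, points_2d, focal))
    return result.x
```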