Abstract:
Selection of an area of an image can be received. Selection of a subset of a plurality of predefined patterns may be received. A plurality of patterns can be generated. At least one generated pattern in the plurality of patterns may be based at least in part on one or more predefined patterns in the subset. Selection of another subset of patterns may be received. At least one pattern in the other subset of patterns may be selected from the plurality of predefined patterns and/or the generated patterns. Another plurality of patterns can be generated. At least one generated pattern in this plurality of patterns may be based at least in part on one or more patterns in the other subset. Selection of a generated pattern from the other generated plurality of patterns may be received. The selected area of the image may be populated with the selected generated pattern.
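As a rough illustration of the workflow above, the following Python sketch walks through the two rounds of selection and generation; `Pattern` and `generate_variations` are hypothetical placeholders for illustration, not an API taken from this description.

```python
# Minimal sketch of the iterative pattern-selection workflow described above.
import random
from dataclasses import dataclass

@dataclass
class Pattern:
    seed: int   # hypothetical parameter that defines the pattern

def generate_variations(parents, count=8):
    """Generate new patterns, each based at least in part on one or more parents."""
    return [Pattern(seed=random.choice(parents).seed + random.randint(1, 100))
            for _ in range(count)]

predefined = [Pattern(seed=s) for s in range(10)]

# 1) User selects an image area and a subset of the predefined patterns.
selected_area = (10, 10, 200, 200)          # x, y, width, height
first_subset = predefined[:3]

# 2) Generate patterns from that subset; the user then picks another subset,
#    which may mix predefined and generated patterns.
generated = generate_variations(first_subset)
second_subset = [predefined[0], generated[2]]

# 3) Generate again from the second subset and let the user pick a final pattern.
final_candidates = generate_variations(second_subset)
chosen = final_candidates[0]

# 4) Populate the selected image area with the chosen pattern (fill step omitted).
```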
Abstract:
This document describes techniques and apparatuses for 3D printing with small geometric offsets to affect surface characteristics. These techniques are capable of enabling fused-deposition printers to create 3D objects having desired surface characteristics, such as particular colors, images and image resolutions, textures, and luminosities. In some cases, the techniques do so using a single filament head with a single filament material. In other cases, the techniques do so using multiple heads, each with a different filament, though the techniques can forgo many switches between these heads. Each printing layer may even use a single filament from one head, thereby enabling the desired surface characteristics while reducing starts and stops for filament heads, which reduces artifacts or increases printing speed.
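The sketch below shows one assumed way such small offsets could be computed for a single layer's perimeter, shifting points slightly along their outward normals according to a target image; the offset size and helper names are illustrative and are not the described printers' actual toolpath logic.

```python
# Illustrative sketch: encode an image as small geometric offsets on a layer perimeter.
import numpy as np

def offset_perimeter(points, normals, intensities, max_offset_mm=0.08):
    """Shift each perimeter point slightly along its outward normal.

    points:      (N, 2) XY positions of the perimeter for one layer
    normals:     (N, 2) unit outward normals at those points
    intensities: (N,) values in [0, 1] sampled from the target image
    """
    offsets = (intensities * max_offset_mm)[:, None]   # small geometric offsets
    return points + normals * offsets                  # ridges modulate how light hits the surface

# Example: a circular perimeter with a striped target intensity pattern.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)] * 20.0       # mm
nrm = np.c_[np.cos(theta), np.sin(theta)]              # outward normals of a circle
img = (np.sin(theta * 10) + 1) / 2                     # striped intensities in [0, 1]
toolpath = offset_perimeter(pts, nrm, img)
```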
Abstract:
Cropping boundary simplicity techniques are described. In one or more implementations, multiple candidate croppings of a scene are generated. For each of the candidate croppings, a score is calculated that is indicative of a boundary simplicity for the candidate cropping. To calculate the boundary simplicity, complexity of the scene along a boundary of a respective candidate cropping is measured. The complexity is measured, for instance, using an average gradient, an image edge map, or entropy along the boundary. Values indicative of the complexity may be derived from the measuring. The candidate croppings may then be ranked according to those values. Based on the scores calculated to indicate the boundary simplicity, one or more of the candidate croppings may be chosen, e.g., to present the chosen croppings to a user for selection.
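A minimal sketch of the average-gradient variant of the boundary-simplicity score, assuming a NumPy grayscale image; the exact scoring formula here is an assumption for illustration, with lower gradient magnitude along the crop boundary yielding a higher (simpler) score.

```python
# Score candidate croppings by the average gradient magnitude along their boundaries.
import numpy as np

def boundary_simplicity(gray, crop):
    """gray: (H, W) grayscale image; crop: (top, left, bottom, right)."""
    gy, gx = np.gradient(gray.astype(float))
    grad_mag = np.hypot(gx, gy)                       # per-pixel gradient magnitude
    t, l, b, r = crop
    boundary = np.concatenate([grad_mag[t, l:r],      # top edge
                               grad_mag[b - 1, l:r],  # bottom edge
                               grad_mag[t:b, l],      # left edge
                               grad_mag[t:b, r - 1]]) # right edge
    return -boundary.mean()                           # simpler boundary -> higher score

# Rank candidate croppings by descending score.
image = np.random.rand(480, 640)
candidates = [(0, 0, 240, 320), (100, 150, 400, 600), (50, 50, 450, 620)]
ranked = sorted(candidates, key=lambda c: boundary_simplicity(image, c), reverse=True)
```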
Abstract:
Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
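The following sketch shows one assumed way to rank candidate croppings by a combined score and to group near-duplicate croppings with an overlap threshold; the IoU-based distance and greedy grouping are illustrative stand-ins for the clustering described.

```python
# Rank scored croppings and cluster those that differ by less than a threshold.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (top, left, bottom, right) crops."""
    top, left = max(a[0], b[0]), max(a[1], b[1])
    bottom, right = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, bottom - top) * max(0, right - left)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def cluster_croppings(crops, scores, sim_threshold=0.8):
    """Greedy clustering: crops overlapping by at least the threshold share a group."""
    order = np.argsort(scores)[::-1]            # best-scoring crops first
    groups = []
    for i in order:
        for g in groups:
            if iou(crops[i], crops[g[0]]) >= sim_threshold:
                g.append(i)                     # similar to an existing group's representative
                break
        else:
            groups.append([i])                  # differs by at least the threshold: new group
    return groups

crops = [(0, 0, 200, 300), (5, 5, 205, 305), (100, 100, 400, 500)]
scores = np.array([0.9, 0.85, 0.7])             # e.g., combined component scores
groups = cluster_croppings(crops, scores)
best_per_group = [g[0] for g in groups]         # one representative cropping per group
```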
Abstract:
In techniques for category histogram image representation, image segments of an input image are generated and bounding boxes are selected that each represent a region of the input image, where each of the bounding boxes includes image segments of the input image. A saliency map of the input image can also be generated. A bounding box is applied as a query on an image database to determine database image regions that match the region of the input image represented by the bounding box. The query can be augmented based on saliency detection of the input image region that is represented by the bounding box, and a query result is a ranked list of the database image regions. A category histogram for the region of the input image is then generated based on category labels of each of the database image regions that match the input image region.
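A minimal sketch, under assumed data structures, of the final step: building a normalized category histogram for an image region from the category labels of its top-ranked matching database regions. The ranked list stands in for the result of the saliency-augmented query.

```python
# Build a category histogram from the labels of the top-ranked database matches.
from collections import Counter

def category_histogram(matches, top_k=50):
    """matches: ranked list of (database_region_id, category_label) pairs."""
    labels = [label for _, label in matches[:top_k]]
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}   # normalized histogram

# Hypothetical ranked query result for one bounding box of the input image.
ranked_matches = [(101, "dog"), (88, "dog"), (7, "grass"), (42, "dog"), (5, "sky")]
hist = category_histogram(ranked_matches)
# e.g. {"dog": 0.6, "grass": 0.2, "sky": 0.2}
```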
Abstract:
This document describes techniques and apparatuses for offset three-dimensional (3D) printing. These techniques are capable of creating smoother surfaces and more-accurate structures than many current techniques. In some cases, the techniques provide a first stage of filaments separated by offsets and, at a second stage, provide filaments over these offsets. In so doing, filaments of the second stage partially fill in these offsets, which can remove steps, increase accuracy, or reduce undesired production artifacts.
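The toy sketch below illustrates the two-stage layout under assumed dimensions: first-stage filament centerlines are separated by small offsets, and second-stage centerlines are placed over those offsets so the second-stage material can partially fill them in.

```python
# Toy layout of first- and second-stage filament centerlines for one layer.
import numpy as np

filament_width = 0.4   # mm (assumed)
offset = 0.1           # mm gap left between first-stage filaments (assumed)
pitch = filament_width + offset
n = 10

# First stage: filament centerlines separated by the offset.
stage1_centers = np.arange(n) * pitch + filament_width / 2

# Second stage: centerlines placed over the offsets between first-stage filaments,
# so the deposited material partially fills the gaps and smooths surface steps.
stage2_centers = stage1_centers[:-1] + pitch / 2
```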
Abstract:
Cropping boundary simplicity techniques are described. In one or more implementations, multiple candidate croppings of a scene are generated. For each of the candidate croppings, a score is calculated that is indicative of a boundary simplicity for the candidate cropping. To calculate the boundary simplicity, complexity of the scene along a boundary of a respective candidate cropping is measured. The complexity is measured, for instance, using an average gradient, an image edge map, or entropy along the boundary. Values indicative of the complexity may be derived from the measuring. The candidate croppings may then be ranked according to those values. Based on the scores calculated to indicate the boundary simplicity, one or more of the candidate croppings may be chosen, e.g., to present the chosen croppings to a user for selection.
Abstract:
In techniques for image foreground detection, a foreground detection module is implemented to generate varying levels of saliency thresholds from a saliency map of an image that includes foreground regions. The saliency thresholds can be generated based on an adaptive thresholding technique applied to the saliency map of the image and/or based on multi-level segmentation of the saliency map. The foreground detection module applies one or more constraints that distinguish the foreground regions in the image, and detects the foreground regions of the image based on the saliency thresholds and the constraints. Additionally, different ones of the constraints can be applied to detect different ones of the foreground regions, as well as to detect multi-level foreground regions based on the saliency thresholds and the constraints.
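A minimal sketch, assuming a NumPy saliency map normalized to [0, 1], of generating multiple threshold levels and applying a simple constraint; the adaptive rule (mean plus a fraction of the standard deviation) and the minimum-area constraint are illustrative assumptions, not the claimed formulas.

```python
# Multi-level saliency thresholding with a simple size constraint.
import numpy as np

def saliency_thresholds(saliency, levels=3):
    """Generate increasing thresholds, from permissive to strict."""
    base = saliency.mean() + 0.5 * saliency.std()        # adaptive starting point
    return [min(base + i * (1.0 - base) / levels, 0.99) for i in range(levels)]

def detect_foreground(saliency, min_area_frac=0.01):
    """Return one binary foreground mask per threshold level, after constraints."""
    masks = []
    for t in saliency_thresholds(saliency):
        mask = saliency >= t
        if mask.mean() >= min_area_frac:                 # constraint: discard tiny regions
            masks.append(mask)
    return masks

saliency_map = np.random.rand(240, 320)
foreground_levels = detect_foreground(saliency_map)      # multi-level foreground regions
```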
Abstract:
Methods and apparatus are described for three-dimensional (3D) camera positioning using a two-dimensional (2D) vanishing point grid. A vanishing point grid in a scene and initial camera parameters may be obtained. A new 3D camera may be calculated according to the vanishing point grid, placing the grid as a ground plane in the scene. A 3D object may then be placed on the ground plane in the scene as defined by the 3D camera. The 3D object may be placed at the center of the vanishing point grid. Once placed, the 3D object can be moved to other locations on the ground plane or otherwise manipulated. The 3D object may be added as a layer in the image.
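One standard way such a 3D camera could be calculated from two orthogonal vanishing points of the grid is sketched below, assuming the principal point sits at the image center and pixels are square; this construction is an assumption for illustration, not necessarily the claimed method.

```python
# Recover a focal length and camera rotation from two orthogonal vanishing points.
import numpy as np

def camera_from_vanishing_points(v1, v2, principal_point):
    """v1, v2: pixel coordinates of two orthogonal vanishing points of the grid."""
    p = np.asarray(principal_point, float)
    a, b = np.asarray(v1, float) - p, np.asarray(v2, float) - p
    f = np.sqrt(max(-(a @ b), 1e-6))          # focal length from the orthogonality constraint
    # Directions of the two grid axes in camera space define the ground plane.
    r1 = np.append(a, f); r1 /= np.linalg.norm(r1)
    r2 = np.append(b, f); r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)                     # ground-plane normal
    R = np.stack([r1, r2, r3], axis=1)        # rotation: grid axes -> camera frame
    return f, R

f, R = camera_from_vanishing_points((1500, 300), (-900, 320), (640, 360))
# A 3D object placed at the grid center can now be projected using f and R.
```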
Abstract:
Various embodiments describe view switching of video on a computing device. In an example, a video processing application executed on the computing device receives a stream of video data. The video processing application renders a major view on a display of the computing device. The major view presents a video from the stream of video data. The video processing application inputs the stream of video data to a deep learning system and receives back information that identifies a cropped video from the video based on a composition score of the cropped video, while the video is presented in the major view. The composition score is generated by the deep learning system. The video processing application renders a sub-view on a display of the device, the sub-view presenting the cropped video. The video processing application renders the cropped video in the major view based on a user interaction with the sub-view.
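A minimal sketch of the view-switching flow, where `CompositionModel` and the render helper are hypothetical stand-ins for the deep learning system and the UI layer described above.

```python
# Render a major view, a scored cropped sub-view, and swap views on user interaction.
import numpy as np
from dataclasses import dataclass

@dataclass
class CropSuggestion:
    box: tuple                 # (top, left, bottom, right) within the frame
    composition_score: float   # produced by the deep learning system

class CompositionModel:
    """Hypothetical model: proposes a cropped view of each frame and scores it."""
    def suggest(self, frame):
        h, w = frame.shape[:2]
        return CropSuggestion((0, w // 8, h, w * 7 // 8), composition_score=0.8)

def render(view_name, frame):
    print(f"render {view_name}: {frame.shape[1]}x{frame.shape[0]}")

def process_stream(frames, model, user_taps_sub_view=False):
    for frame in frames:
        render("major view", frame)            # full video in the major view
        s = model.suggest(frame)               # cropped video + composition score
        t, l, b, r = s.box
        cropped = frame[t:b, l:r]
        render("sub-view", cropped)            # cropped video in the sub-view
        if user_taps_sub_view:                 # user interaction with the sub-view
            render("major view", cropped)      # promote the crop to the major view

process_stream([np.zeros((360, 640, 3), dtype=np.uint8)], CompositionModel())
```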