Abstract:
Selection of an area of an image can be received. Selection of a subset of a plurality of predefined patterns may be received. A plurality of patterns can be generated. At least one generated pattern in the plurality of patterns may be based at least in part on one or more predefined patterns in the subset. Selection of another subset of patterns may be received. At least one pattern in the other subset of patterns may be selected from the plurality of predefined patterns and/or the generated patterns. Another plurality of patterns can be generated. At least one generated pattern in this plurality of patterns may be based at least in part on one or more patterns in the other subset. Selection of a generated pattern from the other generated plurality of patterns may be received. The selected area of the image may be populated with the selected generated pattern.
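As a sketch of the iterative flow described above (the abstract does not specify how new patterns are derived from a selected subset, so the cell-interleaving step and all names here are hypothetical), candidate patterns could be generated from one selected subset and then fed back in, mixed with predefined patterns, for another round:

```python
import random

def generate_patterns(seed_patterns, count=4, rng=None):
    """Generate candidate patterns based on a selected subset of patterns.
    Here a pattern is a tuple of cells; each child interleaves two parents."""
    rng = rng or random.Random(0)
    generated = []
    for _ in range(count):
        if len(seed_patterns) > 1:
            parent_a, parent_b = rng.sample(seed_patterns, 2)
        else:
            parent_a = parent_b = seed_patterns[0]
        # Each cell of the child comes from one parent or the other.
        child = tuple(a if rng.random() < 0.5 else b
                      for a, b in zip(parent_a, parent_b))
        generated.append(child)
    return generated

predefined = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 0, 1)]
first_round = generate_patterns(predefined[:2])            # first selected subset
# The second subset may mix predefined and generated patterns.
second_round = generate_patterns(predefined + first_round)
```

A pattern selected from `second_round` would then be used to populate (for example, tile) the selected area of the image.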
Abstract:
This document describes techniques and apparatuses for 3D printing with small geometric offsets to affect surface characteristics. These techniques are capable of enabling fused-deposition printers to create 3D objects having desired surface characteristics, such as particular colors, images and image resolutions, textures, and luminosities. In some cases, the techniques do so using a single filament head with a single filament material. In other cases, the techniques do so using multiple heads, each with a different filament, though the techniques can forgo many switches between these heads. Each printing layer may use as few as a single filament from one head, thereby enabling the desired surface characteristics while reducing starts and stops for filament heads, which results in fewer artifacts or increased printing speed.
Abstract:
Various embodiments describe facilitating real-time crops on an image. In an example, an image processing application executed on a device receives image data corresponding to a field of view of a camera of the device. The image processing application renders a major view on a display of the device in a preview mode. The major view presents a previewed image based on the image data. The image processing application receives a composition score of a cropped image from a deep-learning system. The image processing application renders a sub-view presenting the cropped image based on the composition score in a preview mode. Based on a user interaction, the image processing application renders the cropped image in the major view with the sub-view in the preview mode.
Abstract:
Digital image defect identification and correction techniques are described. In one example, a digital medium environment is configured to identify and correct a digital image defect through identification of a defect in a digital image using machine learning. The identification includes generating a plurality of defect type scores using a plurality of defect type identification models, as part of machine learning, for a plurality of different defect types, and determining that the digital image includes the defect based on the generated plurality of defect type scores. A correction is generated for the identified defect, and the digital image is output as including the generated correction.
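A minimal sketch of the scoring step above, with stand-in callables in place of the trained defect type identification models (the threshold rule, defect names, and scores are assumptions, not the patent's method):

```python
def identify_defects(image, models, threshold=0.5):
    """Score the image with one model per defect type and report the
    defect types whose score meets the threshold (a hypothetical rule)."""
    scores = {name: model(image) for name, model in models.items()}
    defects = [name for name, score in scores.items() if score >= threshold]
    return scores, defects

# Stand-in "models": callables mapping an image to a score in [0, 1].
models = {
    "exposure": lambda image: 0.8,
    "blur":     lambda image: 0.2,
    "noise":    lambda image: 0.6,
}
scores, defects = identify_defects(None, models)
# defects → ["exposure", "noise"]
```

A correction routine would then be run for each entry in `defects` before the image is output.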
Abstract:
Neural network patch aggregation and statistical techniques are described. In one or more implementations, patches are generated from an image, e.g., randomly, and used to train a neural network. An aggregation of outputs of patches processed by the neural network may be used to label an image with an image descriptor, such as to label aesthetics of the image, classify the image, and so on. In another example, the patches may be used by the neural network to calculate statistics describing the patches, such as the minimum, maximum, median, and average of activations of image characteristics of the individual patches. These statistics may also be used to support a variety of functionality, such as to label the image as described above.
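The random-patch and aggregation steps lend themselves to a short sketch. Here activations are plain floats and the image is a list of rows, both simplifications of what a real network would consume:

```python
import random
import statistics

def random_patches(image, size, count, rng=None):
    """Sample `count` random square patches of side `size` from an image
    given as a list of rows, mirroring the random patch generation above."""
    rng = rng or random.Random(0)
    h, w = len(image), len(image[0])
    patches = []
    for _ in range(count):
        y = rng.randrange(h - size + 1)
        x = rng.randrange(w - size + 1)
        patches.append([row[x:x + size] for row in image[y:y + size]])
    return patches

def patch_statistics(activations):
    """Aggregate per-patch activations into the statistics named above:
    minimum, maximum, median, and average."""
    return {
        "min": min(activations),
        "max": max(activations),
        "median": statistics.median(activations),
        "mean": statistics.fmean(activations),
    }
```

The resulting statistics dictionary could then feed a labeling step, e.g. an aesthetics score for the whole image.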
Abstract:
In embodiments of event image curation, a computing device includes memory that stores a collection of digital images associated with a type of event, such as a digital photo album of digital photos associated with the event, or a video of image frames that is associated with the event. A curation application implements a convolutional neural network, which receives the digital images and a designation of the type of event. The convolutional neural network can then determine an importance rating of each digital image within the collection based on the type of the event. The importance rating of a digital image is representative of the importance of the digital image to a person in the context of the type of the event. The convolutional neural network generates an output of representative digital images from the collection based on the importance rating of each digital image.
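As a sketch of the ranking step (a lookup table stands in for the convolutional neural network's importance ratings; the filenames, scores, and `top_k` cutoff are all made up):

```python
def curate(images, event_type, rate_importance, top_k=3):
    """Rank a collection by per-image importance for the given event type
    and return the top-k representative images (hypothetical API)."""
    rated = sorted(images,
                   key=lambda img: rate_importance(img, event_type),
                   reverse=True)
    return rated[:top_k]

# Stand-in for the CNN: precomputed importance ratings per image.
ratings = {"cake.jpg": 0.9, "crowd.jpg": 0.4, "gift.jpg": 0.7, "floor.jpg": 0.1}
rate = lambda img, event_type: ratings[img]
best = curate(list(ratings), "birthday", rate, top_k=2)
# best → ["cake.jpg", "gift.jpg"]
```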
Abstract:
This document describes techniques and apparatuses for smooth 3D printing using multi-stage filaments. These techniques are capable of creating smoother surfaces than many current techniques. In some cases, the techniques determine a portion of a surface of a 3D object that includes, or will include, a printing artifact or is otherwise not smooth, and then apply multi-stage filaments to provide a smoothing surface over that portion.
Abstract:
Image zooming is described. In one or more implementations, zoomed croppings of an image are scored. The scores calculated for the zoomed croppings are indicative of a zoomed cropping's inclusion of content that is captured in the image. For example, the scores are indicative of a degree to which a zoomed cropping includes salient content of the image, a degree to which the salient content included in the zoomed cropping is centered in the image, and a degree to which the zoomed cropping preserves specified regions-to-keep and excludes specified regions-to-remove. Based on the scores, at least one zoomed cropping may be chosen to effectuate a zooming of the image. Accordingly, the image may be zoomed according to the zoomed cropping such that an amount the image is zoomed corresponds to a scale of the zoomed cropping.
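The three scoring criteria above can be sketched with axis-aligned boxes; the equal weighting, the centering formula, and the box representation are assumptions for illustration, not the actual scoring functions:

```python
def score_cropping(crop, saliency, keep_regions, remove_regions):
    """Score a zoomed cropping by (1) coverage of the salient box,
    (2) centering of the salient content, and (3) inclusion of
    regions-to-keep minus inclusion of regions-to-remove.
    All boxes are (x0, y0, x1, y1) tuples."""
    def overlap(a, b):
        ox = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        oy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        return ox * oy

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    coverage = overlap(crop, saliency) / area(saliency)
    # Centering: distance between crop center and saliency center.
    cx = (crop[0] + crop[2]) / 2 - (saliency[0] + saliency[2]) / 2
    cy = (crop[1] + crop[3]) / 2 - (saliency[1] + saliency[3]) / 2
    centering = 1 / (1 + abs(cx) + abs(cy))
    kept = sum(overlap(crop, r) / area(r) for r in keep_regions)
    removed = sum(overlap(crop, r) / area(r) for r in remove_regions)
    return coverage + centering + kept - removed

candidates = [(0, 0, 8, 8), (0, 0, 3, 3)]
best = max(candidates, key=lambda c: score_cropping(c, (2, 2, 6, 6), [], []))
# best is the crop that fully covers and centers the salient box
```

The chosen cropping's scale relative to the full image then determines how far the image is zoomed.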
Abstract:
Methods and systems are directed to improving the convenience of drawing applications. Some examples include generating 3D drawing objects using a drawing application and selecting one based on a 2D design (in some cases a hand-drawn sketch) provided by a user. The user-provided 2D design is separated into an outline perimeter and an interior design, and corresponding vectors are then generated. These vectors are then compared with analogous vectors generated for the drawing objects. The drawing object selected to correspond to the 2D design is the one having the minimum difference between its vectors and the vectors of the 2D design. The selected drawing object is then used to generate a drawing object configured to receive edits from the user. This reduces the effort required to manually reproduce the 2D design in the drawing application.
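The minimum-difference selection can be sketched as a nearest-neighbor lookup over feature vectors (the object names, the two-dimensional vectors, and the Euclidean metric are illustrative; the abstract does not say how the vectors are built or compared):

```python
import math

def nearest_object(design_vec, object_vecs):
    """Return the drawing object whose vector has the minimum Euclidean
    distance to the 2D design's vector (hypothetical matching rule)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(object_vecs, key=lambda name: dist(design_vec, object_vecs[name]))

# Stand-in vectors for a few generated 3D drawing objects.
objects = {"circle": [1.0, 0.0], "square": [0.0, 1.0], "star": [0.7, 0.7]}
match = nearest_object([0.9, 0.1], objects)
# match → "circle"
```

The matched object would then be instantiated as an editable drawing object in the application.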