Abstract:
A generative network may be learned in an adversarial setting with a goal of modifying synthetic data such that a discriminative network may not be able to reliably tell the difference between refined synthetic data and real data. The generative network and discriminative network may work together to learn how to produce more realistic synthetic data at reduced computational cost. The generative network may iteratively learn a function that refines synthetic data, with a goal of generating refined synthetic data that is more difficult for the discriminative network to differentiate from real data, while the discriminative network may be configured to iteratively learn a function that classifies data as either synthetic or real. Over multiple iterations, the generative network may learn to refine the synthetic data to produce refined synthetic data on which other machine learning models may be trained.
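A minimal PyTorch sketch of the adversarial loop described above. The toy network sizes, data shapes, learning rates, and the self-regularization term that keeps refined data close to its synthetic input are illustrative assumptions, not details from the source.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generative (refiner) and discriminative networks.
refiner = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

for step in range(1000):
    synthetic = torch.randn(32, 64)   # stand-in for a synthetic batch
    real = torch.randn(32, 64)        # stand-in for a real batch

    # Discriminator step: learn to classify data as real (1) or refined (0).
    refined = refiner(synthetic).detach()
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(refined), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Refiner step: make refined data harder to tell apart from real data.
    # The second term (an assumption) keeps the output near its input.
    refined = refiner(synthetic)
    r_loss = bce(discriminator(refined), torch.ones(32, 1)) + \
             0.1 * (refined - synthetic).abs().mean()
    opt_r.zero_grad(); r_loss.backward(); opt_r.step()
```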
Abstract:
A method includes displaying a first set of text content characterized by a first difficulty level. The method includes obtaining speech data associated with the first set of text content. The method includes determining one or more linguistic features within the speech data. The method includes, in response to completion of the speech data, determining a reading proficiency value that is associated with the first set of text content and based on the one or more linguistic features. The method includes, in accordance with determining that the reading proficiency value satisfies change criteria, changing a difficulty level for a second set of text content, such that after changing the difficulty level, the second set of text content corresponds to a second difficulty level different from the first difficulty level. The method includes, in accordance with determining that the reading proficiency value does not satisfy the change criteria, maintaining the second set of text content at the first difficulty level.
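A minimal sketch of the change-criteria logic described above. The reduction of linguistic features to a proficiency score, the thresholds, and the difficulty scale are illustrative assumptions; none of these values come from the source.

```python
def score_proficiency(linguistic_features):
    """Hypothetical reduction of linguistic features to a 0..1 value."""
    return sum(linguistic_features.values()) / max(len(linguistic_features), 1)

def next_difficulty(current_level, reading_proficiency,
                    raise_at=0.8, lower_at=0.4):
    """Change the level for the second set of text only when the
    proficiency value satisfies the (assumed) change criteria."""
    if reading_proficiency >= raise_at:
        return current_level + 1          # harder second set of text
    if reading_proficiency <= lower_at:
        return max(current_level - 1, 0)  # easier second set of text
    return current_level                  # criteria not met: maintain level

features = {"accuracy": 0.9, "fluency": 0.85}   # placeholder feature values
level = next_difficulty(current_level=3,
                        reading_proficiency=score_proficiency(features))
```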
Abstract:
A method and system provide a dynamic grain effect tool for a media-editing application that generates a grain effect and applies the grain effect to a digital image. The application first generates a random pixel field for the image based on a seed value. The application then generates a film grain field for the image by consecutively applying a blurring function and an unsharp masking function, based on an ISO value, to the random pixel field. The application then blends the grain field with the original image, adjusting each pixel based on the value of the corresponding pixel location in the grain field, to produce a full-grain image. The application then adjusts the grain amount in the full-grain image by receiving a grain amount value from a user and applying this value to the full-grain image.
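A minimal numpy/scipy sketch of the pipeline described above: seeded random field, blur, unsharp mask, blend, then user-controlled amount. The ISO-to-blur mapping, blend strength, and unsharp amount are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def grain_field(shape, seed, iso):
    rng = np.random.default_rng(seed)          # seed value drives the pattern
    field = rng.standard_normal(shape)         # random pixel field
    sigma = 1.0 + iso / 800.0                  # assumed ISO -> blur mapping
    blurred = gaussian_filter(field, sigma)    # blurring function
    # Unsharp masking function (assumed amount 1.5).
    return blurred + 1.5 * (blurred - gaussian_filter(blurred, sigma))

def apply_grain(image, seed=42, iso=400, amount=0.5):
    grain = grain_field(image.shape, seed, iso)
    # Blend: adjust each pixel by the corresponding grain value
    # (assumed 0.1 strength) to produce the full-grain image.
    full_grain = image + grain * 0.1
    # User-supplied amount interpolates between original and full-grain image.
    return (1.0 - amount) * image + amount * full_grain

image = np.zeros((64, 64))                     # placeholder grayscale image
result = apply_grain(image, seed=7, iso=800, amount=0.6)
```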
Abstract:
A method and system for controlling multiple image editing controls using one master control. The system identifies various characteristics of an image being edited. The system generates, for each of multiple image editing controls, a relationship between the master control and that image editing control. The relationship is based on at least one of the identified characteristics of the image being edited, so the relationship differs between images that have different characteristics, such as different average color component values at a particular percentile of pixels in the images.
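A minimal sketch of deriving per-control relationships from image characteristics. The specific characteristics and the response curves below are illustrative assumptions, not the source's mappings.

```python
import numpy as np

def characteristics(image):
    """Per-channel average color value at the 75th percentile of pixels
    (one example of a characteristic named in the abstract)."""
    return np.percentile(image.reshape(-1, 3), 75, axis=0)

def build_relationships(image):
    p75 = characteristics(image)
    brightness_gain = 1.0 - p75.mean() / 255.0   # darker images respond more
    warmth_gain = (p75[0] - p75[2]) / 255.0      # red/blue skew shapes warmth
    # Each relationship maps the single master value to one control's value.
    return {
        "exposure": lambda master: master * brightness_gain,
        "warmth": lambda master: master * (1.0 - abs(warmth_gain)),
    }

image = np.full((4, 4, 3), 90, dtype=np.float64)  # placeholder RGB image
controls = build_relationships(image)
settings = {name: rel(0.4) for name, rel in controls.items()}  # master at 0.4
```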
Abstract:
Some embodiments provide a novel user interface (UI) tool that is a unified slider control, which includes multiple sliders that slide along a shared region. The region is a straight line in some embodiments, while it is an angular arc in other embodiments. In some embodiments, the unified slider control is used in a media editing application to allow a user to modify several different properties of an image by moving several different sliders along the region. Each slider is associated with a property of the image, and a position of the slider in the region corresponds to a value of the property associated with the slider.
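A minimal sketch of the slider-to-property mapping, assuming a normalized region and a linear position-to-value mapping; the properties and value ranges are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Slider:
    prop: str          # image property this slider adjusts
    position: float    # location along the shared region, 0.0..1.0
    lo: float          # property value at the region's start
    hi: float          # property value at the region's end

    def value(self):
        # Position in the region corresponds to a value of the property.
        return self.lo + self.position * (self.hi - self.lo)

# Several sliders sharing one region (unified slider control).
region = [Slider("saturation", 0.5, 0.0, 2.0),
          Slider("contrast", 0.25, -1.0, 1.0)]
region[0].position = 0.75                     # user slides one slider
values = {s.prop: s.value() for s in region}  # property values to apply
```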
Abstract:
Some embodiments provide a method that provides a graphical user interface (GUI) for color balancing an image. The method provides a display area for displaying the image. The method provides several color balance modes, along with a user interface (UI) control associated with one of the several color balance modes. The UI control performs a color balance operation on the image by (1) identifying a color cast in the image and (2) modifying pixels in the image based on the pixels' luminance values in order to reduce the color cast in the image.
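A minimal numpy sketch of steps (1) and (2): estimate a per-channel cast and reduce it with a luminance-weighted correction. The Rec. 709 luma weights and the weighting scheme are assumptions, not the source's algorithm.

```python
import numpy as np

def reduce_color_cast(image):
    # (1) Identify the cast as each channel's bias from the overall mean.
    cast = image.reshape(-1, 3).mean(axis=0) - image.mean()
    # (2) Modify pixels based on their luminance values: brighter pixels,
    # where a cast is most visible, receive a stronger correction.
    luminance = image @ np.array([0.2126, 0.7152, 0.0722])
    weight = (luminance / max(luminance.max(), 1e-6))[..., None]
    return np.clip(image - weight * cast, 0.0, 1.0)

image = np.random.rand(32, 32, 3) * [1.0, 0.9, 0.8]  # image with a warm cast
balanced = reduce_color_cast(image)
```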
Abstract:
Some embodiments provide a novel method for tempering an adjustment of an image to account for prior adjustments to the image. The adjustment in some embodiments is an automatic exposure adjustment. The method performs an operation for a first adjustment on a first set of parameters (e.g., saturation, sharpness, luminance). In some embodiments, the first set of parameters quantifies a set of prior adjustments made to the image by an image capturing device when the image was captured. The method compares the first set of parameters to a second set of parameters, which is a set of target parameters, to produce a third set of parameters that expresses the difference between the first adjustment and a second adjustment. The third set of parameters specifies the tempered adjustment of the image, and the method performs that tempered adjustment to produce an adjusted image.
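A minimal sketch of the parameter comparison: the third set of parameters is the difference between the capture-time parameters and the target parameters, and only that difference is applied. Parameter names and values are placeholders.

```python
def temper(prior, target):
    """Third set of parameters: the tempered adjustment still to apply,
    i.e. the difference between target and prior adjustments."""
    return {k: target[k] - prior.get(k, 0.0) for k in target}

def apply_adjustment(image_params, tempered):
    """Apply only the tempered difference, not the full target adjustment."""
    return {k: image_params.get(k, 0.0) + v for k, v in tempered.items()}

prior = {"saturation": 0.2, "sharpness": 0.1, "luminance": 0.3}   # from device
target = {"saturation": 0.3, "sharpness": 0.1, "luminance": 0.5}  # auto target
adjusted = apply_adjustment(prior.copy(), temper(prior, target))
```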
Abstract:
A method includes obtaining a speech proficiency value indicator indicative of a speech proficiency value associated with a user of an electronic device. The method further includes, in response to determining that the speech proficiency value satisfies a threshold proficiency value: displaying training text via a display device; obtaining, from an audio sensor, speech data associated with the training text, wherein the speech data is characterized by the speech proficiency value; determining, using a speech classifier, one or more speech characterization vectors for the speech data based on linguistic features within the speech data; and adjusting one or more operational values of the speech classifier based on the one or more speech characterization vectors and the speech proficiency value.
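A minimal sketch of the proficiency gate and classifier adjustment, using a toy stand-in classifier. The threshold, the characterization vectors, and the update rule are illustrative assumptions, not the source's model.

```python
class SpeechClassifier:
    """Toy stand-in for the speech classifier; not the source's model."""
    def __init__(self):
        self.weights = {}

    def characterize(self, speech_data):
        # Hypothetical: reduce linguistic features in the speech data
        # to one or more speech characterization vectors.
        return [{"pace": 0.6, "clarity": 0.8}]

def adapt(classifier, speech_data, proficiency, threshold=0.7, lr=0.05):
    if proficiency < threshold:
        return False  # gate: adapt only when proficiency meets the threshold
    for vec in classifier.characterize(speech_data):
        for key, value in vec.items():
            # Adjust operational values, weighted by the proficiency value.
            classifier.weights[key] = (classifier.weights.get(key, 0.0)
                                       + lr * proficiency * value)
    return True

clf = SpeechClassifier()
adapted = adapt(clf, speech_data="...", proficiency=0.9)
```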
Abstract:
A method includes obtaining user input interaction data that includes one or more user interaction input values respectively obtained from one or more corresponding input devices, as well as a word combination. The method includes generating a user interaction-style indicator value corresponding to the word combination in the user input interaction data, where the user interaction-style indicator value is a function of the word combination and a portion of the one or more user interaction input values. The method includes determining, using a semantic text analyzer, a semantic assessment of the word combination in the user input interaction data based on the user interaction-style indicator value and a natural language assessment of the word combination. The method includes generating a response to the user input interaction data according to the user interaction-style indicator value and the semantic assessment of the word combination.
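A minimal sketch of the described flow: derive a style indicator from the word combination and the input values, then condition the semantic assessment and the response on it. Every heuristic below is an illustrative assumption.

```python
def style_indicator(words, input_values):
    # Assumed signals: all-caps or "!" in the word combination, plus the
    # average of the input values (e.g. hypothetical keypress force).
    shouting = words.isupper() or "!" in words
    intensity = sum(input_values) / max(len(input_values), 1)
    return min(1.0, 0.5 * shouting + 0.5 * intensity)

def semantic_assessment(words, style):
    # Stand-in for a semantic text analyzer conditioned on the style value.
    base = "question" if words.rstrip("!").endswith("?") else "statement"
    return base + ("/urgent" if style > 0.7 else "")

def respond(words, input_values):
    style = style_indicator(words, input_values)
    assessment = semantic_assessment(words, style)
    tone = "calm, de-escalating" if style > 0.7 else "neutral"
    return f"[{tone}] reply to a {assessment}: {words.lower()}"

print(respond("WHERE IS MY ORDER?", [0.9, 0.8]))  # placeholder input values
```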