Abstract:
The present disclosure generally relates to displaying and editing images with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
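A minimal sketch of this idea, in Swift with hypothetical types (the disclosure does not specify an implementation): each depth sample selects between two lighting levels, and switching presets swaps in the third and fourth levels to produce the second modified image.

```swift
// Hypothetical sketch, not the disclosed implementation. A depth map is
// modeled as a flat array of normalized depths for the subject.
struct DepthMap {
    let width: Int
    let height: Int
    let values: [Float]   // normalized depth per pixel: 0 = near, 1 = far
}

/// Maps each depth sample to a lighting gain: nearer portions of the
/// subject receive `nearLevel`, farther portions receive `farLevel`,
/// interpolated linearly by depth.
func simulatedLighting(over depth: DepthMap,
                       nearLevel: Float,
                       farLevel: Float) -> [Float] {
    depth.values.map { d in nearLevel + (farLevel - nearLevel) * d }
}

// Displaying the second modified image amounts to re-running this with
// the third and fourth levels in place of the first and second.
```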
Abstract:
A computer system displays a first previously captured media object including one or more first images, wherein the first previously captured media object was recorded and stored with first depth data corresponding to a first physical environment captured in each of the one or more first images. In response to a first user request to add a first virtual object to the first previously captured media object, the computer system displays the first virtual object over at least a portion of a respective image in the first previously captured media object, wherein the first virtual object is displayed with at least a first position or orientation that is determined using the first depth data that corresponds to the respective image in the first previously captured media object.
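One way to read this, sketched in Swift with hypothetical types (intrinsics, depth lookup, and function names are all illustrative assumptions): the depth data stored with each image lets a 2D point in the image be unprojected to a 3D position at which the virtual object can be anchored.

```swift
// Hypothetical sketch: each stored image carries a depth frame and
// camera intrinsics; none of these types come from a real framework.
struct CameraIntrinsics { let fx, fy, cx, cy: Float }

struct DepthFrame {
    let intrinsics: CameraIntrinsics
    let depthLookup: (SIMD2<Float>) -> Float   // depth in meters at a pixel
}

/// Unprojects a pixel in the previously captured image into the scene
/// using the stored depth, yielding a 3D position (and, with a surface
/// estimate, an orientation) for the virtual object.
func anchorPosition(forPixel p: SIMD2<Float>,
                    in frame: DepthFrame) -> SIMD3<Float> {
    let z = frame.depthLookup(p)
    let k = frame.intrinsics
    return SIMD3<Float>((p.x - k.cx) / k.fx * z,
                        (p.y - k.cy) / k.fy * z,
                        z)
}
```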
Abstract:
A computer system, while displaying an augmented reality environment, concurrently displays: a representation of at least a portion of a field of view of one or more cameras that includes a physical object, and a virtual user interface object at a location in the representation of the field of view, where the location is determined based on the physical object in the field of view. While displaying the augmented reality environment, in response to detecting an input that changes a virtual environment setting for the augmented reality environment, the computer system adjusts an appearance of the virtual user interface object in accordance with the change made to the virtual environment setting and applies to at least a portion of the representation of the field of view a filter selected based on the change made to the virtual environment setting.
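A hedged illustration in Swift (the setting names and `Filter` type are assumptions, not from the disclosure): a single virtual-environment setting drives both the virtual object's appearance and the filter applied to the camera-feed representation.

```swift
// Hypothetical mapping from a virtual-environment setting (e.g. time of
// day) to the object's shading and a filter on the camera feed.
enum EnvironmentSetting { case daytime, nighttime }

struct Filter { let name: String; let intensity: Float }

/// One setting change updates both outputs together, so the virtual
/// object stays visually consistent with the filtered camera view.
func appearance(for setting: EnvironmentSetting)
    -> (objectBrightness: Float, feedFilter: Filter) {
    switch setting {
    case .daytime:   return (1.0, Filter(name: "warmTint", intensity: 0.2))
    case .nighttime: return (0.3, Filter(name: "coolDim", intensity: 0.7))
    }
}
```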
Abstract:
A computer system displays a representation of a field of view of one or more cameras that is updated with changes in the field of view. In response to a request to add an annotation, the representation of the field of view of the camera(s) is replaced with a still image of the field of view of the camera(s). An annotation is received on a portion of the still image that corresponds to a portion of a physical environment captured in the still image. The still image is replaced with the representation of the field of view of the camera(s). An indication of a current spatial relationship of the camera(s) relative to the portion of the physical environment is displayed or not displayed based on a determination of whether the portion of the physical environment captured in the still image is currently within the field of view of the camera(s).
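The final determination can be sketched as a visibility test, here in Swift with hypothetical types (the disclosure does not describe the test itself): the indicator is shown only when the annotated portion of the physical environment falls outside the cameras' current field of view.

```swift
import simd

// Hypothetical camera pose; cosHalfFOV is precomputed to avoid trig here.
struct CameraPose {
    let position: SIMD3<Float>
    let forward: SIMD3<Float>   // unit view direction
    let cosHalfFOV: Float       // cosine of half the field of view
}

/// True when the annotated world point lies inside the camera's viewing
/// cone; the spatial-relationship indicator is displayed only when this
/// returns false.
func isInView(_ point: SIMD3<Float>, camera: CameraPose) -> Bool {
    let toPoint = simd_normalize(point - camera.position)
    return simd_dot(toPoint, camera.forward) > camera.cosHalfFOV
}
```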
Abstract:
An event can be detected by an input device. The event may be determined to be a triggering event by comparing the event to a group of triggering events. A first prediction model corresponding to the event is then selected. Contextual information about the device, specifying one or more properties of the computing device in a first context, is then received, and a set of one or more applications is identified. The set of one or more applications may have at least a threshold probability of being accessed by a user when the event occurs in the first context. Thereafter, a user interface is provided to the user for interacting with the set of one or more applications.
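The selection flow lends itself to a short sketch, in Swift with hypothetical types (the model and scoring function are assumptions; the disclosure does not prescribe one): a triggering event selects a prediction model, which scores applications given the current context, and only those above the threshold are surfaced.

```swift
// Hypothetical sketch of the flow: event -> model -> context -> apps.
struct Event: Hashable { let name: String }
struct Context { let properties: [String: String] }   // e.g. location, time

struct PredictionModel {
    let scores: (Context) -> [String: Double]   // app identifier -> probability
}

/// Returns the identifiers of applications whose predicted probability of
/// being accessed, given the event and context, meets the threshold.
func suggestedApps(for event: Event,
                   triggers: Set<Event>,
                   models: [Event: PredictionModel],
                   context: Context,
                   threshold: Double) -> [String] {
    guard triggers.contains(event), let model = models[event] else { return [] }
    return model.scores(context)
        .filter { $0.value >= threshold }
        .map(\.key)
}
```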