Abstract:
A computer system concurrently displays, in an augmented reality environment: a representation of at least a portion of a field of view of one or more cameras that includes a respective physical object, the representation being updated as contents of the field of view change; and a respective virtual user interface object at a respective location in the virtual user interface, the location being determined based on the location of the respective physical object in the field of view. While detecting an input at a location that corresponds to the displayed respective virtual user interface object, in response to detecting movement of the input relative to the respective physical object in the field of view of the one or more cameras, the system adjusts an appearance of the respective virtual user interface object in accordance with a magnitude of the movement of the input relative to the respective physical object.
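As a rough illustration of the adjustment described above, here is a minimal Swift sketch. All names (Point, VirtualUIObject, adjustAppearance, sensitivity) are hypothetical stand-ins, not from the patent or any real framework, and the adjusted appearance property is arbitrarily chosen to be a scale factor.

```swift
import Foundation

// Hypothetical model types; the abstract does not name concrete APIs.
struct Point { var x: Double; var y: Double }

struct VirtualUIObject {
    var scale: Double   // appearance property adjusted by the input
    var anchor: Point   // location tied to the physical object
}

/// Adjusts the virtual object's appearance in proportion to how far the
/// input has moved relative to the tracked physical object.
func adjustAppearance(of object: inout VirtualUIObject,
                      inputLocation: Point,
                      physicalObjectLocation: Point,
                      sensitivity: Double = 0.01) {
    let dx = inputLocation.x - physicalObjectLocation.x
    let dy = inputLocation.y - physicalObjectLocation.y
    let magnitude = (dx * dx + dy * dy).squareRoot()
    object.scale = max(0.1, 1.0 + magnitude * sensitivity)
}

var button = VirtualUIObject(scale: 1.0, anchor: Point(x: 0, y: 0))
adjustAppearance(of: &button,
                 inputLocation: Point(x: 30, y: 40),
                 physicalObjectLocation: Point(x: 0, y: 0))
print(button.scale)  // 1.5: scaled by the 50-unit movement magnitude
```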
Abstract:
An electronic device includes a touch-sensitive surface, a display, and a camera sensor. The device displays a message region for displaying a message conversation and receives a request to add media to the message conversation. In response to receiving the request, the device displays a media selection interface concurrently with at least a portion of the message conversation. The media selection interface includes a plurality of affordances for selecting media to add to the message conversation; the plurality of affordances includes a live preview affordance associated with a live camera preview, and at least a subset of the affordances includes thumbnail representations of media available for adding to the message conversation. In response to detecting selection of the live preview affordance, the device captures a new image based on the live camera preview and selects the new image for addition to the message conversation.
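A minimal Swift sketch of the affordance structure described above; the types (MediaAffordance, MediaSelectionInterface) and the capture stand-in are hypothetical illustrations, not any real messaging or camera API.

```swift
import Foundation

// Hypothetical types; names are illustrative, not from any real framework.
enum MediaAffordance {
    case livePreview                   // tied to the live camera preview
    case thumbnail(mediaID: String)    // existing media available to add
}

struct MediaSelectionInterface {
    let affordances: [MediaAffordance]
}

/// Builds the picker shown alongside the conversation: one live-preview
/// affordance followed by thumbnails of existing media.
func makeMediaSelectionInterface(availableMediaIDs: [String]) -> MediaSelectionInterface {
    var affordances: [MediaAffordance] = [.livePreview]
    affordances += availableMediaIDs.map { .thumbnail(mediaID: $0) }
    return MediaSelectionInterface(affordances: affordances)
}

/// Selecting the live-preview affordance yields a newly captured image;
/// selecting a thumbnail yields the existing media item.
func select(_ affordance: MediaAffordance) -> String {
    switch affordance {
    case .livePreview:
        return "captured-\(UUID().uuidString)"   // stand-in for a camera capture
    case .thumbnail(let id):
        return id
    }
}

let picker = makeMediaSelectionInterface(availableMediaIDs: ["IMG_001", "IMG_002"])
print(select(picker.affordances[0]))  // new image captured from the live preview
```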
Abstract:
An event can be detected by an input device of a computing device. The event may be determined to be a triggering event by comparing it to a group of triggering events. A first prediction model corresponding to the event is then selected. Contextual information specifying one or more properties of the computing device in a first context is then received, and a set of one or more applications is identified. The set of one or more applications may have at least a threshold probability of being accessed by the user when the event occurs in the first context. Thereafter, a user interface for interacting with the set of one or more applications is provided to the user.
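The flow above can be sketched in a few lines of Swift; the types, the example event, and the probability values are all hypothetical, and the context parameter is shown but left unused to keep the sketch short (a real model would condition on it).

```swift
import Foundation

// Hypothetical types sketching the trigger-to-suggestion flow.
struct Event: Hashable { let name: String }
struct Context { let properties: [String: String] }   // e.g. location, time of day

struct PredictionModel {
    // Probability that each application is accessed when the event occurs;
    // the values here are illustrative only.
    let probabilities: [String: Double]

    func applications(withAtLeast threshold: Double) -> [String] {
        probabilities.filter { $0.value >= threshold }.map { $0.key }.sorted()
    }
}

let triggeringEvents: Set<Event> = [Event(name: "headphones_connected")]
let models: [Event: PredictionModel] = [
    Event(name: "headphones_connected"):
        PredictionModel(probabilities: ["Music": 0.7, "Podcasts": 0.5, "Mail": 0.1])
]

func suggestApps(for event: Event, in context: Context, threshold: Double = 0.4) -> [String] {
    guard triggeringEvents.contains(event),            // compare against the trigger group
          let model = models[event] else { return [] } // select the model for the event
    return model.applications(withAtLeast: threshold)  // apps above the threshold probability
}

let suggested = suggestApps(for: Event(name: "headphones_connected"),
                            in: Context(properties: ["location": "gym"]))
print(suggested)  // ["Music", "Podcasts"] — surfaced in a suggestion UI
```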
Abstract:
Systems and processes for operating an intelligent automated assistant are provided. In one example process, a speech input is received from a user. In response to determining that the speech input corresponds to a user intent of obtaining information associated with a user experience of the user, one or more parameters referencing a user experience of the user are identified. Metadata associated with the referenced user experience is obtained from an experiential data structure. Based on the metadata, one or more media items associated with the referenced user experience are retrieved. The one or more media items are then output together.
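A minimal Swift sketch of the retrieval step, assuming a hypothetical experiential data structure keyed by the spoken reference; none of these types or names come from the patent.

```swift
import Foundation

// Hypothetical experiential data structure; names are illustrative.
struct ExperienceMetadata {
    let experienceID: String
    let mediaItemIDs: [String]     // media captured during the experience
}

struct ExperientialStore {
    let entries: [String: ExperienceMetadata]   // keyed by experience reference

    func metadata(forReference reference: String) -> ExperienceMetadata? {
        entries[reference]
    }
}

/// Resolves a spoken reference (e.g. "my trip to Hawaii") to media items
/// using metadata obtained from the experiential data structure.
func mediaItems(forReference reference: String,
                in store: ExperientialStore) -> [String] {
    guard let metadata = store.metadata(forReference: reference) else { return [] }
    return metadata.mediaItemIDs   // retrieved based on the metadata, output together
}

let store = ExperientialStore(entries: [
    "trip to Hawaii": ExperienceMetadata(experienceID: "exp-42",
                                         mediaItemIDs: ["IMG_100", "VID_101"])
])
print(mediaItems(forReference: "trip to Hawaii", in: store))  // ["IMG_100", "VID_101"]
```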
Abstract:
The present disclosure generally relates to displaying and editing images with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
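As a toy illustration of depth-dependent lighting, the Swift sketch below attenuates a simulated light with depth so different portions of the subject receive different levels; the falloff formula and all names are hypothetical choices, not the disclosed method.

```swift
import Foundation

// Hypothetical depth/lighting model; the abstract names no concrete APIs.
struct SubjectPortion {
    let name: String
    let depth: Double   // distance from the camera, from the image's depth data
}

/// Computes a simulated lighting level for one portion of the subject:
/// nearer portions receive more of the simulated light.
func simulatedLighting(for portion: SubjectPortion,
                       lightIntensity: Double) -> Double {
    lightIntensity / (1.0 + portion.depth)
}

let face = SubjectPortion(name: "face", depth: 0.5)
let shoulder = SubjectPortion(name: "shoulder", depth: 1.0)

// First modified image: first and second lighting levels.
print(simulatedLighting(for: face, lightIntensity: 1.0))      // 0.666…
print(simulatedLighting(for: shoulder, lightIntensity: 1.0))  // 0.5

// Second modified image: third and fourth levels after the light changes.
print(simulatedLighting(for: face, lightIntensity: 1.5))      // 1.0
print(simulatedLighting(for: shoulder, lightIntensity: 1.5))  // 0.75
```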
Abstract:
While displaying playback of a first portion of a video in a video playback region, a device receives a request to add a first annotation to the video playback. In response to receiving the request, the device pauses playback of the video at a first position in the video and displays a still image that corresponds to the first, paused position of the video. While displaying the still image, the device receives the first annotation on a first portion of a physical environment captured in the still image. After receiving the first annotation, the device displays, in the video playback region, a second portion of the video that corresponds to a second position in the video, where the first portion of the physical environment is captured in the second portion of the video and the first annotation is displayed in the second portion of the video.
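A minimal Swift sketch of carrying an annotation across frames, under the simplifying assumption that each frame lists the physical-world regions it captures; all types and names are hypothetical.

```swift
import Foundation

// Hypothetical types sketching an annotation tied to a physical-world region.
struct Annotation {
    let stroke: String            // stand-in for drawing data
    let worldRegionID: String     // portion of the physical environment annotated
}

struct VideoFrame {
    let position: Double                 // time position in the video
    let visibleWorldRegionIDs: [String]  // physical regions captured in this frame
}

/// Returns the annotations to draw on a frame: an annotation appears in any
/// frame that captures the same portion of the physical environment.
func annotations(for frame: VideoFrame,
                 from all: [Annotation]) -> [Annotation] {
    all.filter { frame.visibleWorldRegionIDs.contains($0.worldRegionID) }
}

// Annotation received while paused on a still image at position 2.0.
let note = Annotation(stroke: "circle", worldRegionID: "table")

let laterFrame = VideoFrame(position: 8.0, visibleWorldRegionIDs: ["table", "wall"])
print(annotations(for: laterFrame, from: [note]).count)  // 1: redisplayed later
```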
Abstract:
A computer system displays a first previously captured media object including one or more first images, wherein the first previously captured media object was recorded and stored with first depth data corresponding to a first physical environment captured in each of the one or more first images. In response to a first user request to add a first virtual object to the first previously captured media object, the computer system displays the first virtual object over at least a portion of a respective image in the first previously captured media object, wherein the first virtual object is displayed with at least a first position or orientation that is determined using the first depth data that corresponds to the respective image in the first previously captured media object.
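A minimal Swift sketch of placing a virtual object using stored depth data, assuming a hypothetical grid-sampled depth format; the patent does not specify these types.

```swift
import Foundation

// Hypothetical types; the abstract does not specify a concrete depth format.
struct DepthData {
    // Depth (distance from camera) sampled on a coarse grid, row-major.
    let samples: [[Double]]

    func depth(atRow row: Int, column: Int) -> Double {
        samples[row][column]
    }
}

struct VirtualObject {
    var position: (row: Int, column: Int, depth: Double)
}

/// Places a virtual object over the captured image so its depth matches the
/// physical surface recorded in the media object's stored depth data.
func place(_ object: inout VirtualObject,
           atRow row: Int, column: Int,
           using depthData: DepthData) {
    object.position = (row, column, depthData.depth(atRow: row, column: column))
}

let storedDepth = DepthData(samples: [[1.2, 1.3], [0.8, 0.9]])
var chair = VirtualObject(position: (0, 0, 0))
place(&chair, atRow: 1, column: 0, using: storedDepth)
print(chair.position.depth)  // 0.8: sits on the surface captured in the image
```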
Abstract:
While displaying an augmented reality environment, a computer system concurrently displays: a representation of at least a portion of a field of view of one or more cameras that includes a physical object, and a virtual user interface object at a location in the representation of the field of view, where the location is determined based on the physical object in the field of view. While displaying the augmented reality environment, in response to detecting an input that changes a virtual environment setting for the augmented reality environment, the computer system adjusts an appearance of the virtual user interface object in accordance with the change made to the virtual environment setting and applies, to at least a portion of the representation of the field of view, a filter selected based on the change made to the virtual environment setting.
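A minimal Swift sketch of a single setting change driving both adjustments, with hypothetical day/night settings and a string stand-in for the filter; none of this names a real API.

```swift
import Foundation

// Hypothetical setting and filter types; names are illustrative only.
enum VirtualEnvironmentSetting { case day, night }

struct SceneAppearance {
    var objectBrightness: Double   // appearance of the virtual object
    var cameraFeedFilter: String   // filter applied to the camera-feed representation
}

/// Applies one setting change to both the virtual object and the
/// representation of the camera's field of view, as the abstract describes.
func apply(_ setting: VirtualEnvironmentSetting,
           to scene: inout SceneAppearance) {
    switch setting {
    case .day:
        scene.objectBrightness = 1.0
        scene.cameraFeedFilter = "none"
    case .night:
        scene.objectBrightness = 0.3       // dim the virtual object
        scene.cameraFeedFilter = "darken"  // filter selected for the setting change
    }
}

var scene = SceneAppearance(objectBrightness: 1.0, cameraFeedFilter: "none")
apply(.night, to: &scene)
print(scene.objectBrightness, scene.cameraFeedFilter)  // 0.3 darken
```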