Abstract:
While displaying playback of a first portion of a video in a video playback region, a device receives a request to add a first annotation to the video playback. In response to receiving the request, the device pauses playback of the video at a first position in the video and displays a still image that corresponds to the first position, at which playback is paused. While displaying the still image, the device receives the first annotation on a first portion of a physical environment captured in the still image. After receiving the first annotation, the device displays, in the video playback region, a second portion of the video that corresponds to a second position in the video, where the first portion of the physical environment is captured in the second portion of the video and the first annotation is displayed in the second portion of the video.
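The behavior above, anchoring an annotation to a portion of the physical environment so it reappears in any later frame that captures that portion, can be sketched minimally. Everything here (class name, region keys, the `still@` placeholder) is illustrative, not from the source:

```python
# Hypothetical sketch: annotations are keyed to portions of the physical
# environment, not to frame pixels, so they persist across video positions.

class AnnotatedPlayback:
    def __init__(self):
        self.paused = False
        self.annotations = {}  # physical-environment region -> annotation

    def request_annotation(self, position):
        # Pause playback and present a still image of the current frame.
        self.paused = True
        return f"still@{position}"

    def add_annotation(self, region, annotation):
        # The annotation is received on a portion of the physical
        # environment captured in the still image; playback then resumes.
        self.annotations[region] = annotation
        self.paused = False

    def render(self, position, visible_regions):
        # Any frame capturing an annotated portion of the environment
        # displays that annotation again, at any playback position.
        return {r: a for r, a in self.annotations.items() if r in visible_regions}
```

The key design point is that `render` filters by what the frame physically captures, so the annotation travels with the scene content rather than with screen coordinates.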
Abstract:
A computer system displays a first previously captured media object including one or more first images, wherein the first previously captured media object was recorded and stored with first depth data corresponding to a first physical environment captured in each of the one or more first images. In response to a first user request to add a first virtual object to the first previously captured media object, the computer system displays the first virtual object over at least a portion of a respective image in the first previously captured media object, wherein the first virtual object is displayed with at least a first position or orientation that is determined using the first depth data that corresponds to the respective image in the first previously captured media object.
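A minimal sketch of the placement step described above, where stored per-image depth data determines the virtual object's position and orientation. The data layout and the nearest-surface placement rule are assumptions for illustration only:

```python
# Hypothetical sketch: a media object recorded with per-image depth data
# is used to position/orient a virtual object over a respective image.

def place_virtual_object(media, image_index, virtual_object):
    # Depth data recorded and stored with the image (field names assumed).
    depth = media["depth_data"][image_index]
    # Toy placement rule: anchor at the nearest sampled surface point and
    # orient the object along that sample's stored surface normal.
    anchor = min(depth, key=lambda sample: sample["distance"])
    return {
        "object": virtual_object,
        "position": anchor["point"],
        "orientation": anchor["normal"],
    }
```

A real system would fit planes or meshes from the depth map rather than picking a single sample, but the flow, depth recorded at capture time, consulted at placement time, is the same.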
Abstract:
While displaying an augmented reality environment, a computer system concurrently displays: a representation of at least a portion of a field of view of one or more cameras that includes a physical object, and a virtual user interface object at a location in the representation of the field of view, where the location is determined based on the physical object in the field of view. While displaying the augmented reality environment, in response to detecting an input that changes a virtual environment setting for the augmented reality environment, the computer system adjusts an appearance of the virtual user interface object in accordance with the change made to the virtual environment setting and applies, to at least a portion of the representation of the field of view, a filter selected based on the change made to the virtual environment setting.
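The two-sided update described above, one setting change adjusting both the virtual object's appearance and a filter on the camera feed, can be sketched as follows. The setting names and filter table are invented for illustration:

```python
# Hypothetical sketch: a single virtual-environment setting change drives
# both the virtual object's appearance and a filter on the camera view,
# keeping the virtual and real content visually consistent.

FILTERS = {"night": "darken", "day": "brighten"}  # assumed mapping

def apply_environment_setting(scene, setting):
    # Adjust the virtual user interface object's appearance...
    scene["object_appearance"] = f"lit_for_{setting}"
    # ...and apply a matching filter to the representation of the
    # camera field of view, selected based on the same change.
    scene["camera_filter"] = FILTERS.get(setting, "none")
    return scene
```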
Abstract:
A computer system captures, via one or more cameras, information indicative of a physical environment, including respective portions of the physical environment that are in a field of view of the one or more cameras. The respective portions of the physical environment include a plurality of primary features of the physical environment and one or more secondary features of the physical environment. After capturing the information indicative of the physical environment, the system displays a user interface, including concurrently displaying graphical representations of the plurality of primary features that are generated with a first level of fidelity to the corresponding plurality of primary features of the physical environment, and one or more graphical representations of the secondary features that are generated with a second level of fidelity to the corresponding one or more secondary features of the physical environment, where the second level of fidelity is lower than the first level of fidelity.
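A minimal sketch of the two-tier representation above, with numeric fidelity levels standing in for rendering quality. The feature names and default values are illustrative assumptions:

```python
# Hypothetical sketch: primary features (e.g. walls, floors) are rendered
# at a higher fidelity than secondary features (e.g. clutter objects).

def build_representations(primary_features, secondary_features,
                          primary_fidelity=1.0, secondary_fidelity=0.3):
    # The second level of fidelity must be lower than the first.
    assert secondary_fidelity < primary_fidelity
    reps = [{"feature": f, "fidelity": primary_fidelity}
            for f in primary_features]
    reps += [{"feature": f, "fidelity": secondary_fidelity}
             for f in secondary_features]
    return reps
```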
Abstract:
A computer system displays a representation of a field of view of one or more cameras that is updated with changes in the field of view. In response to a request to add an annotation, the representation of the field of view of the camera(s) is replaced with a still image of the field of view of the camera(s). An annotation is received on a portion of the still image that corresponds to a portion of a physical environment captured in the still image. The still image is replaced with the representation of the field of view of the camera(s). An indication of a current spatial relationship of the camera(s) relative to the portion of the physical environment is displayed or not displayed based on a determination of whether the portion of the physical environment captured in the still image is currently within the field of view of the camera(s).
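The abstract leaves the direction of the visibility determination unspecified; one plausible reading (assumed here) is that the spatial-relationship indicator guides the user back to the annotated portion, so it is shown only while that portion is out of view:

```python
# Hypothetical sketch: show the indicator of the cameras' spatial
# relationship to the annotated portion of the physical environment only
# when that portion is NOT currently in the field of view (assumption:
# the abstract only says visibility depends on this determination).

def spatial_indicator_visible(annotated_region, regions_in_view):
    return annotated_region not in regions_in_view
```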
Abstract:
A computer system concurrently displays, in an augmented reality environment, a representation of at least a portion of a field of view of one or more cameras that includes a respective physical object, which is updated as contents of the field of view change; and a respective virtual user interface object, at a respective location in the augmented reality environment, determined based on the location of the respective physical object in the field of view. While detecting an input at a location that corresponds to the displayed respective virtual user interface object, in response to detecting movement of the input relative to the respective physical object in the field of view of the one or more cameras, the system adjusts an appearance of the respective virtual user interface object in accordance with a magnitude of movement of the input relative to the respective physical object.
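The point of measuring the input's movement relative to the physical object (rather than relative to the screen) is that camera motion alone produces no adjustment. A one-dimensional sketch, with the size-adjustment rule assumed for illustration:

```python
# Hypothetical 1-D sketch: the adjustment magnitude is the input's
# displacement measured relative to the physical object's anchor, so
# moving the camera (which shifts input and anchor equally on screen)
# leaves the virtual object unchanged.

def adjusted_size(initial_size, input_start, input_end,
                  anchor_start, anchor_end):
    magnitude = (input_end - anchor_end) - (input_start - anchor_start)
    return initial_size + magnitude
```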
Abstract:
An electronic device: displays an electronic document; while displaying the electronic document, detects a first input from a stylus, including detecting an initial contact by the stylus on a touch-sensitive surface; determines a plurality of characteristics of the first input, including a tilt of the stylus; in accordance with a determination that the tilt meets one or more selection criteria for a first virtual drawing implement, selects the first virtual drawing implement for the stylus to emulate; in accordance with a determination that the tilt meets one or more selection criteria for a second virtual drawing implement, selects the second virtual drawing implement for the stylus to emulate; and, after selecting one of the first virtual drawing implement and the second virtual drawing implement for the stylus to emulate, generates a mark in the electronic document with the selected virtual drawing implement in response to detecting the first input.
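The tilt-based selection branch above can be sketched as a simple threshold rule. The 35-degree cutoff and the pen/marker pairing are invented for illustration; the abstract only says each implement has its own tilt-based selection criteria:

```python
# Hypothetical sketch: stylus tilt selects which virtual drawing
# implement the stylus emulates, then the input generates a mark with
# the selected implement. Threshold and implement names are assumed.

def select_implement(tilt_degrees):
    if tilt_degrees < 35:       # near-vertical grip -> fine pen
        return "pen"
    return "marker"             # shallow grip -> broad marker

def handle_stylus_input(tilt_degrees, path):
    implement = select_implement(tilt_degrees)
    # Generate a mark in the document with the selected implement.
    return {"implement": implement, "mark": path}
```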
Abstract:
An electronic device with a touch-sensitive display and one or more sensors to detect signals from a stylus associated with the device: detects a positional state of the stylus, the positional state of the stylus corresponding to a distance, a tilt, and/or an orientation of the stylus relative to the touch-sensitive display; determines a location on the touch-sensitive display that corresponds to the detected positional state of the stylus; displays, in accordance with the positional state of the stylus, an indication on the touch-sensitive display of the determined location prior to the stylus touching the touch-sensitive display; detects a change in the distance, the tilt, and/or the orientation of the stylus, prior to the stylus touching the touch-sensitive display; and in response to detecting the change, updates the displayed indication on the touch-sensitive display.
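A geometric sketch of the pre-touch indication above: the indicated location is projected from the stylus's hover state, and updates as distance, tilt, or orientation change. The projection and fade rules are assumptions, not from the source:

```python
# Hypothetical sketch: before the stylus touches the display, an
# indication is shown at the location implied by its positional state.
# Offsetting along tilt/orientation and fading with distance are
# illustrative choices.
import math

def hover_indication(distance, tilt, orientation, tip_xy):
    # Project the hover offset from tilt (from vertical) and azimuthal
    # orientation; a vertical stylus (tilt 0) indicates straight below.
    dx = distance * math.sin(math.radians(tilt)) * math.cos(math.radians(orientation))
    dy = distance * math.sin(math.radians(tilt)) * math.sin(math.radians(orientation))
    # Fade the indication as the stylus moves away from the display.
    opacity = max(0.0, 1.0 - distance / 10.0)
    return {"location": (tip_xy[0] + dx, tip_xy[1] + dy), "opacity": opacity}
```

Re-running this function on each sensed change in distance, tilt, or orientation yields the "updates the displayed indication" behavior.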
Abstract:
A device displays a first user interface that includes a first user interface object at a first location in the first user interface and detects a first portion of a first input directed to the first user interface object. In response, if the first portion of the first input meets first criteria including a first input threshold, the device displays selectable options that correspond to the first user interface object; and, if the first portion of the first input meets the first criteria and meets second criteria that require a first movement of the input, the device ceases to display the selectable options and moves the first user interface object or a representation thereof to a second location in accordance with the first movement.
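The two-branch input handling above (press past a threshold shows options; a qualifying press plus movement hides them and drags the object) can be sketched as a state summary. The 0.5-second threshold and return shape are invented for illustration:

```python
# Hypothetical sketch: a press meeting a time threshold reveals
# selectable options; subsequent movement dismisses the options and
# instead moves the object. Threshold value is assumed.

def handle_input(press_duration, movement, threshold=0.5):
    if press_duration < threshold:
        # First criteria not met: neither behavior triggers.
        return {"options_shown": False, "moved": False}
    if movement == 0:
        # First criteria met, no movement: show the selectable options.
        return {"options_shown": True, "moved": False}
    # First and second criteria met: cease displaying the options and
    # move the object in accordance with the movement.
    return {"options_shown": False, "moved": True}
```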