Abstract:
An electronic device displays a control user interface that includes a plurality of control affordances. The device detects an input by a contact at a location on the touch-sensitive surface that corresponds to a control affordance, of the plurality of control affordances, on the display. In response to detecting the input, when a characteristic intensity of the contact does not meet an intensity threshold, the device toggles a function of a control that corresponds to the control affordance; and when the characteristic intensity of the contact meets the intensity threshold, the device displays modification options for the control that corresponds to the control affordance. While displaying the modification options, the device detects a second input that activates a modification option of the modification options. The device modifies the control that corresponds to the control affordance in accordance with the activated modification option.
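The intensity-gated branch described above can be sketched as a small dispatch function. This is a minimal illustration, not the patented implementation; the function name, the return labels, and the threshold value are all hypothetical.

```python
def handle_control_input(characteristic_intensity, intensity_threshold=0.5):
    """Dispatch a press on a control affordance by contact intensity.

    Below the threshold: toggle the control's function.
    At or above the threshold: show the control's modification options.
    (Names and the 0.5 threshold are illustrative only.)
    """
    if characteristic_intensity < intensity_threshold:
        return "toggle_control_function"
    return "display_modification_options"
```

A light tap would thus toggle the control, while a harder press on the same affordance would surface its modification options instead.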
Abstract:
An electronic device with a touch-sensitive surface, a display, and one or more sensors to detect intensity of contacts: displays a first user interface that includes objects of a first type and objects of a second type; detects a first portion of a first input that includes an increase in characteristic intensity of a first contact above an intensity threshold while a focus selector is over a respective user interface object; in response, displays supplemental information associated with the respective user interface object; while displaying the supplemental information, detects an end of the first input; and, in response: if the respective user interface object is the first type of object, ceases to display the supplemental information; and, if the respective user interface object is the second type of object, maintains display of the supplemental information after detecting the end of the first input.
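The end-of-input behavior above hinges on the object's type: supplemental information is transient for one type and persistent for the other. A minimal sketch of that rule, with hypothetical type labels of my own choosing:

```python
def supplemental_visible_after_release(object_type):
    """Return whether supplemental information remains displayed after
    the first input ends.

    First-type objects dismiss their supplemental information on release;
    second-type objects keep it on screen. The string labels are
    illustrative, not from the source.
    """
    return object_type == "second"
```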
Abstract:
Embodiments of the present disclosure provide a system and method for providing an output for an electronic device. In certain embodiments, an alert is output in accordance with a current alert mode, which is selected based on one or more environmental conditions. The environmental conditions may be detected using one or more environmental sensors. The alert can optionally include one or more of: an audio component, a haptic component, and a visual component. One or more of the alert components correspond to an aspect of the environmental condition detected by the one or more environmental sensors.
Abstract:
An electronic device having a display generation component and one or more input devices displays a representation of a first plurality of notifications, including a first notification with first content and a second notification with second content, wherein the representation of the first plurality of notifications includes a summary of the first content and the second content. While displaying the representation of the first plurality of notifications that includes the summary of the first content and the second content, the electronic device detects a first input directed to the representation of the first plurality of notifications, and in response, displays respective notifications from the first plurality of notifications, including displaying the first notification and the second notification individually.
Abstract:
Techniques for displaying relevant user interface objects when a device is placed into a viewing position are disclosed. The device can update its display in response to a user approaching a vehicle. Display updates can be based on an arrangement of user interface information for unlocking the vehicle.
Abstract:
A computer system displays a first view of a three-dimensional environment, including first and second user interface objects at distinct first and second positions. While displaying the first view, the computer system detects a first gaze input directed to a first region in the three-dimensional environment corresponding to the first position. While detecting the first gaze input, the computer system detects first movement of a hand meeting first gesture criteria. In accordance with a determination that the first movement is detected after the first gaze input meets first gaze criteria requiring the gaze to be held at the first region for at least a first preset extended amount of time, the computer system selects the first user interface object; and in accordance with a determination that the first movement is detected before the first gaze input meets the first gaze criteria, the computer system forgoes selecting the first user interface object.
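The dwell-time gating above can be sketched as a timing check: the hand gesture only selects the target if the gaze has already been held on the region long enough. This is an illustrative sketch; the function name, timestamp convention, and 0.5-second dwell are assumptions, not values from the source.

```python
def should_select(gaze_start_time, gesture_time, dwell_required=0.5):
    """Return True only if the gesture arrives after the gaze has dwelled
    on the target region for at least dwell_required seconds.

    Times are in seconds on a shared clock; the dwell value is illustrative.
    """
    return (gesture_time - gaze_start_time) >= dwell_required
```

A gesture made too soon after the gaze lands on the object would therefore be ignored, preventing accidental selections from glancing gaze movements.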
Abstract:
Techniques for managing alerts are described. One or more alerts are received at an electronic device. In some embodiments, the device determines how notifications corresponding to the alerts should be output to the user.
Abstract:
A computer system displays a first view of a user interface of a first application with a first size at a first position corresponding to a location of at least a portion of a palm that is currently facing a viewpoint corresponding to a view of a three-dimensional environment provided via a display generation component. While displaying the first view, the computer system detects a first input that corresponds to a request to transfer display of the first application from the palm to a first surface that is within a first proximity of the viewpoint. In response to detecting the first input, the computer system displays a second view of the user interface of the first application with a second size and an orientation that corresponds to the first surface at a second position defined by the first surface.
Abstract:
A computer system displays first and second user interface objects in a three-dimensional environment. The first and second user interface objects have a first and second spatial relationship to a first and second anchor position corresponding to a location of a user's hand in a physical environment, respectively. While displaying the first and second user interface objects in the three-dimensional environment, the computer system detects movement of the user's hand in the physical environment, corresponding to a translational movement and a rotational movement of the user's hand relative to a viewpoint, and in response, translates the first and second user interface objects relative to the viewpoint in accordance with the translational movement of the user's hand, and rotates the first user interface object relative to the viewpoint in accordance with the rotational movement of the user's hand without rotating the second user interface object.
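The asymmetric response above (both objects translate with the hand, but only the first rotates with it) can be sketched with simple 2D state. The data layout, key names, and angle units here are my own illustrative choices, not from the source.

```python
def apply_hand_motion(positions, angles, translation, rotation_degrees):
    """Update hand-anchored objects for one frame of hand movement.

    positions: dict of object name -> (x, y); angles: dict of name -> degrees.
    Every object follows the hand's translation; only the object keyed
    "first" also follows the hand's rotation. (Keys are illustrative.)
    """
    new_positions = {
        name: (x + translation[0], y + translation[1])
        for name, (x, y) in positions.items()
    }
    new_angles = dict(angles)
    new_angles["first"] += rotation_degrees  # second object keeps its angle
    return new_positions, new_angles
```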
Abstract:
An application launching user interface that includes a plurality of application icons for launching corresponding applications is displayed. A first touch input is detected on a first application icon of the plurality of application icons. The first application icon is for launching a first application that is associated with one or more corresponding quick actions. If the first touch input meets one or more application-launch criteria which require that the first touch input has ended without having met a first input threshold, the first application is launched in response to the first touch input. If the first touch input meets one or more quick-action-display criteria which require that the first touch input meets the first input threshold, one or more quick action objects associated with the first application are concurrently displayed along with the first application icon without launching the first application, in response to the first touch input.
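The launch-versus-quick-actions decision above reduces to two mutually exclusive criteria on the same touch. A minimal sketch, assuming a boolean flag for whether the first input threshold was met and whether the touch has ended; the function name and return labels are hypothetical.

```python
def classify_app_icon_touch(met_input_threshold, touch_ended):
    """Classify a touch on an application icon.

    Quick-action-display criteria: the touch met the input threshold,
    so show quick action objects without launching the app.
    Application-launch criteria: the touch ended without ever meeting
    the threshold, so launch the app. Otherwise the touch is still pending.
    """
    if met_input_threshold:
        return "show_quick_actions"
    if touch_ended:
        return "launch_application"
    return "pending"
```

A quick tap thus launches the application, while a press that crosses the threshold surfaces the quick action objects alongside the icon instead.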