Abstract:
An electronic device with a display and an embedded fingerprint sensor displays a lock screen on the display. While displaying the lock screen, the electronic device detects a first touch input on the embedded fingerprint sensor. In response to detecting the first touch input on the embedded fingerprint sensor: in accordance with a determination that first timing criteria are met, the electronic device displays content of a plurality of messages; and in accordance with a determination that second timing criteria, different from the first timing criteria, are met, the electronic device ceases to display the lock screen and displays a home screen user interface for the electronic device with a plurality of application icons.
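The two branches above can be sketched as a simple dispatch on touch timing. This is an illustrative sketch only: the abstract does not define the timing criteria, so the threshold values and all names below are hypothetical.

```python
from enum import Enum, auto

class ScreenState(Enum):
    LOCK_SCREEN = auto()
    MESSAGE_CONTENT = auto()
    HOME_SCREEN = auto()

# Hypothetical thresholds standing in for the unspecified timing criteria.
FIRST_CRITERIA_MAX_S = 0.3   # "first timing criteria": brief touch

def handle_fingerprint_touch(touch_duration_s: float) -> ScreenState:
    """Dispatch on touch timing, mirroring the two branches in the abstract."""
    if touch_duration_s < FIRST_CRITERIA_MAX_S:
        # First timing criteria met: reveal message content.
        return ScreenState.MESSAGE_CONTENT
    # Second timing criteria met: dismiss the lock screen, show the home screen.
    return ScreenState.HOME_SCREEN
```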
Abstract:
A first device sends a request to a second device to initiate a shared annotation session. In response to receiving acceptance of the request, a first prompt to move the first device toward the second device is displayed. In accordance with a determination that connection criteria for the first device and the second device are met, a representation of a field of view of the camera(s) of the first device is displayed in the shared annotation session with the second device. During the shared annotation session, one or more annotations are displayed via the first display generation component, and one or more second virtual annotations, corresponding to annotation input directed by the second device to a respective location in the physical environment, are displayed via the first display generation component, provided that the respective location is included in the field of view of the first set of cameras.
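The field-of-view gate on remote annotations can be sketched as follows. The 2D bounds, class names, and the list of displayed annotations are all hypothetical stand-ins; the abstract only says a remote annotation is displayed when its location falls inside the first device's field of view.

```python
from dataclasses import dataclass, field

@dataclass
class FieldOfView:
    """Hypothetical 2D bounds standing in for the camera's view of the scene."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

@dataclass
class SharedAnnotationSession:
    fov: FieldOfView
    displayed: list = field(default_factory=list)

    def receive_remote_annotation(self, x: float, y: float, note: str) -> None:
        # Display the second device's annotation only if its anchor location
        # in the physical environment is inside the first device's field of view.
        if self.fov.contains(x, y):
            self.displayed.append(note)
```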
Abstract:
The present disclosure generally relates to embodiments of a video communication interface for managing content that is shared during a video communication session.
Abstract:
An electronic device, with a touch-sensitive surface, displays a respective control that is associated with respective contact intensity criteria, which are used to determine whether or not a function associated with the respective control will be performed. The device detects a gesture on the touch-sensitive surface corresponding to an interaction with the respective control. In accordance with a determination that the gesture does not include a contact that meets the respective contact intensity criteria, the device changes the appearance of the respective control to indicate progress toward meeting the respective contact intensity criteria. In response to detecting activation of the respective control, the device performs the function associated with the respective control in accordance with the detected gesture including a contact that meets the respective contact intensity criteria.
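The progress-toward-threshold behavior can be sketched as below. The threshold value and the appearance dictionary are illustrative assumptions; the abstract does not specify how progress is rendered.

```python
# Hypothetical stand-in for the "respective contact intensity criteria".
INTENSITY_THRESHOLD = 1.0

def control_appearance(contact_intensity: float) -> dict:
    """Return an illustrative appearance state for the control.

    Below the threshold, the appearance encodes fractional progress toward
    meeting the intensity criteria; at or above it, the control activates.
    """
    progress = min(contact_intensity / INTENSITY_THRESHOLD, 1.0)
    return {
        "progress": progress,
        "activated": contact_intensity >= INTENSITY_THRESHOLD,
    }
```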
Abstract:
While a device is in an unlocked state, a sequence of one or more activations of a button of the device is detected, where a first activation of the button is detected while a respective application user interface is displayed on the display. In response to detecting the sequence of one or more activations of the button: if the sequence of activations meets first criteria, display of the respective application user interface is replaced with display of a different user interface while maintaining the device in the unlocked state; and if the sequence of activations meets second criteria, the device switches from the unlocked state to a locked state, where the first criteria are differentiated from the second criteria based on a number and/or timing of activations of the button in the sequence.
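A minimal sketch of the classification, assuming one concrete reading of the criteria (the abstract only says they differ by number and/or timing of activations, so the single-press/rapid-multi-press split and the 1-second window below are hypothetical):

```python
def classify_button_sequence(press_times_s: list[float]) -> str:
    """Classify a sequence of button activations by count and timing.

    Hypothetical criteria: a single press meets the first criteria
    (replace the app UI, stay unlocked); several presses within one
    second meet the second criteria (switch to the locked state).
    """
    if len(press_times_s) == 1:
        return "switch_ui"   # first criteria: show a different user interface
    if press_times_s[-1] - press_times_s[0] <= 1.0:
        return "lock"        # second criteria: enter the locked state
    return "no_op"           # neither criteria met in this sketch
```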
Abstract:
An application can generate multiple user interfaces for display across multiple electronic devices. After the electronic devices establish communication, an application running on at least one of the devices can present a first set of information items on a touch-enabled display of one of the electronic devices. The electronic device can receive a user selection of one of the first set of information items. In response to receiving the user selection, the application can generate a second set of information items for display on the other electronic device. The second set of information items can represent an additional level of information related to the selected information item.
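The selection-to-detail flow can be sketched as a lookup from a top-level item to the next level of information items. The catalog contents and function name are invented for illustration; the abstract does not say what the information items are.

```python
# Hypothetical catalog: top-level items shown on the touch-enabled display,
# each mapping to the deeper detail items generated for the other device.
DETAIL_ITEMS = {
    "Artists": ["Artist A", "Artist B"],
    "Albums": ["Album 1", "Album 2"],
}

def on_item_selected(item: str) -> list[str]:
    """Generate the second set of information items (one level deeper
    than the selected item) for display on the companion device."""
    return DETAIL_ITEMS.get(item, [])
```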
Abstract:
A device with a display and a touch-sensitive surface displays a user interface including a user interface object at a first location. While displaying the user interface, the device detects a portion of an input, including a contact at a location on the touch-sensitive surface that corresponds to the user interface object. In response to detecting the portion of the input: upon determining that the portion of the input meets menu-display criteria, the device displays on the display a plurality of selectable options that correspond to the user interface object; and, upon determining that the portion of the input meets object-move criteria, the device moves the user interface object, or a representation thereof, from the first location to a second location in accordance with movement of the contact.
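The two criteria can be sketched as a classifier over a contact's duration and movement. The specific thresholds and the rule that movement takes precedence are assumptions for illustration; the abstract does not define either criteria.

```python
LONG_PRESS_S = 0.5        # hypothetical menu-display criteria: stationary hold
MOVE_THRESHOLD_PX = 10.0  # hypothetical object-move criteria: drag distance

def classify_touch(duration_s: float, movement_px: float) -> str:
    """Classify a contact against menu-display vs. object-move criteria."""
    # In this sketch, sufficient movement wins: drag the object with the contact.
    if movement_px >= MOVE_THRESHOLD_PX:
        return "move_object"
    # A stationary long press instead displays the selectable options.
    if duration_s >= LONG_PRESS_S:
        return "show_menu"
    return "none"
```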
Abstract:
The present application relates to a computer for providing output to a user. The computer includes a processor and an input device in communication with the processor. The input device includes a feedback surface and at least one sensor in communication with the feedback surface, the at least one sensor configured to detect a user input to the feedback surface. The processor varies a down-stroke threshold based on a first factor and varies an up-stroke threshold based on a second factor. The down-stroke threshold determines a first output of the computing device, the up-stroke threshold determines a second output of the computing device, and at least one of the first factor or the second factor is determined based on the user input.
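Separate down-stroke and up-stroke thresholds behave like a hysteresis loop: the press output fires when force rises past the down-stroke threshold, and the release output fires only when force falls below the (typically lower) up-stroke threshold. A minimal sketch, with invented names and force values, since the abstract does not give the factors or units:

```python
def make_click_detector(down_threshold: float, up_threshold: float):
    """Build a hysteresis-style click detector over a stream of force samples.

    The down-stroke threshold triggers the first output ("press") and the
    up-stroke threshold triggers the second output ("release"). Per the
    abstract, either threshold may be varied between calls by its factor.
    """
    pressed = False

    def feed(force: float):
        nonlocal pressed
        if not pressed and force >= down_threshold:
            pressed = True
            return "press"
        if pressed and force <= up_threshold:
            pressed = False
            return "release"
        return None  # no output: force is between the two thresholds

    return feed
```

Using a lower up-stroke threshold than down-stroke threshold prevents a force hovering near a single threshold from producing rapid spurious press/release pairs.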