Abstract:
An electronic device with a touch-sensitive surface and a display displays a user interface object on the display, detects a contact on the touch-sensitive surface, and detects a first movement of the contact across the touch-sensitive surface, the first movement corresponding to performing an operation on the user interface object. In response to detecting the first movement, the device performs the operation and generates a first tactile output on the touch-sensitive surface. The device also detects a second movement of the contact across the touch-sensitive surface, the second movement corresponding to reversing the operation on the user interface object. In response to detecting the second movement, the device reverses the operation and generates a second tactile output on the touch-sensitive surface, where the second tactile output is different from the first tactile output.
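The forward/reverse branching in this abstract can be sketched as a small dispatcher. The function and label names below are illustrative stand-ins, not terminology from the patent, which does not name an API:

```python
def respond_to_movement(movement_kind):
    """Map a detected contact movement to (operation, tactile_output).

    "first" / "second" movement kinds and the operation and tactile
    labels are hypothetical placeholders for the patent's terms.
    """
    if movement_kind == "first":   # movement that performs the operation
        return ("perform_operation", "first_tactile_output")
    if movement_kind == "second":  # movement that reverses the operation
        return ("reverse_operation", "second_tactile_output")
    raise ValueError(f"unknown movement: {movement_kind}")
```

The key property from the abstract is that the two branches produce distinct tactile outputs.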
Abstract:
A device with a display and a touch-sensitive keyboard with one or more character keys: displays a text entry area; detects a first input on the touch-sensitive keyboard; in accordance with a determination that the first input corresponds to activation of a character key, enters a first character corresponding to the character key into the text entry area; in accordance with a determination that the first input corresponds to a character drawn on the touch-sensitive keyboard: determines one or more candidate characters for the drawn character, and displays a candidate character selection interface that includes at least one of the candidate characters; while displaying the candidate character selection interface, detects a second input that selects a respective candidate character within the candidate character selection interface; and in response to detecting the second input, enters the selected respective candidate character into the text entry area.
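The two input branches described above (key activation versus a character drawn on the keyboard) can be sketched as follows; the event structure and the recognizer callback are assumptions, since the patent does not specify data structures:

```python
def handle_keyboard_input(event, recognize_candidates):
    """Dispatch a touch-keyboard input along the abstract's two branches.

    `event` is a hypothetical dict; `recognize_candidates` stands in for
    whatever handwriting recognizer produces candidate characters.
    """
    if event["type"] == "key_activation":
        # Key press: enter the character directly into the text entry area.
        return {"entered": event["character"], "candidates": None}
    if event["type"] == "drawn_character":
        # Drawing on the keyboard: determine candidates to show in the
        # candidate character selection interface.
        return {"entered": None,
                "candidates": recognize_candidates(event["strokes"])}
    raise ValueError(event["type"])
```

A second input selecting one of the returned candidates would then enter that character into the text entry area.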
Abstract:
In any context where a user can view multiple different content items, switching among content items is provided using an array mode. In a full-frame mode, one content item is visible and active, but other content items may also be open. In response to user input, the display can be switched to an array mode, in which all of the content items are visible in a scrollable array. Selecting a content item in array mode can result in the display returning to the full-frame mode, with the selected content item becoming visible and active. Smoothly animated transitions between the full-frame and array modes and a gesture-based interface for controlling the transitions can also be provided.
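The full-frame/array switching above is essentially a two-state machine. A minimal sketch, with class and attribute names that are purely illustrative:

```python
class ContentSwitcher:
    """Two-mode switcher for the full-frame / array behavior above.

    All names here are hypothetical; the disclosure names no classes.
    """

    def __init__(self, items, active):
        self.items = list(items)
        self.active = active        # the visible-and-active item in full-frame mode
        self.mode = "full-frame"

    def enter_array_mode(self):
        # All open content items become visible in a scrollable array.
        self.mode = "array"

    def select(self, item):
        # Selecting an item in array mode returns to full-frame mode
        # with the selected item visible and active.
        if self.mode == "array" and item in self.items:
            self.active = item
            self.mode = "full-frame"
```

The animated transitions mentioned in the abstract would wrap the two mode changes; they are omitted here.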
Abstract:
In some embodiments, an electronic device receives handwritten inputs in text entry fields and converts the handwritten inputs into font-based text. In some embodiments, an electronic device selects and deletes text based on inputs from a stylus. In some embodiments, an electronic device inserts text into pre-existing text based on inputs from a stylus. In some embodiments, an electronic device manages the timing of converting handwritten inputs into font-based text. In some embodiments, an electronic device presents a handwritten entry menu. In some embodiments, an electronic device controls the characteristic of handwritten inputs based on selections on the handwritten entry menu. In some embodiments, an electronic device presents autocomplete suggestions. In some embodiments, an electronic device converts handwritten input to font-based text. In some embodiments, an electronic device displays options in a content entry palette.
Abstract:
In some embodiments, an electronic device optionally identifies a person's face, and optionally performs an action in accordance with the identification. In some embodiments, an electronic device optionally determines a gaze location in a user interface, and optionally performs an action in accordance with the determination. In some embodiments, an electronic device optionally designates a user as being present at a sound-playback device in accordance with a determination that sound-detection criteria and verification criteria have been satisfied. In some embodiments, an electronic device optionally determines whether a person is further or closer than a threshold distance from a display device, and optionally provides a first or second user interface for display on the display device in accordance with the determination. In some embodiments, an electronic device optionally modifies the playing of media content in accordance with a determination that one or more presence criteria are not satisfied.
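The distance-threshold embodiment above reduces to a single comparison; the return labels are hypothetical stand-ins for the first and second user interfaces:

```python
def interface_for_distance(distance, threshold):
    """Choose a UI based on whether the person is further or closer
    than a threshold distance from the display device.

    "first_ui_far" / "second_ui_near" are illustrative labels only.
    """
    return "first_ui_far" if distance > threshold else "second_ui_near"
```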
Abstract:
The present disclosure generally relates to interacting with an electronic device without touching a display screen or other physical input mechanisms. In some examples, the electronic device performs an operation in response to a positioning of a user's hand and/or an orientation of the electronic device.
Abstract:
An electronic device with a display and an embedded fingerprint sensor displays a lock screen on the display. While displaying the lock screen, the electronic device detects a first touch input on the embedded fingerprint sensor. In response to detecting the first touch input on the embedded fingerprint sensor: in accordance with a determination that first timing criteria are met, the electronic device displays content of a plurality of messages; and in accordance with a determination that second timing criteria, different from the first timing criteria, are met, the electronic device ceases to display the lock screen and displays a home screen user interface for the electronic device with a plurality of application icons.
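The two timing-criteria branches can be sketched as below. The patent does not define the criteria themselves, so they are passed in here as pre-computed booleans, and the return labels are placeholders:

```python
def lock_screen_response(first_timing_met, second_timing_met):
    """Branch on the two timing criteria described in the abstract.

    How each criterion is evaluated is left unspecified by the patent;
    this sketch only captures the dispatch.
    """
    if first_timing_met:
        return "display_message_content"
    if second_timing_met:
        return "dismiss_lock_screen_show_home_screen"
    return "keep_lock_screen"
```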
Abstract:
A computing device having a touch-sensitive surface and a display detects a stylus input on the touch-sensitive surface while displaying a user interface. A first operation is performed in the user interface in accordance with a determination that the stylus input includes movement of the stylus across the touch-sensitive surface while the stylus is detected on the touch-sensitive surface. A second operation different from the first operation is performed in the user interface in accordance with a determination that the stylus input includes rotation of the stylus around an axis of the stylus while the stylus is detected on the touch-sensitive surface. A third operation is performed in the user interface in accordance with a determination that the stylus input includes movement of the stylus across the touch-sensitive surface and rotation of the stylus around an axis of the stylus while the stylus is detected on the touch-sensitive surface.
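The three-way stylus dispatch above amounts to checking the combined case before the single-gesture cases; the operation labels below are placeholders, not the patent's terms:

```python
def stylus_operation(includes_movement, includes_rotation):
    """Select among the three operations for a detected stylus input.

    Movement-plus-rotation must be tested first, since that input also
    satisfies each of the single-gesture conditions.
    """
    if includes_movement and includes_rotation:
        return "third_operation"
    if includes_rotation:
        return "second_operation"
    if includes_movement:
        return "first_operation"
    return None  # stylus detected but neither gesture present
```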
Abstract:
An electronic device displays a first user interface that corresponds to a first application, and detects on a touch-sensitive surface a first gesture that includes movement of a contact in a respective direction on the touch-sensitive surface. In response to detecting the first gesture, the device, in accordance with a determination that the movement of the contact is in a first direction, replaces display of the first user interface with display of a second user interface that corresponds to a second application; and in accordance with a determination that the movement of the contact is in a second direction, distinct from the first direction, displays a first system user interface for interacting with a system-level function.
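The direction-based dispatch above can be sketched as follows. The abstract says only that the two directions are distinct, so the direction names and return labels here are assumptions:

```python
def edge_gesture_response(direction):
    """Map a gesture's movement direction to the response described
    in the abstract. Labels are illustrative placeholders.
    """
    if direction == "first_direction":
        # Replace the first application's UI with a second application's UI.
        return "replace_with_second_application_ui"
    if direction == "second_direction":
        # Show a system user interface for a system-level function.
        return "display_system_user_interface"
    return "no_op"
```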
Abstract:
Systems and processes for operating an intelligent automated assistant are provided. In one example process, a speech input is received from a user. In response to determining that the speech input corresponds to a user intent of obtaining information associated with a user experience of the user, one or more parameters referencing a user experience of the user are identified. Metadata associated with the referenced user experience is obtained from an experiential data structure. Based on the metadata, one or more media items associated with the referenced user experience are retrieved. The one or more media items associated with the referenced user experience are output together.
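The retrieval pipeline in this abstract (parameters → metadata → associated media items) can be sketched with plain dictionaries; every name and field below is hypothetical, as the patent describes no concrete data structures:

```python
def media_for_experience(experience_id, experience_metadata, media_library):
    """Retrieve media items associated with a referenced user experience.

    `experience_metadata` stands in for the experiential data structure;
    the "tag"/"tags" fields are invented for illustration.
    """
    metadata = experience_metadata.get(experience_id)
    if metadata is None:
        return []
    # Retrieve media items whose tags match the experience metadata,
    # so they can be output together.
    return [item for item in media_library if metadata["tag"] in item["tags"]]
```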