Abstract:
An electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface displays a respective control icon with simulated three-dimensional characteristics and a cursor over the respective control icon. The device detects, on the touch-sensitive surface, a stationary press input that includes an increase in intensity of a contact that corresponds to the cursor. In response to detecting the stationary press input, the device changes an appearance of the respective control icon in accordance with the simulated three-dimensional characteristics of the control icon and moves the cursor laterally on the display in accordance with the change in appearance of the respective control icon.
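The behavior above can be approximated in a minimal sketch: press intensity above an activation threshold "depresses" the simulated 3-D icon, and the cursor shifts laterally in proportion to that deformation. The linear mapping, the threshold value, and all names here are illustrative assumptions, not the patented implementation.

```python
def press_response(intensity, threshold=0.5, max_tilt_deg=10.0,
                   max_lateral_px=4.0):
    """Map contact intensity to (icon tilt, lateral cursor offset).

    Hypothetical mapping: intensity at or below the threshold leaves the
    icon and cursor unchanged; intensity above it tilts the icon and
    moves the cursor by a proportional amount.
    """
    # Normalize the portion of intensity above the threshold into [0, 1].
    t = max(0.0, min(1.0, (intensity - threshold) / (1.0 - threshold)))
    return t * max_tilt_deg, t * max_lateral_px
```

At full intensity the icon reaches its maximum simulated tilt and the cursor its maximum lateral offset, so the cursor appears to ride along the deforming icon surface.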
Abstract:
An electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface: detects, on the touch-sensitive surface, a gesture that includes an increase of intensity of a contact above a respective intensity threshold. In response to detecting the gesture: in accordance with a determination that the gesture includes a first number of contacts, the device generates a tactile output on the touch-sensitive surface; and in accordance with a determination that the gesture includes a second number of contacts different from the first number, the device forgoes generating the tactile output on the touch-sensitive surface.
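The dispatch described above reduces to two checks: did the gesture cross the intensity threshold, and how many contacts did it include. A minimal sketch, assuming one designated contact count earns feedback (the threshold value and names are illustrative, not from the patent):

```python
def should_generate_tactile_output(contact_count, max_intensity,
                                   intensity_threshold=0.6,
                                   tactile_contact_count=1):
    """Decide whether a tactile output accompanies the gesture."""
    if max_intensity <= intensity_threshold:
        return False  # the gesture never crossed the respective threshold
    # Generate feedback only for the first number of contacts; for a
    # different number of contacts, forgo the tactile output.
    return contact_count == tactile_contact_count
```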
Abstract:
In any context where a user can view multiple different content items, switching among content items is provided using an array mode. In a full-frame mode, one content item is visible and active, but other content items may also be open. In response to user input, the display can be switched to an array mode, in which all of the content items are visible in a scrollable array. Selecting a content item in array mode can result in the display returning to the full-frame mode, with the selected content item becoming visible and active. Smoothly animated transitions between the full-frame and array modes and a gesture-based interface for controlling the transitions can also be provided.
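The mode transitions above can be captured as a small state machine, sketched here with hypothetical names (animation and gesture handling omitted):

```python
class ContentSwitcher:
    """Minimal state sketch of full-frame vs. array mode switching."""

    def __init__(self, items):
        self.items = list(items)      # all open content items
        self.mode = "full-frame"      # one item visible and active
        self.active = self.items[0] if self.items else None

    def enter_array_mode(self):
        # All open items become visible in a scrollable array.
        self.mode = "array"

    def select(self, item):
        # Selecting in array mode returns to full-frame with that item.
        if self.mode == "array" and item in self.items:
            self.active = item
            self.mode = "full-frame"
```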
Abstract:
An electronic device with a touch screen display: detects a single finger contact on the touch screen display; creates a touch area that corresponds to the single finger contact; determines a representative point within the touch area; determines if the touch area overlaps an object displayed on the touch screen display, which includes determining if one or more portions of the touch area other than the representative point overlap the object; connects the object with the touch area if the touch area overlaps the object, where connecting maintains the overlap of the object and the touch area; after connecting the object with the touch area, detects movement of the single finger contact; determines movement of the touch area that corresponds to movement of the single finger contact; and moves the object connected with the touch area in accordance with the determined movement of the touch area.
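The key idea above is that the overlap test considers the whole touch area, not only its representative point, and that a connected object then follows the touch area's movement. A minimal sketch, approximating the touch area and object as axis-aligned rectangles (an assumption for illustration):

```python
def touch_overlaps_object(touch_rect, object_rect):
    """Overlap test for the whole touch area, not just one point.

    Rectangles are (x0, y0, x1, y1) with x0 < x1 and y0 < y1.
    """
    tx0, ty0, tx1, ty1 = touch_rect
    ox0, oy0, ox1, oy1 = object_rect
    return tx0 < ox1 and ox0 < tx1 and ty0 < oy1 and oy0 < ty1

def move_connected_object(object_rect, dx, dy):
    """Translate a connected object by the touch area's movement,
    preserving the overlap between the object and the touch area."""
    x0, y0, x1, y1 = object_rect
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```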
Abstract:
A computing system comprises one or more processors, memory, a first housing that includes a primary display, and a second housing at least partially containing a physical keyboard and a touch-sensitive secondary display that is distinct from the primary display. The computing system displays a first user interface on the primary display, the first user interface including one or more user interface elements. The computing system identifies, from the one or more user interface elements, an active user interface element that is in focus on the primary display. The computing system, in accordance with a determination that the active user interface element that is in focus on the primary display is associated with an application executed by the computing system, displays, on the touch-sensitive secondary display, a second user interface that includes a first set of one or more affordances corresponding to the application.
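The lookup above amounts to mapping the focused element to its owning application and then to that application's affordance set. A minimal sketch with hypothetical data shapes (element and affordance representations are assumptions):

```python
def secondary_display_affordances(focused_element, app_affordances):
    """Return the affordances to show on the secondary display.

    focused_element: dict with an "app" key naming the owning
    application, or None if the element has no associated application.
    app_affordances: mapping from application name to its affordances.
    """
    app = focused_element.get("app")
    return app_affordances.get(app, [])
```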
Abstract:
In response to detecting a first gesture that includes movement of a contact on a touch-sensitive surface, in accordance with a determination that the movement of the contact is in a first direction, a first user interface corresponding to a first application of a set of applications selected based on application use criteria is displayed. While displaying the first user interface, a second gesture including movement of a contact is detected. In response to detecting the second gesture, in accordance with a determination that the movement of the contact is in the first direction, a second user interface corresponding to a second application that is a next application in the set of applications based on the application use criteria is displayed. In accordance with a determination that the movement of the contact is in a second direction, a third user interface corresponding to a third application is displayed.
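The direction-dependent stepping above can be sketched as an index walk over applications ordered by the use criteria. Treating the second direction as the reverse step is an assumption for illustration; the abstract only says it yields a third application.

```python
def switch_app(current_index, direction, apps_by_use):
    """Step through apps ordered by use criteria (e.g. recency).

    "forward" corresponds to the first direction in the abstract;
    "backward" (the assumed second direction) steps the other way.
    """
    n = len(apps_by_use)
    step = 1 if direction == "forward" else -1
    return apps_by_use[(current_index + step) % n]
```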
Abstract:
An electronic device detects a contact on a touch-sensitive surface of the electronic device, including a first portion of a gesture made with the contact, a second portion of the gesture that follows the first portion, and lift-off of the contact from the touch-sensitive surface during or after the second portion of the gesture. In response to detecting the second portion of the gesture, if a modifier input was detected while detecting the first portion of the gesture made with the contact, the electronic device performs a first operation and generates a first tactile output on the touch-sensitive surface. If the modifier input was not detected while detecting the first portion of the gesture, the electronic device performs a second operation different from the first operation and generates a second tactile output on the touch-sensitive surface, wherein the second tactile output is different from the first tactile output.
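On completion of the gesture, the dispatch above depends only on whether the modifier input was detected during the first portion. A minimal sketch (operation and tactile-output names are placeholders, not from the patent):

```python
def complete_gesture(modifier_detected_during_first_portion):
    """Return the (operation, tactile output) pair for the gesture.

    The modifier state observed during the first portion of the gesture
    selects both which operation runs and which tactile output plays.
    """
    if modifier_detected_during_first_portion:
        return ("first_operation", "first_tactile_output")
    return ("second_operation", "second_tactile_output")
```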
Abstract:
In some embodiments, a multifunction device with a display and a touch-sensitive surface creates a plurality of workspace views. A respective workspace view is configured to contain content assigned by a user to the respective workspace view. The content includes application windows. The device displays a first workspace view in the plurality of workspace views on the display without displaying other workspace views in the plurality of workspace views and detects a first multifinger gesture on the touch-sensitive surface. In response to detecting the first multifinger gesture on the touch-sensitive surface, the device replaces display of the first workspace view with concurrent display of the plurality of workspace views.
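The state change above, from one displayed workspace view to concurrent display of all of them, can be sketched as follows (names and data shapes are illustrative assumptions):

```python
class WorkspaceManager:
    """Minimal sketch of workspace views and the multifinger gesture."""

    def __init__(self, workspace_contents):
        # workspace name -> application windows assigned by the user
        self.workspaces = dict(workspace_contents)
        # Initially, a single workspace view is displayed.
        self.visible = [next(iter(self.workspaces))]

    def on_multifinger_gesture(self):
        # Replace the single view with concurrent display of all views.
        self.visible = list(self.workspaces)
```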
Abstract:
A computing device having a touch-sensitive surface and a display detects a stylus input on the touch-sensitive surface while displaying a user interface. A first operation is performed in the user interface in accordance with a determination that the stylus input includes movement of the stylus across the touch-sensitive surface while the stylus is detected on the touch-sensitive surface. A second operation different from the first operation is performed in the user interface in accordance with a determination that the stylus input includes rotation of the stylus around an axis of the stylus while the stylus is detected on the touch-sensitive surface. A third operation is performed in the user interface in accordance with a determination that the stylus input includes movement of the stylus across the touch-sensitive surface and rotation of the stylus around an axis of the stylus while the stylus is detected on the touch-sensitive surface.
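The three-way dispatch above depends on two independent observations about the stylus input. A minimal sketch (operation names are placeholders):

```python
def stylus_operation(moved_across_surface, rotated_about_axis):
    """Select the operation from observed stylus movement and rotation."""
    if moved_across_surface and rotated_about_axis:
        return "third_operation"    # combined movement and rotation
    if rotated_about_axis:
        return "second_operation"   # rotation around the stylus axis
    if moved_across_surface:
        return "first_operation"    # movement across the surface
    return None                     # no qualifying input detected
```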
Abstract:
Systems and processes for operating an intelligent automated assistant are provided. In one example process, a speech input is received from a user. In response to determining that the speech input corresponds to a user intent of obtaining information associated with a user experience of the user, one or more parameters referencing a user experience of the user are identified. Metadata associated with the referenced user experience is obtained from an experiential data structure. Based on the metadata, one or more media items associated with the referenced user experience are retrieved. The one or more media items associated with the referenced user experience are output together.
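The retrieval step above, from identified parameters to metadata to media items, can be sketched as two lookups. The data shapes here (an experience keyed by identifier, metadata listing media identifiers) are assumptions for illustration, not the assistant's actual data structures.

```python
def retrieve_experience_media(parameters, experience_store, media_library):
    """Resolve a referenced user experience to its media items.

    parameters: identified from the speech input; includes the key of
    the referenced experience in the experiential data structure.
    """
    # Obtain metadata for the referenced experience.
    metadata = experience_store[parameters["experience_id"]]
    # Retrieve the media items the metadata points to, for output together.
    return [media_library[mid] for mid in metadata["media_ids"]]
```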