Abstract:
An electronic device receives an incoming communication and determines that the device is in a first use context. In response to receiving the incoming communication, the device provides first feedback that includes a first ongoing audio output that corresponds to the first use context and a first ongoing tactile output with a first tactile output profile that corresponds to the first use context. While providing the first ongoing audio output and the first ongoing tactile output, the device detects that the electronic device is in a second use context, different from the first use context. In response to detecting that the electronic device is in the second use context, the device provides second feedback that includes a second ongoing tactile output that has a second tactile output profile that corresponds to the second use context.
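The context-dependent feedback described above can be sketched as a small state machine. The `Device` class, the context names, and the profile table are all illustrative assumptions, not details from the abstract:

```python
# Hypothetical feedback profiles keyed by use context; the names are
# illustrative only (e.g. a device in a pocket gets stronger output).
PROFILES = {
    "in_pocket": {"audio": "loud_ringtone", "tactile": "strong_pulse"},
    "on_table": {"audio": "soft_ringtone", "tactile": "gentle_tap"},
}

class Device:
    def __init__(self, use_context):
        self.use_context = use_context
        self.feedback = None  # ongoing feedback, None until a communication arrives

    def receive_incoming_communication(self):
        # First feedback: ongoing audio and tactile output that
        # correspond to the current (first) use context.
        self.feedback = dict(PROFILES[self.use_context])

    def detect_use_context(self, new_context):
        # While feedback is ongoing, detecting a different use context
        # swaps in the tactile output profile of the new context.
        self.use_context = new_context
        if self.feedback is not None:
            self.feedback = dict(PROFILES[new_context])
```

A usage sketch: a call arriving while the device is pocketed produces the strong profile, and pulling the device out switches the ongoing output to the gentler profile without re-triggering the communication.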
Abstract:
A method is performed at a computing system with a first housing that includes a primary display and a second housing at least partially containing a physical keyboard and a touch-sensitive secondary display ("TSSD") that is distinct from the primary display. The method includes: displaying, on the primary display, a first user interface for an application and displaying, on the TSSD, a first set of affordances corresponding to a first portion of the application. The method further includes: detecting a swipe gesture on the TSSD. If the swipe gesture was performed in a first direction, the method includes: displaying a second set of affordances corresponding to the first portion on the TSSD. If the swipe gesture was performed in a second direction substantially perpendicular to the first direction, the method includes: displaying a third set of affordances corresponding to a second portion of the application on the TSSD.
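The direction-dependent dispatch in this abstract could be sketched as follows; the direction labels and the portion/set indexing are hypothetical, chosen only to show that one axis cycles affordance sets within a portion while the perpendicular axis switches portions:

```python
def handle_swipe(direction, state):
    # state: {"portion": index of the application portion shown on the
    # TSSD, "set": index of the affordance set within that portion}.
    if direction == "first":
        # First direction: show a different set of affordances for the
        # same portion of the application.
        return {"portion": state["portion"], "set": state["set"] + 1}
    if direction == "second":
        # Substantially perpendicular direction: show affordances for a
        # different portion of the application.
        return {"portion": state["portion"] + 1, "set": 0}
    return state  # unrecognized gestures leave the TSSD unchanged
```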
Abstract:
A wireless communication device may locate a proximate object in an environment, such as an electronic device or a resource. During this communication technique, the wireless communication device may receive a transmission that includes an identifier associated with the object. The wireless communication device may determine a range and/or a direction of the object from the wireless communication device. For example, the wireless communication device may determine the range and/or the direction, at least in part, using wireless ranging. Next, the wireless communication device may present output information that indicates the range and/or the direction. In particular, the wireless communication device may display a map of a proximate area with an indicator representative of the object shown on the map. Alternatively, the wireless communication device may display an image of the proximate area with the indicator representative of the object on the image.
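The wireless-ranging step can be illustrated with a standard two-way time-of-flight calculation (the abstract does not specify the ranging method, so the round-trip formulation and the known responder processing delay are assumptions):

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def estimate_range(round_trip_time_s, processing_delay_s=0.0):
    # Two-way time-of-flight ranging: remove the responder's known
    # processing delay, halve the remaining round-trip time to get the
    # one-way flight time, then convert to metres.
    flight_time_s = (round_trip_time_s - processing_delay_s) / 2
    return SPEED_OF_LIGHT_M_S * flight_time_s
```

For example, a 200 ns round trip with no processing delay corresponds to roughly 30 m of range; the estimated range and a separately determined direction would then drive the map or image indicator described above.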
Abstract:
An electronic device, while displaying representations of a plurality of collections of media items, detects a swipe input that starts at a location corresponding to a first representation of a first collection of media items in the plurality of collections of media items. In response to detecting the swipe input: in accordance with a determination that the swipe input is in a first direction, the device scrolls the representations of the plurality of collections of media items in the first direction; and, in accordance with a determination that the swipe input is in a different, second direction, the device: ceases to display a representation of a first item in the first collection of media items, and displays a representation of a second item in the first collection of media items, without scrolling; and generates a tactile output corresponding to displaying the representation of the second item.
Abstract:
A method is performed at a computing system that includes a first housing with a primary display and a second housing at least partially containing a physical keyboard and a touch-sensitive secondary display. The method includes: displaying, on the primary display, a first user interface for an application executed by the computing system. The method also includes: displaying, on the touch-sensitive secondary display, a second user interface, the second user interface comprising a set of one or more affordances corresponding to the application. The method further includes: detecting a notification and, in response to detecting the notification, concurrently displaying, in the second user interface, the set of one or more affordances corresponding to the application and at least a portion of the detected notification on the touch-sensitive secondary display. In some embodiments, the detected notification is not displayed on the primary display.
Abstract:
A computing system includes a primary display, memory, and a housing at least partially containing a physical input mechanism and a touch screen adjacent to the physical input mechanism. The computing system: displays, on the primary display, a first user interface, the first user interface comprising one or more user interface elements; and identifies an active user interface element among the one or more user interface elements that is in focus on the primary display. In accordance with a determination that the active user interface element that is in focus on the primary display is associated with an application executed by the computing system, the computing system displays a second user interface on the touch screen, including: (A) a first set of affordances corresponding to the application; and (B) at least one system-level affordance corresponding to a system-level functionality.
Abstract:
A wireless communication device may wirelessly control an object, such as a physical device, directly or through interaction with a virtual representation (or placeholder) of the object situated at a predefined physical location. In particular, the wireless communication device may identify an intent gesture performed by a user that indicates intent to control the object. For example, the intent gesture may involve pointing or orienting the wireless communication device toward the object, with or without additional input. Then, the wireless communication device may determine the object associated with the intent gesture using wireless ranging and/or device orientation. Moreover, the wireless communication device may interpret sensor data from one or more sensors associated with the wireless communication device to determine an action gesture corresponding to a command or a command value. The wireless communication device may then transmit the command value to control the object.
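The intent-gesture/action-gesture pipeline could be sketched as below. The gesture names, the command table, and the command format are entirely hypothetical; the abstract only specifies that the target object is resolved via wireless ranging and/or device orientation and that sensor data is interpreted into a command value:

```python
# Hypothetical mapping from recognized action gestures to commands.
ACTION_COMMANDS = {
    "flick_up": ("set_brightness", 100),
    "flick_down": ("set_brightness", 0),
    "twist": ("toggle_power", None),
}

def build_command(target_object_id, action_gesture):
    # target_object_id: the object resolved from the intent gesture
    # (e.g. pointing the device), assumed determined upstream via
    # wireless ranging and device orientation.
    if action_gesture not in ACTION_COMMANDS:
        raise ValueError("unrecognized action gesture: " + action_gesture)
    command, value = ACTION_COMMANDS[action_gesture]
    # The resulting command value is what the device would transmit to
    # control the object (or its virtual placeholder).
    return {"target": target_object_id, "command": command, "value": value}
```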
Abstract:
An electronic device with a touch-sensitive display detects an input on the touch-sensitive display, which includes detecting a contact on the touch-sensitive display while the touch-sensitive display is in a low-power mode. In response to detecting the input on the touch-sensitive display while the touch-sensitive display is in the low-power mode, if the input meets display-activation criteria, the device displays a respective user interface on the touch-sensitive display, wherein the respective user interface was not displayed on the touch-sensitive display when the touch-sensitive display was in the low-power mode. And, if the input does not meet the display-activation criteria, the device maintains the touch-sensitive display in the low-power mode after detecting the input without displaying the respective user interface on the touch-sensitive display.
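The branch on display-activation criteria could be sketched as follows. The abstract does not say what the criteria are, so the minimum-contact-duration test used here is purely an illustrative stand-in:

```python
def process_low_power_input(contact, min_duration_s=0.2):
    # contact: dict describing the detected contact, e.g. its duration.
    # Returns the resulting display state. The duration threshold is a
    # hypothetical display-activation criterion.
    if contact["duration_s"] >= min_duration_s:
        # Criteria met: wake the display and show the respective UI,
        # which was not displayed while in the low-power mode.
        return "respective_user_interface"
    # Criteria not met: stay in the low-power mode; no UI is shown.
    return "low_power_mode"
```

A brief press below the threshold leaves the display dark, while a sustained contact wakes it, which matches the accidental-touch rejection this kind of gating is meant to provide.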
Abstract:
A computing device is disclosed. The computing device includes a housing having an illuminable portion. The computing device also includes a light device disposed inside the housing. The light device is configured to illuminate the illuminable portion.
Abstract:
In any context where a user can view multiple different content items, switching among content items is provided using an array mode. In a full-frame mode, one content item is visible and active, but other content items may also be open. In response to user input, the display can be switched to an array mode, in which all of the content items are visible in a scrollable array. Selecting a content item in array mode can result in the display returning to the full-frame mode, with the selected content item becoming visible and active. Smoothly animated transitions between the full-frame and array modes and a gesture-based interface for controlling the transitions can also be provided.
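The mode-switching logic above can be sketched as a minimal state holder; the class and method names are assumptions, and the animated transitions are omitted:

```python
class ContentSwitcher:
    def __init__(self, items, active=0):
        self.items = list(items)  # all open content items
        self.active = active      # index of the visible, active item
        self.mode = "full_frame"

    def enter_array_mode(self):
        # All open content items become visible in a scrollable array.
        self.mode = "array"
        return list(self.items)

    def select(self, index):
        # Selecting an item in array mode returns to full-frame mode,
        # with the selected item becoming visible and active.
        self.active = index
        self.mode = "full_frame"
        return self.items[index]
```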