Abstract:
An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator that corresponds to a virtual touch. The device receives a first input from an adaptive input device. In response to receiving the first input from the adaptive input device, the device displays a first menu on the display. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, the device displays a menu of virtual multitouch contacts.
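The two-step menu progression described in this abstract can be sketched as a small state machine. This is an illustrative sketch only; the type and case names (DisplayedUI, UserEvent, and so on) are assumptions, not terms from the disclosure.

```swift
// Hypothetical sketch of the two-step menu flow: an input from an
// adaptive input device opens a first menu, and selecting the
// "virtual touches" icon in that menu opens a menu of virtual
// multitouch contacts. Names are illustrative only.
enum DisplayedUI {
    case visualIndicator                // first visual indicator for a virtual touch
    case firstMenu                      // menu shown after adaptive-device input
    case virtualMultitouchContactsMenu  // menu shown after selecting the icon
}

enum UserEvent {
    case adaptiveDeviceInput            // first input from the adaptive input device
    case selectVirtualTouchesIcon       // selection of the virtual touches selection icon
}

func handle(_ event: UserEvent, current state: DisplayedUI) -> DisplayedUI {
    switch (state, event) {
    case (.visualIndicator, .adaptiveDeviceInput):
        return .firstMenu
    case (.firstMenu, .selectVirtualTouchesIcon):
        return .virtualMultitouchContactsMenu
    default:
        return state // other combinations leave the display unchanged
    }
}

// Example: walk through the sequence described in the abstract.
var ui: DisplayedUI = .visualIndicator
ui = handle(.adaptiveDeviceInput, current: ui)       // -> .firstMenu
ui = handle(.selectVirtualTouchesIcon, current: ui)  // -> .virtualMultitouchContactsMenu
print(ui)
```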
Abstract:
An electronic device obtains one or more images of a scene, and displays a preview of the scene. If the electronic device meets levelness criteria, the electronic device provides a first audible and/or tactile output indicating that the camera is obtaining level images of the scene. In some embodiments, the electronic device detects, using one or more sensors, an orientation of a first axis of the electronic device relative to a respective vector, and the levelness criteria include a criterion that is met when the first axis of the electronic device moves within a predefined range of the respective vector. In some embodiments, if the orientation of the first axis of the electronic device moves outside of the predefined range of the respective vector, a second audible and/or tactile output, indicating that the camera is not obtaining level images of the scene, is provided.
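A minimal sketch of the levelness check follows, assuming the "predefined range" is an angular tolerance between the first axis and the reference vector. The tolerance value and all names here are assumptions for illustration, not details from the abstract.

```swift
import Foundation

// Illustrative-only sketch: compare the orientation of a first axis
// against a respective (reference) vector and choose one of two outputs.
struct Vector3 {
    var x, y, z: Double
    var length: Double { (x * x + y * y + z * z).squareRoot() }

    // Angle between two vectors, in degrees.
    func angle(to other: Vector3) -> Double {
        let dot = x * other.x + y * other.y + z * other.z
        let cosine = max(-1, min(1, dot / (length * other.length)))
        return acos(cosine) * 180 / .pi
    }
}

enum LevelnessFeedback {
    case level      // first audible and/or tactile output
    case notLevel   // second audible and/or tactile output
}

let levelnessToleranceDegrees = 2.0  // hypothetical "predefined range"

func feedback(firstAxis: Vector3, referenceVector: Vector3) -> LevelnessFeedback {
    firstAxis.angle(to: referenceVector) <= levelnessToleranceDegrees ? .level : .notLevel
}

// Example: an axis tilted 1 degree from the reference vector still counts as level.
let reference = Vector3(x: 0, y: 0, z: 1)
let tilted = Vector3(x: sin(1 * Double.pi / 180), y: 0, z: cos(1 * Double.pi / 180))
print(feedback(firstAxis: tilted, referenceVector: reference))  // level
```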
Abstract:
The present disclosure generally relates to providing time feedback on an electronic device, and in particular to providing non-visual time feedback on the electronic device. Techniques for providing non-visual time feedback include detecting an input and, in response to detecting the input, initiating output of a first type of non-visual indication of a current time or a second type of non-visual indication of the current time based on the set of non-visual time output criteria met by the input. Techniques for providing non-visual time feedback also include, in response to detecting that a current time has reached a first predetermined time of a set of one or more predetermined times, outputting a first non-visual alert or a second non-visual alert based on a type of watch face that the electronic device is configured to display.
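The two decision points in this abstract can be shown as simple branches. The concrete criteria, watch-face types, and output names below are placeholders; the abstract does not specify them.

```swift
// Hedged sketch of the two branches described above.
enum NonVisualTimeIndication { case firstType, secondType }   // e.g. spoken time vs. haptic taps (assumed)
enum NonVisualAlert { case firstAlert, secondAlert }
enum WatchFaceType { case analogStyle, otherStyle }           // hypothetical distinction

struct TimeInput {
    var meetsFirstCriteria: Bool   // first set of non-visual time output criteria
    var meetsSecondCriteria: Bool  // second set of non-visual time output criteria
}

func indication(for input: TimeInput) -> NonVisualTimeIndication? {
    if input.meetsFirstCriteria { return .firstType }
    if input.meetsSecondCriteria { return .secondType }
    return nil  // the input meets neither set of criteria; no output is initiated
}

func alert(atPredeterminedTime face: WatchFaceType) -> NonVisualAlert {
    // The alert that fires at a predetermined time depends on the type of
    // watch face the device is configured to display.
    face == .analogStyle ? .firstAlert : .secondAlert
}

if let out = indication(for: TimeInput(meetsFirstCriteria: true, meetsSecondCriteria: false)) {
    print(out)                             // firstType
}
print(alert(atPredeterminedTime: .analogStyle))  // firstAlert
```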
Abstract:
An electronic device displays a first user interface including user interface objects. While displaying the first user interface, the device detects a first input on the touch-sensitive surface. In response, in accordance with a determination that the first input is detected at a location on the touch-sensitive surface that corresponds to a first user interface object of the first user interface and that the first input satisfies first input intensity criteria, the device performs a first operation, including displaying a zoomed-in view of at least a first portion of the first user interface; and, in accordance with a determination that the first input is detected at a location on the touch-sensitive surface that corresponds to the first user interface object and that the first input does not satisfy the first input intensity criteria, the device performs a second operation that is distinct from the first operation.
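A sketch of the branch, assuming the intensity criteria reduce to a threshold on the contact's characteristic intensity. The threshold value and type names are assumptions, not the patented implementation.

```swift
// Illustrative only: same touch location, two behaviors depending on intensity.
struct TouchInput {
    var location: (x: Double, y: Double)
    var intensity: Double   // normalized characteristic intensity of the contact
}

struct UIObjectFrame {
    var minX, minY, maxX, maxY: Double
    func contains(_ p: (x: Double, y: Double)) -> Bool {
        p.x >= minX && p.x <= maxX && p.y >= minY && p.y <= maxY
    }
}

enum Operation { case zoomInOnFirstPortion, secondOperation, noOperation }

let intensityThreshold = 0.6  // hypothetical stand-in for the intensity criteria

func operation(for input: TouchInput, firstObject: UIObjectFrame) -> Operation {
    guard firstObject.contains(input.location) else { return .noOperation }
    return input.intensity >= intensityThreshold ? .zoomInOnFirstPortion : .secondOperation
}

let object = UIObjectFrame(minX: 0, minY: 0, maxX: 100, maxY: 44)
print(operation(for: TouchInput(location: (x: 50, y: 20), intensity: 0.8), firstObject: object)) // zoomInOnFirstPortion
print(operation(for: TouchInput(location: (x: 50, y: 20), intensity: 0.2), firstObject: object)) // secondOperation
```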
Abstract:
An electronic device displays a user interface that includes a plurality of affordances including a first affordance. The first affordance is selectable to perform a respective operation. The first affordance is displayed at a first size. The device receives a user input at a location corresponding to the first affordance. In response to receiving the user input and in accordance with a determination that a text display setting has a first value, the device displays an overlay that includes an enlarged representation of the first affordance. The enlarged representation of the first affordance has a second size that is bigger than the first size. In response to receiving the user input and in accordance with a determination that the text display setting has a second value that is different from the first value, the device forgoes display of the enlarged representation of the first affordance.
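A minimal sketch of this branch, assuming the text display setting is a boolean toggle and the enlarged representation is produced by scaling the affordance's text size. The setting name and scale factor are hypothetical.

```swift
// Illustrative only: show an enlarged overlay of the touched affordance
// when the (assumed) text display setting has its first value, otherwise
// forgo the enlarged representation.
struct Affordance {
    var title: String
    var pointSize: Double
}

struct Overlay {
    var enlarged: Affordance
}

func overlayForTouch(on affordance: Affordance,
                     largeTextOverlayEnabled: Bool,
                     scale: Double = 3.0) -> Overlay? {
    guard largeTextOverlayEnabled else {
        return nil  // second value of the setting: no enlarged representation
    }
    // First value of the setting: enlarged representation at a larger size.
    return Overlay(enlarged: Affordance(title: affordance.title,
                                        pointSize: affordance.pointSize * scale))
}

let icon = Affordance(title: "Messages", pointSize: 13)
print(overlayForTouch(on: icon, largeTextOverlayEnabled: true)?.enlarged.pointSize ?? 0)  // 39.0
print(overlayForTouch(on: icon, largeTextOverlayEnabled: false) == nil)                   // true
```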
Abstract:
The present disclosure describes technology, which can be implemented as a method, apparatus, and/or computer software embodied in a computer-readable medium, and which, among other things, can be used to create custom feedback patterns in response to user input, for example, in response to the user inputting a desired pattern of tactile events detected on a mobile device. For example, one or more aspects of the subject matter described in this disclosure can be embodied in one or more methods that include receiving tactile input from a user of an electronic device specifying a custom feedback pattern, in concert with receiving the tactile input, providing feedback to the user corresponding to the received tactile input, and storing the specified custom feedback pattern for use by the electronic device to actuate feedback signaling a predetermined notification event.
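One way to picture the recording step is as a recorder that timestamps each tap, echoes feedback as taps arrive, and stores the resulting intervals. The abstract does not describe a concrete API; everything below is an illustrative assumption.

```swift
import Foundation

// Rough sketch: capture tap times, echo feedback in concert with the
// input, and store the inter-tap intervals as the custom pattern.
struct FeedbackPattern {
    var intervals: [TimeInterval]  // gaps between successive taps
}

final class PatternRecorder {
    private var tapTimes: [Date] = []

    func registerTap(at time: Date = Date()) {
        tapTimes.append(time)
        // Stand-in for providing feedback corresponding to the received input;
        // a device would trigger a haptic or audio pulse here.
        print("echo feedback for tap #\(tapTimes.count)")
    }

    func finishRecording() -> FeedbackPattern {
        let intervals = zip(tapTimes.dropFirst(), tapTimes)
            .map { pair in pair.0.timeIntervalSince(pair.1) }
        return FeedbackPattern(intervals: intervals)
    }
}

// Example: three taps roughly 0.2 s apart yield a two-interval pattern
// that could later be associated with a predetermined notification event.
let recorder = PatternRecorder()
let start = Date()
recorder.registerTap(at: start)
recorder.registerTap(at: start.addingTimeInterval(0.2))
recorder.registerTap(at: start.addingTimeInterval(0.4))
print(recorder.finishRecording().intervals)  // [0.2, 0.2]
```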
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
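The matching step can be sketched as comparing the recognized character sequence against the application icons' names. Prefix matching is an assumption; the abstract only says the sequence "corresponds to" an icon.

```swift
// Illustrative only: resolve a sequence of recognized characters (one per
// single-finger path gesture) to an application icon and run the
// predefined operation associated with it.
struct AppIcon {
    var name: String
    var launch: () -> Void
}

func iconMatching(characters: [Character], among icons: [AppIcon]) -> AppIcon? {
    let typed = String(characters).lowercased()
    let candidates = icons.filter { $0.name.lowercased().hasPrefix(typed) }
    // Only treat the sequence as corresponding to an icon when it is unambiguous.
    return candidates.count == 1 ? candidates.first : nil
}

let icons = [
    AppIcon(name: "Mail", launch: { print("launching Mail") }),
    AppIcon(name: "Maps", launch: { print("launching Maps") }),
    AppIcon(name: "Calendar", launch: { print("launching Calendar") }),
]

// "m" alone is ambiguous (Mail, Maps); "m", "a", "i" resolves to Mail.
if let match = iconMatching(characters: ["m", "a", "i"], among: icons) {
    match.launch()  // predefined operation associated with the matched icon
}
```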
Abstract:
Techniques for controlling a touch input device using an accessory communicatively coupled to the device are disclosed. In one aspect, an accessibility framework is launched on the device. An accessory coupled to the device is detected. Receipt of input from the accessory is enabled. An accessibility packet is received from the accessory. The accessibility packet includes an accessibility command and one or more parameters. The accessibility packet is processed to extract the accessibility command and the one or more parameters. Input is generated for the accessibility framework based on the accessibility command and the one or more parameters. In some implementations, the device also sends accessibility commands to the accessory, either in response to accessibility commands received from the accessory or independent of any received accessibility commands.
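A hedged sketch of the parsing step follows. The byte layout (one command byte, one parameter-count byte, then parameter bytes) is invented for illustration; the abstract does not define a wire format or command set.

```swift
import Foundation

// Illustrative only: parse an "accessibility packet" into a command plus
// parameters and turn it into input for an accessibility framework.
enum AccessibilityCommand: UInt8 {
    case moveFocusNext = 0x01
    case moveFocusPrevious = 0x02
    case activateFocusedItem = 0x03
}

struct AccessibilityPacket {
    var command: AccessibilityCommand
    var parameters: [UInt8]
}

func parsePacket(_ bytes: [UInt8]) -> AccessibilityPacket? {
    guard bytes.count >= 2,
          let command = AccessibilityCommand(rawValue: bytes[0]) else { return nil }
    let parameterCount = Int(bytes[1])
    guard bytes.count >= 2 + parameterCount else { return nil }
    return AccessibilityPacket(command: command,
                               parameters: Array(bytes[2 ..< 2 + parameterCount]))
}

func generateFrameworkInput(for packet: AccessibilityPacket) -> String {
    // Stand-in for handing the extracted command and parameters to the
    // accessibility framework running on the device.
    "command=\(packet.command) parameters=\(packet.parameters)"
}

// Example: a "move focus next" packet carrying one parameter (step size 2).
if let packet = parsePacket([0x01, 0x01, 0x02]) {
    print(generateFrameworkInput(for: packet))
}
```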
Abstract:
Some embodiments described in this disclosure are directed to a first electronic device that operates in a remote interaction mode with a second electronic device, where user interactions with images displayed on the first electronic device cause the second electronic device to update display of the images and/or corresponding user interfaces on the second electronic device.
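A very small sketch of the remote interaction mode, in which an interaction with an image on the first device is forwarded to the second device, which updates its display. The message shape and method names are placeholders; no transport or real API is implied.

```swift
// Illustrative only: first device forwards image interactions to a second
// device, which updates the images / user interfaces it displays.
struct ImageInteraction {
    var imageIdentifier: String
    var action: String        // e.g. "select", "scroll", "zoom" (assumed actions)
}

final class SecondDevice {
    private(set) var displayedImage = "none"

    func apply(_ interaction: ImageInteraction) {
        displayedImage = interaction.imageIdentifier
        print("second device now displays \(displayedImage) after \(interaction.action)")
    }
}

final class FirstDevice {
    var remotePeer: SecondDevice?

    func userInteracted(with imageIdentifier: String, action: String) {
        // In the remote interaction mode, the interaction drives the peer's display.
        remotePeer?.apply(ImageInteraction(imageIdentifier: imageIdentifier, action: action))
    }
}

let tv = SecondDevice()
let phone = FirstDevice()
phone.remotePeer = tv
phone.userInteracted(with: "photo-42", action: "select")
```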