Abstract:
The present disclosure generally relates to techniques and interfaces for generating synthesized speech outputs. For example, a user interface for a text-to-speech service can include ranked and/or categorized phrases, which can be selected and entered as text. A synthesized speech output is then generated to deliver any entered text, for example, using a personalized voice model.
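As a rough illustration of how such a phrase picker could be organized, the sketch below uses hypothetical types and a usage-count ranking heuristic; none of the names or the ranking rule come from the disclosure.

```swift
// Hypothetical data model for a ranked/categorized phrase picker; the type
// names and the usage-count ranking heuristic are illustrative assumptions.
struct Phrase {
    let text: String
    let category: String
    var useCount: Int
}

/// Returns the phrases in a category, most frequently used first.
func rankedPhrases(in category: String, from phrases: [Phrase]) -> [Phrase] {
    phrases
        .filter { $0.category == category }
        .sorted { $0.useCount > $1.useCount }
}

/// Placeholder for handing the selected text to a personalized voice model.
func speak(_ text: String, usingPersonalVoice voiceID: String) {
    print("Synthesizing \"\(text)\" with voice model \(voiceID)")
}

let phrases = [
    Phrase(text: "I'm on my way", category: "Travel", useCount: 12),
    Phrase(text: "Running late", category: "Travel", useCount: 20),
    Phrase(text: "Call me back", category: "Phone", useCount: 7),
]
if let top = rankedPhrases(in: "Travel", from: phrases).first {
    speak(top.text, usingPersonalVoice: "user-voice-01")
}
```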
Abstract:
The present disclosure generally relates to enlarging user interface elements. An example method includes displaying a first version of first content; while displaying the first version of first content, displaying a focus indicator at a first location that does not correspond to the first version of first content; receiving first input; in response to receiving the first input, moving the focus indicator to a second location; in accordance with the second location corresponding to the first version and a set of second version display criteria being met, concurrently displaying at least a portion of the first version of first content and a second version of first content, wherein the second version differs from the first version in a visual characteristic other than size; and in accordance with the second location not corresponding to the first version of first content, forgoing display of the second version of first content.
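One way to read the branching in this abstract is as a simple focus-move handler. The sketch below is a minimal model with assumed types; in particular, the stand-in "second version display criteria" (an alternate rendering exists) is an assumption, since the abstract does not specify the criteria.

```swift
// Minimal sketch of the branching described above. The geometry type and the
// stand-in "second version display criteria" are assumptions for illustration.
struct Region {
    let x, y, width, height: Double
    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= x && px <= x + width && py >= y && py <= y + height
    }
}

struct FirstContent {
    let frame: Region                 // on-screen bounds of the first version
    let hasAlternateVersion: Bool     // e.g., a higher-contrast rendering exists
}

enum FocusOutcome {
    case showBothVersions             // display first and second versions concurrently
    case firstVersionOnly             // forgo display of the second version
}

func focusMoved(toX x: Double, y: Double, over content: FirstContent) -> FocusOutcome {
    let focusOnContent = content.frame.contains(x, y)
    return (focusOnContent && content.hasAlternateVersion) ? .showBothVersions : .firstVersionOnly
}
```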
Abstract:
An electronic device includes a touch-sensitive display, a rotatable input mechanism, one or more processors, and memory. The electronic device displays content on the display, where the content includes a first edge and a second edge opposite the first edge. The electronic device further detects a first user input, and in response to detecting the first user input, displays an enlarged view of the content that does not include the first edge. The electronic device further detects a rotation of the rotatable input mechanism in a first rotation direction, and in response to detecting the rotation, translates the enlarged view of the content in a first translation direction on the display to display the first edge of the content.
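A minimal sketch of mapping a rotation of the input mechanism to panning the enlarged view until the first edge is reached; the points-per-degree factor and the clamping offsets are illustrative assumptions, not values from the disclosure.

```swift
// Sketch of translating an enlarged view in response to rotation, clamped so
// that rotating in one direction eventually brings the content's first edge
// on screen. The points-per-degree factor and the offsets are assumptions.
struct EnlargedView {
    var offsetX: Double          // current horizontal pan of the enlarged content
    let firstEdgeOffset: Double  // offset at which the first edge is displayed
    let secondEdgeOffset: Double // offset at which the second edge is displayed
}

func applyRotation(degrees: Double, to view: inout EnlargedView,
                   pointsPerDegree: Double = 2.0) {
    let proposed = view.offsetX + degrees * pointsPerDegree
    view.offsetX = min(max(proposed, view.firstEdgeOffset), view.secondEdgeOffset)
}

var view = EnlargedView(offsetX: 0, firstEdgeOffset: -120, secondEdgeOffset: 120)
applyRotation(degrees: -80, to: &view)   // rotate toward the first edge
print(view.offsetX)                      // -120.0: first edge now displayed
```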
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays a character input area and a keyboard, the keyboard including a plurality of key icons. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture of the one or more gestures that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The respective path traverses one or more locations on the touch-sensitive surface that correspond to one or more key icons of the plurality of key icons without activating the one or more key icons. In response to detecting the respective gesture, the device enters the corresponding respective character in the character input area of the display.
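A rough sketch of this handwriting-style character entry: the drawn path is classified as a character and appended to the text buffer, and the key icons under the path are never hit-tested or activated. The recognizer here is a stub, and all names are hypothetical.

```swift
// Sketch of handwriting-style character entry in a screen reader mode.
// The character recognizer is a stub; the real classifier is unspecified.
struct Point { let x, y: Double }
struct Stroke { let path: [Point] }

protocol CharacterRecognizer {
    func character(for stroke: Stroke) -> Character?
}

struct HandwritingInputArea {
    var text = ""
    let recognizer: CharacterRecognizer

    mutating func handle(_ stroke: Stroke) {
        // Classify the whole path rather than activating key icons along it.
        if let character = recognizer.character(for: stroke) {
            text.append(character)
        }
    }
}
```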
Abstract:
An electronic device outputs a first caption of a plurality of captions while a first segment of a video is being played, where the first video segment corresponds to the first caption. While outputting the first caption, the device receives a first user input. In response to receiving the first user input, the device determines a second caption in the plurality of captions, distinct from the first caption, that meets predefined caption selection criteria; determines a second segment of the video that corresponds to the second caption; sends instructions to change from playing the first segment of the video to playing the second segment of the video; and outputs the second caption.
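The caption-to-segment jump could be sketched as follows. Treating the "predefined caption selection criteria" as "the next caption in time" is an assumption, and the seek and output closures stand in for a real video player and screen reader.

```swift
import Foundation

// Sketch of caption-driven seeking. Modeling the selection criteria as "the
// next caption in time" is an assumption made for illustration.
struct Caption {
    let text: String
    let start: TimeInterval      // start of the corresponding video segment
    let end: TimeInterval
}

func nextCaption(after current: Caption, in captions: [Caption]) -> Caption? {
    captions
        .filter { $0.start >= current.end }
        .min { $0.start < $1.start }
}

func jumpToNextCaption(from current: Caption, captions: [Caption],
                       seek: (TimeInterval) -> Void, output: (String) -> Void) {
    guard let target = nextCaption(after: current, in: captions) else { return }
    seek(target.start)           // change from the first segment to the second
    output(target.text)          // output the second caption
}
```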
Abstract:
An electronic device includes a display, a rotatable input mechanism, one or more processors, and memory. The electronic device displays content on the display and detects a first user input. In response to detecting the first user input, the electronic device displays an enlarged view of the content, which includes displaying an enlarged first portion of the content without displaying a second portion of the content. While displaying the enlarged view of the first portion of the content, in response to detecting a rotation of the rotatable input mechanism, the electronic device performs different tasks based on the operational state of the electronic device.
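The state-dependent handling of the rotation could be sketched like this; the two operational states and the actions attached to them are illustrative assumptions, not the set defined in the disclosure.

```swift
// Sketch of dispatching a rotation of the input mechanism differently
// depending on the device's operational state.
enum OperationalState {
    case panningEnlargedContent
    case scrollingContent
}

func handleRotation(_ delta: Double, in state: OperationalState,
                    pan: (Double) -> Void, scroll: (Double) -> Void) {
    switch state {
    case .panningEnlargedContent:
        pan(delta)       // move to a different enlarged portion of the content
    case .scrollingContent:
        scroll(delta)    // scroll the (non-enlarged) content
    }
}

handleRotation(15, in: .panningEnlargedContent,
               pan: { print("pan by \($0)") },
               scroll: { print("scroll by \($0)") })
```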
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
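A small sketch of matching handwritten characters against launcher icons; the prefix-match rule, the sample icon names, and the `open` closure are assumptions made for illustration.

```swift
// Sketch of matching a sequence of handwritten characters against the icons
// on an application launcher screen.
struct AppIcon {
    let name: String
    let open: () -> Void         // the predefined operation for this icon
}

func icon(matching typed: String, in icons: [AppIcon]) -> AppIcon? {
    icons.first { $0.name.lowercased().hasPrefix(typed.lowercased()) }
}

let icons = [
    AppIcon(name: "Mail",  open: { print("Opening Mail") }),
    AppIcon(name: "Maps",  open: { print("Opening Maps") }),
    AppIcon(name: "Music", open: { print("Opening Music") }),
]
if let match = icon(matching: "Mu", in: icons) {
    match.open()                 // perform the icon's predefined operation
}
```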
Abstract:
The present disclosure generally relates to managing one or more displays. In particular, methods for moving an indicator between displays, moving an indicator based on body movement, and managing display operations are discussed herein.
Abstract:
An accessibility method is performed by an electronic device with a display and a touch-sensitive surface. The method includes: displaying a first section of a document on the display, where the document has a plurality of sections that includes the first section of the document; outputting an audible document section indicia that corresponds to the first section of the document; and detecting a first finger gesture on the touch-sensitive surface. The method also includes, in response to detecting the first finger gesture, ceasing to display the first section of the document; displaying a second section of the document on the display, where the second section of the document is adjacent to the first section of the document; and outputting an audible document section indicia that corresponds to the second section of the document.
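A sketch of flick-based section navigation with a spoken section indicia. The "Section x of y" wording and the flat section array are assumptions; the abstract only requires some audible indicia per section.

```swift
// Sketch of gesture-driven document section navigation with audible indicia.
struct DocumentReader {
    let sections: [String]
    var index = 0

    var indicia: String { "Section \(index + 1) of \(sections.count)" }

    // Called when the first finger gesture (e.g., a flick) is detected.
    mutating func showAdjacentSection(forward: Bool, speak: (String) -> Void) {
        let next = index + (forward ? 1 : -1)
        guard sections.indices.contains(next) else { return }
        index = next           // cease displaying the old section, display the adjacent one
        speak(indicia)         // audible document section indicia
    }
}

var reader = DocumentReader(sections: ["Introduction", "Chapter 1", "Chapter 2"])
reader.showAdjacentSection(forward: true) { print($0) }   // "Section 2 of 3"
```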
Abstract:
Systems and processes for scanning a user interface are disclosed. One process can include scanning multiple elements within a user interface by highlighting the elements. The process can further include receiving a selection while one of the elements is highlighted and performing an action on the element that was highlighted when the selection was received. The action can include scanning the contents of the selected element or performing an action associated with the selected element. The process can be used to navigate an array of application icons, a menu of options, a standard desktop or laptop operating system interface, or the like. The process can also be used to perform gestures on a touch-sensitive device, as well as mouse and trackpad gestures (e.g., flick, tap, or freehand gestures).
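A minimal sketch of this item-scanning loop: elements are highlighted in turn, and a single selection input acts on whichever element is highlighted at that moment; selecting a group switches to scanning its contents, while selecting a leaf performs its action. The type names and the flat group/item model are assumptions.

```swift
// Sketch of item scanning over a hierarchy of UI elements.
indirect enum ScanElement {
    case group(name: String, children: [ScanElement])
    case item(name: String, action: () -> Void)
}

struct ElementScanner {
    var elements: [ScanElement]
    var highlighted = 0

    // Advanced on a timer or by a switch press while scanning.
    mutating func advanceHighlight() {
        guard !elements.isEmpty else { return }
        highlighted = (highlighted + 1) % elements.count
    }

    // Called when the selection input is received.
    mutating func select() {
        guard elements.indices.contains(highlighted) else { return }
        switch elements[highlighted] {
        case .group(_, let children):
            elements = children          // scan the contents of the selected element
            highlighted = 0
        case .item(_, let action):
            action()                     // perform the action associated with the element
        }
    }
}
```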