Abstract:
An electronic device simultaneously displays on a touch-sensitive display a first user interface object and a second user interface object. The second user interface object has formatting attributes, one or more of which are distinct from corresponding formatting attributes in the first user interface object. The device detects a first contact on the first user interface object and a second contact on the second user interface object. While continuing to detect the first contact and the second contact, the device detects movement of the second contact across the touch-sensitive display, and moves the second user interface object in accordance with the movement of the second contact. The device changes one or more formatting attributes for the second user interface object to match corresponding formatting attributes for the first user interface object if the second user interface object contacts the first user interface object while moving.
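The behavior described above can be sketched in a few lines: an object is moved with the contact, and when its bounds touch another object's, its formatting attributes are replaced by the target's. This is an illustrative sketch, not the patent's implementation; the class and function names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class UIObject:
    """Hypothetical user interface object: a bounding box plus formatting attributes."""
    x: float
    y: float
    width: float
    height: float
    formatting: dict = field(default_factory=dict)

    def intersects(self, other: "UIObject") -> bool:
        # Axis-aligned bounding-box overlap test.
        return (self.x < other.x + other.width and other.x < self.x + self.width and
                self.y < other.y + other.height and other.y < self.y + self.height)

def move_and_match(moving: UIObject, target: UIObject, dx: float, dy: float) -> None:
    """Move `moving` by the second contact's displacement; if it then contacts
    `target`, copy the target's formatting attributes onto it."""
    moving.x += dx
    moving.y += dy
    if moving.intersects(target):
        moving.formatting.update(target.formatting)
```

For example, dragging a red `second` object 150 points left until it overlaps a blue `first` object would replace `second`'s color and font with `first`'s.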
Abstract:
A method for detecting content on a page of digital media that has a reading direction. Image data is read beginning from a starting point in the reading direction of the digital media, and the content is identified by analyzing differences in the image data as it is read. The method can include mapping a boundary of the content based on variations between content image data and surrounding background image data and generating a content map for the page using the boundary of the content, where the content map allows the page to be navigated between multiple pieces of content.
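One way to realize this kind of detection is to scan pixels in reading order, flag those that differ from the background, and group connected content pixels into bounding boxes that form the content map. The following is a minimal sketch under those assumptions; the representation (a 2D list of pixel values, a uniform background value) is illustrative.

```python
def map_content_regions(image, background=0):
    """Scan `image` (a 2D list of pixel values) in reading order, flag pixels
    that differ from the background, and group connected content pixels into
    bounding boxes -- one box per piece of content, in reading order."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):           # reading direction: top-to-bottom,
        for c in range(w):       # left-to-right
            if image[r][c] != background and not seen[r][c]:
                # Flood-fill the connected content region to map its boundary.
                stack = [(r, c)]
                seen[r][c] = True
                min_r, max_r, min_c, max_c = r, r, c, c
                while stack:
                    y, x = stack.pop()
                    min_r, max_r = min(min_r, y), max(max_r, y)
                    min_c, max_c = min(min_c, x), max(max_c, x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           image[ny][nx] != background and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((min_r, min_c, max_r, max_c))
    return boxes  # the content map: one bounding box per piece of content
```

Because regions are discovered in scan order, the resulting list already follows the page's reading direction, which is what lets a reader step between pieces of content.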
Abstract:
A method for navigating digital media on an electronic device that can include receiving a digital book comprising a page with an associated content map, where the content map provides the size, shape, and location of content panels on the page, and displaying the page, including the content panels, on a display of the electronic device. The method can include receiving input selecting a content panel and displaying the selected panel in a prominent state on the display.
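A simple navigator over such a content map might step through panels in reading order and compute how to fit the selected panel to the display for its prominent state. This sketch assumes panels are dictionaries with position and size; the class name and fitting rule are illustrative, not the patent's own.

```python
class PanelNavigator:
    """Illustrative navigator over a page's content map. Each panel entry
    gives its location and size; panels are assumed to be listed in
    reading order."""

    def __init__(self, content_map):
        self.panels = content_map
        self.index = -1          # no panel is prominent yet

    def next_panel(self):
        """Advance to the next panel and return it for prominent display."""
        self.index = min(self.index + 1, len(self.panels) - 1)
        return self.panels[self.index]

    def prominent_frame(self, display_w, display_h):
        """Compute a scale that fits the selected panel to the display,
        so it can be rendered in a 'prominent state'."""
        panel = self.panels[self.index]
        scale = min(display_w / panel["w"], display_h / panel["h"])
        return {"panel": panel, "scale": scale}
```

For instance, a 100x50 panel shown on a 200x200 display would be scaled by 2 so that it fills the display width.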
Abstract:
Various features and processes related to document collaboration are disclosed. In some implementations, animations are presented when updating a local document display to reflect changes made to the document at a remote device. In some implementations, a user can selectively highlight changes made by collaborators in a document. In some implementations, a user can select an identifier associated with another user to display a portion of a document that includes the other user's cursor location. In some implementations, text in document chat sessions can be automatically converted into hyperlinks which, when selected, cause a document editor to perform an operation.
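The last implementation above, converting chat text into operation-triggering hyperlinks, can be sketched with pattern substitution. The keyword patterns, command names, and `editor://` URL scheme below are assumptions made for illustration; the patent does not specify them.

```python
import re

# Hypothetical mapping from chat phrases to document-editor commands.
COMMANDS = {
    r"page (\d+)": "goto-page",
    r"section ([\w.]+)": "goto-section",
}

def linkify(message: str) -> str:
    """Convert recognized phrases in a chat message into hyperlinks that,
    when selected, instruct the document editor to perform an operation."""
    for pattern, command in COMMANDS.items():
        message = re.sub(
            pattern,
            lambda m, c=command: f'<a href="editor://{c}/{m.group(1)}">{m.group(0)}</a>',
            message,
        )
    return message
```

Selecting the generated link would then be routed to the editor, e.g. scrolling the document to the named page.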
Abstract:
Various techniques are disclosed for managing and modifying data items. In some embodiments, a first data item can be selected for password protection via establishing an active secured user session according to a set of user credentials. Thereafter, subsequent data items can be selected for password protection using the same set of user credentials while the secured user session remains active. In some embodiments, a gesture input can be received by a touch interface. The input can be detected, and when recognized as a command for creating an extension of a work space associated with a data item, the extension of the work space is generated. In some embodiments, the gesture input received by the touch interface is recognized as a command for creating a new work space associated with the data item, such that a new work space is generated upon recognizing the input.
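The secured-session idea can be sketched as a session object that is established once with a set of credentials and then protects any number of items until it expires. The class name, timeout, and protection marker below are illustrative assumptions, not the patent's design.

```python
import hashlib
import time

class SecuredSession:
    """Sketch of an active secured user session: once established with a set
    of credentials, subsequent data items can be password-protected without
    re-entering them, until the session expires."""

    def __init__(self, username: str, password: str, timeout: float = 300.0):
        # Store only a digest of the credentials, never the plaintext.
        self._digest = hashlib.sha256(f"{username}:{password}".encode()).hexdigest()
        self._timeout = timeout
        self._started = time.monotonic()

    def is_active(self) -> bool:
        return time.monotonic() - self._started < self._timeout

    def protect(self, item: dict) -> dict:
        """Mark `item` as protected under this session's credentials."""
        if not self.is_active():
            raise PermissionError("secured session has expired")
        item["protected_by"] = self._digest
        return item
```

The point of the design is that both the first and every subsequent item are protected under the same credentials with a single credential entry.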
Abstract:
An electronic device with a display, a touch-sensitive surface, one or more processors, and memory detects a first portion of a gesture, and determines that the first portion has a first gesture characteristic. The device selects a dynamic disambiguation threshold in accordance with the first gesture characteristic. The dynamic disambiguation threshold is used to determine whether to perform a first type of operation or a second type of operation when a first kind of gesture is detected. The device determines that the gesture is of the first kind. After selecting the dynamic disambiguation threshold, the device determines whether the gesture meets the dynamic disambiguation threshold. When the gesture meets the dynamic disambiguation threshold, the device performs the first type of operation, and when it does not, the device performs the second type of operation.
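The decision flow above reduces to two steps: pick a threshold from a characteristic of the gesture's first portion, then compare the completed gesture against that threshold. The mapping below (a fast start lowers the distance a swipe must travel) and both operation names are illustrative assumptions.

```python
def select_threshold(first_portion_speed: float) -> float:
    """Pick a dynamic disambiguation threshold from a characteristic of the
    gesture's first portion -- here, its speed in points per millisecond.
    The specific values are illustrative."""
    return 40.0 if first_portion_speed > 1.0 else 80.0

def disambiguate(first_portion_speed: float, total_distance: float) -> str:
    """Decide between two operations for the same kind of gesture, using a
    threshold chosen before the gesture completed."""
    threshold = select_threshold(first_portion_speed)
    return "first-operation" if total_distance >= threshold else "second-operation"
```

The same 50-point swipe thus triggers the first operation when it starts quickly but the second operation when it starts slowly, which is the point of making the threshold dynamic.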