Abstract:
Embodiments of the present disclosure relate to automatic generation of dynamically changing layouts for a graphical user interface. Specifically, embodiments of the present disclosure employ analysis of an image associated with a view (e.g., either the current view or a future view) of the graphical user interface to determine colors that are complementary to the image. The colors are applied to the view, such that the color scheme of the view matches the image.
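As a minimal sketch of the color analysis described above, assuming the image is available as a flat list of RGB triples; the averaging and hue-rotation strategy here are illustrative choices, not the disclosed implementation:

```python
import colorsys

def complementary_color(pixels):
    """Return a color complementary to the image's average color.

    `pixels` is an iterable of (r, g, b) tuples in 0-255. The sketch
    averages the pixels, then rotates the hue 180 degrees (an assumed
    definition of "complementary" for illustration).
    """
    n = len(pixels)
    avg = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    h, l, s = colorsys.rgb_to_hls(*(v / 255 for v in avg))
    comp_h = (h + 0.5) % 1.0  # opposite side of the color wheel
    r, g, b = colorsys.hls_to_rgb(comp_h, l, s)
    return tuple(round(v * 255) for v in (r, g, b))
```

For a pure red image this yields cyan, which the view's color scheme could then adopt.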
Abstract:
Information is presented to a user by accessing a library of electronic publications that includes a first publication, generating a representation of the first publication in an electronic bookshelf, determining a state for the first publication, and modifying the representation of the first publication to reflect the state of the first publication.
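A sketch of state-dependent decoration of a shelf representation; the state names, field names, and dictionary shape below are assumptions for illustration, not the disclosed design:

```python
def bookshelf_representation(publication):
    """Build a shelf representation that reflects a publication's state.

    `publication` is assumed to be a dict with "title", an optional
    "state" (e.g. "new", "downloading"), and an optional "progress".
    """
    rep = {"title": publication["title"]}
    state = publication.get("state", "unread")
    if state == "downloading":
        # reflect partial-download state with a progress badge
        rep["badge"] = f"{publication.get('progress', 0)}%"
    elif state == "new":
        rep["badge"] = "NEW"
    return rep
```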
Abstract:
An electronic device provides, to a display, data to present a user interface that includes a plurality of user interface objects, and a current focus on a first user interface object. While the display is presenting the user interface, the electronic device receives an input that corresponds to a movement of a contact across a touch-sensitive surface. The electronic device, in response to receiving the input and in accordance with a determination that a first axis is a dominant axis, moves the current focus along the first axis by a first amount and along a second axis by a second amount. The amount of movement of the current focus along the second axis is reduced to a first non-zero amount by a scaling factor that is based on one or more inputs received prior to receiving the input.
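The dominant-axis behavior can be sketched as follows; the tuple-based focus coordinates and the `scale` parameter (standing in for the factor derived from prior inputs) are assumptions for illustration:

```python
def move_focus(focus, dx, dy, scale):
    """Move the focus fully along the dominant axis, scaled along the other.

    `focus` is an (x, y) position, (dx, dy) is the contact movement, and
    `scale` (0 < scale <= 1) is assumed to come from prior inputs. The
    non-dominant component is reduced but stays non-zero, so the focus
    still drifts slightly off-axis.
    """
    x, y = focus
    if abs(dx) >= abs(dy):                 # horizontal axis dominant
        return (x + dx, y + dy * scale)
    return (x + dx * scale, y + dy)        # vertical axis dominant
```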
Abstract:
An electronic device provides, to a display, data to present a user interface that includes a first group of user interface objects and a second group of user interface objects. A current focus is on a first user interface object of the first group of user interface objects. The device receives an input that corresponds to a request to move the current focus to a user interface object in the second group of user interface objects; determines a projection of the first user interface object based on a direction of the input; identifies one or more user interface objects that overlap with the projection of the first user interface object in the direction on the display that corresponds to the direction of the input; and moves the current focus to a second user interface object of the one or more identified user interface objects.
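One way to realize the projection test is to treat each object as an axis-aligned rectangle and keep candidates whose extent overlaps the band swept out by the first object in the input direction; the rightward-only helper below is an illustrative simplification, not the disclosed method:

```python
def _overlaps(a0, a1, b0, b1):
    """True if the 1-D intervals [a0, a1) and [b0, b1) intersect."""
    return a0 < b1 and b0 < a1

def next_focus_right(current, candidates):
    """Pick the nearest candidate overlapping the rightward projection.

    Rectangles are (x, y, w, h) tuples. The projection of `current` to
    the right is the horizontal band spanning its vertical extent.
    """
    cx, cy, cw, ch = current
    hits = [r for r in candidates
            if r[0] >= cx + cw                       # lies to the right
            and _overlaps(cy, cy + ch, r[1], r[1] + r[3])]
    return min(hits, key=lambda r: r[0], default=None)
```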
Abstract:
An electronic device provides, to a display, while in a screensaver mode, data to present a first media that includes a first visual motion effect. In response to receiving a user input on a remote user input device, a type of the user input on the remote user input device is determined. If the user input is of a first type, the device provides, to the display, data to present the first media, including the first visual motion effect, with corresponding descriptive text. If the user input is of a second type, the device exits the screensaver mode.
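A hypothetical dispatch for the two input types might look like this; the type names `rest` and `press` and the state dictionary are invented for illustration:

```python
def handle_screensaver_input(state, input_type):
    """Dispatch a remote input while the screensaver is active.

    A "first type" input (here `rest`, e.g. a finger resting on the
    surface) reveals descriptive text over the media; a "second type"
    input (here `press`) exits the screensaver.
    """
    if input_type == "rest":
        state["show_text"] = True      # keep motion effect, add text
    elif input_type == "press":
        state["screensaver"] = False   # exit screensaver mode
    return state
```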
Abstract:
An electronic device provides, to a display, information identifying a plurality of media sources for a first media program, including a first media source for the first media program; episode objects that correspond to episodes of the first media program available from the first media source, a first episode object of the episode objects being visually distinguished to indicate selection of the first episode object, the first episode object corresponding to a first episode of the first media program; and a first set of media management objects for the first episode, wherein the first set of media management objects includes one or more media presentation option objects that correspond to the first episode and the first media source. In response to a user input to activate a first media presentation option object, the device initiates provision, to the display, of data to play the first episode.
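The hierarchy of objects described above might be modeled roughly as follows; all class and field names are assumptions for illustration, not the disclosed data model:

```python
from dataclasses import dataclass, field

@dataclass
class PresentationOption:
    label: str                  # e.g. "Play" (hypothetical label)
    action: str                 # e.g. "play"

@dataclass
class Episode:
    title: str
    selected: bool = False      # visually distinguished when True

@dataclass
class ProgramPage:
    sources: list               # media sources for the program
    episodes: list              # episode objects from the first source
    options: list = field(default_factory=list)  # media management objects

    def activate(self, option):
        """Activating a play option yields data to play the selected episode."""
        if option.action == "play":
            return ("play", next(e for e in self.episodes if e.selected))
```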
Abstract:
An electronic device provides, to a display, data to present a user interface with a plurality of user interface objects, and a current focus is on a first user interface object. The device receives an input corresponding to movement of a contact across a touch-sensitive surface. The movement includes first and second components corresponding to first and second axes on the display. The device moves the current focus along the first and second axes by amounts based on the magnitudes of the first and second components. The amount of movement of the current focus along a non-dominant axis is reduced relative to the amount of movement of the current focus along a dominant axis by a scaling factor that is based on a rate of movement of the contact.
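A sketch of rate-dependent scaling, under the assumed (illustrative) relationship that faster contact movement shrinks the non-dominant component more, so quick swipes travel straighter:

```python
def rate_scaled_focus(focus, dx, dy, speed, max_speed=1000.0):
    """Move the focus with a scaling factor derived from contact speed.

    `speed` is the contact's rate of movement; the linear falloff from
    1.0 down to 0.1 (never zero) is an assumption for illustration.
    """
    scale = 1.0 - 0.9 * min(speed / max_speed, 1.0)
    x, y = focus
    if abs(dx) >= abs(dy):                 # horizontal axis dominant
        return (x + dx, y + dy * scale)
    return (x + dx * scale, y + dy)        # vertical axis dominant
```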
Abstract:
An electronic device with one or more processors and memory is in communication with a display. The device, while in a first playback navigation mode, provides, to the display, video information for display; and receives an input that corresponds to a request by a user to switch to a second playback navigation mode. The video information includes information that corresponds to one or more frames of a video, a scrubber bar that represents a timeline of the video, a first playhead that indicates a current play position in the scrubber bar, and playback position markers, distinct from the first playhead, that indicate predetermined playback positions in the video. The device, in response to receiving the input, transitions from the first playback navigation mode to the second playback navigation mode; and, while in the second playback navigation mode, ceases to provide information that corresponds to the playback position markers.
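The mode switch and the withholding of marker information can be sketched as follows; the mode names and the dictionary shape of the video information are assumptions for illustration:

```python
class Player:
    """Toy player illustrating two playback navigation modes."""

    def __init__(self, markers):
        self.markers = markers      # predetermined playback positions
        self.mode = "chapter"       # hypothetical first navigation mode

    def video_info(self):
        """Frames, scrubber bar, and playhead are always provided;
        position markers only in the first (chapter) mode."""
        info = {"playhead": 0.0, "scrubber": True}
        if self.mode == "chapter":
            info["markers"] = list(self.markers)
        return info                 # continuous mode omits markers

    def switch_mode(self, mode):
        """Transition between navigation modes on user request."""
        self.mode = mode
```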
Abstract:
An electronic device provides, to a display, data to present a user interface with a plurality of user interface objects that includes a first user interface object and a second user interface object. A current focus is on the first user interface object. The device receives an input that corresponds to a request to move the current focus; and, in response, provides, to the display, data to: move the first user interface object from a first position towards the second user interface object and/or tilt the first user interface object from a first orientation towards the second user interface object; and, after moving and/or tilting the first user interface object, move the current focus from the first user interface object to the second user interface object, and move the first user interface object back towards the first position and/or tilt the first user interface object back towards the first orientation.
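The described sequence of display updates can be summarized as an ordered event list; the event names below are invented for illustration:

```python
def focus_transition_events(first, second):
    """Return the ordered display updates for a focus transition:
    nudge/tilt the focused object toward its neighbor, then move the
    focus, then let the original object settle back."""
    return [
        ("move_toward", first, second),   # move/tilt toward the neighbor
        ("move_focus", first, second),    # current focus actually changes
        ("move_back", first),             # first object returns/untilts
    ]
```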
Abstract:
Automatic generation of custom palettes based on an image selected by a user is disclosed. In various embodiments, automatic palette generation may involve generating one or more palettes based on the color or shading content of the image provided by the user. A generated palette may include a variety of colors (or shadings) that can be automatically mapped to and applied to various distinct features within a composite graphic construct to be customized.
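A frequency-based sketch of palette extraction, assuming pixels arrive as RGB triples; coarse quantization followed by counting is an illustrative choice, and real systems may cluster colors instead:

```python
from collections import Counter

def generate_palette(pixels, n=5):
    """Return the n most common quantized colors as a palette.

    Each (r, g, b) pixel is snapped to a 32-level-wide bucket so that
    near-identical shades count together, then the most frequent
    buckets form the palette.
    """
    quant = [(r // 32 * 32, g // 32 * 32, b // 32 * 32)
             for r, g, b in pixels]
    return [color for color, _ in Counter(quant).most_common(n)]
```

Each palette entry could then be mapped to a distinct feature of the graphic construct being customized.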