Abstract:
A data processing system composites graphics content, generated by an application program running on the data processing system, to generate image data. The data processing system stores the image data in a first framebuffer and displays an image generated from the image data in the first framebuffer on an internal display device of the data processing system. A scaler in the data processing system performs scaling operations on the image data in the first framebuffer, stores the scaled image data in a second framebuffer and displays an image generated from the scaled image data in the second framebuffer on an external display device coupled to the data processing system. The scaler performs the scaling operations asynchronously with respect to the compositing of the graphics content. The data processing system automatically mirrors the image on the external display device unless the application program is publishing additional graphics content for display on the external display device.
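The two-framebuffer arrangement can be illustrated with a minimal Python sketch. The names here (`scale_nearest`, `external_display_content`) are hypothetical, and real scalers are dedicated hardware running asynchronously; this only shows the two data paths the abstract describes: the internal framebuffer is scaled into the external one, and mirroring is the default unless the application publishes its own external content.

```python
def scale_nearest(src, dst_w, dst_h):
    # Nearest-neighbor scale of a row-major 2D pixel buffer: the
    # "first framebuffer" contents resampled for the external display's
    # resolution into the "second framebuffer".
    src_h, src_w = len(src), len(src[0])
    return [
        [src[y * src_h // dst_h][x * src_w // dst_w] for x in range(dst_w)]
        for y in range(dst_h)
    ]

def external_display_content(mirrored_fb, published_fb):
    # Mirror the internal image automatically, unless the application
    # has published dedicated content for the external display.
    return published_fb if published_fb is not None else mirrored_fb
```

In hardware, the scaler would read the first framebuffer on its own cadence, decoupled from the compositor's frame rate, which is what "asynchronously" buys: a slow external display cannot stall compositing for the internal one.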
Abstract:
Methods and apparatus for a predictive rendering component that may generate a rendering of a character based at least in part on predictive information regarding the background into which the character is to be rendered. Using such predictive information, the predictive rendering component may produce a rendering of a character that blends into the character background more smoothly than if the predictive background information were not used. In this way, the predictive rendering component improves upon previous implementations of font smoothing.
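The core idea, anti-aliasing glyph edges against the background the text will actually sit on, can be sketched as a per-pixel blend. This is a simplified model with a hypothetical function name; a real implementation would also handle gamma and subpixel layouts, which the abstract does not detail.

```python
def blend_glyph_pixel(coverage, fg, predicted_bg):
    # Blend a glyph coverage value (0.0..1.0) between the text color and
    # the *predicted* background color, so partially-covered edge pixels
    # smooth toward what will really be behind the character.
    return tuple(round(coverage * f + (1 - coverage) * b)
                 for f, b in zip(fg, predicted_bg))
```

If the predicted background is wrong (say, white assumed over a dark photo), the edge pixels blend toward the wrong color and the glyph appears fringed; that failure mode is exactly what predicting the background ahead of rendering avoids.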
Abstract:
The subject technology receives a command to initiate an application. The subject technology, in response to the command, generates a root node related to a root view of a first hierarchy of views representing a user interface (UI). The subject technology generates a child node of the root node for inclusion in the first hierarchy of views, the child node corresponding to a first type of view. The subject technology generates a first child node of the child node for inclusion in the first hierarchy of views, the first child node corresponding to a second type of view. The subject technology generates a graph including nodes, each node corresponding to a different attribute of the UI, wherein the root node, the child node, and the first child node have relationships with respective nodes from the graph.
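The structure described above, a tree of view nodes plus a separate graph of attribute nodes that the view nodes reference, can be sketched as follows. The class and attribute names (`ViewNode`, `"stack"`, `"text"`, `"layout"`, and so on) are hypothetical placeholders, not names from the source.

```python
class ViewNode:
    """One node in the hierarchy of views."""
    def __init__(self, kind):
        self.kind = kind
        self.children = []

    def add_child(self, kind):
        child = ViewNode(kind)
        self.children.append(child)
        return child

def build_ui():
    # Root node for the root view, then a child ("first type of view")
    # and a grandchild ("second type of view").
    root = ViewNode("root")
    stack = root.add_child("stack")
    label = stack.add_child("text")
    # Attribute graph: each UI attribute is its own node; view nodes
    # hold relationships (edges) into the graph rather than owning the
    # attribute storage directly.
    attributes = {"layout": object(), "color": object(), "font": object()}
    edges = {root: ["layout"], stack: ["layout"], label: ["color", "font"]}
    return root, attributes, edges
```

Separating attributes into their own graph means an attribute shared by several views exists once, and invalidating it can drive updates to every view node related to it.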
Abstract:
The subject technology sends, from a parent node of a hierarchy of views, information related to a preference list, the preference list including preference keys corresponding to respective attributes of a UI, where the hierarchy of views represents the UI. The subject technology receives, at a child node of the parent node, the information related to the preference list. The subject technology updates, by the child node, a particular preference key from the preference list to a particular value, the particular preference key related to an attribute of the UI.
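The flow described above, a parent handing its preference list down and a child overriding particular keys, reduces to a merge. This is a minimal sketch; the key names are invented for illustration, and a real system would propagate the result back up the hierarchy.

```python
def propagate_preferences(parent_prefs, child_updates):
    # The parent sends its preference list (key -> value) to the child;
    # the child updates the keys it cares about, and the merged list is
    # what the rest of the hierarchy observes.
    merged = dict(parent_prefs)
    merged.update(child_updates)
    return merged
```

For example, a navigation container might send down a `nav_title` key with no value, and a deeply nested view sets it; the container then reads the child's value without either node knowing the other's concrete type.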
Abstract:
Implementations of the subject technology provide a framework to support creating user interfaces (UIs) and animations within the UIs. The subject technology receives first information related to an animation, the first information including an initial state, a destination state, and an animation function. The subject technology generates a copy of the destination state, the copy of the destination state comprising a record for the animation based at least in part on the first information related to the animation and further information related to the animation function. The subject technology updates a value related to an intermediate state of the animation in the copy of the destination state, the intermediate state being between the initial state and the destination state. Further, the subject technology provides the copy of the destination state that includes the value related to the intermediate state for rendering the animation.
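The steps above can be sketched as: snapshot the destination state, attach an animation record to the snapshot, and write interpolated intermediate values into the snapshot each frame. The function names, the `_animation` record layout, and the linear interpolation are assumptions for illustration only.

```python
import copy

def make_animation_copy(initial, destination, duration, curve):
    # Copy the destination state and attach a record describing the
    # animation (endpoints, duration, and the animation function).
    snapshot = copy.deepcopy(destination)
    snapshot["_animation"] = {
        "from": initial,
        "to": destination,
        "duration": duration,
        "curve": curve,
    }
    return snapshot

def tick(snapshot, key, t):
    # Update the copy with the intermediate value for time t; the copy,
    # not the model state, is what gets handed to the renderer.
    rec = snapshot["_animation"]
    progress = rec["curve"](min(t / rec["duration"], 1.0))
    start, end = rec["from"][key], rec["to"][key]
    snapshot[key] = start + (end - start) * progress
    return snapshot
```

Rendering from a copy keeps the model's destination state stable while frames display values in between, the same separation that lets an in-flight animation be retargeted without corrupting the model.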
Abstract:
An electronic device detects an input via an input device. In response to detecting the input, the device monitors the input using a gesture recognition tree having a plurality of nodes. Each respective node of the gesture recognition tree corresponds to a respective gesture recognizer or a respective component gesture recognizer, and one or more nodes include one or more parameters that describe the input. Monitoring the input using the gesture recognition tree includes: processing the input using a first node of the plurality of nodes, including determining a value of a first parameter of the one or more parameters; conveying the first parameter from the first node to a second node of the plurality of nodes; and processing the input using the second node, including determining, based on the first parameter, whether the input satisfies a gesture recognition requirement defined by the second node.
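The two-node exchange described above, a component recognizer measuring the input and conveying a parameter to a node that tests a recognition requirement, can be sketched like this. The node names and the minimum-distance requirement are hypothetical; they stand in for whatever component recognizers and requirements a real gesture tree composes.

```python
class MeasureNode:
    """Component recognizer: processes the input and exposes a
    parameter describing it (here, the total translation)."""
    def process(self, touch_path):
        (x0, y0), (x1, y1) = touch_path[0], touch_path[-1]
        return {"translation": (x1 - x0, y1 - y0)}

class PanNode:
    """Recognizer node: consumes the conveyed parameter and decides
    whether the input satisfies its gesture recognition requirement."""
    def __init__(self, min_distance):
        self.min_distance = min_distance

    def process(self, params):
        dx, dy = params["translation"]
        return (dx * dx + dy * dy) ** 0.5 >= self.min_distance
```

Because the second node receives a parameter rather than the raw input, the measurement is computed once and can feed many downstream recognizers in the tree.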
Abstract:
An electronic device displays, on a display, a user interface of an application. The user interface includes a plurality of views arranged in a view hierarchy that defines a first relationship between a first view and a second view. The first view includes a first gesture recognizer, and the second view includes a second gesture recognizer. The device detects, via the input device, an input at a first location that corresponds to the displayed user interface, and processes the input using a gesture recognition hierarchy that includes the first gesture recognizer and the second gesture recognizer. A second relationship between the first gesture recognizer and the second gesture recognizer is determined based on the first relationship between the first view and the second view in the view hierarchy.
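One plausible reading of "the second relationship is determined based on the first" is that recognizer ordering follows view nesting: recognizers on the hit view are consulted before those on its ancestors. The sketch below assumes that reading, with invented view and recognizer names.

```python
def recognizer_order(parent_of, hit_view, recognizers):
    # Walk from the view that was hit up through its ancestors in the
    # view hierarchy; recognizers attached to deeper views get the
    # earlier position in the gesture recognition hierarchy.
    order = []
    view = hit_view
    while view is not None:
        order.extend(recognizers.get(view, []))
        view = parent_of.get(view)  # child -> parent map
    return order
```

So a tap recognizer on a button nested inside a pannable container is ordered ahead of the container's pan recognizer, purely as a consequence of the view relationship, with no manual wiring between the two recognizers.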