Abstract:
Approaches are described for providing input to a portable computing device, such as a mobile phone. A user's hand can be detected based on data (e.g., one or more images) obtained by at least one sensor of the device, such as a camera, and the images can be analyzed to locate the hand of the user. As part of the location computation, the device can determine a motion being performed by the hand of the user, and the device can determine a gesture corresponding to the motion. In the situation where the device is controlling a media player capable of playing media content, the gesture can be interpreted by the device to cause the device to, e.g., pause a media track or perform another function with respect to the media content being presented via the device.
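The abstract describes the detection and interpretation steps only at a high level. The following sketch (Python, using OpenCV frame differencing as a stand-in for the hand-location step) illustrates one way camera data could be reduced to a motion signal and mapped to a media-control gesture; the player object, its toggle_pause() method, and the numeric thresholds are assumptions introduced for illustration and are not part of the described approach.

    # Minimal sketch: detect sustained motion (e.g., a hand wave) in front of the
    # camera and interpret it as a "pause" gesture for a media player.
    import cv2

    MOTION_PIXELS = 5000   # assumed: changed pixels needed to count a frame as "motion"
    GESTURE_FRAMES = 5     # assumed: consecutive motion frames that constitute a gesture

    def watch_for_gesture(player):
        cap = cv2.VideoCapture(0)        # front-facing camera index is assumed
        prev, motion_run = None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
            if prev is not None:
                diff = cv2.absdiff(prev, gray)
                _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
                motion_run = motion_run + 1 if cv2.countNonZero(mask) > MOTION_PIXELS else 0
                if motion_run >= GESTURE_FRAMES:
                    player.toggle_pause()   # hypothetical media-player call
                    motion_run = 0
            prev = gray
        cap.release()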
Abstract:
Approaches are described which enable a computing device (e.g., mobile phone, tablet computer) to display alternate views or layers of information within a window on the display screen when a user's finger (or other object) is detected to be within a particular range of the display screen of the device. For example, a device displaying a road map view on the display screen may detect a user's finger near the screen and, in response to detecting the finger, render a small window that shows a portion of a satellite view of the map proximate to the location of the user's finger. As the user's finger moves laterally above the screen, the window can follow the location of the user's finger and display the satellite views of the various portions of the map over which the user's finger passes.
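By way of illustration, the window described above can be produced by compositing a crop of the alternate layer over the primary view at the hover location. The sketch below (Python with Pillow) assumes the road-map and satellite renders are the same size and already aligned; the image sources, window size, and hover-position input are illustrative assumptions.

    # Minimal sketch: paste a small satellite-view window onto a road-map render,
    # centered on the detected hover position of the user's finger.
    from PIL import Image

    WINDOW = 120  # assumed half-size (pixels) of the square preview window

    def render_with_preview(road_map, satellite, finger_xy):
        x, y = finger_xy
        left, top = max(0, x - WINDOW), max(0, y - WINDOW)
        right, bottom = min(road_map.width, x + WINDOW), min(road_map.height, y + WINDOW)
        frame = road_map.copy()
        # Replace only the region around the finger with the satellite layer.
        frame.paste(satellite.crop((left, top, right, bottom)), (left, top))
        return frame

As the hover position changes, the function can simply be called again with the new coordinates so the window follows the finger across the map.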
Abstract:
Systems and approaches provide for user interfaces (UIs) that are based on object tracking. For example, the object may be a user's head or face. As the user moves his or her head or face and/or tilts a computing device, the content displayed on the computing device will adapt to the user's perspective. The content may include three-dimensional (3D) graphical elements projected onto a two-dimensional (2D) plane, and/or the graphical elements can be associated with textural shading, shadowing, or reflections that change according to user or device motion to give the user the impression that the user is interacting with the graphical elements in a 3D environment. To enhance the user experience, a state of motion of the device can be determined, and the jitter and/or latency associated with rendering the content can be adjusted so as to reduce jitter when the device is stationary and/or to reduce latency when the device is in motion.
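One way to realize the jitter/latency trade-off described above is to vary the amount of smoothing applied to the tracked head position based on the device's motion state. The sketch below (Python) uses an exponential moving average whose coefficient depends on an assumed accelerometer-derived motion signal; the class name, constants, and units are illustrative assumptions rather than the disclosed implementation.

    # Minimal sketch: smooth heavily when the device is stationary (suppress jitter),
    # smooth lightly when the device is moving (keep latency low).
    STATIONARY_ALPHA = 0.05   # heavy smoothing -> low jitter, higher latency
    MOVING_ALPHA     = 0.6    # light smoothing -> low latency, more jitter
    MOTION_THRESHOLD = 0.2    # assumed motion magnitude above which the device is "moving"

    class HeadPoseFilter:
        def __init__(self):
            self.smoothed = None   # (x, y) head position in normalized screen units

        def update(self, raw_xy, motion_magnitude):
            alpha = MOVING_ALPHA if motion_magnitude > MOTION_THRESHOLD else STATIONARY_ALPHA
            if self.smoothed is None:
                self.smoothed = raw_xy
            else:
                # Exponential moving average with a motion-dependent coefficient.
                self.smoothed = tuple(s + alpha * (r - s) for r, s in zip(raw_xy, self.smoothed))
            return self.smoothed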
Abstract:
Systems and approaches provide for user interfaces that are based on object tracking. For example, the object may be a user's head or face. As the user moves his or her head or face and/or tilts a computing device, the content displayed on the computing device will adapt to the user's perspective. The content may include three-dimensional (3D) graphical elements projected onto a two-dimensional (2D) plane, and/or the graphical elements can be associated with textural shading, shadowing, or reflections that change according to user or device motion to give the user the impression that the user is interacting with the graphical elements in a 3D environment. A state of motion of the device can be determined, and the jitter and/or latency associated with rendering the content can be adjusted so as to reduce jitter when the device is stationary and/or to reduce latency when the device is in motion.
Abstract:
Touch-based input to a computing device can be improved by providing a mechanism to lock or reduce the effects of motion in unintended directions. In one example, a user can navigate in two dimensions, then provide a gesture-based locking action through motion in a third dimension. If a computing device analyzing the gesture is able to detect the locking action, the device can limit motion outside the corresponding third dimension, or lock an interface object for selection, in order to ensure that the proper touch-based input selection is received. Various thresholds, values, or motions can be used to limit motion in one or more axes for any appropriate purpose as discussed herein.
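By way of illustration, the locking behavior can be implemented by freezing the lateral coordinates once motion along the third dimension crosses a threshold. The sketch below (Python) assumes a normalized z value (e.g., hover distance or pressure) is available per input event; the class name, event fields, and threshold are illustrative assumptions.

    # Minimal sketch: lock x/y once deliberate motion along z is detected, so
    # incidental lateral drift cannot change the targeted interface object.
    Z_LOCK_THRESHOLD = 0.8   # assumed normalized z displacement that triggers the lock

    class AxisLockTracker:
        def __init__(self):
            self.locked_xy = None

        def process(self, x, y, z):
            """Return the (x, y) position to use for selection."""
            if z >= Z_LOCK_THRESHOLD:
                if self.locked_xy is None:
                    self.locked_xy = (x, y)   # freeze the lateral position
                return self.locked_xy
            self.locked_xy = None             # below threshold: normal 2D navigation
            return (x, y)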
Abstract:
A computing device can be configured to recognize when a user hovers over, or is within a determined distance of, an element displayed on the computing device in order to perform certain tasks. Information associated with the element can be displayed when such a hover input is detected. This information may comprise a description of the tasks performed by selecting the element. The information could also be an enlarged version of the element to help the user disambiguate between multiple selectable elements. The information can be displayed in a manner such that at least substantial portions of it are not obscured or occluded by the user.
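As a simple illustration of the placement step, the information can be anchored at an offset from the hover point so that the user's hand does not cover it. The sketch below (Python) performs a naive hit test over rectangular elements and offsets the info box above the finger when space allows; the Element type, its fields, and the offset value are illustrative assumptions.

    # Minimal sketch: on hover, find the element under the finger and position its
    # description away from the hover point so it is not occluded by the hand.
    from dataclasses import dataclass

    @dataclass
    class Element:
        x: int
        y: int
        w: int
        h: int
        description: str

    def hover_info(elements, hover_x, hover_y):
        for el in elements:
            if el.x <= hover_x <= el.x + el.w and el.y <= hover_y <= el.y + el.h:
                # Prefer placing the info above the finger; fall back to below
                # when the hover point is near the top of the screen.
                offset = -60 if hover_y > 60 else 60   # assumed offset in pixels
                return {"text": el.description, "at": (hover_x, hover_y + offset)}
        return None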