Abstract:
A user can emulate touch screen events with motions and gestures that the user performs at a distance from a computing device. A user can utilize specific gestures, such as a pinch gesture, to designate which portions of a motion are to be interpreted as input, differentiating them from other portions of the motion. A user can then perform actions such as text input by performing motions with the pinch gesture that correspond to words or other selections recognized by a text input program. A camera-based detection approach can be used to recognize the location of features performing the motions and gestures, such as a hand, finger, and/or thumb of the user.
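As an illustration of the pinch-as-input idea, the following is a minimal sketch (not the patented implementation) that segments tracked hand motion into "input" strokes only while a pinch is held. It assumes an upstream camera-based tracker supplies, per frame, 2D positions of the index fingertip and thumb tip; the HandFrame type, the trace data, and the PINCH_THRESHOLD value are hypothetical.

```python
# Sketch: keep only the motion performed while thumb and index finger are pinched,
# treating unpinched motion like pen-up movement. Input data is hypothetical.
from dataclasses import dataclass
from math import hypot

PINCH_THRESHOLD = 0.04  # assumed normalized thumb-to-index distance for a "pinch"

@dataclass
class HandFrame:
    index_tip: tuple  # (x, y) in normalized screen coordinates
    thumb_tip: tuple

def is_pinched(frame: HandFrame) -> bool:
    """Treat the hand as pinched when the thumb and index tips are close together."""
    dx = frame.index_tip[0] - frame.thumb_tip[0]
    dy = frame.index_tip[1] - frame.thumb_tip[1]
    return hypot(dx, dy) < PINCH_THRESHOLD

def extract_strokes(frames):
    """Return only the motion segments performed while pinching."""
    strokes, current = [], []
    for frame in frames:
        if is_pinched(frame):
            current.append(frame.index_tip)
        elif current:
            strokes.append(current)
            current = []
    if current:
        strokes.append(current)
    return strokes

# Synthetic trace: only the middle two frames are pinched, so they form one stroke.
trace = [
    HandFrame((0.10, 0.50), (0.20, 0.60)),  # not pinched: ignored
    HandFrame((0.30, 0.50), (0.31, 0.52)),  # pinched: stroke starts
    HandFrame((0.35, 0.48), (0.36, 0.50)),  # pinched: stroke continues
    HandFrame((0.60, 0.40), (0.75, 0.55)),  # released: stroke ends
]
print(extract_strokes(trace))  # -> [[(0.3, 0.5), (0.35, 0.48)]]
```

The resulting strokes could then be handed to a text input program that matches them against word shapes, as the abstract describes.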
Abstract:
A user attempting to obtain information about an object can capture image information including a view of that object, and the image information can be used with a matching or identification process to provide information about that type of object to the user. Information about the orientation of the camera and/or device used to capture the image can be provided in order to limit an initial search space for the matching or identification process. In some embodiments, images can be selected for matching based at least in part upon having a view matching the orientation of the camera or device. In other embodiments, images of objects corresponding to the orientation can be selected. Such a process can increase the average speed and efficiency in locating matching images. If a match cannot be found in the initial space, images of other views and categories can be analyzed as well.
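A rough sketch of the two-pass search described above follows: candidates whose stored view orientation is close to the capturing camera's orientation are tried first, and the remaining images only if no good match is found. The database records, the pitch tolerance, and the similarity() scorer are all hypothetical placeholders, not the actual matching process.

```python
# Sketch: orientation-constrained image matching with a fallback to the full set.
MATCH_THRESHOLD = 0.8

def similarity(query_features, candidate_features):
    # Placeholder scorer; a real system would compare image descriptors.
    common = set(query_features) & set(candidate_features)
    return len(common) / max(len(set(query_features)), 1)

def find_match(query_features, camera_pitch_deg, database, pitch_tolerance=20.0):
    """Search the orientation-limited space first, then the remaining images."""
    initial = [rec for rec in database
               if abs(rec["view_pitch_deg"] - camera_pitch_deg) <= pitch_tolerance]
    remainder = [rec for rec in database if rec not in initial]

    for search_space in (initial, remainder):
        best = max(search_space,
                   key=lambda rec: similarity(query_features, rec["features"]),
                   default=None)
        if best and similarity(query_features, best["features"]) >= MATCH_THRESHOLD:
            return best
    return None

database = [
    {"label": "mug, side view", "view_pitch_deg": 0,  "features": ["handle", "rim", "logo"]},
    {"label": "mug, top view",  "view_pitch_deg": 80, "features": ["rim", "circle"]},
]
print(find_match(["handle", "rim", "logo"], camera_pitch_deg=5, database=database))
# -> the side-view record, found without ever scoring the top-view image
```

Restricting the first pass this way is what yields the claimed gain in average speed: most queries are resolved before the larger, orientation-mismatched portion of the database is touched.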
Abstract:
Depth information can be used to assist with image processing functionality, such as image stabilization and blur reduction. In at least some embodiments, depth information obtained from stereo imaging or distance sensing, for example, can be used to determine a foreground object and background object(s) for an image or frame of video. The foreground object then can be located in later frames of video or subsequent images. Small offsets of the foreground object can be determined, and the offset accounted for by adjusting the subsequent frames or images. Such an approach provides image stabilization for at least a foreground object, while providing simplified processing and reduced power consumption. Similar processes can be used to reduce blur for an identified foreground object in a series of images, where the blur of the identified object is analyzed.
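The stabilization step can be illustrated with a minimal sketch: a depth map isolates the foreground object, its centroid is tracked across frames, and each subsequent frame is shifted by the measured offset so the foreground stays put. The depth threshold, the centroid-based tracker, and the synthetic arrays are simplifying assumptions, not the patented method.

```python
# Sketch: depth-assisted stabilization of a foreground object across frames.
import numpy as np

def foreground_centroid(depth_map, max_depth=1.0):
    """Centroid of pixels closer than max_depth (assumed foreground)."""
    ys, xs = np.nonzero(depth_map < max_depth)
    if len(xs) == 0:
        return None
    return np.array([ys.mean(), xs.mean()])

def stabilize(frames, depth_maps, max_depth=1.0):
    """Shift each frame so its foreground centroid matches the first frame's."""
    reference = foreground_centroid(depth_maps[0], max_depth)
    stabilized = [frames[0]]
    for frame, depth in zip(frames[1:], depth_maps[1:]):
        centroid = foreground_centroid(depth, max_depth)
        if reference is None or centroid is None:
            stabilized.append(frame)
            continue
        dy, dx = np.round(reference - centroid).astype(int)
        stabilized.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return stabilized

# Tiny synthetic example: a bright foreground pixel drifts one column to the right.
frame0 = np.zeros((4, 4)); frame0[1, 1] = 255
frame1 = np.zeros((4, 4)); frame1[1, 2] = 255
depth0 = np.full((4, 4), 5.0); depth0[1, 1] = 0.5
depth1 = np.full((4, 4), 5.0); depth1[1, 2] = 0.5
out = stabilize([frame0, frame1], [depth0, depth1])
print(np.argwhere(out[1] == 255))  # -> [[1 1]]: the drift has been compensated
```

Because only the foreground region is tracked rather than the full frame, the per-frame work stays small, which is where the simplified processing and reduced power consumption come from.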
Abstract:
Approaches are described for providing input to a portable computing device, such as a mobile phone. A user's hand can be detected based on data (e.g., one or more images) obtained by at least one sensor of the device, such as a camera, and the images can be analyzed to locate the hand of the user. As part of the location computation, the device can determine a motion being performed by the hand of the user, and the device can determine a gesture corresponding to the motion. In the situation where the device is controlling a media player capable of playing media content, the gesture can be interpreted to cause the device to, e.g., pause a media track or perform another function with respect to the media content being presented via the device.
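The final dispatch step can be sketched as follows, assuming an upstream camera-based classifier has already turned the hand motion into a gesture label. The gesture names, the mapping table, and the MediaPlayer class are illustrative assumptions only.

```python
# Sketch: mapping a recognized gesture label onto a media-player action.
class MediaPlayer:
    def __init__(self):
        self.playing = True
        self.track = 0

    def toggle_pause(self):
        self.playing = not self.playing

    def next_track(self):
        self.track += 1

    def previous_track(self):
        self.track = max(0, self.track - 1)

GESTURE_ACTIONS = {
    "open_palm": MediaPlayer.toggle_pause,      # hold an open palm toward the camera
    "swipe_left": MediaPlayer.next_track,       # example mappings; a real device
    "swipe_right": MediaPlayer.previous_track,  # would make these configurable
}

def handle_gesture(player: MediaPlayer, gesture: str) -> None:
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        action(player)

player = MediaPlayer()
handle_gesture(player, "open_palm")
print(player.playing)  # -> False: playback paused without touching the screen
```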
Abstract:
Data provided on a first computing device is represented by a graphical object displayed on a screen. A user can initiate an “attach event” with a gesture to enable the graphical object to be associated and/or virtually attached to the user and/or a user's hand/fingers. An image capture component can view/track user movements. Based on the viewed/tracked movements, the graphical object representing the data on the first computing device can be moved on a screen of the first computing device to correspond to the movement of the user's hand/finger. The graphical object also can be moved to a position on a screen of a second computing device when the user moves a hand/fingers to an area corresponding to the position. A user may initiate a “release event” with a gesture to end the association and enable the data to be sent to the second computing device.
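The attach/move/release flow can be modeled as a small state machine, sketched below under several assumptions: hand positions arrive from an image-capture tracking pipeline, the two screens are represented as rectangles in a shared coordinate space, and the print statement stands in for a real data-transfer call.

```python
# Sketch: state machine for dragging a graphical object between two devices by hand.
class CrossDeviceDrag:
    def __init__(self, data, first_screen_region, second_screen_region):
        self.data = data
        self.regions = {"first": first_screen_region, "second": second_screen_region}
        self.attached = False
        self.position = None          # (x, y) of the graphical object
        self.target_device = None

    def attach(self, hand_position):
        """'Attach event': the object starts following the user's hand."""
        self.attached = True
        self.move(hand_position)

    def move(self, hand_position):
        if not self.attached:
            return
        self.position = hand_position
        # Work out which device's screen the hand currently corresponds to.
        for name, (x0, y0, x1, y1) in self.regions.items():
            if x0 <= hand_position[0] <= x1 and y0 <= hand_position[1] <= y1:
                self.target_device = name

    def release(self):
        """'Release event': if dropped over the second device, send the data."""
        self.attached = False
        if self.target_device == "second":
            print(f"sending {self.data!r} to second device")  # placeholder transfer

drag = CrossDeviceDrag("photo.jpg", first_screen_region=(0, 0, 100, 100),
                       second_screen_region=(120, 0, 220, 100))
drag.attach((50, 50))   # grab gesture over the first screen
drag.move((150, 40))    # hand moves to an area corresponding to the second screen
drag.release()          # -> sending 'photo.jpg' to second device
```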
Abstract:
A computing device can obtain information about how the device is held, moved, and/or used by a hand of a user holding the device. The information can be obtained utilizing one or more sensors of the device independently or working in conjunction. For example, an orientation sensor can determine whether a left hand or a right hand is likely rotating, tilting, and/or moving, and thus holding, the device. In another example, a camera and/or a hover sensor can obtain information about a finger position of the user's hand to determine whether the hand is likely a left hand or a right hand. In a further example, a touch sensor can determine a shape of an imprint of a portion of the user's hand to determine which hand is likely holding the device. Based on which hand is holding the device, the device can improve one or more computing tasks.
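One way to picture the sensor fusion is a simple vote over per-sensor hints, sketched below. The tilt, hover, and grip heuristics, along with their thresholds and the example readings, are assumptions for illustration; a real device would calibrate such rules against its own sensors.

```python
# Sketch: combining orientation, hover, and touch-imprint hints into a left/right estimate.
def hint_from_tilt(roll_deg):
    """One-handed thumb use tends to roll the device toward the holding hand."""
    if roll_deg > 5:
        return "right"
    if roll_deg < -5:
        return "left"
    return None

def hint_from_hover(finger_x_normalized):
    """A thumb hovering near one edge suggests that side's hand is holding the device."""
    if finger_x_normalized > 0.7:
        return "right"
    if finger_x_normalized < 0.3:
        return "left"
    return None

def hint_from_grip(imprint_side):
    """Side of the touch imprint left by the wrapping palm/fingers, if available."""
    return imprint_side  # "left", "right", or None

def estimate_holding_hand(roll_deg, finger_x_normalized, imprint_side):
    votes = [h for h in (hint_from_tilt(roll_deg),
                         hint_from_hover(finger_x_normalized),
                         hint_from_grip(imprint_side)) if h]
    if not votes:
        return "unknown"
    return max(set(votes), key=votes.count)

# Example: device rolled to the right, thumb hovering near the right edge, no imprint data.
print(estimate_holding_hand(roll_deg=8.0, finger_x_normalized=0.85, imprint_side=None))
# -> "right"; a UI could then, e.g., shift one-handed controls toward the right thumb.
```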
Abstract:
Approaches are described which enable a computing device (e.g., mobile phone, tablet computer) to display alternate views or layers of information within a window on the display screen when a user's finger (or other object) is detected to be within a particular range of the display screen of the device. For example, a device displaying a road map view on the display screen may detect a user's finger near the screen and, in response to detecting the finger, render a small window that shows a portion of a satellite view of the map proximate to the location of the user's finger. As the user's finger moves laterally above the screen, the window can follow the location of the user's finger and display the satellite views of the various portions of the map over which the user's finger passes.
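The geometry of the hover "peek" window can be sketched as follows: given the hovering finger's position, compute a small window rectangle clamped to the display, along with the matching region of the alternate layer to render inside it. The screen dimensions, window size, and function names are illustrative assumptions.

```python
# Sketch: position a small alternate-layer window under a hovering finger.
def hover_window(finger_x, finger_y, screen_w, screen_h, win_w=200, win_h=150):
    """Center a win_w x win_h window on the finger, keeping it fully on screen."""
    x = min(max(finger_x - win_w // 2, 0), screen_w - win_w)
    y = min(max(finger_y - win_h // 2, 0), screen_h - win_h)
    return x, y, win_w, win_h

def update_peek(finger_x, finger_y, screen_w=1080, screen_h=1920):
    """Called whenever the hover sensor reports a new finger position."""
    x, y, w, h = hover_window(finger_x, finger_y, screen_w, screen_h)
    # In a real app this rectangle would be filled with the satellite tile(s)
    # covering the same map coordinates as the road-map pixels underneath it.
    return {"window": (x, y, w, h), "layer_region": (x, y, w, h)}

print(update_peek(60, 300))   # window clamped against the left screen edge
print(update_peek(540, 960))  # window centered under a finger near mid-screen
```

Re-running the update on every hover report is what makes the window appear to follow the finger as it moves laterally above the screen.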