Abstract:
Voice commands and gesture recognition are two mechanisms by which an individual may interact with content such as that on a display. In an implementation, interactivity of a user with content on a device or display may be modified based on the distance between the user and the display. An attribute such as a user profile may be used to tailor the modification of the display to an individual user. In some configurations, the commands available to the user may also be modified based on the determined distance between the user and a device or display.
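As an illustrative sketch of the distance-based command modification described above (not part of the abstract), measured distance ranges can be mapped to command sets, with a user profile shifting the range boundaries. All names, thresholds, and profile fields below are hypothetical.

```python
# Minimal sketch: gate the available commands by user-to-display distance.
# Thresholds and profile fields are hypothetical, not from the abstract.

def available_commands(distance_m: float, profile: dict) -> set:
    """Return the command set enabled at the given distance (meters)."""
    near_limit = profile.get("near_limit_m", 1.0)   # per-user tailoring
    far_limit = profile.get("far_limit_m", 3.0)
    if distance_m <= near_limit:
        return {"touch", "swipe", "gesture", "voice"}
    if distance_m <= far_limit:
        return {"gesture", "voice"}   # too far to touch the display
    return {"voice"}                  # only voice carries at long range

if __name__ == "__main__":
    profile = {"near_limit_m": 1.2, "far_limit_m": 4.0}
    for d in (0.5, 2.0, 6.0):
        print(d, sorted(available_commands(d, profile)))
```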
Abstract:
Described is a system and technique for providing the ability for a user to interact with one or more devices by performing gestures that mimic real-world physical analogies. More specifically, the techniques described herein provide the ability for a user to interact with a device while limiting conscious gesturing toward a computer component, by camouflaging computer-recognizable gestures within manipulations of physical objects.
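One way to picture the camouflaged-gesture idea (purely a sketch; the event names and template table are invented) is to match a stream of sensed object-manipulation events against command templates.

```python
# Sketch: commands hidden inside ordinary object manipulations.
# Event names and the template table are invented for illustration.

TEMPLATES = {
    ("pick_up", "shake"): "undo",
    ("pick_up", "rotate_cw"): "volume_up",
    ("slide_left",): "previous_track",
}

def match_manipulation(events):
    """Return the command whose template matches the tail of the event stream."""
    for template, command in TEMPLATES.items():
        if tuple(events[-len(template):]) == template:
            return command
    return None

print(match_manipulation(["pick_up", "rotate_cw"]))  # -> volume_up
```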
Abstract:
Methods for device pairing via a cloud server are provided. In one aspect, a method includes sending an initial signal from a first device to a second device. The method includes sending a notification from the first device of a set of communication capabilities of the first device. The method also includes receiving an indication of a common communication capability between the first and second devices. The method includes initiating pairing of the first device and the second device using the common communication capability in response to the received indication. Systems and machine-readable media are also provided.
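The negotiation step above amounts to intersecting the two devices' capability sets and choosing a common transport. The sketch below models only that selection, locally and without any real cloud or network code; the capability names and preference order are assumptions.

```python
# Sketch of choosing a common communication capability for pairing.
# Capability names and the preference order are assumptions.

PREFERENCE = ["wifi_direct", "bluetooth", "nfc"]  # most to least preferred

def common_capability(first_caps, second_caps):
    """Pick the most preferred capability both devices support, if any."""
    shared = set(first_caps) & set(second_caps)
    for cap in PREFERENCE:
        if cap in shared:
            return cap
    return None

print(common_capability({"bluetooth", "wifi_direct"}, {"bluetooth", "nfc"}))
# -> bluetooth
```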
Abstract:
Implementations of the disclosed subject matter provide techniques for improved identification of a gesture based on data obtained from multiple devices. A method may include receiving an indication of an onset of a gesture, from a first device, at a gesture coordinating device. Next, first subsequent data describing the gesture may be received from a second device, at the gesture coordinating device. Based on the indication and the first subsequent data, the gesture may be identified. In response to identification of the gesture, an action may be performed based on the gesture identified. In some cases, the gesture coordinating device may be a cloud-based device.
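In essence, the gesture coordinating device correlates an onset event from one device with follow-on data from another before classifying. The toy, single-process stand-in below illustrates that flow; the identifiers, data shapes, and one-line classifier are all assumptions.

```python
# Toy gesture coordinator: pairs an onset report from one device with
# subsequent motion data from another. All names and the classification
# rule are assumptions made for this sketch.

class GestureCoordinator:
    def __init__(self):
        self.onsets = {}  # user_id -> onset timestamp

    def report_onset(self, user_id, timestamp):
        """Record that a first device saw a gesture begin."""
        self.onsets[user_id] = timestamp

    def report_motion(self, user_id, dx, dy):
        """Combine a prior onset with motion data to identify the gesture."""
        if user_id not in self.onsets:
            return None  # no onset on record; ignore stray data
        del self.onsets[user_id]
        return "swipe_right" if abs(dx) > abs(dy) else "swipe_up"

coordinator = GestureCoordinator()
coordinator.report_onset("user1", 12.5)               # e.g. a wrist device
print(coordinator.report_motion("user1", 0.8, 0.1))   # e.g. a camera -> swipe_right
```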
Abstract:
In a general aspect, an apparatus can include a goggle portion having a chassis that is open on a first side, a lens assembly disposed on a second side of the chassis of the goggle portion and a ledge disposed around an interior perimeter of the chassis of the goggle portion. The ledge can be configured to physically support an electronic device inserted in the goggle portion. The apparatus can also include a cover portion having a chassis that is open on a first side and at least partially closed on a second side. The cover portion can be configured to be placed over the goggle portion, such that at least a portion of the goggle portion is disposed within the cover portion and the electronic device is retained between the ledge and an interior surface of the second side of the cover portion.
Abstract:
Described is a technique for providing onscreen visualizations of three-dimensional gestures. A display screen may display a gesture indicator that provides an indication of when a gesture begins to produce an effect and when the gesture is complete. The gesture indicator may also indicate a user's relative hand position within a single axis of a capture device's field-of-view. Once the gesture indicator is positioned, its characteristics may be altered to indicate a direction of movement along one or more dimensions. The direction of movement may be conveyed using a direction-of-movement effect. Accordingly, the visualization of a gesture may be enhanced by limiting the visualization to expressive motion along a single axis.
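To make the single-axis indicator concrete, the sketch below normalizes a hand's position along one axis of a capture device's field of view and reports a movement direction; the bounds and return format are placeholders, not details from the abstract.

```python
# Sketch: map hand position along a single axis of the capture volume to an
# on-screen indicator position plus a movement-direction cue. The field-of-
# view bounds and return format are placeholders.

def indicator_state(hand_x, prev_x=None, fov_min=-0.5, fov_max=0.5):
    """Normalize hand position to [0, 1] and report direction of movement."""
    position = min(max((hand_x - fov_min) / (fov_max - fov_min), 0.0), 1.0)
    direction = None
    if prev_x is not None and hand_x != prev_x:
        direction = "right" if hand_x > prev_x else "left"
    return {"position": position, "direction": direction}

print(indicator_state(0.2, prev_x=0.1))
# -> {'position': 0.7, 'direction': 'right'}
```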
Abstract:
A location of a first portion of a hand and a location of a second portion of the hand are detected within a working volume, the first portion and the second portion being in a horizontal plane. A visual representation is positioned on a display based on the locations of the first portion and the second portion. A selection input is initiated when a distance between the first portion and the second portion meets a predetermined threshold, to select an object presented on the display, the object being associated with the location of the visual representation. A movement of the first portion and the second portion of the hand may also be detected in the working volume while the distance between the first portion and the second portion remains below the predetermined threshold and, in response, the object on the display can be repositioned.
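The mechanic reads as a pinch detector: the midpoint of the two hand portions drives an on-screen cursor, and their separation crossing a threshold toggles selection. Below is a schematic version; the landmark format and the threshold value are assumptions.

```python
# Schematic pinch-to-select logic. The (x, y) landmark format and the
# 3 cm threshold are assumptions, not values from the abstract.

import math

PINCH_THRESHOLD_M = 0.03  # assumed separation for a 'pinch'

def cursor_and_pinch(first, second):
    """Midpoint positions the cursor; separation below threshold selects."""
    cx = (first[0] + second[0]) / 2
    cy = (first[1] + second[1]) / 2
    separation = math.hypot(first[0] - second[0], first[1] - second[1])
    return (cx, cy), separation < PINCH_THRESHOLD_M

cursor, selected = cursor_and_pinch((0.10, 0.20), (0.11, 0.21))
print(cursor, selected)  # cursor near (0.105, 0.205); selected == True
```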
Abstract:
Described is a technique for providing intent-based feedback on a display screen capable of receiving gesture inputs. The intent-based approach may be based on detecting uncertainty from the user and, in response, providing gesture information. The uncertainty may be based on determining a pause from the user, and the gesture information may include instructions that inform the user of the set of available input gestures. The gesture information may be displayed in one or more menu tiers using a delay-based approach. Accordingly, the gesture information may be displayed in an informative and efficient manner without burdening the display screen.
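The delay-based tiering can be pictured as a countdown driven by the detected pause: each tier of gesture information appears once the pause exceeds that tier's delay. The tier contents and delays below are invented for the sketch.

```python
# Sketch of delay-based menu tiers: the longer the user pauses, the more
# gesture information is revealed. Tier delays and texts are invented.

TIERS = [
    (0.5, "hint: gestures are available"),
    (1.5, "tier 1: swipe left/right to navigate"),
    (3.0, "tier 2: full list of available input gestures"),
]

def tiers_to_show(pause_seconds):
    """Return every tier whose delay the current pause has exceeded."""
    return [text for delay, text in TIERS if pause_seconds >= delay]

print(tiers_to_show(2.0))  # first two tiers after a 2-second pause
```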