Abstract:
Described is a system and technique for providing the ability for a user to interact with one or more devices by performing gestures that mimic real-world physical analogies. More specifically, the techniques described herein provide the ability for a user to interact with a device while limiting conscious gesturing toward a computer component, by camouflaging computer-recognizable gestures within manipulations of a physical object.
Abstract:
An interaction spot is provided that may detect the presence of an electronic device, such as a smartphone. A user may make a physical motion with the smartphone proximal to the interaction spot, such as moving it upward. The interaction spot may communicate with a second device, such as a light or a household appliance. A setting of the second device may be adjusted based on the motion of the electronic device.
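As a rough illustration, the following Python sketch maps an upward motion of the phone near the interaction spot onto a setting of a paired device. All names and the linear sensitivity factor are hypothetical; the abstract specifies no API.

```python
# Minimal sketch of the interaction-spot idea. An upward motion of the
# phone near the spot is translated into a setting change (0-100 scale)
# on a second device such as a light. The mapping is an assumption.

def adjust_setting(current: float, displacement_m: float,
                   sensitivity: float = 100.0) -> float:
    """Map vertical displacement of the phone (metres) to a 0-100 setting."""
    return max(0.0, min(100.0, current + displacement_m * sensitivity))

# Example: the phone moves 0.2 m upward while near the interaction spot,
# raising a light's brightness from 40 to 60.
brightness = adjust_setting(current=40.0, displacement_m=0.2)
print(brightness)  # 60.0
```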
Abstract:
A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only for controlling a device locally, the device may operate in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, the device may operate in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy that determines whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
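The classification step could be sketched as follows in Python. The list contents and the `remote_ok` flag are hypothetical; the abstract only names the whitelist, blacklist, greylist, and command-library checks.

```python
# Minimal sketch of the described privacy policy with hypothetical list
# contents. Commands are checked against whitelist/blacklist/greylist;
# a blacklisted command, or one absent from the command library, is
# treated as non-private and may be blocked.

WHITELIST = {"lights_on", "volume_up"}   # local-only control: private
BLACKLIST = {"upload_audio"}             # always remote: block
GREYLIST = {"voice_search"}              # depends on context

def classify(command: str, remote_ok: bool = False) -> str:
    if command in WHITELIST:
        return "private"                 # indicator -> first color
    if command in BLACKLIST:
        return "blocked"                 # non-private command blocked
    if command in GREYLIST:
        return "non-private" if remote_ok else "blocked"  # second color
    return "blocked"                     # not in the privacy command library

print(classify("lights_on"))                      # private
print(classify("voice_search", remote_ok=True))   # non-private
```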
Abstract:
Among other things, this document describes a computer-implemented method. The method can include receiving, at a first device, an indication of user input to cause the first device to establish a wireless data connection with another device. A determination can be made at the first device that one or more sensors on the first device are oriented toward a second device. In response to at least one of (i) receiving the indication of user input to cause the first device to establish a wireless data connection with another device and (ii) determining that the one or more sensors on the first device are oriented toward the second device, a first wireless data connection can be established between the first device and the second device. A first stream of audio data can be received and played at the first device.
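A minimal Python sketch of the "at least one of" condition, with hypothetical stand-ins for the sensing and pairing steps, might look like this:

```python
# Sketch of the connection decision in the abstract. The bearing value
# and tolerance are assumptions standing in for real sensor readings.

def maybe_connect(user_requested: bool, sensor_bearing_deg: float,
                  tolerance_deg: float = 15.0) -> bool:
    """Connect when the user asked to pair, or when the first device's
    sensors are oriented toward the second device (bearing near zero),
    per the 'at least one of' condition."""
    pointed_at_target = abs(sensor_bearing_deg) <= tolerance_deg
    return user_requested or pointed_at_target

if maybe_connect(user_requested=False, sensor_bearing_deg=8.0):
    print("establish first wireless data connection; play audio stream")
```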
Abstract:
A function of a device, such as volume, may be controlled using a combination of gesture recognition and an interpolation scheme. The distance between two objects, such as a user's hands, may be determined at a first time point and at a second time point. The difference between the distances calculated at the two time points may be mapped onto a plot of distance difference versus function value, and the function of the device may be set to the mapped value.
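A minimal sketch of this interpolation in Python, assuming a simple linear mapping; the abstract leaves the shape of the plot unspecified, so the gain factor here is illustrative.

```python
# Sketch of the interpolation scheme: the change in separation between
# the user's hands at two time points is mapped onto a 0-100 volume.
# The linear gain is an assumption, not a detail from the abstract.

def map_delta_to_volume(d1: float, d2: float,
                        gain: float = 200.0, current: float = 50.0) -> float:
    """Map the change in hand separation (metres) between two time
    points onto a clamped 0-100 volume value."""
    delta = d2 - d1                 # widening hands -> positive delta
    return max(0.0, min(100.0, current + delta * gain))

# Hands move from 0.30 m apart to 0.45 m apart: volume 50 -> 80.
print(map_delta_to_volume(0.30, 0.45))  # 80.0
```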
Abstract:
Described is a system and technique allowing a user to interact with a device using self-referential gestures. Self-referential gestures let a user rely on their inherent knowledge of body positioning so that movements, such as hand movements, can be performed intuitively. The disclosure describes determining various reference points on the user and detecting hand movements relative to these reference points. In addition, a device may define axes and/or an origin in a three-dimensional space relative to a position of the user within a field-of-view of a capture device. Accordingly, gesture movements may be detected and/or measured based on references that correspond to the user's body in order to provide a more intuitive interaction experience.
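As an illustrative Python sketch, a camera-space hand position can be re-expressed relative to an origin anchored on the user's body. The chest landmark and the camera-aligned axes are assumptions, not details from the abstract.

```python
# Sketch of body-relative gesture coordinates. A hand position from the
# capture device is translated so the user's chest is the origin; for
# simplicity the axes are assumed to stay camera-aligned.

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float

def to_body_frame(hand: Point, chest: Point) -> Point:
    """Express a hand position relative to a body-anchored origin."""
    return Point(hand.x - chest.x, hand.y - chest.y, hand.z - chest.z)

# A hand 25 cm to the right of and level with the chest reads the same
# wherever the user stands in the capture device's field of view.
print(to_body_frame(Point(1.25, 1.5, 2.0), Point(1.0, 1.5, 2.0)))
```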
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing gesture discoverability with a mobile computing device. In one aspect, a method includes actions of obtaining gesture definition data for a particular gesture. The gesture definition data specifies a predefined onset position associated with commencement of the particular gesture, a predefined motion associated with completion of the particular gesture, a particular action that is triggered upon the completion of the particular gesture, and a visual indicator for visually indicating progress toward the completion of the particular gesture as the predefined motion is performed. Additional actions include determining that an orientation of a mobile computing device matches the onset position of the particular gesture, providing the visual indicator, determining a motion, determining whether the motion matches the predefined motion, and determining whether to update the visual indicator.
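One way to sketch the onset/progress flow in Python, with hypothetical orientation thresholds standing in for the gesture definition data:

```python
# Sketch of gesture discoverability: once the device's orientation
# matches the onset position, progress toward the predefined motion is
# tracked and drives a visual indicator. All thresholds are assumptions.

ONSET_PITCH_DEG = 90.0     # predefined onset position: device held upright
MOTION_TARGET_DEG = 45.0   # predefined motion: tilt forward 45 degrees

def onset_matched(pitch_deg: float, tol: float = 10.0) -> bool:
    """Does the device orientation match the onset position?"""
    return abs(pitch_deg - ONSET_PITCH_DEG) <= tol

def progress(start_pitch: float, pitch_deg: float) -> float:
    """Fraction of the predefined motion completed, clamped to [0, 1]."""
    return max(0.0, min(1.0, (start_pitch - pitch_deg) / MOTION_TARGET_DEG))

if onset_matched(88.0):
    # Update the visual indicator as the motion is performed.
    print(f"indicator: {progress(88.0, 66.0):.0%} complete")  # 49% complete
```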
Abstract:
Voice commands and gesture recognition are two mechanisms by which an individual may interact with content, such as content shown on a display. In an implementation, the interactivity of a user with content on a device or display may be modified based on the distance between the user and the display. An attribute such as a user profile may be used to tailor the modification of the display to an individual user. In some configurations, the commands available to the user may also be modified based on the determined distance between the user and the device or display.
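A minimal Python sketch with hypothetical distance bands and command sets; a user profile could further tailor these per person, as the abstract suggests.

```python
# Sketch of distance-based interactivity: the set of available commands
# shrinks as the user moves away from the display. The bands and command
# names are illustrative assumptions.

def available_commands(distance_m: float) -> set[str]:
    if distance_m < 1.0:    # near: touch-like, fine-grained control
        return {"touch", "voice", "gesture", "scroll"}
    if distance_m < 3.0:    # mid-range: voice and broad gestures
        return {"voice", "gesture"}
    return {"voice"}        # far: coarse voice commands only

print(available_commands(2.2))  # e.g. {'voice', 'gesture'}
```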