Abstract:
Systems and techniques are disclosed for visually rendering a requested scene based on a virtual camera perspective request as well as a projection of two or more video streams. The video streams can be captured using two-dimensional cameras or three-dimensional depth cameras and may capture different perspectives. The projection may be an internal projection that maps out the scene in three dimensions based on the two or more video streams. An object internal or external to the scene may be identified, and the scene may be visually rendered based on a property of the object. For example, a scene may be visually rendered based on where a mobile object is located within the scene.
Abstract:
Voice commands and gesture recognition are two mechanisms by which an individual may interact with content such as that on a display. In an implementation, interactivity of a user with content on a device or display may be modified based on the distance between a user and the display. An attribute such as a user profile may be used to tailor the modification of the display to an individual user. In some configurations, the commands available to the user may also be modified based on the determined distance between the user and a device or display.
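The distance-based modification described above can be sketched as a simple mapping from measured distance to an enabled command set. This is an illustrative sketch only; the distance bands, command names, and the `voice_disabled` profile attribute are assumptions, not details from the abstract.

```python
# Hypothetical sketch: tailoring the available commands to the
# measured distance between a user and a display.

NEAR, MID = 1.0, 3.0  # assumed distance bands, in meters

def commands_for_distance(distance_m, profile=None):
    """Return the command set enabled at a given user distance.

    Close users get fine-grained touch commands; distant users
    fall back to coarse gestures and, finally, voice only.
    """
    if distance_m < NEAR:
        commands = {"touch", "drag", "gesture", "voice"}
    elif distance_m < MID:
        commands = {"gesture", "voice"}
    else:
        commands = {"voice"}
    # An attribute of a user profile can further tailor the set.
    if profile and profile.get("voice_disabled"):
        commands.discard("voice")
    return commands
```

A caller would re-evaluate this mapping whenever the tracked distance crosses a band boundary, updating the display's interactivity accordingly.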
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing gesture discoverability with a mobile computing device. In one aspect, a method includes actions of obtaining gesture definition data for a particular gesture. The gesture definition data specifies a predefined onset position associated with commencement of the particular gesture, a predefined motion associated with completion of the particular gesture, a particular action that is triggered upon the completion of the particular gesture, and a visual indicator for visually indicating a progress toward the completion of the particular gesture as the predefined motion is performed. Additional actions include determining that an orientation of a mobile computing device matches the onset position of the particular gesture, providing the visual indicator, determining a motion, determining whether the motion matches the predefined motion, and determining whether to update the visual indicator.
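The onset-matching and progress-indication steps above can be sketched as two small checks. The orientation tolerance, the use of degrees, and the scalar motion measure are illustrative assumptions; the abstract does not specify how orientation or motion are represented.

```python
# Illustrative sketch of gesture-discoverability logic: detect the
# onset position, then report progress toward completing the motion.

def orientation_matches(orientation_deg, onset_deg, tol=10.0):
    """True if the device orientation (degrees, assumed) is within
    `tol` of the predefined onset position for the gesture."""
    return abs(orientation_deg - onset_deg) <= tol

def gesture_progress(motion_so_far, predefined_motion):
    """Fraction of the predefined motion completed, used to drive
    the visual indicator; 1.0 means the gesture is complete and
    its associated action should trigger."""
    if predefined_motion <= 0:
        return 1.0
    return max(0.0, min(1.0, motion_so_far / predefined_motion))
```

A device loop would call `orientation_matches` to decide when to show the indicator, then call `gesture_progress` on each sensor update to decide whether to update it or trigger the action.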
Abstract:
Described is a system and technique for providing the ability for a user to interact with one or more devices by performing gestures that mimic real-world physical analogies. More specifically, the techniques described herein provide the ability for a user to interact with a device while limiting conscious gesturing toward a computer component, by camouflaging computer-recognizable gestures within manipulations of a physical object.
Abstract:
Disclosed are techniques for detecting a first gesture performed at a first distance and at a second distance. A first aspect of a target may be manipulated according to the first gesture at the first distance, and a second aspect of the target may be manipulated according to the first gesture at the second distance.
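A minimal sketch of this distance-dependent dispatch is shown below. The single threshold, the target's `position` and `scale` aspects, and the scaling factor are all illustrative assumptions used to make the idea concrete.

```python
# Hedged sketch: the same gesture manipulates a different aspect of
# the target depending on the distance at which it is detected.

THRESHOLD_M = 1.5  # assumed boundary between first and second distance

def apply_gesture(target, delta, distance_m):
    """Apply a gesture's displacement `delta` to the aspect of
    `target` selected by the detection distance."""
    if distance_m < THRESHOLD_M:
        target["position"] += delta      # near: fine positioning
    else:
        target["scale"] += delta * 0.1   # far: coarse scaling
    return target
```

The same gesture recognizer output thus drives two different manipulations, selected purely by the measured distance.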
Abstract:
A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only to control a device locally, the processing may be in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, the processing may be in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy: determining whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
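The policy check described above can be sketched as a lookup against the three lists. The indicator colors and the choice to treat greylisted and unknown commands as non-private are illustrative assumptions; the abstract leaves those dispositions open.

```python
# Minimal sketch of a privacy-policy check over a command, assuming
# whitelist/blacklist/greylist membership decides the privacy mode.

PRIVATE, NON_PRIVATE, BLOCKED = "green", "red", "blocked"

def classify_command(command, whitelist, blacklist, greylist):
    """Return the privacy disposition of a command.

    Whitelisted commands run locally (private mode, first color);
    blacklisted commands are blocked; greylisted or unknown commands
    are treated here as non-private (second color), since data may
    leave the device.
    """
    if command in whitelist:
        return PRIVATE
    if command in blacklist:
        return BLOCKED
    return NON_PRIVATE
```

The returned disposition would then drive both the indicator color and whether the command is allowed to execute.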
Abstract:
Described is a system and technique to supplement speech commands with gestures. A user interface may be improved by providing the ability for a user to intuitively provide speech commands with the aid of gestures. By providing gestures contemporaneously with a speech command, the user may delimit the commencement and end of a command thereby allowing the system to provide an immediate response. In addition, gestures may be detected in order to determine a source of a provided speech command, and accordingly, user specific actions may be performed based on the identity of the source.
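The delimiting behavior described above can be sketched as scanning an event stream for speech bracketed by gesture events. The event representation (kind/payload tuples) and event names are assumptions made for illustration.

```python
# Illustrative sketch: a gesture delimits the commencement and end of
# a speech command, so the system can respond immediately at the end.

def extract_command(events):
    """Collect speech heard between a gesture-start and gesture-end
    event; return the delimited command, or None if incomplete."""
    words, active = [], False
    for kind, payload in events:
        if kind == "gesture_start":
            active, words = True, []     # command commences
        elif kind == "gesture_end" and active:
            return " ".join(words)       # command ends: respond now
        elif kind == "speech" and active:
            words.append(payload)
    return None  # no complete, gesture-delimited command was heard
```

Because the end gesture marks the command boundary explicitly, the system need not wait for a silence timeout before responding.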
Abstract:
An interaction spot is provided that may detect the presence of an electronic device such as a smartphone. A user may make a physical motion with the smartphone proximal to the interaction spot such as moving it upward. The interaction spot may communicate with a second device such as a light or a household appliance. A setting of the second device may be adjusted based on the motion of the electronic device.
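The motion-to-setting mapping above can be sketched as a small dispatch at the interaction spot. The motion names, the `brightness` setting, and the step size are illustrative assumptions.

```python
# Hypothetical sketch: an interaction spot maps a smartphone's motion
# to a setting adjustment on a paired second device (e.g. a light).

STEP = 10  # assumed brightness change per detected motion

def adjust_setting(device, motion):
    """Adjust a second device's setting based on the physical motion
    the user made with the phone near the interaction spot."""
    if motion == "up":
        device["brightness"] = min(100, device["brightness"] + STEP)
    elif motion == "down":
        device["brightness"] = max(0, device["brightness"] - STEP)
    return device
```

In practice the interaction spot would detect the phone's presence first (e.g. via NFC or proximity sensing, not shown), then relay the detected motion to the second device.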
Abstract:
Systems, methods, and media for causing an action to be performed on a user device are provided. In some implementations, the systems comprise: a first user device comprising at least one hardware processor that is configured to: detect a second user device in proximity to the first user device; receive a user input indicative of an action to be performed; determine a plurality of candidate devices that are capable of performing the action, wherein the plurality of candidate devices includes the second user device; determine a plurality of device types corresponding to the plurality of candidate devices; determine a plurality of priorities associated with the plurality of candidate devices based at least in part on the plurality of device types; select a target device from the plurality of candidate devices based at least in part on the plurality of priorities; and cause the action to be performed by the target device.
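The selection steps in the abstract above can be sketched as ranking candidates by a priority derived from their device type. The type-to-priority table and the device dictionaries are assumptions for illustration; the abstract does not specify how priorities are assigned.

```python
# Sketch of selecting a target device from candidates capable of
# performing an action, using priorities based on device types
# (lower number = preferred). The ranking table is an assumption.

TYPE_PRIORITY = {"tv": 0, "speaker": 1, "phone": 2}

def select_target(candidates):
    """Pick the highest-priority device from the candidate list;
    unknown device types rank last."""
    if not candidates:
        return None
    return min(candidates,
               key=lambda d: TYPE_PRIORITY.get(d["type"], 99))
```

The first user device would build `candidates` from nearby devices capable of the requested action (including the detected second device), then cause the selected target to perform it.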