Abstract:
Described are systems and methods for simulating a traditional television experience with streamed on-demand content. Streamed content may be buffered to allow a user to preview content even when the content is not organized in a traditional numbered channel format. A user may progress through a series of content items (e.g., simulated channel up/down) to preview the on-demand content without undue waiting for each content item to load and/or buffer.
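As a rough illustration, channel up/down over on-demand items could be backed by pre-buffering the neighbors of the currently playing item. The following Python sketch assumes a hypothetical `player` object exposing `prefetch()` and `play()`; none of these names come from the abstract:

```python
class ChannelSimulator:
    """Simulated channel up/down over an ordered list of on-demand items,
    pre-buffering neighbors so previews start without a long load."""

    def __init__(self, playlist, player, buffer_radius=1):
        self.playlist = playlist        # ordered content items ("channels")
        self.player = player            # assumed to expose prefetch()/play()
        self.index = 0
        self.buffer_radius = buffer_radius

    def _prefetch_neighbors(self):
        # Buffer the items the user is most likely to flip to next.
        for offset in range(-self.buffer_radius, self.buffer_radius + 1):
            i = (self.index + offset) % len(self.playlist)
            if i != self.index:
                self.player.prefetch(self.playlist[i])

    def _tune(self, step):
        self.index = (self.index + step) % len(self.playlist)
        self.player.play(self.playlist[self.index])
        self._prefetch_neighbors()

    def channel_up(self):
        self._tune(+1)

    def channel_down(self):
        self._tune(-1)
```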
Abstract:
A location of a first portion of a hand and a location of a second portion of the hand are detected within a working volume, the first portion and the second portion being in a horizontal plane. A visual representation is positioned on a display based on the location of the first portion and the second portion. A selection input is initiated when a distance between the first portion and the second portion meets a predetermined threshold, to select an object presented on the display, the object being associated with the location of the visual representation. A movement of the first portion of the hand and the second portion of the hand may also be detected in the working volume while the distance between the first portion and the second portion remains below the predetermined threshold and, in response, the object on the display can be repositioned.
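A minimal sketch of this pinch-style selection, assuming the two tracked portions are a thumb tip and a fingertip and that the threshold is a fixed distance; both assumptions go beyond the abstract:

```python
import math

PINCH_THRESHOLD = 30.0  # assumed threshold distance (e.g. millimeters)

def update_pointer(first_xyz, second_xyz):
    """Return (cursor_position, selecting) from two tracked hand points.

    first_xyz / second_xyz are (x, y, z) locations of the first and
    second portions of the hand within the working volume.
    """
    distance = math.dist(first_xyz, second_xyz)
    # Position the visual representation from both portions; the midpoint
    # is one plausible choice (the abstract does not specify the mapping).
    cursor = tuple((a + b) / 2 for a, b in zip(first_xyz, second_xyz))
    selecting = distance < PINCH_THRESHOLD
    return cursor, selecting
```

While `selecting` stays true across frames, movement of the midpoint can drive the drag-to-reposition behavior the abstract describes.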
Abstract:
Systems and techniques are disclosed for visually rendering a requested scene based on a virtual camera perspective request as well as a projection of two or more video streams. The video streams can be captured using two-dimensional cameras or three-dimensional depth cameras and may capture different perspectives. The projection may be an internal projection that maps out the scene in three dimensions based on the two or more video streams. An object internal or external to the scene may be identified, and the scene may be visually rendered based on a property of the object. For example, a scene may be visually rendered based on where a mobile object is located within the scene.
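The geometry is only loosely specified here, but the core idea of fusing multiple calibrated streams into one internal 3-D projection and rendering it from a requested viewpoint can be sketched. Everything below (the (N, 3) point layout, 4x4 extrinsics, orthographic rendering) is an illustrative assumption, not the disclosed method:

```python
import numpy as np

def fuse_point_clouds(views):
    """Merge per-camera point clouds into one world-space scene cloud.

    `views` is a list of (points, extrinsic) pairs: `points` is an (N, 3)
    array in camera coordinates (depth cameras give this directly; 2-D
    cameras would need stereo or another depth cue), `extrinsic` a 4x4
    camera-to-world transform.
    """
    world = []
    for points, extrinsic in views:
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        world.append((homogeneous @ extrinsic.T)[:, :3])
    return np.vstack(world)

def render_depth(scene_points, camera_origin, size=64):
    """Crude orthographic depth render standing in for the requested-scene
    rasterization; x/y are assumed pre-scaled to roughly [-1, 1]."""
    rel = scene_points - np.asarray(camera_origin)
    image = np.full((size, size), np.inf)
    u = np.clip(((rel[:, 0] + 1) * size / 2).astype(int), 0, size - 1)
    v = np.clip(((rel[:, 1] + 1) * size / 2).astype(int), 0, size - 1)
    for ui, vi, depth in zip(u, v, rel[:, 2]):
        if 0 < depth < image[vi, ui]:
            image[vi, ui] = depth   # keep the nearest surface per pixel
    return image
```

An identified object's property (for example, a mobile object's location) could then steer `camera_origin` before rendering.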
Abstract:
A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only for controlling a device locally, the system may be in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, the system may be in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy that determines whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
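The whitelist/blacklist/greylist policy lends itself to a direct sketch. The example commands, indicator colors, and the choice to treat greylisted and unknown commands as non-private are assumptions; the abstract only names the categories:

```python
from enum import Enum

class Privacy(Enum):
    PRIVATE = "green"        # first indicator color (assumed)
    NON_PRIVATE = "red"      # second indicator color (assumed)

WHITELIST = {"volume_up", "power_off"}   # handled locally only
BLACKLIST = {"upload_audio"}             # known to reach a remote site
GREYLIST = {"voice_search"}              # privacy depends on context

def classify(command):
    if command in WHITELIST:
        return Privacy.PRIVATE           # sensor data stays on the device
    if command in BLACKLIST or command in GREYLIST:
        return Privacy.NON_PRIVATE
    # Not present in the privacy command library: fail safe to non-private.
    return Privacy.NON_PRIVATE

def handle(command, allow_non_private=False):
    mode = classify(command)
    print(f"indicator -> {mode.value}")  # stand-in for driving a real LED
    if mode is Privacy.NON_PRIVATE and not allow_non_private:
        return "blocked"                 # a non-private command may be blocked
    return "executed"
```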
Abstract:
In one implementation, a computer program product can be tangibly embodied on a non-transitory computer-readable storage medium and include instructions that, when executed, are configured to detect a gesture defined by an interaction of a user within a working volume defined above a surface. Based on the detected gesture, a gesture cursor control mode can be initiated within the computing device such that the user can manipulate a cursor by moving a portion of the user's hand within the working volume. A location of the portion of the user's hand relative to the surface can be identified within the working volume, and the cursor can be positioned within a display portion of the computing device at a location corresponding to the identified location of the portion of the hand within the working volume.
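The location-to-cursor correspondence could be as simple as a normalized linear mapping from the working volume's footprint to the display. The linear form and the clamping behavior below are assumptions; the abstract only requires that the cursor location correspond to the hand location:

```python
def hand_to_cursor(hand_xy, work_bounds, screen_size):
    """Map a tracked hand position above the surface to display pixels.

    hand_xy: (x, y) of the tracked portion of the hand over the surface.
    work_bounds: ((x_min, y_min), (x_max, y_max)) footprint of the
    working volume. screen_size: (width, height) in pixels.
    """
    (x_min, y_min), (x_max, y_max) = work_bounds
    width, height = screen_size
    nx = (hand_xy[0] - x_min) / (x_max - x_min)
    ny = (hand_xy[1] - y_min) / (y_max - y_min)
    # Clamp so the cursor stays on screen if the hand drifts out of bounds.
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    return int(nx * (width - 1)), int(ny * (height - 1))
```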
Abstract:
Aspects of the present disclosure relate to controlling the functions of various devices based on spatial relationships. In one example, a system may include a depth and visual camera and a computer (networked or local) for processing data from the camera. The computer may be connected (wired or wirelessly) to any number of devices that can be controlled by the system. A user may use a mobile device to define a location in space relative to the camera. The location in space may then be associated with a controlled device as well as one or more control commands. When the location in space is subsequently occupied, the one or more control commands may be used to control the controlled device. In this regard, a user may switch a device on or off, increase volume or speed, and so on, simply by occupying the location in space.
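One way to realize the occupy-a-location behavior is to test tracked 3-D points against a stored region each camera frame, firing the associated commands on entry. The sphere-shaped region, the `device.send()` interface, and the edge-triggered firing are illustrative choices, not details from the disclosure:

```python
import math

class SpatialTrigger:
    """Binds a user-defined location in space (relative to the camera)
    to a controlled device and its control commands."""

    def __init__(self, center_xyz, radius, device, commands):
        self.center = center_xyz
        self.radius = radius
        self.device = device        # assumed to expose send(command)
        self.commands = commands    # e.g. ["toggle_power", "volume_up"]
        self._occupied = False

    def update(self, tracked_points):
        """Call once per camera frame with tracked (x, y, z) points."""
        inside = any(math.dist(p, self.center) <= self.radius
                     for p in tracked_points)
        if inside and not self._occupied:
            # Fire only on entry so lingering in place doesn't repeat.
            for command in self.commands:
                self.device.send(command)
        self._occupied = inside
```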