Abstract:
An enhanced interface for voice and video communications, in which a gesture of a user is recognized from a sequence of camera images, and a user interface is provided that includes a control and a representation of the user. The process also includes causing the representation to interact with the control based on the recognized gesture, and controlling a telecommunication session based on the interaction.
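As a rough illustration only (the abstract specifies no implementation), a minimal Python sketch of this flow, assuming a hypothetical recognize_gesture() detector and normalized UI coordinates:

    from dataclasses import dataclass

    @dataclass
    class Control:
        name: str            # e.g. "mute" or "end_call"
        x: float             # horizontal position in normalized UI coordinates
        width: float = 0.2

    def recognize_gesture(frames):
        """Hypothetical camera-based gesture recognizer."""
        return "swipe_right"                     # pretend the user swiped right

    def move_representation(pos, gesture):
        """Move the on-screen representation of the user per the gesture."""
        return pos + 0.1 if gesture == "swipe_right" else pos - 0.1

    def control_session(control):
        print(f"telecommunication session action: {control.name}")

    controls = [Control("mute", 0.3), Control("end_call", 0.7)]
    pos = 0.25                                   # representation's current position
    gesture = recognize_gesture(frames=[])       # camera frames would go here
    pos = move_representation(pos, gesture)
    for c in controls:                           # interaction: representation overlaps a control
        if abs(pos - c.x) < c.width / 2:
            control_session(c)

Here a swipe moves the representation over the "mute" control, and the overlap triggers the session action.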
Abstract:
A method for use with a head-mounted display in a physical environment includes obtaining depth information of the physical environment and capturing a visual image of the physical environment. The method also includes determining a spatial relationship between a user of the head-mounted display and one or more physical objects included in the physical environment based on the depth information. The visual image is then segmented based on the spatial relationship to generate a segmented image that includes the one or more physical objects. The segmented image is then overlaid on a virtual image to display both the virtual image and the one or more physical objects on the head-mounted display.
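One way to picture the segmentation-and-overlay step is the NumPy sketch below; it assumes the depth map and both images are already aligned, and the 1.5 m cutoff standing in for the user-object spatial relationship is illustrative:

    import numpy as np

    H, W = 240, 320
    depth = np.random.uniform(0.5, 5.0, (H, W))              # metres from the user
    visual = np.random.randint(0, 256, (H, W, 3), np.uint8)  # pass-through camera image
    virtual = np.zeros((H, W, 3), np.uint8)                  # rendered virtual frame

    near = depth < 1.5                                       # physical objects close to the user
    segmented = np.where(near[..., None], visual, 0)         # segmented image of those objects

    # Overlay: the nearby physical objects are shown on top of the virtual scene.
    display = np.where(near[..., None], segmented, virtual)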
Abstract:
Disclosed is a method and apparatus for biometric-based media data sharing. The method may include initiating, in a first device, biometric data capture of a user based, at least in part, on playback of media data by the first device. The method may also include determining that captured biometric data of the user does not correspond with biometric data associated with an authorized user of the first device. Furthermore, the method may also include, in response to a failure by the first device to match the captured biometric data, establishing that the user is an authorized user of a second device based, at least in part, on the captured biometric data. The method may also include sharing the media data with the second device.
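A toy Python sketch of the decision flow, assuming exact-match template comparison (a real matcher would be fuzzy) and invented device and user names:

    def matches(captured, enrolled):
        return captured == enrolled              # stand-in for a biometric matcher

    device1_authorized = "template_alice"        # enrolled on the first device
    device2_authorized = {"template_bob": "bob_tablet"}   # users of second devices

    captured = "template_bob"                    # captured when playback starts
    if not matches(captured, device1_authorized):
        # Failure to match on the first device: check second-device users.
        second_device = device2_authorized.get(captured)
        if second_device:
            print(f"sharing media data with {second_device}")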
Abstract:
Systems, apparatus and methods for determining a gesture are presented. According to some aspects, the disclosed systems, apparatus and methods determine a gesture by comparing different images and deducing a direction and/or distance based on a relative change in the size of a palm across the different images. After a reference palm size is registered, subsequent palm sizes are compared to the reference to determine whether, and by how much, the hand is moving. The hand gesture is determined based on these relative changes in hand movement.
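Under a pinhole-camera model, apparent size scales inversely with distance, so a registered reference size yields both direction and distance. A small illustrative Python sketch (the 40 cm registration distance and pixel sizes are invented):

    reference_size = 120.0                       # palm width in pixels at registration
    reference_dist = 40.0                        # assumed distance (cm) at registration

    def palm_motion(current_size):
        # size ~ 1/distance under a pinhole model, so distance = d_ref * s_ref / s
        distance = reference_dist * reference_size / current_size
        direction = "toward camera" if current_size > reference_size else "away from camera"
        return direction, distance

    for size in (150.0, 96.0):                   # subsequent palm measurements
        direction, distance = palm_motion(size)
        print(f"palm {size:.0f}px -> {direction}, ~{distance:.0f} cm away")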
Abstract:
Various arrangements for defining a marker are presented. A first defined marker presented by a public display device may be determined to be insufficient for use by a head-mounted display. The first defined marker may be used as a reference point for positioning information for display by the head-mounted display. In response to determining that the first defined marker is insufficient, a second marker displayed by the public display device may be defined. The second marker may have a display characteristic different from the first defined marker. The second defined marker may then be used as the reference point for positioning the information for display by the head-mounted display. An indication of the second marker may be transmitted to the head-mounted display.
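A sketch of the fallback logic in Python, assuming a hypothetical detection_confidence() score reported back from the head-mounted display; the threshold and marker fields are illustrative:

    def detection_confidence(marker):
        """Hypothetical trackability score for a displayed marker."""
        return 0.4 if marker["size_px"] < 64 else 0.9

    marker1 = {"id": 1, "size_px": 48, "contrast": "low"}
    if detection_confidence(marker1) < 0.6:      # first marker insufficient
        # Define a second marker with a different display characteristic.
        marker2 = {"id": 2, "size_px": 128, "contrast": "high"}
        print("public display now renders", marker2)
        print("indication transmitted to HMD:", {"use_marker": marker2["id"]})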
Abstract:
Various arrangements for presenting private information are presented. Private information to be displayed via a head-mounted display to a user may be identified. A marker displayed by a public display device may also be identified. This public display device may be visible in the vicinity of the user. The private information and an indication of the marker may be output to the head-mounted display of the user, such that the private information is displayed by the head-mounted display in relation to the marker displayed by the public display device.
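The positioning step might look like the Python sketch below, assuming the marker's centre has already been located in the head-mounted display's camera frame; coordinates and offset are invented:

    marker_pos = (320, 180)                      # marker centre in the HMD camera view
    offset = (0, -60)                            # render private info just above the marker

    private_info = "account balance: ****"
    render_pos = (marker_pos[0] + offset[0], marker_pos[1] + offset[1])
    print(f"draw {private_info!r} at {render_pos} on the head-mounted display")

Because the public display shows only the marker, bystanders see nothing sensitive; only the head-mounted display composites the private text.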
Abstract:
Hover detection technology, in which an image is captured from a camera while an illumination source illuminates an area in front of a display surface, and the captured image is analyzed to detect an object within an anticipated input region based on illumination from the illumination source. User input is determined based on the object detected within the anticipated input region, and an application is controlled based on the determined user input.
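A NumPy sketch of the detection step, assuming the illumination source makes nearby objects appear bright in the camera image; the region bounds and brightness threshold are illustrative:

    import numpy as np

    frame = np.zeros((240, 320), np.uint8)       # camera image under illumination
    frame[100:120, 150:170] = 220                # a hovering fingertip reflects light

    region = frame[80:160, 120:200]              # anticipated input region near the display
    bright = region > 200                        # pixels lit by the illumination source
    if bright.sum() > 50:                        # enough lit pixels -> object detected
        ys, xs = np.nonzero(bright)
        hover_point = (xs.mean() + 120, ys.mean() + 80)   # back to full-frame coordinates
        print(f"hover input at {hover_point}")   # use this to control the application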