Abstract:
Embodiments of the present invention are directed toward controlling electronic devices based on hand gestures detected by sensing the topography of a portion of a user's body. For example, pressure data indicative of a user's bone and tissue position corresponding to a certain movement, position, and/or pose of a user's hand may be detected. An electromyographic (EMG) sensor coupled to the user's skin can also be used to determine gestures made by the user. These sensors can be coupled to a camera that, based on recognized gestures, captures images of a device. The device can then be identified and controlled.
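A non-limiting sketch of this flow follows; the classifier, camera, and device-recognition helpers are hypothetical stand-ins, not the claimed implementation:

    from dataclasses import dataclass

    @dataclass
    class SensorSample:
        pressure: list[float]  # topography readings (bone/tissue position)
        emg: list[float]       # electromyographic readings from the skin

    def classify_gesture(sample):
        # Stand-in classifier: a real system would match the pressure and
        # EMG features against trained gesture templates.
        return "capture" if sample.emg and max(sample.emg) > 0.5 else "rest"

    def capture_image():
        # Stand-in for the coupled camera; returns a placeholder frame.
        return b"\x00" * 64

    def identify_device(frame):
        # Stand-in for recognizing the device in the captured image.
        return "living-room-lamp"

    def on_sample(sample):
        if classify_gesture(sample) == "capture":
            device = identify_device(capture_image())
            print(f"controlling {device}")

    on_sample(SensorSample(pressure=[0.1], emg=[0.7]))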
Abstract:
Methods and devices enable displaying selected portions of one or more webpages in user-defined view windows presented on a computing device's display desktop. A selected webpage may be rendered into a full-sized render buffer so that a rendering engine can render all the elements properly. One or more view windows are created on the display desktop that show user-selected portions of the render buffer. In this manner, users can select portions of one or more websites for presentation on their computing device desktop and position the selected portions at their preferred locations.
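A minimal sketch of the render-buffer/view-window relationship, assuming the buffer is a plain 2-D pixel array rather than a real rendering engine's output:

    def make_render_buffer(width, height):
        # Full-sized buffer so the page's elements lay out correctly.
        return [[0] * width for _ in range(height)]

    def view_window(buffer, x, y, w, h):
        # Return only the user-selected rectangle of the rendered page.
        return [row[x:x + w] for row in buffer[y:y + h]]

    page = make_render_buffer(1920, 4000)   # full page render
    clip = view_window(page, x=100, y=250, w=320, h=240)
    print(len(clip), len(clip[0]))          # 240 320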
Abstract:
A device includes an image capture device configured to capture a first video. The device includes a memory configured to store one or more videos. The device further includes a processor coupled to the memory. The processor is configured to concatenate the first video and a second video to generate a combined video. The second video is included in the one or more videos or is accessible via a network. The second video is selected by the processor based on a similarity between a first set of characteristics and a second set of characteristics. The first set of characteristics corresponds to the first video. The second set of characteristics corresponds to the second video.
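One plausible reading of the similarity-based selection, assuming cosine similarity over illustrative feature vectors (the abstract does not specify the metric):

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def select_second_video(first_features, library):
        # library: list of (video_id, feature_vector) pairs
        return max(library,
                   key=lambda item: cosine_similarity(first_features, item[1]))[0]

    library = [("beach.mp4", [0.9, 0.1, 0.3]), ("city.mp4", [0.2, 0.8, 0.5])]
    second = select_second_video([0.85, 0.15, 0.25], library)
    combined = ["first.mp4", second]   # concatenation order: first, then second
    print(combined)                    # ['first.mp4', 'beach.mp4']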
Abstract:
A mobile platform includes one or more haptic feedback elements that are positioned in regions of the mobile platform proximate to a facial region of a user. By way of example, the haptic feedback elements may be electric force elements that overlay a display. The mobile platform is capable of determining a current location and receiving a desired location, which may be, e.g., a location provided by the user, a location with superior signal strength, or the location of another mobile platform. The mobile platform determines directions from the current location to the desired location and translates the directions into control signals. Haptic signals are produced at the facial region of the user by the haptic feedback elements in response to the control signals, thereby providing the directions to the user.
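A rough sketch of translating directions into haptic control signals, assuming flat-plane coordinates and a hypothetical four-element haptic layout (up/down/left/right over the display):

    import math

    def direction_to_control(current, desired):
        dx = desired[0] - current[0]
        dy = desired[1] - current[1]
        bearing = math.degrees(math.atan2(dx, dy)) % 360
        # Quantize the bearing onto one of four haptic elements.
        element = ["up", "right", "down", "left"][int((bearing + 45) % 360 // 90)]
        return {"element": element, "intensity": min(1.0, math.hypot(dx, dy) / 100)}

    print(direction_to_control(current=(0.0, 0.0), desired=(30.0, 40.0)))
    # {'element': 'up', 'intensity': 0.5}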
Abstract:
A system, a method, and a computer program product for managing information for an interface device are provided. The system detects either a present physical encounter between a user of the interface device and a person, or a non-physical encounter between the user and a person. The system determines whether a detected present physical encounter is an initial encounter or a subsequent encounter, and adds content associated with the person to a database of previously encountered persons when the present physical encounter is an initial encounter. When the present physical encounter is a subsequent encounter or a present non-physical encounter is detected, the system determines whether the person is known by the user, and presents information corresponding to the person on the interface device when the person is not known by the user.
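A simplified sketch of the encounter logic, assuming an in-memory store and stubbing out person identification (face or voice matching) entirely:

    known_by_user = {"alice"}       # people the user already knows
    encountered = set()             # database of previously encountered persons

    def handle_encounter(person_id, physical=True):
        if physical and person_id not in encountered:
            encountered.add(person_id)  # initial encounter: store content
            return None
        # Subsequent or non-physical encounter: surface info if unfamiliar.
        if person_id not in known_by_user:
            return f"reminder: you met {person_id} before"
        return None

    handle_encounter("bob")             # initial encounter: added silently
    print(handle_encounter("bob"))      # subsequent encounter: reminder shown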
Abstract:
An apparatus, a method, and a computer program product for detecting a gesture of a body part relative to a surface are provided. The apparatus determines if the body part is in proximity of the surface. If the body part is in proximity of the surface, the apparatus determines if electrical activity sensed from the body part is indicative of contact between the body part and the surface. If the body part is in contact with the surface, the apparatus determines if motion activity sensed from the body part is indicative of the gesture.
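The three-stage gating lends itself to a direct sketch; the thresholds below are illustrative assumptions, and each stage runs only if the previous one passes:

    def detect_gesture(proximity_cm, emg_level, motion_energy):
        if proximity_cm > 5.0:
            return False               # body part not near the surface
        if emg_level < 0.4:
            return False               # no electrical sign of contact
        return motion_energy > 0.6     # motion consistent with the gesture

    print(detect_gesture(proximity_cm=2.0, emg_level=0.7, motion_energy=0.8))  # True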
Abstract:
Techniques are described to discern between intentional and unintentional gestures. A device receives a first input from one or more sensors coupled to a flexible material, which detect input provided by a user's manipulation of the flexible material. In addition, the device receives a second input from one or more environmental sensors coupled to the user, which detect environmental conditions associated with the user. The device correlates the first input and the second input to determine whether the first input is an intentional input by the user.
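One way the correlation might work, assuming a simple subtractive model in which ambient motion explains away fabric deformation (the abstract does not specify the model):

    def is_intentional(fabric_signal, ambient_motion, threshold=0.5):
        # Discount the portion of the signal attributable to the environment.
        residual = fabric_signal - ambient_motion
        return residual > threshold

    print(is_intentional(fabric_signal=0.9, ambient_motion=0.1))  # True: deliberate pinch
    print(is_intentional(fabric_signal=0.9, ambient_motion=0.8))  # False: jostled by movement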
Abstract:
Various arrangements are presented to facilitate the handoff of presentation of content between a head mounted display (HMD) and another presentation device, such as a television. For example, based upon separate events, video and audio being presented to a user via a presentation device may be handed off to an HMD that the user is wearing for continued presentation. In response to a first reference event occurring, the HMD may initiate continued presentation of the video content that was being viewed by the user on the presentation device. At a later time, in response to a second reference event, the HMD may initiate continued presentation of the audio content.
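A sketch of the staged handoff as a small state machine; the event names are illustrative, not taken from the claims:

    state = {"video": "tv", "audio": "tv"}

    def on_event(event):
        if event == "first_reference":      # e.g., user walks away from the TV
            state["video"] = "hmd"
        elif event == "second_reference":   # e.g., user leaves audible range
            state["audio"] = "hmd"
        return dict(state)

    print(on_event("first_reference"))   # {'video': 'hmd', 'audio': 'tv'}
    print(on_event("second_reference"))  # {'video': 'hmd', 'audio': 'hmd'}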
Abstract:
A method, an apparatus, and a computer program product render a graphical user interface (GUI) on an optical see-through head mounted display (HMD). The apparatus obtains a location on the HMD corresponding to a user interaction with a GUI object displayed on the HMD. The GUI object may be an icon on the HMD, and the user interaction may be an attempt by the user to select the icon through an eye gaze or gesture. The apparatus determines whether a spatial relationship between the location of the user interaction and the GUI object satisfies a criterion, and adjusts a parameter of the GUI object when the criterion is not satisfied. The parameter may be one or more of a size of the GUI object, a size of a boundary associated with the GUI object, or a location of the GUI object.
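A minimal sketch, assuming the criterion is a maximum pixel distance between the interaction point and the GUI object's center, and that the adjusted parameter is the selection boundary:

    import math

    def adjust_icon(icon, interaction_xy, max_distance=20.0):
        dist = math.dist(interaction_xy, (icon["x"], icon["y"]))
        if dist > max_distance:
            # Criterion not satisfied: enlarge the selection boundary.
            icon["boundary"] *= 1.25
        return icon

    icon = {"x": 100.0, "y": 100.0, "boundary": 32.0}
    print(adjust_icon(icon, interaction_xy=(130.0, 100.0)))  # boundary grows to 40.0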
Abstract:
Various arrangements for customizing a configuration of a mobile device are presented. The mobile device may collect proximity data. The mobile device may determine that a user has gripped the mobile device based on the proximity data. A finger length of the user may be determined using the proximity data. Configuration of the mobile device may be customized at least partially based on the determined finger length of the user.
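A hedged sketch of estimating finger length from a row of edge proximity sensors; the sensor pitch, threshold, and configuration rule are assumptions for illustration:

    def finger_length(profile, pitch_mm=5.0, touched=0.5):
        # Longest contiguous run of touched sensors approximates the finger.
        best = run = 0
        for reading in profile:
            run = run + 1 if reading > touched else 0
            best = max(best, run)
        return best * pitch_mm

    length = finger_length([0.1, 0.8, 0.9, 0.9, 0.7, 0.2, 0.1])
    layout = "one-handed" if length < 25.0 else "standard"
    print(length, layout)   # 20.0 one-handed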