Abstract:
A video sequence is played and may be displayed on a visual output device. A user attention level is calculated for a section of the video sequence and is associated with that section.
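As an illustration only, the following is a minimal Python sketch of associating a user attention level with a section of a video sequence, assuming attention is sampled externally as per-timestamp booleans (e.g. "gaze on screen"); the VideoSection structure, the sampling format, and the averaging rule are assumptions made for this sketch, not the claimed method.

```python
from dataclasses import dataclass


@dataclass
class VideoSection:
    start_s: float                 # section start, seconds into the sequence
    end_s: float                   # section end, seconds into the sequence
    attention_level: float = 0.0   # filled in by score_attention()


def score_attention(section: VideoSection,
                    samples: list[tuple[float, bool]]) -> float:
    """Store and return the fraction of attention samples inside the section
    that report the user as attentive."""
    in_section = [attentive for t, attentive in samples
                  if section.start_s <= t < section.end_s]
    section.attention_level = sum(in_section) / len(in_section) if in_section else 0.0
    return section.attention_level


# Hypothetical samples: (timestamp in seconds, user looking at the display?)
samples = [(0.5, True), (1.5, True), (2.5, False), (3.5, True)]
print(score_attention(VideoSection(start_s=0.0, end_s=4.0), samples))  # 0.75
```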
Abstract:
Presented is a method of adapting user interface elements. The method includes identifying a state of a computer application, adapting one or more user interface elements associated with the computer application based on the state of the computer application, and displaying the one or more user interface elements.
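A minimal sketch of such state-dependent adaptation, assuming a simple rule table; the state names, element names, and visibility rules are hypothetical and only illustrate the identify, adapt, display flow described in the abstract.

```python
# Hypothetical rule table: which UI elements are shown for each application state.
UI_RULES = {
    "editing":  {"save_button": True,  "play_button": False},
    "playback": {"save_button": False, "play_button": True},
    "idle":     {"save_button": False, "play_button": False},
}


def adapt_ui(app_state: str) -> dict[str, bool]:
    """Return the element visibilities to display for the identified state."""
    if app_state not in UI_RULES:
        raise ValueError(f"unknown application state: {app_state}")
    return UI_RULES[app_state]


print(adapt_ui("editing"))   # {'save_button': True, 'play_button': False}
```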
Abstract:
Systems, methods, and machine-readable and executable instructions are provided for hand gesture recognition. A method for hand gesture recognition can include detecting, with an image input device in communication with a computing device, movement of an object. A hand pose associated with the moving object is recognized and a response corresponding to the hand pose is initiated.
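For illustration, here is a minimal Python sketch of the detect-movement, recognize-pose, initiate-response pipeline, assuming frames arrive as grayscale NumPy arrays; the frame-difference threshold, the stubbed pose classifier, and the pose-to-response mapping are assumptions of this sketch.

```python
import numpy as np


def movement_detected(prev_frame: np.ndarray, frame: np.ndarray,
                      threshold: float = 10.0) -> bool:
    """Flag movement when the mean absolute frame difference exceeds a threshold."""
    return float(np.abs(frame.astype(float) - prev_frame.astype(float)).mean()) > threshold


def recognize_pose(frame: np.ndarray) -> str:
    """Stub classifier; a real system would run a trained hand-pose model here."""
    return "open_palm" if frame.mean() > 128 else "fist"


# Hypothetical mapping from a recognized pose to the response that is initiated.
RESPONSES = {"open_palm": "pause playback", "fist": "resume playback"}


def handle_frame(prev_frame: np.ndarray, frame: np.ndarray) -> str | None:
    if movement_detected(prev_frame, frame):
        return RESPONSES.get(recognize_pose(frame))
    return None


print(handle_frame(np.zeros((4, 4)), np.full((4, 4), 200.0)))  # pause playback
```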
Abstract:
Provided is a method of assigning user interaction controls. In a scenario where multiple co-present users simultaneously provide user inputs to a computing device, the method assigns a first level of user interaction controls related to an object on the computing device to a single user, and a second level of user interaction controls related to the same object to all co-present simultaneous users of the computing device.
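A minimal sketch of the two-level assignment, under the assumption that the first level covers controls held by a single user and the second level covers controls shared by all co-present users; the specific control names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class SharedObject:
    owner: str   # the single user who holds the first level of controls
    first_level: set = field(default_factory=lambda: {"delete", "rename"})
    second_level: set = field(default_factory=lambda: {"move", "resize", "annotate"})

    def allowed_controls(self, user: str) -> set:
        """Second-level controls for every co-present user; first-level only for the owner."""
        controls = set(self.second_level)
        if user == self.owner:
            controls |= self.first_level
        return controls


obj = SharedObject(owner="alice")
print(obj.allowed_controls("alice"))  # includes delete/rename as well
print(obj.allowed_controls("bob"))    # move/resize/annotate only
```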
Abstract:
The present application discloses a method for generating a shortcut for launching computer program functionality on a computer. The method comprises the steps of monitoring a sequence of user actions resulting in the launch of the functionality; determining a launch frequency of the functionality; determining a complexity rating for the launch; comparing the launch frequency and the complexity rating with a predefined criterion; and generating the shortcut for activating the functionality if the predefined criterion is met. The present application further discloses a computer program product implementing the above method and a computer comprising such a computer program.
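A minimal sketch of the monitor, rate, compare, generate loop; taking the number of user actions in the launch sequence as the complexity rating and a frequency/complexity threshold pair as the predefined criterion are assumptions made for this sketch.

```python
from collections import Counter

launch_counts: Counter = Counter()            # launch frequency per functionality
launch_sequences: dict[str, list[str]] = {}   # last observed action sequence per functionality


def record_launch(functionality: str, actions: list[str]) -> None:
    """Monitor a sequence of user actions that resulted in launching the functionality."""
    launch_counts[functionality] += 1
    launch_sequences[functionality] = actions


def maybe_generate_shortcut(functionality: str,
                            min_frequency: int = 5,
                            min_complexity: int = 3) -> str | None:
    """Generate a shortcut once the launch is both frequent and complex enough."""
    frequency = launch_counts[functionality]
    complexity = len(launch_sequences.get(functionality, []))
    if frequency >= min_frequency and complexity >= min_complexity:
        return f"shortcut:{functionality}"
    return None


for _ in range(5):
    record_launch("export_pdf", ["File", "Export", "PDF", "OK"])
print(maybe_generate_shortcut("export_pdf"))   # shortcut:export_pdf
```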
Abstract:
In one example, a method for multimodal human-machine interaction includes sensing a body posture of a participant using a camera (605) and evaluating the body posture to determine a posture-based probability of communication modalities from the participant (610). The method further includes detecting control input through a communication modality from the participant to the multimedia device (615) and weighting the control input by the posture-based probability (620).
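A minimal sketch of weighting a control input by the posture-based probability; the posture labels, modality names, and probability table are assumptions standing in for the camera-based posture evaluation described above.

```python
def evaluate_posture(posture: str) -> dict[str, float]:
    """Stub: map a coarse posture label to per-modality probabilities."""
    table = {
        "facing_screen": {"gesture": 0.7, "speech": 0.3},
        "turned_away":   {"gesture": 0.1, "speech": 0.9},
    }
    return table.get(posture, {"gesture": 0.5, "speech": 0.5})


def weight_input(posture: str, modality: str, confidence: float) -> float:
    """Scale a recognizer's confidence by how likely this modality is given the posture."""
    return confidence * evaluate_posture(posture).get(modality, 0.0)


print(weight_input("turned_away", "speech", 0.8))   # 0.72
print(weight_input("turned_away", "gesture", 0.8))  # 0.08
```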
Abstract:
A method and system for attention-free user input on a computing device are described. A user input is recognized irrespective of where it is entered on a writing surface (such as a digitizer), without the user having to make visual contact with the writing surface.
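One way to make recognition independent of the area of entry is to normalize each stroke before recognition; the sketch below illustrates that idea only, with the recognizer itself stubbed out.

```python
def normalize(stroke: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Translate a stroke so its first point sits at the origin, discarding where
    on the writing surface it was entered."""
    x0, y0 = stroke[0]
    return [(x - x0, y - y0) for x, y in stroke]


def recognize(stroke: list[tuple[float, float]]) -> str:
    normalized = normalize(stroke)
    # A real system would feed `normalized` into a handwriting recognizer here.
    return f"{len(normalized)}-point stroke"


# The same gesture entered in two different corners of the digitizer normalizes identically.
assert normalize([(0, 0), (1, 1)]) == normalize([(100, 50), (101, 51)])
print(recognize([(100, 50), (101, 51), (102, 50)]))   # 3-point stroke
```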
Abstract:
A document scanner comprises a scan bed and a processor means adapted to: analyse a scanned image to detect one or more defined markings on the scanned document, in addition to and adjacent to the document content the user intends to scan; and, in response to detection of one or more defined markings, control the document scanner according to those markings.
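A minimal sketch of the final control step, assuming the image analysis has already reduced the detected markings to labels; the marking names and the actions they trigger are hypothetical.

```python
# Hypothetical mapping from a detected marking to the scanner action it triggers.
MARKING_ACTIONS = {
    "crop_corner":  "crop scan to the marked region",
    "rotate_arrow": "rotate the scanned image",
    "copy_tick":    "send the scan to a printer",
}


def control_scanner(detected_markings: list[str]) -> list[str]:
    """Return the actions triggered by markings found adjacent to the scanned content."""
    return [MARKING_ACTIONS[m] for m in detected_markings if m in MARKING_ACTIONS]


print(control_scanner(["crop_corner", "copy_tick"]))
```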
Abstract:
A user interface device (105) comprising a camera (110, 205) that captures an image of a user's (115) face and fingers (125), and a processor (210, 230) that determines the spatial location of the user's (115) facial features (120) and fingers (125) using the captured image. The processor (210, 230) further determines where on a screen (130) of the user interface device (105) the user (115) is viewing, and monitors the user's (115) facial features (120) and fingers (125) for indications of manipulation of on-screen content of the user interface device (105).
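For illustration, a minimal sketch of the two monitoring tasks: mapping a detected eye-centre position in the camera frame to a screen coordinate, and treating a fingertip near the screen plane as a manipulation; the linear gaze mapping, screen size, and distance threshold are assumptions of this sketch.

```python
SCREEN_W, SCREEN_H = 1920, 1080   # assumed screen resolution of the device


def gaze_point(eye_center: tuple[float, float],
               frame_size: tuple[int, int]) -> tuple[int, int]:
    """Map the eye-centre position in the camera frame to a screen coordinate
    (mirrored horizontally, since the camera faces the user)."""
    fx, fy = frame_size
    x, y = eye_center
    return int((1 - x / fx) * SCREEN_W), int((y / fy) * SCREEN_H)


def finger_manipulating(finger_depth_mm: float, threshold_mm: float = 50.0) -> bool:
    """Treat a fingertip within the threshold distance of the screen as manipulating content."""
    return finger_depth_mm <= threshold_mm


print(gaze_point((320, 240), (640, 480)))  # centre of a 640x480 frame -> (960, 540)
print(finger_manipulating(30.0))           # True
```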