Abstract:
Systems, apparatus, articles, and methods are described including operations for eye tracking based selective accentuation of portions of a display.
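As a rough illustration of the idea (not the patent's method), a minimal Python sketch might brighten whichever display region the tracked gaze falls in and dim the others; the Region class, the boost/dim factors, and the gaze coordinates are all invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Region:
        x: int
        y: int
        w: int
        h: int
        brightness: float = 1.0

        def contains(self, px, py):
            return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

    def accentuate(regions, gaze_x, gaze_y, boost=1.5, dim=0.7):
        """Boost the region under the gaze point and dim the rest."""
        for r in regions:
            r.brightness = boost if r.contains(gaze_x, gaze_y) else dim

    regions = [Region(0, 0, 400, 300), Region(400, 0, 400, 300)]
    accentuate(regions, gaze_x=120, gaze_y=80)
    print([r.brightness for r in regions])  # [1.5, 0.7]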
Abstract:
Apparatuses, systems, media, and/or methods may involve creating content. A property component may be added to a media object to impart a perceptual property and/or a contextual property to the media object. The property component may be added responsive to a user operation that is independent of direct user access to computer source code. An event corresponding to the property component may be mapped to an action for the media object, likewise responsive to a user operation that is independent of direct user access to computer source code. A graphical user interface may be rendered to create the content. In addition, the media object may be modified based on the action, in response to the event, when the created content including the media object is utilized.
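The event-to-action mapping can be pictured as a small dispatch table attached to each media object, so that authoring happens by registering bindings rather than editing source code. The sketch below is a hypothetical Python rendering of that idea; the class, property, and event names are all invented.

    class MediaObject:
        def __init__(self, name):
            self.name = name
            self.properties = {}  # perceptual/contextual property components
            self.bindings = {}    # event name -> action callable

        def add_property(self, key, value):
            self.properties[key] = value

        def map_event(self, event, action):
            self.bindings[event] = action

        def dispatch(self, event):
            if event in self.bindings:
                self.bindings[event](self)  # modify the media object via the action

    photo = MediaObject("photo1")
    photo.add_property("perceptual", "glow")
    photo.map_event("gaze_enter", lambda obj: obj.properties.update(scale=1.2))
    photo.dispatch("gaze_enter")
    print(photo.properties)  # {'perceptual': 'glow', 'scale': 1.2}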
Abstract:
Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth relative to the display of the input gesture based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed.
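One way to picture the depth-plane assignment is as a bucketing of the measured gesture depth into named planes, with secondary-plane gestures routed to secondary commands. The following Python sketch uses invented plane boundaries and command strings purely for illustration.

    PLANES = [  # (max depth in meters, plane name); boundaries are assumptions
        (0.3, "secondary_touch"),
        (0.6, "primary_touch"),
        (float("inf"), "background"),
    ]

    def assign_depth_plane(depth_m):
        for max_depth, plane in PLANES:
            if depth_m <= max_depth:
                return plane

    def handle_gesture(gesture, depth_m):
        plane = assign_depth_plane(depth_m)
        if plane == "secondary_touch":
            return f"secondary command: magnify on {gesture}"
        return f"primary command: {gesture} on {plane}"

    print(handle_gesture("swipe", 0.25))  # secondary command: magnify on swipe
    print(handle_gesture("swipe", 0.50))  # primary command: swipe on primary_touch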
Abstract:
Perceptual computing with a conversational agent is described. In one example, a method includes receiving a statement from a user, observing user behavior, determining a user context based on the behavior, processing the user statement and user context to generate a reply to the user, and presenting the reply to the user on a user interface.
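The described method is essentially a four-stage pipeline: receive, observe, contextualize, reply. A hypothetical Python sketch of that flow, with stand-in heuristics for behavior observation and context determination, might be:

    def observe_behavior():
        return {"typing_speed": "slow", "hour": 23}  # stand-in observations

    def determine_context(behavior):
        return "tired" if behavior["hour"] >= 22 else "alert"

    def generate_reply(statement, context):
        if context == "tired":
            return f"It's late; here's a short answer to: {statement!r}"
        return f"Detailed answer to: {statement!r}"

    def converse(statement):
        behavior = observe_behavior()           # observe user behavior
        context = determine_context(behavior)   # derive user context
        reply = generate_reply(statement, context)
        print(reply)                            # present reply on the UI

    converse("What's on my schedule tomorrow?")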
Abstract:
Technologies for adaptively embedding an advertisement into media content via contextual analysis and perceptual computing include a computing device for detecting a location to embed advertising content within media content and retrieving user profile data corresponding to a user of the computing device. Such technologies may also include determining advertising content personalized for the user based on the retrieved user profile data and embedding that personalized advertising content into the media content at the detected location to generate augmented media content for subsequent display to the user.
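A toy Python version of the pipeline (slot detection, ad selection against a user profile, and splicing) could look like the following; the frame list, the tag-overlap matching rule, and the ad inventory are all assumptions made for illustration.

    def detect_ad_slot(frames):
        # Stand-in contextual analysis: pick the midpoint of the content.
        return len(frames) // 2

    def select_ad(profile, inventory):
        # Match the first ad whose tags overlap the user's interests.
        for ad in inventory:
            if ad["tags"] & profile["interests"]:
                return ad
        return inventory[0]

    def embed(frames, ad, slot):
        return frames[:slot] + [f"AD:{ad['name']}"] + frames[slot:]

    frames = ["f0", "f1", "f2", "f3"]
    profile = {"interests": {"cycling", "travel"}}
    inventory = [{"name": "bike_sale", "tags": {"cycling"}},
                 {"name": "soda", "tags": {"drinks"}}]
    augmented = embed(frames, select_ad(profile, inventory), detect_ad_slot(frames))
    print(augmented)  # ['f0', 'f1', 'AD:bike_sale', 'f2', 'f3']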
Abstract:
Methods, apparatuses, and storage media associated with engineering perceptual computing systems that include user intent modeling are disclosed herein. In embodiments, one or more storage media may include instructions configured to enable a computing device to receive a usage model having a plurality of user event/behavior statistics, and to generate a plurality of traces of user events/behaviors over a period of time to form a workload. The generation may be based at least in part on the user event/behavior statistics. The workload may be used as input to an emulator configured to emulate a perceptual computing system. Other embodiments may be disclosed or claimed.
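One plausible reading of trace generation is sampling timed events from the usage model's event/behavior statistics, e.g. with Poisson arrivals. The sketch below is an assumption-laden Python illustration, not the patent's algorithm; the event names and rates are invented.

    import random

    random.seed(0)
    usage_model = {"gesture": 0.5, "voice": 0.3, "gaze_shift": 0.2}  # event probabilities

    def generate_trace(model, duration_s, rate_hz=2.0):
        names, weights = list(model), list(model.values())
        events, t = [], random.expovariate(rate_hz)
        while t < duration_s:  # Poisson arrivals until the period ends
            events.append((round(t, 2), random.choices(names, weights)[0]))
            t += random.expovariate(rate_hz)
        return events

    workload = generate_trace(usage_model, duration_s=5)
    print(workload[:3])  # first few (timestamp, event) pairs of the workload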
Abstract:
Techniques to provide efficient stereo block matching may include receiving an indication of an object in a scene. Pixels in the scene may be identified based on the object. Stereo block matching may then be performed only for the identified pixels in order to generate a depth map. Other embodiments are described and claimed.
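A minimal Python/NumPy sketch of this masking idea, using sum-of-absolute-differences block matching and computing disparity only where an object mask is set, might read as follows; the block size, disparity range, and synthetic images are invented.

    import numpy as np

    def block_match(left, right, mask, block=3, max_disp=8):
        h, w = left.shape
        r = block // 2
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(r, h - r):
            for x in range(r + max_disp, w - r):
                if not mask[y, x]:
                    continue  # skip pixels outside the identified object
                patch = left[y - r:y + r + 1, x - r:x + r + 1]
                costs = [np.abs(patch - right[y - r:y + r + 1,
                                              x - d - r:x - d + r + 1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))  # disparity ~ inverse depth
        return disp

    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (32, 32)).astype(np.float32)
    right = np.roll(left, -4, axis=1)              # synthetic 4-pixel shift
    mask = np.zeros((32, 32), dtype=bool)
    mask[10:20, 14:24] = True                      # "identified" object pixels
    print(block_match(left, right, mask)[15, 18])  # expect 4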
Abstract:
Various embodiments are generally directed to techniques for providing an augmented reality view in which eye movements are employed to identify items of possible interest for which indicators are visually presented in the augmented reality view. An apparatus to present an augmented reality view includes a processor component; a presentation component for execution by the processor component to visually present images captured by a camera on a display, and to visually present an indicator identifying an item of possible interest in the captured images on the display overlying the visual presentation of the captured images; and a correlation component for execution by the processor component to track eye movement to determine a portion of the display gazed at by an eye, and to correlate the portion of the display to the item of possible interest. Other embodiments are described and claimed.
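In the simplest case, the gaze-to-item correlation reduces to a point-in-box test between the tracked gaze point and the bounding boxes of detected items. A hypothetical Python sketch, with made-up item labels and boxes:

    items = [
        {"label": "storefront", "box": (100, 50, 300, 200)},  # (x0, y0, x1, y1)
        {"label": "street sign", "box": (350, 20, 420, 90)},
    ]

    def item_under_gaze(items, gaze):
        gx, gy = gaze
        for item in items:
            x0, y0, x1, y1 = item["box"]
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                return item
        return None

    hit = item_under_gaze(items, gaze=(380, 45))
    if hit:
        print(f"overlay indicator on {hit['label']}")  # street sign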
Abstract:
Systems and methods may provide for receiving a short range signal from a sensor that is collocated with a short range display and using the short range signal to detect a user interaction. Additionally, a display response may be controlled with respect to a long range display based on the user interaction. In one example, the user interaction includes one or more of an eye gaze, a hand gesture, a face gesture, a head position, or a voice command that indicates one or more of a switch between the short range display and the long range display, a drag and drop operation, a highlight operation, a click operation, or a typing operation.
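The interaction routing can be sketched as a lookup from the sensed short range event to a display response command on the long range display. The Python below is illustrative only; the event and command names are invented.

    ROUTES = {
        "eye_gaze":     "switch_focus_to_long_range",
        "hand_gesture": "drag_and_drop",
        "voice":        "highlight",
        "head":         "click",
    }

    def on_short_range_event(event):
        command = ROUTES.get(event)
        if command:
            print(f"long range display <- {command}")
        else:
            print(f"ignored unrecognized event: {event}")

    on_short_range_event("hand_gesture")  # long range display <- drag_and_drop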
Abstract:
Examples are disclosed for determining an interestingness score for one or more areas of interest included in a display element such as a static image or a motion video. In some examples, an interestingness score may be determined based on eye tracking or gaze information gathered while an observer views the display element.
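As a concrete (but assumed) scoring rule, one could accumulate gaze fixation dwell time per area of interest and normalize to [0, 1], as in this Python sketch; the areas and fixation data are invented.

    def interestingness(areas, fixations):
        dwell = {name: 0.0 for name in areas}
        for (x, y), dt in fixations:  # (gaze point, dwell seconds)
            for name, (x0, y0, x1, y1) in areas.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    dwell[name] += dt
        total = sum(dwell.values()) or 1.0
        return {name: t / total for name, t in dwell.items()}

    areas = {"face": (120, 40, 220, 160), "logo": (10, 10, 60, 40)}
    fixations = [((150, 90), 0.8), ((30, 20), 0.2), ((180, 120), 1.0)]
    print(interestingness(areas, fixations))  # {'face': 0.9, 'logo': 0.1}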