Abstract:
Disclosed in some examples are methods, systems, and machine-readable mediums in which actions or states of a first user (e.g., natural interactions) having a first corresponding computing device are observed by a sensor on a second computing device corresponding to a second user. A notification describing the observed actions or states of the first user may be shared across a network with the first corresponding computing device. In this way, the first computing device may be provided with information concerning the state of its user without having to directly sense that user.
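As a rough illustration of the flow described above, the following Python sketch shows one way a second user's device could package an observed state and share it over a network with the first user's own device. The sensor classification, message schema, and addresses are assumptions made for illustration, not details taken from the disclosure.

```python
# Minimal sketch (illustrative assumptions only): device B observes user A via
# its own sensor and sends a notification to user A's device over the network.
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class UserStateNotification:
    observed_user_id: str      # the first user, whose state was sensed
    observer_device_id: str    # the second user's device that did the sensing
    state: str                 # e.g., "speaking", "away", "gesturing"
    timestamp: float

def share_observation(state: str, target_addr: tuple[str, int]) -> None:
    """Send the observed state of the first user to that user's own device."""
    note = UserStateNotification(
        observed_user_id="user_a",
        observer_device_id="device_b",
        state=state,
        timestamp=time.time(),
    )
    payload = json.dumps(asdict(note)).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, target_addr)  # device A learns its user's state indirectly

# Example: device B's classifier decided user A is currently "speaking".
# share_observation("speaking", ("192.0.2.10", 9000))
```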
Abstract:
A device comprising a directional antenna may obtain an interaction profile for an augmentable object and augment a sensory experience of the augmentable object according to the interaction profile.
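The sketch below illustrates one possible reading of this flow in Python: the device addresses the object its directional antenna is pointed at, receives an interaction profile, and applies that profile to how the object is presented. The data model, the radio exchange stub, and all names are assumptions, not the claimed implementation.

```python
# Minimal sketch (illustrative assumptions only): query the object addressed by
# the antenna's bearing for a profile, then augment the user's experience of it.
from dataclasses import dataclass

@dataclass
class InteractionProfile:
    object_id: str
    overlay_text: str          # e.g., label to render near the object
    audio_cue: str | None      # e.g., sound to play when the object is faced

def obtain_interaction_profile(antenna_bearing_deg: float) -> InteractionProfile:
    """Hypothetical radio exchange: the object in the antenna's beam responds
    with its interaction profile."""
    # Placeholder response; a real device would receive this over the air.
    return InteractionProfile("poster_42", "Concert tonight, 8 pm", "chime.wav")

def augment_experience(profile: InteractionProfile) -> None:
    """Apply the profile to the user's sensory experience of the object."""
    print(f"[overlay near {profile.object_id}] {profile.overlay_text}")
    if profile.audio_cue:
        print(f"[play] {profile.audio_cue}")

augment_experience(obtain_interaction_profile(antenna_bearing_deg=30.0))
```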
Abstract:
Methods, systems, and computer program products allow for the capture of a high depth of field (DOF) image. A comprehensive depth map of the scene may be automatically determined. The scene may then be segmented, where each segment of the scene corresponds to a respective depth of the depth map. A sequence of images may then be recorded, where each image in the sequence is focused at a respective depth of the depth map. The images of this sequence may then be interleaved to create a single composite image that includes the respective in-focus segments from these images.
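The compositing step lends itself to a short sketch. The Python (NumPy) example below assumes the depth map and the focus-bracketed exposures are already available and shows one way the in-focus segments could be interleaved into a single image; the function name, array layout, and pixel-selection rule are illustrative assumptions, not the claimed method.

```python
# Minimal sketch of the compositing step: for each pixel, keep the exposure
# whose focus depth is closest to that pixel's depth in the depth map.
import numpy as np

def composite_high_dof(images: list[np.ndarray],
                       depth_map: np.ndarray,
                       focus_depths: list[float]) -> np.ndarray:
    """Interleave in-focus segments from a focus-bracketed image sequence."""
    depths = np.asarray(focus_depths)                                # (N,)
    # Index of the best-focused exposure for every pixel.
    best = np.abs(depth_map[..., None] - depths).argmin(axis=-1)     # (H, W)
    stack = np.stack(images, axis=0)                                 # (N, H, W, C)
    h, w = depth_map.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return stack[best, rows, cols]                                   # (H, W, C)

# Example with synthetic data: three exposures focused at 1 m, 2 m, and 4 m.
imgs = [np.full((4, 4, 3), fill_value=i, dtype=np.uint8) for i in range(3)]
dmap = np.random.choice([1.0, 2.0, 4.0], size=(4, 4))
out = composite_high_dof(imgs, dmap, [1.0, 2.0, 4.0])
```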
Abstract:
Various embodiments are generally directed to techniques for providing an augmented reality view in which eye movements are employed to identify items of possible interest for which indicators are visually presented in the augmented reality view. An apparatus to present an augmented reality view includes a processor component; a presentation component for execution by the processor component to visually present images captured by a camera on a display, and to visually present an indicator identifying an item of possible interest in the captured images on the display overlying the visual presentation of the captured images; and a correlation component for execution by the processor component to track eye movement to determine a portion of the display gazed at by an eye, and to correlate the portion of the display to the item of possible interest. Other embodiments are described and claimed.
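A small sketch can make the correlation component concrete: given the portion of the display the eye is gazing at, find the item of possible interest whose on-screen region contains that point, so an indicator can be drawn over the camera imagery. The item geometry, names, and hit-test rule below are assumptions for illustration.

```python
# Minimal sketch (hypothetical item geometry): correlate a tracked gaze point
# with the item of possible interest displayed at that portion of the screen.
from dataclasses import dataclass

@dataclass
class Item:
    label: str
    box: tuple[int, int, int, int]   # (x, y, width, height) in display pixels

def correlate_gaze(gaze_xy: tuple[int, int], items: list[Item]) -> Item | None:
    """Return the item whose on-screen region the eye is gazing at, if any."""
    gx, gy = gaze_xy
    for item in items:
        x, y, w, h = item.box
        if x <= gx < x + w and y <= gy < y + h:
            return item
    return None

items = [Item("storefront sign", (100, 50, 200, 80)), Item("bus stop", (400, 300, 120, 160))]
hit = correlate_gaze((150, 75), items)
if hit is not None:
    print(f"present indicator for: {hit.label}")   # overlay drawn atop the camera feed
```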
Abstract:
Computer-readable storage media, computing devices, and methods are discussed herein. In embodiments, a computing device may include one or more display devices, a digital content module coupled with the one or more display devices, and an augmentation module coupled with the digital content module and the one or more display devices. The digital content module may be configured to cause a portion of textual content to be rendered on the one or more display devices. The textual content may be associated with a digital scene that may be utilized to augment the textual content. The augmentation module may be configured to dynamically adapt the digital scene, based at least in part on a real-time video feed, to be rendered on the one or more display devices to augment the textual content. Other embodiments may be described and/or claimed.
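One way to picture the two modules is sketched below in Python: a content module renders a portion of text, and an augmentation module adapts the associated digital scene to the real-time video feed (here, only by matching the scene's lighting to the feed's average brightness). The module interfaces, method names, and the specific adaptation are illustrative assumptions.

```python
# Minimal sketch under assumed interfaces: text rendering plus video-driven
# adaptation of the digital scene that augments the text.
import numpy as np

class DigitalContentModule:
    def render_text(self, text: str) -> None:
        print(f"[display] {text}")

class AugmentationModule:
    def adapt_scene(self, scene: dict, video_frame: np.ndarray) -> dict:
        """Dynamically adapt the scene to the current camera frame."""
        brightness = float(video_frame.mean()) / 255.0
        adapted = dict(scene, ambient_light=brightness)
        print(f"[display] scene '{adapted['name']}' at ambient light {brightness:.2f}")
        return adapted

content = DigitalContentModule()
augmenter = AugmentationModule()
content.render_text("The storm rolled in over the harbor...")
frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)  # stand-in video frame
augmenter.adapt_scene({"name": "harbor_storm"}, frame)
```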
Abstract:
Computer-readable storage media, computing devices, and methods associated with an adaptive learning environment are disclosed. In embodiments, a computing device may include an instruction module and an adaptation module operatively coupled with the instruction module. The instruction module may selectively provide instructional content of one of a plurality of instructional content types to a user of the computing device via one or more output devices coupled with the computing device. The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. Other embodiments may be described and/or claimed.
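The adaptation loop can be sketched briefly: estimate engagement from observable signals, then pick among instructional content types. The signals, weights, thresholds, and content types in the Python below are assumptions chosen for illustration, not values from the disclosure.

```python
# Minimal sketch (assumed signals and thresholds): estimate engagement in real
# time and switch among a plurality of instructional content types.
def engagement_level(gaze_on_screen_ratio: float, interaction_rate: float) -> float:
    """Blend two observable signals into a 0..1 engagement score."""
    return 0.6 * gaze_on_screen_ratio + 0.4 * min(interaction_rate / 5.0, 1.0)

def select_content_type(engagement: float) -> str:
    """Choose an instructional content type based on the engagement level."""
    if engagement < 0.3:
        return "interactive exercise"   # re-engage a distracted learner
    if engagement < 0.7:
        return "video segment"
    return "reading passage"            # highly engaged: denser material

score = engagement_level(gaze_on_screen_ratio=0.25, interaction_rate=1.0)
print(f"engagement={score:.2f} -> deliver {select_content_type(score)}")
```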
Abstract:
Systems and methods may provide for receiving a short range signal from a sensor that is collocated with a short range display and using the short range signal to detect a user interaction. Additionally, a display response may be controlled with respect to a long range display based on the user interaction. In one example, the user interaction includes one or more of an eye gaze, a hand gesture, a face gesture, a head position, or a voice command that indicates one or more of a switch between the short range display and the long range display, a drag and drop operation, a highlight operation, a click operation, or a typing operation.
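The mapping from detected interactions to display responses can be sketched as a simple lookup. The Python below assumes the short-range sensor already classifies the interaction; the interaction labels and the specific responses chosen are illustrative, not the claimed mapping.

```python
# Minimal sketch: translate a short-range user interaction into a response on
# the long range display. Labels and responses are illustrative assumptions.
RESPONSES = {
    "eye_gaze_at_long_display": "switch focus to long range display",
    "hand_gesture_drag": "drag and drop selected item",
    "hand_gesture_circle": "highlight region",
    "voice_click": "click at cursor",
    "voice_dictation": "type dictated text",
}

def control_long_range_display(interaction: str) -> str:
    """Return the display response for a detected short-range interaction."""
    return RESPONSES.get(interaction, "ignore")

print(control_long_range_display("hand_gesture_drag"))  # -> drag and drop selected item
```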
Abstract:
Systems, apparatuses, and/or methods to augment reality are described. An object identifier may identify an object in a field of view of a user that includes a reflection of the user from a reflective surface, such as a surface of a traditional mirror. In addition, a reality augmenter may generate an augmented reality object based on the identification of the object. In one example, eyeglasses including a relatively transparent display screen may be coupled with an image capture device on the user and the augmented reality object may be observable by the user on the transparent display screen when the user wears the eyeglasses. A localizer may position the augmented reality object on the transparent display screen relative to the reflection of the user that passes through the transparent display screen during natural visual perception of the reflection by the user.
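The localizer's job can be illustrated with a short sketch: given the bounding box of the user's reflection as seen by the eyeglasses' camera, map it into display coordinates and anchor an augmented object relative to it. The coordinate conventions, scaling, and anchoring rule in the Python below are assumptions for illustration only.

```python
# Minimal sketch (assumed coordinate conventions): position an augmented
# reality object on the transparent display relative to the user's reflection.
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def localize_overlay(reflection: Box, display_w: int, display_h: int,
                     camera_w: int, camera_h: int) -> Box:
    """Map the reflection's camera-frame box into display coordinates and
    anchor an overlay just above it (e.g., a virtual hat or label)."""
    sx, sy = display_w / camera_w, display_h / camera_h
    scaled = Box(reflection.x * sx, reflection.y * sy, reflection.w * sx, reflection.h * sy)
    return Box(scaled.x, max(scaled.y - 0.2 * scaled.h, 0.0), scaled.w, 0.2 * scaled.h)

overlay = localize_overlay(Box(300, 200, 150, 400), 1280, 720, 1920, 1080)
print(f"draw augmented object at ({overlay.x:.0f}, {overlay.y:.0f}), size {overlay.w:.0f}x{overlay.h:.0f}")
```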