Abstract:
A mechanism is described for facilitating smart measurement of body dimensions despite loose clothing and/or other obscurities according to one embodiment. A method of embodiments, as described herein, includes capturing, by one or more capturing/sensing components of a computing device, a scan of a body of a user, and computing one or more primary measurements relating to one or more primary areas of the body, where the one or more primary measurements are computed based on depth data of the one or more primary areas of the body, where the depth data is obtained from the scan. The method may further include receiving at least one of secondary measurements and a three-dimensional (3D) avatar of the body based on the primary measurements, preparing a report including body dimensions of the body based on at least one of the secondary measurements and the 3D avatar, and presenting the report at a display device.
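A minimal Python sketch of the flow described above follows: compute primary measurements from per-area depth data, derive secondary measurements, and assemble a report for display. The class and function names, the circular-girth heuristic, and the size threshold are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass
from math import pi


@dataclass
class BodyScan:
    """Depth samples (in metres) grouped by primary body area,
    e.g. {"chest": [...], "waist": [...], "hips": [...]}."""
    depth_by_area: dict


def compute_primary(scan: BodyScan) -> dict:
    """Estimate a girth for each primary area from its depth samples.

    Assumption: treat each area's cross-section as a circle whose diameter
    is the observed front-to-back extent, so girth ~= pi * extent.
    """
    primary = {}
    for area, depths in scan.depth_by_area.items():
        extent = max(depths) - min(depths)   # front-to-back extent of the area
        primary[area] = pi * extent
    return primary


def derive_secondary(primary: dict) -> dict:
    """Derive secondary measurements (here, a single size suggestion)."""
    waist = primary.get("waist", 0.0)
    return {"suggested_size": "L" if waist > 0.95 else "M"}


def build_report(primary: dict, secondary: dict) -> str:
    """Assemble the body-dimension report that would be shown on a display."""
    lines = [f"{area}: {girth:.2f} m" for area, girth in primary.items()]
    lines.append(f"suggested size: {secondary['suggested_size']}")
    return "\n".join(lines)


scan = BodyScan({"chest": [1.10, 1.38], "waist": [1.12, 1.40], "hips": [1.08, 1.42]})
primary = compute_primary(scan)
print(build_report(primary, derive_secondary(primary)))
```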
Abstract:
An augmented reality (AR) device includes a 3D video camera to capture video images and corresponding depth information, a display device to display the video data, and an AR module to add a virtual 3D model to the displayed video data. A depth mapping module generates a 3D map based on the depth information, a dynamic scene recognition and tracking module processes the video images and the 3D map to detect and track a target object within a field of view of the 3D video camera, and an augmented video rendering module renders an augmented video of the virtual 3D model dynamically interacting with the target object. The augmented video is displayed on the display device in real time. The AR device may further include a context module to select the virtual 3D model based on context data comprising a current location of the augmented reality device.
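The pipeline named in this abstract (depth mapping, dynamic scene recognition and tracking, context-based model selection, augmented rendering) might be sketched as below. All class names and the dummy poses and return values are assumptions made for illustration; the device's actual modules are not specified beyond the abstract.

```python
from dataclasses import dataclass


@dataclass
class Frame:
    image: object   # colour image (placeholder type)
    depth: list     # per-pixel depth samples (placeholder)


class DepthMapper:
    """Stand-in for the depth mapping module: depth pixels -> 3D map."""
    def build_map(self, frame: Frame) -> list:
        # A real device would back-project depth pixels into a 3D point cloud.
        return list(frame.depth)


class SceneTracker:
    """Stand-in for dynamic scene recognition and tracking."""
    def track_target(self, image, depth_map) -> dict:
        # Detect and track the target object; here a fixed dummy pose.
        return {"target_pose": (0.0, 0.0, 1.5)}


class ContextModule:
    """Stand-in for context-based selection of the virtual 3D model."""
    def select_model(self, location: str) -> str:
        return "museum_exhibit" if location == "museum" else "generic_model"


class Renderer:
    """Stand-in for the augmented video rendering module."""
    def render(self, image, model: str, tracking: dict) -> str:
        return f"frame + '{model}' interacting with target at {tracking['target_pose']}"


def ar_loop(frames, location: str = "museum"):
    """One pass of the real-time loop: map depth, track the target, render."""
    mapper, tracker, ctx, renderer = DepthMapper(), SceneTracker(), ContextModule(), Renderer()
    model = ctx.select_model(location)
    for frame in frames:
        depth_map = mapper.build_map(frame)
        tracking = tracker.track_target(frame.image, depth_map)
        yield renderer.render(frame.image, model, tracking)


for augmented in ar_loop([Frame(image=None, depth=[1.2, 1.3, 1.4])]):
    print(augmented)   # would be shown on the display device in real time
```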
Abstract:
Apparatuses, methods, and storage media for modifying augmented reality in response to user interaction are described. In one instance, the apparatus for modifying augmented reality may include a processor, a scene capture camera coupled with the processor to capture a physical scene, and an augmentation management module to be operated by the processor. The augmentation management module may obtain and analyze the physical scene, generate one or more virtual articles to augment a rendering of the physical scene based on a result of the analysis, track user interaction with the rendered augmented scene, and modify or complement the virtual articles in response to the tracked user interaction. Other embodiments may be described and claimed.
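A small sketch of how an augmentation management module could chain the steps listed above (analyze the scene, generate virtual articles, track interaction, modify the articles) is shown below. The method names, the event format, and the lamp example are hypothetical.

```python
class AugmentationManager:
    """Stand-in for the augmentation management module; names are illustrative."""

    def analyze_scene(self, physical_scene: dict) -> dict:
        # e.g. find flat surfaces in the captured physical scene
        return {"surfaces": physical_scene.get("surfaces", [])}

    def generate_articles(self, analysis: dict) -> list:
        # place one virtual article on each detected surface
        return [{"article": "virtual_lamp", "on": s, "state": "idle"}
                for s in analysis["surfaces"]]

    def track_interaction(self, events: list) -> list:
        # keep only user events that touch a rendered virtual article
        return [e for e in events if e.get("kind") == "touch"]

    def modify_articles(self, articles: list, interactions: list) -> list:
        # modify or complement the articles in response to tracked interaction
        touched = {e["target"] for e in interactions}
        for article in articles:
            if article["on"] in touched:
                article["state"] = "lit"
        return articles


mgr = AugmentationManager()
analysis = mgr.analyze_scene({"surfaces": ["table", "shelf"]})
articles = mgr.generate_articles(analysis)
interactions = mgr.track_interaction([{"kind": "touch", "target": "table"}])
print(mgr.modify_articles(articles, interactions))
```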
Abstract:
Disclosed in some examples are methods, systems, and machine-readable mediums in which actions or states (e.g., natural interactions) of a first user having a first corresponding computing device are observed by a sensor on a second computing device corresponding to a second user. A notification describing the observed actions or states of the first user may be shared across a network with the first corresponding computing device. In this way, the first computing device may be provided with information concerning the state of its user without having to directly sense the user.
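One way to picture the shared notification is the sketch below: the second device observes the first user, wraps the observation in a message, and sends it to the first user's device. The notification fields, the serialization format, and the `send` callback are assumptions for illustration.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ObservationNotification:
    """Notification describing an observed action/state of the first user."""
    observed_user: str      # user the observation is about (first user)
    observer_device: str    # device whose sensor made the observation (second device)
    state: str              # e.g. "waving", "away", "speaking"
    confidence: float


def observe_first_user(sensor_reading: dict) -> ObservationNotification:
    # The second device's sensor classifies the first user's action/state.
    return ObservationNotification(
        observed_user="user_a",
        observer_device="device_b",
        state=sensor_reading.get("detected_state", "unknown"),
        confidence=sensor_reading.get("confidence", 0.0),
    )


def share_notification(note: ObservationNotification, send) -> None:
    # Serialize and send to the first user's own device over the network,
    # so that device learns its user's state without sensing the user directly.
    send("user_a_device", json.dumps(asdict(note)))


share_notification(
    observe_first_user({"detected_state": "waving", "confidence": 0.9}),
    send=lambda addr, payload: print(f"-> {addr}: {payload}"),
)
```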
Abstract:
Computer-readable storage media, computing devices and methods are discussed herein. In embodiments, a computing device may include one or more display devices, a digital content module coupled with the one or more display devices, and an augmentation module coupled with the digital content module and the one or more display devices. The digital content module may be configured to cause a portion of textual content to be rendered on the one or more display devices. The textual content may be associated with a digital scene that may be utilized to augment the textual content. The augmentation module may be configured to dynamically adapt the digital scene, based at least in part on a real-time video feed, to be rendered on the one or more display devices to augment the textual content. Other embodiments may be described and/or claimed.
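The interplay of the digital content module and the augmentation module could look roughly like the sketch below, where a digital scene tied to a passage of textual content is adapted frame by frame to a real-time video feed. The scene attributes and adaptation rules are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class DigitalScene:
    """Digital scene associated with a passage of textual content."""
    description: str
    lighting: str = "neutral"
    background: str = "plain"


def render_text(passage: str) -> str:
    # Digital content module: cause the passage to be rendered on the display.
    return f"[display] {passage}"


def adapt_scene(scene: DigitalScene, video_frame: dict) -> DigitalScene:
    # Augmentation module: adapt the scene to the real-time video feed,
    # e.g. match the ambient lighting and backdrop seen by the camera.
    scene.lighting = video_frame.get("ambient_light", scene.lighting)
    scene.background = video_frame.get("dominant_surface", scene.background)
    return scene


passage = "The ship slipped out of the harbour at dusk."
scene = DigitalScene(description="harbour at dusk")
for frame in [{"ambient_light": "dim", "dominant_surface": "desk"}]:
    scene = adapt_scene(scene, frame)
    print(render_text(passage))
    print(f"[display] scene '{scene.description}' with {scene.lighting} "
          f"lighting over {scene.background}")
```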
Abstract:
Computer-readable storage media, computing devices, and methods associated with an adaptive learning environment are disclosed. In embodiments, a computing device may include an instruction module and an adaptation module operatively coupled with the instruction module. The instruction module may selectively provide instructional content of one of a plurality of instructional content types to a user of the computing device via one or more output devices coupled with the computing device. The adaptation module may determine, in real-time, an engagement level associated with the user of the computing device and may cooperate with the instruction module to dynamically adapt the instructional content provided to the user based at least in part on the engagement level determined. Other embodiments may be described and/or claimed.
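A compact sketch of the instruction/adaptation loop is given below: an engagement level is estimated from simple real-time signals and the content type is switched when engagement drops. The signals, thresholds, and content types are illustrative assumptions, not the claimed method.

```python
from dataclasses import dataclass


@dataclass
class InstructionalContent:
    topic: str
    content_type: str   # e.g. "text", "video", "interactive"


class AdaptationModule:
    """Stand-in for the adaptation module; signals and thresholds are illustrative."""

    def engagement_level(self, signals: dict) -> float:
        # Combine simple real-time signals (gaze on screen, response delay).
        gaze = signals.get("gaze_on_screen", 0.0)       # fraction of time
        delay = signals.get("response_delay_s", 10.0)   # seconds
        return max(0.0, min(1.0, gaze - 0.02 * delay))

    def adapt(self, content: InstructionalContent, level: float) -> InstructionalContent:
        # Switch to a more engaging content type when engagement drops.
        if level < 0.4 and content.content_type == "text":
            return InstructionalContent(content.topic, "interactive")
        return content


class InstructionModule:
    def deliver(self, content: InstructionalContent) -> None:
        print(f"presenting {content.content_type} lesson on {content.topic}")


instruction, adaptation = InstructionModule(), AdaptationModule()
lesson = InstructionalContent("fractions", "text")
for signals in [{"gaze_on_screen": 0.9, "response_delay_s": 2},
                {"gaze_on_screen": 0.3, "response_delay_s": 8}]:
    lesson = adaptation.adapt(lesson, adaptation.engagement_level(signals))
    instruction.deliver(lesson)
```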