Abstract:
A mechanism is described for facilitating the embedding of human labeler influences in machine learning interfaces in computing environments, according to one embodiment. A method of embodiments, as described herein, includes detecting sensor data via one or more sensors of a computing device, and accessing human labeler data at one or more databases coupled to the computing device. The method may further include evaluating relevance between the sensor data and the human labeler data, where the relevance identifies the meaning of the sensor data based on human behavior corresponding to the human labeler data, and associating, based on the relevance, the human labeler data with the sensor data to classify the sensor data as labeled data. The method may further include training, based on the labeled data, a machine learning model to extract human influences from the labeled data and to embed one or more of the human influences in one or more environments representing one or more physical scenarios involving one or more humans.
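The labeling-and-training flow above can be pictured with a minimal sketch: pair each sensor reading with the human labeler record it is most relevant to (here a toy timestamp-proximity score), keep pairs above a threshold as labeled data, and hand them to a standard classifier. The types and functions below (SensorReading, LabelerRecord, relevance, label_sensor_data) are illustrative assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorReading:
    timestamp: float
    features: Tuple[float, ...]

@dataclass
class LabelerRecord:
    timestamp: float
    label: str   # human-assigned meaning, e.g. "walking" or "sitting"

def relevance(reading: SensorReading, record: LabelerRecord) -> float:
    """Toy relevance score: closer timestamps imply the label describes the reading."""
    return 1.0 / (1.0 + abs(reading.timestamp - record.timestamp))

def label_sensor_data(readings: List[SensorReading],
                      records: List[LabelerRecord],
                      threshold: float = 0.5) -> List[Tuple[SensorReading, str]]:
    """Associate each reading with its most relevant human label above a threshold."""
    labeled = []
    for reading in readings:
        best = max(records, key=lambda rec: relevance(reading, rec))
        if relevance(reading, best) >= threshold:
            labeled.append((reading, best.label))
    return labeled

# The labeled pairs could then train any standard model, for example:
#   X = [r.features for r, _ in labeled]; y = [label for _, label in labeled]
#   sklearn.linear_model.LogisticRegression().fit(X, y)
```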
Abstract:
A system and method for measuring and normalizing physical resistance for athletic activities and fitness equipment are disclosed. A particular embodiment includes: measuring a level of physical resistance in an athletic activity; generating sensor data indicative of the measured level of physical resistance; using the sensor data to determine if the measured level of physical resistance will achieve a desired performance level in the athletic activity; and automatically generating control signals to adjust the level of physical resistance if the measured level of physical resistance is unlikely to achieve the desired performance level in the athletic activity.
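As a rough illustration of the measure-evaluate-adjust loop, the sketch below reads a resistance level, checks whether it is within a tolerance of the desired level, and issues a proportional adjustment otherwise. The interfaces (read_resistance, apply_adjustment), the tolerance, and the gain are placeholder assumptions, not part of the disclosed system.

```python
def regulate_resistance(read_resistance, apply_adjustment,
                        target_level: float, tolerance: float = 0.05,
                        gain: float = 0.5) -> None:
    """Measure resistance and nudge it toward the desired performance level."""
    measured = read_resistance()                   # sensor data for current resistance
    error = target_level - measured
    if abs(error) > tolerance * target_level:      # unlikely to reach the desired level
        apply_adjustment(gain * error)             # control signal to adjust resistance

# Example with a mocked resistance mechanism: one call nudges 40.0 toward 50.0.
state = {"resistance": 40.0}
regulate_resistance(
    read_resistance=lambda: state["resistance"],
    apply_adjustment=lambda delta: state.update(resistance=state["resistance"] + delta),
    target_level=50.0,
)
print(state["resistance"])   # 45.0 after the proportional adjustment
```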
Abstract:
Techniques to project an image from a wearable computing device are provided. A wearable computing device includes a projector configured to project an image into a user's field of view based on output from one or more sensors and/or images captured by a camera. The wearable computing device can also include a touch input device. The wearable computing device can project an image responsive to a user's touch based on signals received from the touch input device.
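A minimal sketch of the projection logic, assuming stand-in objects for the orientation sensor, projector, and touch input device: aim the projected frame using sensor output, then redraw a cursor wherever the user touches. None of the class names below correspond to a real device API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TouchEvent:
    x: float   # normalized 0..1 coordinates on the touch input device
    y: float

class StubOrientationSensor:
    def read(self):
        return 10.0, -5.0                        # fake yaw/pitch reading, in degrees

class StubProjector:
    def aim(self, yaw: float, pitch: float):
        print(f"aiming projection at yaw={yaw:.1f}, pitch={pitch:.1f}")

    def draw_cursor(self, x: float, y: float):
        print(f"drawing cursor at ({x:.2f}, {y:.2f})")

class WearableProjectorController:
    def __init__(self, projector, orientation_sensor):
        self.projector = projector
        self.sensor = orientation_sensor
        self.cursor = (0.5, 0.5)

    def update(self, touch: Optional[TouchEvent] = None) -> None:
        # Keep the projected frame in the user's current field of view.
        yaw, pitch = self.sensor.read()
        self.projector.aim(yaw, pitch)
        # Respond to touch input by moving the projected cursor.
        if touch is not None:
            self.cursor = (touch.x, touch.y)
        self.projector.draw_cursor(*self.cursor)

controller = WearableProjectorController(StubProjector(), StubOrientationSensor())
controller.update()                              # projection follows head orientation
controller.update(TouchEvent(x=0.8, y=0.3))      # projection responds to a touch
```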
Abstract:
Disclosed in some examples are methods, machine-readable mediums, and systems for automatic activation of pharmaceutical agents using wearable devices in response to detecting one or more contexts of the user that indicate a need for pharmaceuticals. In some examples, a wearable device may emit signals to automatically release or activate drugs that are already in a user in response to a particular context of the user. For example, if the user begins vigorous exercise, the system may activate a pain medication that was previously ingested by the user to alleviate anticipated joint pain.
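The context-detection path can be sketched as follows, assuming "vigorous exercise" is inferred from mean accelerometer magnitude and that emitting the activation signal is abstracted behind a callback. The threshold, payload, and function names are hypothetical; no real drug-activation hardware or protocol is implied.

```python
import math
from typing import Callable, Sequence, Tuple

VIGOROUS_THRESHOLD_G = 1.8   # assumed cutoff on mean acceleration magnitude, in g

def mean_magnitude(samples: Sequence[Tuple[float, float, float]]) -> float:
    """Mean magnitude of (x, y, z) accelerometer samples."""
    return sum(math.sqrt(x * x + y * y + z * z) for x, y, z in samples) / len(samples)

def check_context_and_signal(samples: Sequence[Tuple[float, float, float]],
                             emit_signal: Callable[[str], None]) -> bool:
    """Emit a (hypothetical) activation signal if vigorous exercise is detected."""
    if mean_magnitude(samples) >= VIGOROUS_THRESHOLD_G:
        emit_signal("activate:analgesic")   # placeholder payload, not a real protocol
        return True
    return False

# Example: a short burst of high-magnitude samples triggers the signal.
check_context_and_signal(
    samples=[(0.1, 0.2, 2.1), (0.3, 0.1, 2.4)],
    emit_signal=lambda payload: print("emitting", payload),
)
```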
Abstract:
Embodiments of apparatus and methods for capturing and generating user experiences are described. In embodiments, an apparatus may include a processor. The apparatus may also include a data storage module, coupled with the processor, to store sensor data collected by a plurality of sensors attached to one or more devices. The apparatus may further include an experience correlation module, coupled with the data storage module, to associate at least a portion of the sensor data with a user experience based at least in part on one or more rules identifying the user experience, to enable regenerating at least a part of the user experience for a user based at least in part on the portion of the sensor data. Other embodiments may be described and/or claimed.
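One way to picture the experience correlation module is as rule predicates applied to stored sensor records, grouping the matching portion of the data per experience so it could later drive regeneration. The SensorRecord and ExperienceRule shapes below are assumptions for illustration, not the disclosed data structures.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SensorRecord:
    sensor: str        # e.g. "gps", "heart_rate", "camera"
    timestamp: float
    value: object

@dataclass
class ExperienceRule:
    name: str                                  # e.g. "morning_run"
    matches: Callable[[SensorRecord], bool]    # rule identifying the experience

def correlate(records: List[SensorRecord],
              rules: List[ExperienceRule]) -> Dict[str, List[SensorRecord]]:
    """Associate each stored record with every experience whose rule it satisfies."""
    experiences: Dict[str, List[SensorRecord]] = {rule.name: [] for rule in rules}
    for record in records:
        for rule in rules:
            if rule.matches(record):
                experiences[rule.name].append(record)
    return experiences

# Example rule: heart-rate records above 120 bpm belong to the "morning_run" experience.
rules = [ExperienceRule("morning_run",
                        lambda r: r.sensor == "heart_rate" and r.value > 120)]
records = [SensorRecord("heart_rate", 0.0, 135), SensorRecord("gps", 1.0, (52.5, 13.4))]
print(correlate(records, rules))   # only the heart-rate record matches the rule
```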
Abstract:
Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth relative to the display of the input gesture based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed.
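A simplified sketch of the depth-plane assignment and command dispatch, assuming fixed distance cutoffs for the primary and secondary virtual touch planes and illustrative command names:

```python
from dataclasses import dataclass

PRIMARY_PLANE_MAX_M = 0.15     # within 15 cm of the display: primary virtual touch plane
SECONDARY_PLANE_MAX_M = 0.40   # 15-40 cm from the display: secondary virtual touch plane

@dataclass
class Gesture:
    kind: str        # e.g. "tap", "swipe"
    depth_m: float   # distance from the display reported by the depth sensor

def assign_plane(depth_m: float) -> str:
    """Assign a depth plane to the gesture as a function of its measured depth."""
    if depth_m <= PRIMARY_PLANE_MAX_M:
        return "primary"
    if depth_m <= SECONDARY_PLANE_MAX_M:
        return "secondary"
    return "ignored"

def execute_command(gesture: Gesture) -> str:
    """Pick a user interface command from the gesture and its assigned plane."""
    plane = assign_plane(gesture.depth_m)
    if plane == "primary":
        return f"select_with_{gesture.kind}"     # direct manipulation of the UI element
    if plane == "secondary":
        return f"magnify_then_{gesture.kind}"    # secondary-plane behavior, e.g. magnify
    return "no_op"

print(execute_command(Gesture(kind="tap", depth_m=0.10)))   # select_with_tap
print(execute_command(Gesture(kind="tap", depth_m=0.30)))   # magnify_then_tap
```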
Abstract:
Methods and apparatus to produce augmented reality representations across multiple devices are described. In one example, operations include generating a virtual object, generating a reality space including a first display, and presenting the virtual object in the reality space including the first display on a second display. Further operations include tracking a location of the virtual object in the reality space as the virtual object moves through the reality space, updating the presentation of the virtual object on the second display using the tracked location, and presenting the virtual object on the first display when the tracked location of the virtual object coincides with the location of the first display in the reality space.
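The hand-off behavior can be sketched as a geometric test: render the virtual object on the second display while it moves through the reality space, and switch to the first display once its tracked position falls inside the region that display occupies. The Rect type, coordinates, and display names below are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def choose_display(object_pos: Tuple[float, float], first_display_region: Rect) -> str:
    """Render on the first display once the tracked object coincides with it."""
    px, py = object_pos
    return "first_display" if first_display_region.contains(px, py) else "second_display"

# Example: the tracked object crosses the room toward a wall-mounted (first) display.
region = Rect(x=2.0, y=1.0, w=1.2, h=0.7)
for pos in [(0.5, 1.2), (1.8, 1.3), (2.4, 1.3)]:
    print(pos, "->", choose_display(pos, region))   # last position lands on first_display
```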