EMBEDDING HUMAN LABELER INFLUENCES IN MACHINE LEARNING INTERFACES IN COMPUTING ENVIRONMENTS

    Publication No.: EP3629243A1

    Publication Date: 2020-04-01

    Application No.: EP19182743.5

    Filing Date: 2019-06-26

    Abstract: A mechanism is described for facilitating embedding of human labeler influences in machine learning interfaces in computing environments, according to one embodiment. A method of embodiments, as described herein, includes detecting sensor data via one or more sensors of a computing device, and accessing human labeler data at one or more databases coupled to the computing device. The method may further include evaluating relevance between the sensor data and the human labeler data, where the relevance identifies meaning of the sensor data based on human behavior corresponding to the human labeler data, and associating, based on the relevance, human labeler data with the sensor data to classify the sensor data as labeled data. The method may further include training, based on the labeled data, a machine learning model to extract human influences from the labeled data, and embed one or more of the human influences in one or more environments representing one or more physical scenarios involving one or more humans.
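
    The labeling-and-training flow described above can be pictured with a minimal Python sketch. The data classes, the inverse-distance relevance measure, and the frequency-count stand-in for the trained model are all illustrative assumptions rather than the patented implementation.

```python
# Minimal sketch (not the patented mechanism): pair raw sensor readings with
# human-labeler records by a simple relevance score, then use the resulting
# labeled data to fit a toy "human influence" model.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorReading:
    features: Tuple[float, ...]          # e.g. accelerometer or gaze metrics

@dataclass
class LabelerRecord:
    features: Tuple[float, ...]          # features a human labeler annotated
    label: str                           # human-behavior label, e.g. "reaching"

def relevance(reading: SensorReading, record: LabelerRecord) -> float:
    """Inverse squared distance stands in for the relevance evaluation."""
    d = sum((a - b) ** 2 for a, b in zip(reading.features, record.features))
    return 1.0 / (1.0 + d)

def label_sensor_data(readings: List[SensorReading],
                      records: List[LabelerRecord],
                      threshold: float = 0.5) -> List[Tuple[SensorReading, str]]:
    """Associate each reading with its most relevant labeler record."""
    labeled = []
    for reading in readings:
        if not records:
            break
        best = max(records, key=lambda rec: relevance(reading, rec))
        if relevance(reading, best) >= threshold:
            labeled.append((reading, best.label))
    return labeled

def train_influence_model(labeled: List[Tuple[SensorReading, str]]) -> dict:
    """A frequency table stands in for the trained machine learning model."""
    model: dict = {}
    for _, label in labeled:
        model[label] = model.get(label, 0) + 1
    return model

records = [LabelerRecord(features=(0.9, 0.1), label="reaching")]
readings = [SensorReading(features=(1.0, 0.2))]
print(train_influence_model(label_sensor_data(readings, records)))
```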

    5. WEARABLE MEDIATED REALITY SYSTEM AND METHOD
    Invention Publication (Under Examination - Published)

    Publication No.: EP3198860A1

    Publication Date: 2017-08-02

    Application No.: EP15844039.6

    Filing Date: 2015-08-21

    Abstract: Techniques to project an image from a wearable computing device are provided. The wearable computing device includes a projector configured to project an image into a user's field of view based on output from one or more sensors and/or images captured by a camera. The wearable computing device can also include a touch input device and can project an image responsive to a user's touch based on signals received from the touch input device.
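
    A rough Python sketch of the projection loop follows; the Projector stub, the head-yaw heuristic, and the touch handling are invented stand-ins for the sensors, camera, and touch input device named in the abstract, not APIs from the patent or any real device.

```python
# Hypothetical sketch of the control flow: choose where to project from
# sensor/camera input, and re-project when the touch input device reports
# a touch. All classes and numbers are illustrative stubs.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Frame:
    width: int
    height: int

class Projector:
    def project(self, region: Tuple[int, int, int, int]) -> None:
        print(f"projecting into region {region}")

def region_from_sensors(head_yaw_deg: float, frame: Frame) -> Tuple[int, int, int, int]:
    """Shift the projected region horizontally as the user's head turns."""
    offset = int(frame.width * head_yaw_deg / 90.0)
    return (offset, 0, offset + frame.width // 2, frame.height // 2)

def on_touch(projector: Projector, touch_xy: Optional[Tuple[int, int]]) -> None:
    """Project a small region centred on the reported touch point, if any."""
    if touch_xy is None:
        return
    x, y = touch_xy
    projector.project((x - 50, y - 50, x + 50, y + 50))

projector = Projector()
frame = Frame(width=640, height=480)
projector.project(region_from_sensors(head_yaw_deg=10.0, frame=frame))
on_touch(projector, touch_xy=(320, 240))
```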


    6. CONTEXTUAL ACTIVATION OF PHARMACEUTICALS THROUGH WEARABLE DEVICES
    Invention Publication (Under Examination - Published)

    Publication No.: EP3197534A1

    Publication Date: 2017-08-02

    Application No.: EP15843991.9

    Filing Date: 2015-09-16

    Abstract: Disclosed in some examples are methods, machine-readable media, and systems for automatic activation of pharmaceutical agents using wearable devices in response to detecting one or more contexts of the user that indicate a need for pharmaceuticals. In some examples, a wearable device may emit signals to automatically release or activate drugs already present in the user's body in response to a particular context of the user. For example, if the user begins vigorous exercise, the system may activate a pain medication that the user previously ingested to alleviate anticipated joint pain.
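
    The decision flow, not any medical-device interface, is sketched below in Python; the context names, thresholds, and activation rule table are assumptions made purely for illustration.

```python
# Hypothetical sketch of the described flow: classify the user's context from
# wearable readings and emit an activation signal when a context that warrants
# an already-ingested agent is detected. All values are invented.
from dataclasses import dataclass

@dataclass
class WearableReading:
    heart_rate_bpm: float
    motion_intensity: float     # e.g. accelerometer magnitude, arbitrary units

def detect_context(r: WearableReading) -> str:
    if r.heart_rate_bpm > 140 and r.motion_intensity > 2.5:
        return "vigorous_exercise"
    if r.heart_rate_bpm < 55 and r.motion_intensity < 0.2:
        return "resting"
    return "normal"

# Contexts mapped to agents to activate, per the example in the abstract
# (anticipated joint pain during exercise).
ACTIVATION_RULES = {"vigorous_exercise": "pain_medication"}

def maybe_activate(reading: WearableReading) -> None:
    context = detect_context(reading)
    agent = ACTIVATION_RULES.get(context)
    if agent is not None:
        print(f"context={context}: emitting activation signal for {agent}")

maybe_activate(WearableReading(heart_rate_bpm=150, motion_intensity=3.1))
```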


    7. APPARATUS AND METHODS FOR CAPTURING AND GENERATING USER EXPERIENCES
    Invention Publication (Under Examination - Published)

    Publication No.: EP3060999A1

    Publication Date: 2016-08-31

    Application No.: EP13895975.4

    Filing Date: 2013-10-25

    CPC classification number: G06N5/027 G06F1/163 G06F3/011 G06N5/02 G06N7/00 H04W4/80

    Abstract: Embodiments of apparatus and methods for capturing and generating user experiences are described. In embodiments, an apparatus may include a processor. The apparatus may also include a data storage module, coupled with the processor, to store sensor data collected by a plurality of sensors attached to one or more devices. The apparatus may further include an experience correlation module, coupled with the data storage module, to associate at least a portion of the sensor data with a user experience based at least in part on one or more rules identifying the user experience, to enable regenerating at least a part of the user experience for a user based at least in part on the portion of the sensor data. Other embodiments may be described and/or claimed.
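
    A minimal Python sketch of such an experience correlation module follows; the rule predicates, record fields, and class name are assumed for illustration and do not come from the patent.

```python
# Illustrative sketch, not the claimed apparatus: store sensor records, apply
# simple rules (predicates) that identify a user experience, and pull back the
# matching portion of the data so the experience could be regenerated.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SensorRecord:
    timestamp: float
    device: str
    values: Dict[str, float]

Rule = Callable[[SensorRecord], bool]

class ExperienceCorrelator:
    def __init__(self) -> None:
        self._storage: List[SensorRecord] = []    # data storage module stand-in
        self._rules: Dict[str, Rule] = {}         # rules identifying experiences

    def store(self, record: SensorRecord) -> None:
        self._storage.append(record)

    def add_rule(self, experience: str, rule: Rule) -> None:
        self._rules[experience] = rule

    def correlate(self, experience: str) -> List[SensorRecord]:
        """Return the portion of stored sensor data tied to an experience."""
        rule = self._rules[experience]
        return [r for r in self._storage if rule(r)]

correlator = ExperienceCorrelator()
correlator.add_rule("beach_walk", lambda r: r.values.get("ambient_light", 0) > 0.8)
correlator.store(SensorRecord(1.0, "wristband", {"ambient_light": 0.9}))
print(correlator.correlate("beach_walk"))
```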


    8. DEPTH-BASED USER INTERFACE GESTURE CONTROL
    Invention Publication (Under Examination - Published)

    Publication No.: EP2972669A1

    Publication Date: 2016-01-20

    Application No.: EP13877670.3

    Filing Date: 2013-03-14

    CPC classification number: G06F3/04883 G06F3/017

    Abstract: Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth relative to the display of the input gesture based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed.
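
    The depth-plane assignment can be illustrated with a short Python sketch; the 0.15 m plane boundary and the command strings are invented values, not figures from the patent.

```python
# Sketch of the dispatch logic: map a measured gesture depth (metres from the
# display) onto a depth plane and execute a primary or secondary UI command.
from enum import Enum

class DepthPlane(Enum):
    PRIMARY = "primary"        # virtual touch plane closest to the display
    SECONDARY = "secondary"    # farther plane, used for secondary commands

def assign_depth_plane(depth_m: float, boundary_m: float = 0.15) -> DepthPlane:
    """Assign a depth plane as a function of the gesture's measured depth."""
    return DepthPlane.PRIMARY if depth_m <= boundary_m else DepthPlane.SECONDARY

def execute_command(gesture: str, plane: DepthPlane) -> str:
    if plane is DepthPlane.PRIMARY:
        return f"primary:{gesture}"            # e.g. move a player character
    return f"secondary:{gesture}+magnify"      # e.g. magnify or select elements

print(execute_command("swipe", assign_depth_plane(0.10)))   # primary:swipe
print(execute_command("swipe", assign_depth_plane(0.40)))   # secondary:swipe+magnify
```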


    9. AUGMENTED REALITY REPRESENTATIONS ACROSS MULTIPLE DEVICES
    Invention Publication (Under Examination - Published)

    Publication No.: EP2795893A1

    Publication Date: 2014-10-29

    Application No.: EP11878165.7

    Filing Date: 2011-12-20

    CPC classification number: G06F3/1423 G06F3/1454 G06T19/006 G09G5/14

    Abstract: Methods and apparatus to produce augmented reality representations across multiple devices are described. In one example, operations include generating a virtual object, generating a reality space including a first display, and presenting, on a second display, the virtual object in the reality space that includes the first display. Further operations include tracking a location of the virtual object in the reality space as the virtual object moves through the reality space, updating the presentation of the virtual object on the second display using the tracked location, and presenting the virtual object on the first display when the tracked location of the virtual object coincides with the location of the first display in the reality space.
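
    A small Python sketch of the display hand-off follows; the reality-space coordinates and the first display's bounds are assumed values used only to illustrate the switching logic.

```python
# Sketch with invented geometry: track a virtual object's position in a shared
# "reality space" and decide which display should present it. While the object
# is outside the first display's bounds, a second (e.g. handheld AR) display
# renders it instead.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

FIRST_DISPLAY_BOUNDS = Rect(0.0, 0.0, 1.6, 0.9)    # metres, assumed layout

def present(virtual_xy: Tuple[float, float]) -> str:
    """Pick the rendering target for the tracked virtual object."""
    x, y = virtual_xy
    if FIRST_DISPLAY_BOUNDS.contains(x, y):
        return "first_display"        # object coincides with the screen
    return "second_display"           # AR view elsewhere in the room

# As the object moves through the reality space, the rendering target switches.
for pos in [(2.5, 0.4), (1.0, 0.5)]:
    print(pos, "->", present(pos))
```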
