TOUCHLESS USER INTERFACE NAVIGATION USING GESTURES
    3.
    Invention Application (Granted)

    Publication No.: US20170003747A1

    Publication Date: 2017-01-05

    Application No.: US14791291

    Filing Date: 2015-07-03

    Applicant: Google Inc.

    Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
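The claimed decision rule can be summarized in a short sketch. This is an illustrative reading of the abstract, not the patent's implementation: the function name, units, and the "no action on equal peaks" tie-break are assumptions; only the comparison of pronation acceleration against supination acceleration comes from the text.

```python
def classify_flick(supination_peak_accel: float, pronation_peak_accel: float) -> str:
    """Map a supination-then-pronation wrist movement to a navigation action.

    Per the abstract: a pronation slower than the preceding supination
    advances to the next content card; a faster pronation returns to the
    previous card. Peak accelerations are magnitudes (e.g. rad/s^2).
    """
    if pronation_peak_accel < supination_peak_accel:
        return "show_next_card"
    if pronation_peak_accel > supination_peak_accel:
        return "show_previous_card"
    return "no_action"  # equal peaks: ambiguous, so do nothing (assumption)


# A quick flick out followed by a gentle return advances the card deck:
print(classify_flick(supination_peak_accel=9.0, pronation_peak_accel=4.0))
```

In practice the two peak accelerations would be extracted from the motion sensor (102) trace by segmenting the roll-velocity signal into its supination and pronation phases before applying this comparison.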


IDENTIFYING GESTURES USING MOTION DATA
    4.
    Invention Application (Granted)

    Publication No.: US20160048161A1

    Publication Date: 2016-02-18

    Application No.: US14826437

    Filing Date: 2015-08-14

    Applicant: Google Inc.

    Abstract: In one example, a method includes determining, by a processor (104) of a wearable computing device (102) and based on motion data generated by a motion sensor (106) of the wearable computing device, one or more strokes. In this example, the method also includes generating, by the processor and based on the motion data, a respective attribute vector for each respective stroke from the one or more strokes and classifying, by the processor and based on the respective attribute vector, each respective stroke from the one or more strokes into at least one category. In this example, the method also includes determining, by the processor and based on a gesture library and the at least one category for each stroke from the one or more strokes, a gesture. In this example, the method also includes performing, by the wearable device and based on the gesture, an action.
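The stroke pipeline described above (strokes → attribute vectors → categories → gesture-library lookup → action) can be sketched as follows. The attribute choice (net displacement), the direction categories, and the library contents are illustrative assumptions; only the pipeline structure comes from the abstract.

```python
from typing import List, Tuple

# A stroke is a sequence of (dx, dy) displacement samples (assumed encoding).
Stroke = List[Tuple[float, float]]


def attribute_vector(stroke: Stroke) -> Tuple[float, float]:
    """Summarize a stroke as its net (dx, dy) — a minimal attribute vector."""
    return (sum(p[0] for p in stroke), sum(p[1] for p in stroke))


def classify_stroke(attrs: Tuple[float, float]) -> str:
    """Assign a stroke's attribute vector to a coarse direction category."""
    dx, dy = attrs
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "up" if dy >= 0 else "down"


# Gesture library: maps a sequence of stroke categories to a named gesture.
GESTURE_LIBRARY = {
    ("right",): "swipe_forward",
    ("left",): "swipe_back",
    ("up", "down"): "shake_dismiss",
}


def recognize(strokes: List[Stroke]) -> str:
    """Run the full pipeline and return the matched gesture (or 'unknown')."""
    categories = tuple(classify_stroke(attribute_vector(s)) for s in strokes)
    return GESTURE_LIBRARY.get(categories, "unknown")


print(recognize([[(1.0, 0.1), (2.0, -0.2)]]))  # -> swipe_forward
```

The matched gesture name would then be dispatched to the action the wearable device performs; richer attribute vectors (duration, peak velocity, curvature) would slot into the same structure.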

