Deep learning inference efficiency technology with early exit and speculative execution

    Publication No.: US11562200B2

    Publication Date: 2023-01-24

    Application No.: US16266880

    Filing Date: 2019-02-04

    Abstract: Systems, apparatuses and methods may provide for technology that processes an inference workload in a first subset of layers of a neural network that prevents or inhibits data dependent branch operations, conducts an exit determination as to whether an output of the first subset of layers satisfies one or more exit criteria, and selectively bypasses processing of the output in a second subset of layers of the neural network based on the exit determination. The technology may also speculatively initiate the processing of the output in the second subset of layers while the exit determination is pending. Additionally, when the inference workloads include a plurality of batches, the technology may mask one or more of the plurality of batches from processing in the second subset of layers.
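
    As a rough illustration of the early-exit idea described in this abstract, the sketch below shows a network split into a first and second subset of layers, with an exit determination made on the early output. It is a minimal sketch under assumed details: the class name, the confidence-based exit criterion, and the 0.9 threshold are illustrative choices, not taken from the patent.

```python
# Minimal early-exit inference sketch (hypothetical names; not the patented implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # "First subset" of layers: cheap feature extractor plus an auxiliary classifier.
        self.early_layers = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.early_head = nn.Linear(64, num_classes)
        # "Second subset" of layers: the more expensive continuation path.
        self.late_layers = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.late_head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor, confidence_threshold: float = 0.9) -> torch.Tensor:
        hidden = self.early_layers(x)
        early_logits = self.early_head(hidden)
        # Exit determination: here, the confidence of the early prediction is the exit criterion.
        confidence = F.softmax(early_logits, dim=-1).max(dim=-1).values
        if bool((confidence >= confidence_threshold).all()):
            # Exit criterion satisfied: bypass processing in the second subset of layers.
            return early_logits
        # Otherwise continue processing the early output in the later layers.
        return self.late_head(self.late_layers(hidden))

model = EarlyExitNet()
logits = model(torch.randn(4, 32))
```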

    Systems and methods for contextually augmented video creation and sharing

    Publication No.: US11138796B2

    Publication Date: 2021-10-05

    Application No.: US16546071

    Filing Date: 2019-08-20

    Abstract: An augmented reality (AR) device includes a 3D video camera to capture video images and corresponding depth information, a display device to display the video data, and an AR module to add a virtual 3D model to the displayed video data. A depth mapping module generates a 3D map based on the depth information, a dynamic scene recognition and tracking module processes the video images and the 3D map to detect and track a target object within a field of view of the 3D video camera, and an augmented video rendering module renders an augmented video of the virtual 3D model dynamically interacting with the target object. The augmented video is displayed on the display device in real time. The AR device may further include a context module to select the virtual 3D model based on context data comprising a current location of the augmented reality device.
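
    The module pipeline named in this abstract (depth mapping, scene recognition and tracking, augmented video rendering) can be pictured with the short sketch below. All class and method names are hypothetical stand-ins used only to show how the pieces could hand data to one another; none of them come from the patent.

```python
# Hypothetical per-frame pipeline for the AR device described above (illustrative names only).
from dataclasses import dataclass

@dataclass
class Frame:
    rgb: object     # color image from the 3D video camera
    depth: object   # corresponding per-pixel depth information

class DepthMappingModule:
    def build_map(self, depth) -> dict:
        """Generate a 3D map from the camera's depth information."""
        return {"point_cloud": depth}

class SceneRecognitionTracker:
    def track_target(self, rgb, depth_map) -> dict:
        """Detect and track the target object within the camera's field of view."""
        return {"target_pose": (0.0, 0.0, 1.5)}

class AugmentedVideoRenderer:
    def render(self, frame: Frame, target: dict, virtual_model: str) -> dict:
        """Render the virtual 3D model dynamically interacting with the tracked target."""
        return {"frame": frame.rgb, "overlay": (virtual_model, target["target_pose"])}

def process_frame(frame: Frame, virtual_model: str) -> dict:
    depth_map = DepthMappingModule().build_map(frame.depth)
    target = SceneRecognitionTracker().track_target(frame.rgb, depth_map)
    return AugmentedVideoRenderer().render(frame, target, virtual_model)

composited = process_frame(Frame(rgb="rgb_frame", depth="depth_frame"), "virtual_robot.glb")
```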

    Depth-based user interface gesture control

    Publication No.: US09389779B2

    Publication Date: 2016-07-12

    Application No.: US13976036

    Filing Date: 2013-03-14

    CPC classification number: G06F3/04883 G06F3/017

    Abstract: Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth relative to the display of the input gesture based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed.
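
    A compact sketch of the depth-plane assignment described here is given below. The plane boundaries, distances, and command strings are assumptions made for illustration; the abstract only specifies that a depth plane is assigned as a function of the measured depth and that the command depends on both the gesture and its plane.

```python
# Sketch of depth-plane assignment for gesture input (thresholds and names are assumptions).
from typing import List

def assign_depth_plane(gesture_depth_m: float, plane_boundaries_m: List[float]) -> int:
    """Map a gesture's depth relative to the display to a discrete depth-plane index."""
    for index, boundary in enumerate(plane_boundaries_m):
        if gesture_depth_m <= boundary:
            return index
    return len(plane_boundaries_m)  # farthest plane

def execute_command(gesture: str, depth_plane: int) -> str:
    """Dispatch a user interface command based on the gesture and its assigned depth plane."""
    if depth_plane == 0:
        return f"primary touch plane: {gesture} acts on the selected UI element"
    return f"secondary touch plane: {gesture} triggers magnify/select behaviour"

# Example: a swipe recognised 0.35 m from the display, with plane boundaries at 0.2 m and 0.5 m.
plane = assign_depth_plane(0.35, [0.2, 0.5])
print(execute_command("swipe", plane))
```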

    Deep learning inference efficiency technology with early exit and speculative execution

    Publication No.: US12211260B2

    Publication Date: 2025-01-28

    Application No.: US18519674

    Filing Date: 2023-11-27

    Abstract: Systems, apparatuses and methods may provide for technology that processes an inference workload in a first subset of layers of a neural network that prevents or inhibits data dependent branch operations, conducts an exit determination as to whether an output of the first subset of layers satisfies one or more exit criteria, and selectively bypasses processing of the output in a second subset of layers of the neural network based on the exit determination. The technology may also speculatively initiate the processing of the output in the second subset of layers while the exit determination is pending. Additionally, when the inference workloads include a plurality of batches, the technology may mask one or more of the plurality of batches from processing in the second subset of layers.
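
    Complementing the early-exit sketch shown under the earlier listing in this family, the fragment below illustrates the batch-masking aspect of the abstract: a boolean mask marks batch elements whose early output already satisfies the exit criterion, so only the remaining elements are processed in the second subset of layers. The layer stand-in, threshold, and function names are assumptions for illustration only; in a speculative variant, the later layers could be launched while the exit determination is still pending and the masked rows' results discarded.

```python
# Sketch of masking batch elements out of the second subset of layers (hypothetical names).
import numpy as np

def exit_mask(early_confidence: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """True for batch elements whose early output already satisfies the exit criterion."""
    return early_confidence >= threshold

def run_second_subset(hidden: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Process only unmasked batch elements in the later layers; masked rows pass through."""
    out = hidden.copy()
    pending = ~mask
    # Stand-in computation for the second subset of layers, applied to pending rows only.
    out[pending] = np.tanh(hidden[pending] @ np.eye(hidden.shape[1]))
    return out

confidence = np.array([0.95, 0.62, 0.99, 0.40])
hidden = np.random.rand(4, 8)
mask = exit_mask(confidence)            # [True, False, True, False]
result = run_second_subset(hidden, mask)
```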

    Augmentation of textual content with a digital scene

    Publication No.: US10270985B2

    Publication Date: 2019-04-23

    Application No.: US14475747

    Filing Date: 2014-09-03

    Abstract: Computer-readable storage media, computing devices and methods are discussed herein. In embodiments, a computing device may include one or more display devices, a digital content module coupled with the one or more display devices, and an augmentation module coupled with the digital content module and the one or more display devices. The digital content module may be configured to cause a portion of textual content to be rendered on the one or more display devices. The textual content may be associated with a digital scene that may be utilized to augment the textual content. The augmentation module may be configured to dynamically adapt the digital scene, based at least in part on a real-time video feed, to be rendered on the one or more display devices to augment the textual content. Other embodiments may be described and/or claimed.
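
    The sketch below illustrates, in very reduced form, the two roles this abstract assigns to the modules: rendering a portion of textual content alongside its associated digital scene, and dynamically adapting that scene from a real-time video feed. The brightness-based adaptation and all names are assumptions, not details from the patent.

```python
# Illustrative sketch (not the patented implementation): adapt a digital scene that
# augments rendered textual content, based on a property of a real-time video feed.
def adapt_scene(scene: dict, video_frame_brightness: float) -> dict:
    """Dynamically adapt the digital scene, e.g. matching its lighting to the live feed."""
    adapted = dict(scene)
    adapted["lighting"] = "dim" if video_frame_brightness < 0.3 else "bright"
    return adapted

def render_page(text: str, scene: dict) -> str:
    """Render a portion of textual content together with its associated digital scene."""
    return f"[scene:{scene['name']}|{scene['lighting']}] {text}"

scene = {"name": "forest", "lighting": "bright"}
print(render_page("Once upon a time...", adapt_scene(scene, video_frame_brightness=0.2)))
```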

    SYSTEMS AND METHODS FOR CONTEXTUALLY AUGMENTED VIDEO CREATION AND SHARING

    Publication No.: US20180047214A1

    Publication Date: 2018-02-15

    Application No.: US15689274

    Filing Date: 2017-08-29

    CPC classification number: G06T19/006 G06K9/00671

    Abstract: An augmented reality (AR) device includes a 3D video camera to capture video images and corresponding depth information, a display device to display the video data, and an AR module to add a virtual 3D model to the displayed video data. A depth mapping module generates a 3D map based on the depth information, a dynamic scene recognition and tracking module processes the video images and the 3D map to detect and track a target object within a field of view of the 3D video camera, and an augmented video rendering module renders an augmented video of the virtual 3D model dynamically interacting with the target object. The augmented video is displayed on the display device in real time. The AR device may further include a context module to select the virtual 3D model based on context data comprising a current location of the augmented reality device.
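
    The frame pipeline for this family is sketched under the granted-patent listing above; the fragment below instead illustrates the context module mentioned at the end of the abstract, which selects the virtual 3D model from context data such as the device's current location. The catalogue, file names, and location tags are purely illustrative assumptions.

```python
# Hypothetical context module: pick the virtual 3D model from context data (illustrative only).
from typing import Dict

MODEL_CATALOGUE: Dict[str, str] = {
    "museum": "dinosaur_skeleton.glb",
    "park": "animated_fountain.glb",
    "default": "info_marker.glb",
}

def select_virtual_model(location_tag: str, catalogue: Dict[str, str] = MODEL_CATALOGUE) -> str:
    """Select a virtual 3D model based on context data, here a coarse location tag."""
    return catalogue.get(location_tag, catalogue["default"])

print(select_virtual_model("museum"))  # dinosaur_skeleton.glb
```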
