METHODS AND APPARATUS TO PROCESS A MACHINE LEARNING MODEL IN A WEB-BROWSER ENVIRONMENT

    Publication Number: US20220253488A1

    Publication Date: 2022-08-11

    Application Number: US17630461

    Filing Date: 2019-09-27

    Abstract: Methods, apparatus, systems, and articles of manufacture to process a machine learning model in a web-browser environment are disclosed. An example apparatus includes a graph builder to accumulate machine learning operations as a graph. A tensor manager is to, in response to a request to access a tensor that is not yet available and associated with the machine learning operations, identify the graph based on the tensor. A graph cache manager is to determine whether a condensed graph corresponding to the identified graph is available. A graph condenser is to, in response to the graph cache manager determining that the condensed graph is not available, generate the condensed graph. A graph executor is to execute the condensed graph to create the tensor. The tensor manager is to provide the tensor as a response to the request to access the tensor.
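
    For illustration, a minimal TypeScript sketch of this deferred-execution flow follows. The class and function names (GraphBuilder, GraphCacheManager, TensorManager, condense, execute), the cache key, and the way the graph is identified from a tensor are assumptions made for the sketch, not the patent's implementation.

```typescript
// Sketch: operations are accumulated lazily; a tensor read triggers
// identification of its producing graph, condensed-graph caching, and execution.

type OpNode = { id: number; op: string; inputs: number[] };

class GraphBuilder {
  nodes: OpNode[] = [];
  private nextId = 0;
  // Accumulate one machine learning operation as a graph node; the returned id
  // identifies the (not yet materialized) output tensor.
  record(op: string, inputs: number[]): number {
    const id = this.nextId++;
    this.nodes.push({ id, op, inputs });
    return id;
  }
}

class GraphCacheManager {
  private cache = new Map<string, OpNode[]>();
  lookup(key: string): OpNode[] | undefined { return this.cache.get(key); }
  store(key: string, condensed: OpNode[]): void { this.cache.set(key, condensed); }
}

// Stand-in condenser and executor; a real condenser would fuse or simplify
// operations, and a real executor would compute actual tensor values.
const condense = (graph: OpNode[]): OpNode[] => graph.slice();
const execute = (condensed: OpNode[]): number[] => condensed.map(() => 0);

class TensorManager {
  private results = new Map<number, number[]>();
  constructor(private builder: GraphBuilder, private cacheMgr: GraphCacheManager) {}

  // Respond to a request for a tensor: identify the graph that produces it,
  // reuse or generate the condensed graph, execute it, and return the tensor.
  read(tensorId: number): number[] {
    const known = this.results.get(tensorId);
    if (known) return known;
    const graph = this.builder.nodes.filter(n => n.id <= tensorId); // identified subgraph (simplified)
    const key = graph.map(n => `${n.op}(${n.inputs.join(",")})`).join(";");
    let condensed = this.cacheMgr.lookup(key);
    if (!condensed) {
      condensed = condense(graph);
      this.cacheMgr.store(key, condensed);
    }
    const tensor = execute(condensed);
    this.results.set(tensorId, tensor);
    return tensor;
  }
}

// Usage: operations are only recorded until read() forces materialization.
const builder = new GraphBuilder();
const manager = new TensorManager(builder, new GraphCacheManager());
const a = builder.record("matmul", []);
const b = builder.record("relu", [a]);
console.log(manager.read(b)); // graph is condensed, cached, and executed here
```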

    METHODS AND APPARATUS TO PROCESS A MACHINE LEARNING MODEL IN A MULTI-PROCESS WEB BROWSER ENVIRONMENT

    Publication Number: US20210232969A1

    Publication Date: 2021-07-29

    Application Number: US17059986

    Filing Date: 2018-12-24

    Inventor: Ningxin Hu

    Abstract: Methods, apparatus, systems and articles of manufacture to process a machine learning model in a multi-process web browser environment are disclosed. An example apparatus includes a graph executor to determine a mode of operation for a computation graph to be executed. A central processing unit (CPU) interpreter is to lookup a CPU instruction corresponding to a node of the computation graph, the CPU instruction being a CPU-specific instruction for execution by at least one processor. A graph profiler is to determine whether the computation graph is frequently executed. A graphics processing unit (GPU) compiler interface is to, in response to determining that the computation graph is frequently executed, transmit a request for compilation of at least two nodes of the computation graph into a GPU kernel for execution at a GPU.
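
    A small TypeScript sketch of this profile-guided hand-off from CPU interpretation to GPU kernel compilation is shown below. The hotness threshold, the CPU kernel table, and all names are assumptions for the sketch rather than the patent's implementation.

```typescript
// Sketch: interpret graph nodes on the CPU, count executions, and once a graph
// is "hot", request compilation of at least two nodes into a single GPU kernel.

type OpNode = { op: string };
type Graph = { id: string; nodes: OpNode[] };

// CPU interpreter lookup table: one CPU-specific routine per supported op.
const cpuKernels: Record<string, () => void> = {
  add: () => { /* CPU-specific implementation of add */ },
  mul: () => { /* CPU-specific implementation of mul */ },
};

class GraphProfiler {
  private counts = new Map<string, number>();
  record(graphId: string): number {
    const n = (this.counts.get(graphId) ?? 0) + 1;
    this.counts.set(graphId, n);
    return n;
  }
  isHot(graphId: string, threshold = 10): boolean {
    return (this.counts.get(graphId) ?? 0) >= threshold;
  }
}

// Stand-in for the GPU compiler interface: in a multi-process browser this
// might post a message to another process that performs the compilation.
function requestGpuCompilation(graph: Graph): void {
  console.log(`compile ${graph.nodes.length} nodes of ${graph.id} into one GPU kernel`);
}

function runGraph(graph: Graph, profiler: GraphProfiler): void {
  profiler.record(graph.id);
  if (profiler.isHot(graph.id) && graph.nodes.length >= 2) {
    requestGpuCompilation(graph);      // hot path: fuse nodes into a GPU kernel
    return;
  }
  for (const node of graph.nodes) {    // cold path: CPU interpreter, one op at a time
    const kernel = cpuKernels[node.op];
    if (kernel) kernel();
  }
}

// Usage: after enough repetitions the same graph is routed to the GPU path.
const profiler = new GraphProfiler();
const graph: Graph = { id: "g0", nodes: [{ op: "add" }, { op: "mul" }] };
for (let i = 0; i < 12; i++) runGraph(graph, profiler);
```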

    Technologies for native game experience in web rendering engine

    Publication Number: US10207190B2

    Publication Date: 2019-02-19

    Application Number: US14974671

    Filing Date: 2015-12-18

    Abstract: Technologies for web-based game execution include a computing device with a web rendering engine and a native game engine library. The web rendering engine establishes a scripting environment that issues calls to a game engine interface established by the web rendering engine. The scripting environment may be a JavaScript engine. In response to calls to the game engine interface, the game engine interface issues calls to the native game engine library. The native game engine library issues native graphics commands to a graphics bridge of the computing device. The native graphics commands may be OpenGL calls. The graphics bridge translates the native graphics commands to a web graphics context, which renders graphical game content to a web content element of the web rendering engine. The web graphics context may be a WebGL context, and the web content element may be a canvas element. Other embodiments are described and claimed.
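
    To make the translation step concrete, the following is a minimal TypeScript sketch of a graphics bridge that maps native-style graphics commands onto a WebGL context bound to a canvas element. The command set and the bridge API are assumptions for the sketch, not the patent's interface.

```typescript
// Sketch: a "graphics bridge" translates native-style GL commands into calls on
// a WebGL rendering context obtained from a <canvas> web content element.

type NativeGlCommand =
  | { kind: "clearColor"; r: number; g: number; b: number; a: number }
  | { kind: "clear" };

class GraphicsBridge {
  constructor(private gl: WebGLRenderingContext) {}

  // Translate each native graphics command into the equivalent WebGL call.
  translate(cmd: NativeGlCommand): void {
    switch (cmd.kind) {
      case "clearColor":
        this.gl.clearColor(cmd.r, cmd.g, cmd.b, cmd.a);
        break;
      case "clear":
        this.gl.clear(this.gl.COLOR_BUFFER_BIT);
        break;
    }
  }
}

// Usage: the web content element is a canvas; its WebGL context backs the bridge.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl");
if (gl) {
  const bridge = new GraphicsBridge(gl);
  // Commands a native game engine library might emit, forwarded to the bridge.
  bridge.translate({ kind: "clearColor", r: 0, g: 0, b: 0, a: 1 });
  bridge.translate({ kind: "clear" });
}
```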

    METHODS AND APPARATUS FOR AN XPU-AWARE DYNAMIC COMPUTE SCHEDULING FRAMEWORK

    Publication Number: US20230244525A1

    Publication Date: 2023-08-03

    Application Number: US18160209

    Filing Date: 2023-01-26

    CPC classification number: G06F9/4881 G06N5/022

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for an XPU-aware dynamic compute scheduling framework that improves processing of cloud client application pipelines across XPU devices. An example apparatus includes memory, machine readable instructions, and processor circuitry to: trace an execution of an input model using a graph tracer; build a compute graph based on the trace of the input model; communicate an operational parameter; create a first XPU device assignment that recommends an XPU device to use based on at least one provisioned policy of a system-wide XPU selection policy provider; update the compute graph based on the first XPU device assignment; and send the first XPU device assignment to the devices through a dispatch command.
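
    A brief TypeScript sketch of policy-driven device assignment over a traced compute graph follows. The device set, the policy shape, and every helper name are assumptions made for the sketch, not the framework's actual API.

```typescript
// Sketch: trace a model into a compute graph, ask a system-wide selection
// policy for a device assignment per node, update the graph, then dispatch.

type Xpu = "CPU" | "GPU" | "VPU";
type ComputeNode = { name: string; op: string; device?: Xpu };

interface SelectionPolicy {
  // Recommend an XPU device given an op and an operational parameter (e.g. load).
  assign(op: string, load: number): Xpu;
}

// Illustrative provisioned policy: prefer the GPU for convolutions under light load.
const provisionedPolicy: SelectionPolicy = {
  assign: (op, load) => (op === "conv2d" && load < 0.8 ? "GPU" : "CPU"),
};

// Stand-in graph tracer: turn an abstract "model" (a list of ops) into graph nodes.
function traceModel(model: string[]): ComputeNode[] {
  return model.map((op, i) => ({ name: `node${i}`, op }));
}

function schedule(model: string[], load: number, policy: SelectionPolicy): ComputeNode[] {
  const graph = traceModel(model);               // build compute graph from the trace
  for (const node of graph) {
    node.device = policy.assign(node.op, load);  // first XPU device assignment
  }
  return graph;                                  // updated graph, ready to dispatch
}

// Dispatch command: send each node to its assigned device (stubbed as a log).
for (const node of schedule(["conv2d", "relu", "matmul"], 0.5, provisionedPolicy)) {
  console.log(`dispatch ${node.name} (${node.op}) -> ${node.device}`);
}
```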
