-
1.
Publication No.: US20220253488A1
Publication Date: 2022-08-11
Application No.: US17630461
Filing Date: 2019-09-27
Applicant: Intel Corporation
Inventor: Jianhui Li , Yong Wu , Ningxin Hu , Yiqiang Li , Yuanke Luo
IPC: G06F16/954 , G06N3/04 , G06N3/08
Abstract: Methods, apparatus, systems, and articles of manufacture to process a machine learning model in a web-browser environment are disclosed. An example apparatus includes a graph builder to accumulate machine learning operations as a graph. A tensor manager is to, in response to a request to access a tensor that is not yet available and associated with the machine learning operations, identify the graph based on the tensor. A graph cache manager is to determine whether a condensed graph corresponding to the identified graph is available. A graph condenser is to, in response to the graph cache manager determining that the condensed graph is not available, generate the condensed graph. A graph executor is to execute the condensed graph to create the tensor. The tensor manager is to provide the tensor as a response to the request to access the tensor.
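The deferred-execution flow this abstract describes can be illustrated with a small TypeScript sketch: operations accumulate in a graph, and a tensor is only computed when it is read, at which point its producing subgraph is identified, condensed (with the condensed form cached), and executed. All class and method names here are illustrative, not from the patent, and "tensors" are stood in for by scalars.

```typescript
// Illustrative sketch (not the patented implementation): ops accumulate in a
// graph; reading a not-yet-available tensor triggers identification,
// condensation (cached), and execution of its producing subgraph.

type Op = { id: number; kind: "add" | "mul"; inputs: number[] };

class DeferredGraphRuntime {
  private ops: Op[] = [];                           // graph builder: accumulated operations
  private values = new Map<number, number>();       // already-materialized "tensors" (scalars here)
  private condensedCache = new Map<string, Op[]>(); // graph cache manager

  constant(v: number): number {
    const id = this.ops.length;
    this.values.set(id, v);
    this.ops.push({ id, kind: "add", inputs: [] }); // placeholder node; never re-executed
    return id;
  }

  record(kind: "add" | "mul", inputs: number[]): number {
    const id = this.ops.length;
    this.ops.push({ id, kind, inputs });
    return id;
  }

  // Tensor manager: accessing a not-yet-available tensor drives the pipeline.
  read(id: number): number {
    if (this.values.has(id)) return this.values.get(id)!;
    const graph = this.identifyGraph(id);
    const key = graph.map(op => `${op.kind}(${op.inputs.join(",")})`).join(";");
    let condensed = this.condensedCache.get(key);
    if (!condensed) {                // graph condenser: trivial pass-through in this sketch
      condensed = graph;
      this.condensedCache.set(key, condensed);
    }
    return this.execute(condensed, id); // graph executor
  }

  private identifyGraph(id: number): Op[] {
    // Collect the producing subgraph by walking input edges back from the tensor.
    const needed = new Set<number>();
    const visit = (n: number) => {
      if (needed.has(n) || this.values.has(n)) return;
      needed.add(n);
      this.ops[n].inputs.forEach(visit);
    };
    visit(id);
    return [...needed].sort((a, b) => a - b).map(n => this.ops[n]);
  }

  private execute(graph: Op[], target: number): number {
    for (const op of graph) {
      const args = op.inputs.map(i => this.read(i));
      const v = op.kind === "add" ? args.reduce((a, b) => a + b, 0)
                                  : args.reduce((a, b) => a * b, 1);
      this.values.set(op.id, v);
    }
    return this.values.get(target)!;
  }
}
```

In this sketch `record` never computes anything; only `read` does, which is the lazy behavior the abstract attributes to the tensor manager.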
-
2.
Publication No.: US20210232969A1
Publication Date: 2021-07-29
Application No.: US17059986
Filing Date: 2018-12-24
Applicant: Intel Corporation
Inventor: Ningxin Hu
IPC: G06N20/00
Abstract: Methods, apparatus, systems and articles of manufacture to process a machine learning model in a multi-process web browser environment are disclosed. An example apparatus includes a graph executor to determine a mode of operation for a computation graph to be executed. A central processing unit (CPU) interpreter is to lookup a CPU instruction corresponding to a node of the computation graph, the CPU instruction being a CPU-specific instruction for execution by at least one processor. A graph profiler is to determine whether the computation graph is frequently executed. A graphics processing unit (GPU) compiler interface is to, in response to determining that the computation graph is frequently executed, transmit a request for compilation of at least two nodes of the computation graph into a GPU kernel for execution at a GPU.
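The profile-guided choice between interpretation and compilation described here can be sketched in TypeScript: a graph runs through a per-node CPU-instruction lookup until a profiler deems it frequently executed, after which fused nodes run as a single "kernel" (simulated below by a pre-composed function; the threshold, names, and operations are all illustrative assumptions).

```typescript
// Illustrative sketch (not the patented implementation): interpret a graph
// node-by-node until it is "hot", then run a fused, compiled form of it.

type GraphNode = { op: "inc" | "double" };

class ProfiledExecutor {
  private counts = new Map<string, number>();                 // graph profiler
  private kernels = new Map<string, (x: number) => number>(); // "GPU kernels" (simulated)
  constructor(private hotThreshold = 3) {}

  run(graphId: string, nodes: GraphNode[], input: number): number {
    const n = (this.counts.get(graphId) ?? 0) + 1;
    this.counts.set(graphId, n);

    const kernel = this.kernels.get(graphId);
    if (kernel) return kernel(input);             // fast path: already compiled

    if (n >= this.hotThreshold) {
      // "GPU compiler interface": fuse all nodes into a single callable.
      const fused = nodes
        .map(node => this.cpuInstruction(node))
        .reduce((f, g) => (x: number) => g(f(x)));
      this.kernels.set(graphId, fused);
    }

    // CPU interpreter: look up and apply each node's CPU instruction in turn.
    return nodes.reduce((x, node) => this.cpuInstruction(node)(x), input);
  }

  private cpuInstruction(node: GraphNode): (x: number) => number {
    return node.op === "inc" ? x => x + 1 : x => x * 2;
  }
}
```

Both paths compute the same result; what changes past the threshold is that the whole graph executes as one call rather than a per-node dispatch loop.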
-
3.
Publication No.: US10207190B2
Publication Date: 2019-02-19
Application No.: US14974671
Filing Date: 2015-12-18
Applicant: Intel Corporation
Inventor: Guangzhen Li , Zhongsong Lin , Chun Gao , Ningxin Hu
IPC: A63F13/77 , G06F21/12 , G06F21/53 , G06F9/455 , G06F17/30 , G06F9/54 , A63F13/30 , A63F13/50 , G06F3/0481 , G06T1/20 , H04L29/08
Abstract: Technologies for web-based game execution include a computing device with a web rendering engine and a native game engine library. The web rendering engine establishes a scripting environment that issues calls to a game engine interface established by the web rendering engine. The scripting environment may be a JavaScript engine. In response to calls to the game engine interface, the game engine interface issues calls to the native game engine library. The native game engine library issues native graphics commands to a graphics bridge of the computing device. The native graphics commands may be OpenGL calls. The graphics bridge translates the native graphics commands to a web graphics context, which renders graphical game content to a web content element of the web rendering engine. The web graphics context may be a WebGL context, and the web content element may be a canvas element. Other embodiments are described and claimed.
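The graphics-bridge translation step can be sketched in TypeScript: native OpenGL-style commands issued by the game engine map onto calls on a WebGL-like context. The context below is a minimal stand-in interface; in a browser it would be a real `WebGLRenderingContext` obtained from a canvas element, and the two commands shown are just examples.

```typescript
// Illustrative sketch (not the patented implementation): translate native
// GL-style commands into calls on a WebGL-like graphics context.

interface WebGLLikeContext {
  clearColor(r: number, g: number, b: number, a: number): void;
  clear(mask: number): void;
}

type NativeCommand =
  | { name: "glClearColor"; args: [number, number, number, number] }
  | { name: "glClear"; args: [number] };

class GraphicsBridge {
  constructor(private ctx: WebGLLikeContext) {}

  // Translate one native graphics command into the equivalent web-context call.
  translate(cmd: NativeCommand): void {
    switch (cmd.name) {
      case "glClearColor": this.ctx.clearColor(...cmd.args); break;
      case "glClear":      this.ctx.clear(...cmd.args);      break;
    }
  }
}
```

A real bridge would also manage GL state and resource handles across the boundary; the point of the sketch is only the command-to-context mapping.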
-
4.
Publication No.: US20230244525A1
Publication Date: 2023-08-03
Application No.: US18160209
Filing Date: 2023-01-26
Applicant: Intel Corporation
Inventor: Ningxin Hu , Feng Dai , Junyong Ding , Junwei Fu , Mohammad Haghighat , Mousumi Hazra , Mingming Xu , Min Zhang
CPC classification number: G06F9/4881 , G06N5/022
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for an XPU-aware dynamic compute scheduling framework. These improve processing of cloud client application pipelines across XPU devices by incorporating memory, machine-readable instructions, and processor circuitry to: trace an execution of an input model with a graph tracer; build a compute graph based on the trace of the input model; communicate an operational parameter; create a first XPU device assignment that recommends an XPU device to use based on at least one provisioned policy of a system-wide XPU selection policy provider; update the compute graph based on the first XPU device assignment; and send the first XPU device assignment to the devices through a dispatch command.
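The assignment step of this pipeline can be sketched in TypeScript: a traced compute graph is walked, a selection policy recommends a device per node, and the result forms the dispatch commands. The device set, node shape, and the FLOP-count heuristic below are illustrative assumptions, not the patent's provisioned policies.

```typescript
// Illustrative sketch (not the patented implementation): map each node of a
// traced compute graph to an XPU device via a pluggable selection policy.

type XpuDevice = "CPU" | "GPU" | "VPU";
type ComputeNode = { name: string; flops: number };
type SelectionPolicy = (node: ComputeNode) => XpuDevice;

// Stand-in for a policy from a system-wide XPU selection policy provider:
// heavy nodes go to the GPU, light ones stay on the CPU (hypothetical rule).
const defaultPolicy: SelectionPolicy = node =>
  node.flops > 1e6 ? "GPU" : "CPU";

// Create the per-node device assignment that a dispatcher would then send
// to the devices as dispatch commands.
function scheduleGraph(
  graph: ComputeNode[],
  policy: SelectionPolicy = defaultPolicy,
): { node: string; device: XpuDevice }[] {
  return graph.map(node => ({ node: node.name, device: policy(node) }));
}
```

Keeping the policy as a parameter mirrors the abstract's separation between the scheduler and the policy provider: swapping policies changes assignments without touching the graph walk.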
-
5.
Publication No.: US20220343165A1
Publication Date: 2022-10-27
Application No.: US17764094
Filing Date: 2019-10-29
Applicant: Intel Corporation
Inventor: Ningxin Hu , Mohammad Haghighat , Pinzhen Xu
Abstract: Systems, apparatuses, and methods may provide for technology that detects a request by a web application to execute a neural network and dispatches a first portion of the neural network to a first device via a first process. The technology may also dispatch a second portion of the neural network to a second device via a second process, wherein the second portion of the neural network is to include one or more operations that are unsupported by the first device.
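The split described here can be sketched in TypeScript: given the network's operation list and the set of operations the first device supports, the supported ops form the first portion and the unsupported ones the second. The op names and support table below are illustrative assumptions.

```typescript
// Illustrative sketch (not the patented implementation): partition a neural
// network's operations into a portion the first device can run and a
// remainder, destined for a second device, containing unsupported ops.

type NnOp = "conv2d" | "relu" | "custom_nms";

function partitionNetwork(
  ops: NnOp[],
  firstDeviceSupports: Set<NnOp>,
): { first: NnOp[]; second: NnOp[] } {
  const first: NnOp[] = [];
  const second: NnOp[] = [];
  for (const op of ops) {
    (firstDeviceSupports.has(op) ? first : second).push(op);
  }
  return { first, second };
}
```

Each portion would then be dispatched via its own process, per the abstract; the sketch covers only the supported/unsupported split that decides what each process receives.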