-
Publication No.: US20220382593A1
Publication Date: 2022-12-01
Application No.: US17814895
Filing Date: 2022-07-26
Inventors: Diman Zad Tootaghaj, Anu Mercian, Vivek Adarsh, Puneet Sharma
Abstract: Example implementations relate to edge acceleration by offloading network-dependent applications to a hardware accelerator. According to one embodiment, queries are received at a cluster of a container orchestration platform. The cluster includes a host system and a hardware accelerator, each serving as an individual worker machine of the cluster. The cluster further includes multiple worker nodes and a master node executing on the host system or the hardware accelerator. A first worker node executes on the hardware accelerator and runs a first instance of an application. A distribution of the queries among the worker machines is determined based on a queuing model that takes into account the respective compute capacities of the worker machines. Responsive to receipt of the queries by the host system or the hardware accelerator, the queries are directed to the master node or one of the worker nodes in accordance with the distribution.
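The abstract describes splitting an incoming query stream across worker machines using a queuing model weighted by each machine's compute capacity, but it does not specify the model itself. The sketch below is a minimal illustration, not the patented method: it assumes each worker machine (host system or hardware accelerator) behaves as an M/M/1 queue and splits the arrival rate in proportion to service rate. The machine names, capacity figures, and the distribute_queries helper are hypothetical.

```python
# Illustrative sketch only (assumptions, not the patented method):
# each worker machine is modeled as an M/M/1 queue, and the total
# query arrival rate is split in proportion to each machine's capacity.
from dataclasses import dataclass


@dataclass
class WorkerMachine:
    name: str
    service_rate: float  # queries/second the machine can serve (compute capacity)


def distribute_queries(total_arrival_rate: float,
                       machines: list[WorkerMachine]) -> dict[str, float]:
    """Split the total query arrival rate across machines proportionally to capacity."""
    total_capacity = sum(m.service_rate for m in machines)
    return {m.name: total_arrival_rate * m.service_rate / total_capacity
            for m in machines}


def expected_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue; valid only when arrival_rate < service_rate."""
    return 1.0 / (service_rate - arrival_rate)


if __name__ == "__main__":
    # Hypothetical capacities for a host system and a hardware accelerator.
    machines = [
        WorkerMachine("host-system", service_rate=400.0),
        WorkerMachine("hardware-accelerator", service_rate=1200.0),
    ]
    split = distribute_queries(total_arrival_rate=1000.0, machines=machines)
    for m in machines:
        lam = split[m.name]
        print(f"{m.name}: {lam:.0f} qps, "
              f"E[T] = {expected_response_time(lam, m.service_rate) * 1000:.2f} ms")
```

Under these assumptions the accelerator, with three times the host's capacity, receives three quarters of the queries; any other queuing discipline or optimization objective (e.g., minimizing total mean response time) would yield a different split.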
-
Publication No.: US11886919B2
Publication Date: 2024-01-30
Application No.: US17814895
Filing Date: 2022-07-26
Inventors: Diman Zad Tootaghaj, Anu Mercian, Vivek Adarsh, Puneet Sharma
CPC Classification: G06F9/5027, G06F9/45558, G06F9/547, G06F2009/4557
Abstract: Example implementations relate to edge acceleration by offloading network-dependent applications to a hardware accelerator. According to one embodiment, queries are received at a cluster of a container orchestration platform. The cluster includes a host system and a hardware accelerator, each serving as an individual worker machine of the cluster. The cluster further includes multiple worker nodes and a master node executing on the host system or the hardware accelerator. A first worker node executes on the hardware accelerator and runs a first instance of an application. A distribution of the queries among the worker machines is determined based on a queuing model that takes into account the respective compute capacities of the worker machines. Responsive to receipt of the queries by the host system or the hardware accelerator, the queries are directed to the master node or one of the worker nodes in accordance with the distribution.
-
Publication No.: US11436054B1
Publication Date: 2022-09-06
Application No.: US17222160
Filing Date: 2021-04-05
Inventors: Diman Zad Tootaghaj, Anu Mercian, Vivek Adarsh, Puneet Sharma
Abstract: Example implementations relate to edge acceleration by offloading network-dependent applications to a hardware accelerator. According to one embodiment, queries are received at a cluster of a container orchestration platform. The cluster includes a host system and a hardware accelerator, each serving as an individual worker machine of the cluster. The cluster further includes multiple worker nodes and a master node executing on the host system or the hardware accelerator. A first worker node executes on the hardware accelerator and runs a first instance of an application. A distribution of the queries among the worker machines is determined based on a queuing model that takes into account the respective compute capacities of the worker machines. Responsive to receipt of the queries by the host system or the hardware accelerator, the queries are directed to the master node or one of the worker nodes in accordance with the distribution.
-