-
Publication No.: US20250133491A1
Publication Date: 2025-04-24
Application No.: US18381367
Application Date: 2023-10-18
Applicant: Google LLC
Inventor: Ananya Simlai, Rittwik Jana, Ian Kenneth Coolidge, Santanu Dasgupta
IPC: H04W52/02
Abstract: Aspects of the disclosure are directed to network optimization of various workload servers running in a distributed cloud platform through closed loop machine learning inferencing performed locally on the workload servers. The workload servers can each be equipped with one or more machine learning accelerators to respectively perform local predictions for the workload servers. In response to the local predictions, attributes of the workload servers can be adjusted automatically for optimizing the network.
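The abstract describes a closed loop in which each workload server runs inference locally and adjusts its own attributes in response to the prediction. A minimal sketch of that pattern is below; all names, thresholds, and the trivial stand-in predictor are illustrative assumptions, not details from the patent.

```python
def predict_load(samples):
    """Stand-in for an accelerator-backed model: a simple moving average."""
    return sum(samples) / len(samples)

def closed_loop_step(server, recent_utilization):
    """One iteration of the local predict-then-adjust loop on a workload server."""
    predicted = predict_load(recent_utilization)
    if predicted > server["scale_up_threshold"]:
        server["worker_count"] += 1                                # scale up ahead of demand
    elif predicted < server["scale_down_threshold"]:
        server["worker_count"] = max(1, server["worker_count"] - 1)  # reclaim capacity
    return server

server = {"worker_count": 2, "scale_up_threshold": 0.8, "scale_down_threshold": 0.3}
server = closed_loop_step(server, [0.85, 0.90, 0.88])  # predicted ~0.88, above threshold
```

Because the loop runs locally on each server, the adjustment happens without a round trip to a central controller, which is the point of performing the inferencing on the workload servers themselves.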
-
Publication No.: US20250123673A1
Publication Date: 2025-04-17
Application No.: US18379798
Application Date: 2023-10-13
Applicant: Google LLC
Inventor: Ananya Simlai, Ming Wen, Ian Kenneth Coolidge, Santanu Dasgupta
IPC: G06F1/329
Abstract: The presently disclosed technology provides methods and systems for optimally allocating power among workloads executing on a computer system through use of a power management algorithm. For example, according to the present technology a plurality of CPUs within a server can be divided into multiple groups according to application workloads. Workloads can be distributed to the CPUs as needed by a workload scheduler, and the workload scheduler can provide the CPU IDs to a power manager, enabling the power manager to optimize power settings. Each group of CPUs can be assigned an optimal power profile tailored to its respective situation.
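The flow in this abstract, where the scheduler reports per-workload CPU IDs to a power manager, which then assigns each CPU group a tailored profile, can be sketched roughly as follows. The profile names, the latency/utilization rules, and the group structure are all invented for illustration and are not taken from the patent.

```python
def assign_power_profiles(cpu_groups):
    """Map each workload's CPUs to a power profile based on the workload's needs."""
    per_cpu_profile = {}
    for workload, info in cpu_groups.items():
        if info["latency_sensitive"]:
            profile = "performance"   # keep frequency high for latency-critical work
        elif info["utilization"] < 0.2:
            profile = "powersave"     # mostly idle group: clock down aggressively
        else:
            profile = "balanced"
        for cpu_id in info["cpu_ids"]:
            per_cpu_profile[cpu_id] = profile
    return per_cpu_profile

# The scheduler would supply something like this mapping of workloads to CPU IDs.
groups = {
    "packet-processing": {"cpu_ids": [0, 1], "latency_sensitive": True,  "utilization": 0.7},
    "batch-analytics":   {"cpu_ids": [2, 3], "latency_sensitive": False, "utilization": 0.1},
}
per_cpu = assign_power_profiles(groups)
```

The key design point is that power settings follow the workload rather than the hardware: the same CPU can move between profiles as the scheduler reassigns it.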
-
Publication No.: US20230418775A1
Publication Date: 2023-12-28
Application No.: US18213028
Application Date: 2023-06-22
Applicant: Google LLC
Inventor: Santanu Dasgupta, Bok Knun Randolph Chung, Ankur Jain, Prashant Chandra, Bor Chan, Durgaprasad V. Ayyadevara, Ian Kenneth Coolidge, Muzammil Mueen Butt
CPC classification number: G06F13/385, G06F13/28, G06F2213/0038, G06F2213/3808, G06F2213/0026
Abstract: The present disclosure provides for a converged compute platform architecture, including a first infrastructure processing unit (IPU)-only configuration and a second configuration wherein the IPU is coupled to a central processing unit, such as an x86 processor. Connectivity between the two configurations may be accomplished with a PCIe switch, or the two configurations may communicate through remote direct memory access (RDMA) techniques. Both configurations may use ML acceleration through a single converged architecture.
-
Publication No.: US12259841B2
Publication Date: 2025-03-25
Application No.: US18237171
Application Date: 2023-08-23
Applicant: Google LLC
Inventor: Ian Kenneth Coolidge, Shahin Valoth
IPC: G06F13/42
Abstract: The present disclosure provides for an architecture for a multi-interface card environment, such as a server that includes multiple network interface cards (NICs) or peripheral component interconnect express (PCIe) cards. The architecture includes a passive optical splitter coupled between a leader clock and the multiple interface cards or PCIes. The optical splitter can be used to distribute clock time from the leader clock to the interface cards. The architecture provides for distribution of timing in a scalable manner in the multi-NIC environments for cloud deployments.
-
Publication No.: US20250068581A1
Publication Date: 2025-02-27
Application No.: US18237171
Application Date: 2023-08-23
Applicant: Google LLC
Inventor: Ian Kenneth Coolidge, Shahin Valoth
IPC: G06F13/42
Abstract: The present disclosure provides for an architecture for a multi-interface card environment, such as a server that includes multiple network interface cards (NICs) or peripheral component interconnect express (PCIe) cards. The architecture includes a passive optical splitter coupled between a leader clock and the multiple interface cards or PCIes. The optical splitter can be used to distribute clock time from the leader clock to the interface cards. The architecture provides for distribution of timing in a scalable manner in the multi-NIC environments for cloud deployments.