-
Publication number: US20240061713A1
Publication date: 2024-02-22
Application number: US18500070
Filing date: 2023-11-01
Applicant: Microsoft Technology Licensing, LLC
Inventor: Pradeep Sindhu , Jean-Marc Frailong , Wael Noureddine , Felix A. Marti , Deepak Goel , Rajan Goyal , Bertrand Serlet
IPC: G06F9/50 , G06F15/173
CPC classification number: G06F9/5027 , G06F15/17337
Abstract: A new processing architecture is described that utilizes a data processing unit (DPU). Unlike conventional compute models that are centered around a central processing unit (CPU), the DPU is designed for a data-centric computing model in which the data processing tasks are centered around the DPU. The DPU may be viewed as a highly programmable, high-performance I/O and data-processing hub designed to aggregate and process network and storage I/O to and from other devices. The DPU comprises a network interface to connect to a network, one or more host interfaces to connect to one or more application processors or storage devices, and a multi-core processor with two or more processing cores executing a run-to-completion data plane operating system and one or more processing cores executing a multi-tasking control plane operating system. The data plane operating system is configured to support software functions for performing the data processing tasks.
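The run-to-completion data plane described in the abstract can be illustrated with a minimal sketch: a data-plane core drains its work-unit queue and runs each handler fully before taking the next unit, with no preemption. All names and handlers here are illustrative stand-ins, not from the patent.

```python
from queue import Queue

# Minimal model of a run-to-completion data plane core: each work unit's
# handler runs fully before the core dequeues the next unit.

def run_to_completion(core_queue, handlers):
    """Drain a core's work-unit queue, running each handler to completion."""
    results = []
    while not core_queue.empty():
        kind, payload = core_queue.get()
        results.append(handlers[kind](payload))  # no preemption mid-handler
    return results

q = Queue()
q.put(("checksum", b"abc"))
q.put(("length", b"abcd"))
handlers = {"checksum": lambda p: sum(p) % 256, "length": len}
print(run_to_completion(q, handlers))  # → [38, 4]
```

The control plane, by contrast, would run as an ordinary multi-tasking OS on the remaining cores; only the data-plane dispatch loop is modeled here.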
-
Publication number: US12120016B2
Publication date: 2024-10-15
Application number: US18173841
Filing date: 2023-02-24
Applicant: Microsoft Technology Licensing, LLC
Inventor: Pradeep Sindhu , Deepak Goel , Jean-Marc Frailong , Srihari Raju Vegesna , Wael Noureddine , Philip A. Thomas , Satish Deo , Sunil Mekad , Ayaskant Pani
IPC: H04L45/24 , H04L12/46 , H04L47/125 , H04L47/34 , H04L45/00 , H04L45/64 , H04L49/00 , H04L49/10 , H04L49/15 , H04L49/1515 , H04L49/25 , H04Q11/00
CPC classification number: H04L45/24 , H04L12/4633 , H04L47/125 , H04L47/34 , H04L45/22 , H04L45/64 , H04L49/10 , H04L49/1515 , H04L49/1584 , H04L49/25 , H04L49/70 , H04Q11/00
Abstract: A network system for a data center. In one example, a method comprises establishing, by a plurality of access nodes, a logical tunnel over a plurality of data paths across a switch fabric between a source access node and a destination access node included within the plurality of access nodes, wherein the source access node is coupled to a source network device; and spraying, by the source access node, a data flow of packets over the logical tunnel to the destination access node, wherein the source access node receives the data flow of packets from the source network device, and wherein spraying the data flow of packets includes directing each of the packets within the data flow to one of the data paths based on an amount of data previously transmitted on each of the plurality of data paths.
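The per-packet spraying criterion in this abstract, directing each packet to a path based on the amount of data previously transmitted on each path, can be sketched as a running byte counter per path. This is a hedged illustration; the function and variable names are assumptions, not the patent's terminology.

```python
# Sketch of per-packet spraying across parallel data paths: each packet
# is sent on the path with the smallest running byte count, so load
# balances by bytes transmitted rather than by packet count.

def spray(packets, n_paths):
    bytes_sent = [0] * n_paths
    assignment = []
    for pkt in packets:
        path = bytes_sent.index(min(bytes_sent))  # least bytes so far
        bytes_sent[path] += len(pkt)
        assignment.append(path)
    return assignment, bytes_sent
```

For example, spraying packets of 4, 2, 1, and 3 bytes over two paths sends the first packet on path 0 and the next three on path 1, leaving the paths within two bytes of each other.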
-
Publication number: US20240250919A1
Publication date: 2024-07-25
Application number: US18595195
Filing date: 2024-03-04
Applicant: Microsoft Technology Licensing, LLC
Inventor: Deepak Goel , Narendra Jayawant Gathoo , Philip A. Thomas , Srihari Raju Vegesna , Pradeep Sindhu , Wael Noureddine , Robert William Bowdidge , Ayaskant Pani , Gopesh Goyal
CPC classification number: H04L49/25 , H04L12/4633 , H04L45/22 , H04L47/34 , H04L47/41 , H04L67/10 , H04L69/16
Abstract: A fabric control protocol is described for use within a data center in which a switch fabric provides full mesh interconnectivity such that any of the servers may communicate packet data for a given packet flow to any other of the servers using any of a number of parallel data paths within the data center switch fabric. The fabric control protocol enables spraying of individual packets for a given packet flow across some or all of the multiple parallel data paths in the data center switch fabric and, optionally, reordering of the packets for delivery to the destination. The fabric control protocol may provide end-to-end bandwidth scaling and flow fairness within a single tunnel based on endpoint-controlled requests and grants for flows. In some examples, the fabric control protocol packet structure is carried over an underlying protocol, such as the User Datagram Protocol (UDP).
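The endpoint-controlled request/grant mechanism this abstract attributes to the fabric control protocol can be sketched as a simple credit exchange: the sender requests credit for its queued bytes, the receiver grants up to its available buffer, and the sender transmits only what was granted. This models only the credit logic, not the UDP encapsulation; the names and the grant policy are illustrative assumptions.

```python
# Hedged sketch of an endpoint-controlled request/grant exchange:
# the receiver, not the fabric, decides how much the sender may inject.

def request_grant_exchange(queued_bytes, receiver_buffer):
    granted = min(queued_bytes, receiver_buffer)  # receiver-controlled grant
    sent = granted                                # sender honors the grant
    return {"requested": queued_bytes, "granted": granted, "sent": sent}
```

With 5000 bytes queued against a 3000-byte receive buffer, only 3000 bytes are granted and sent; the remainder waits for a later grant, which is how per-flow fairness and bandwidth scaling could be enforced end to end.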
-
Publication number: US20230388222A1
Publication date: 2023-11-30
Application number: US18446876
Filing date: 2023-08-09
Applicant: Microsoft Technology Licensing, LLC
Inventor: Pradeep Sindhu , Deepak Goel , Jean-Marc Frailong , Srihari Raju Vegesna , Wael Noureddine , Philip A. Thomas , Satish Deo , Sunil Mekad , Ayaskant Pani
IPC: H04L45/24 , H04L47/125 , H04L12/46 , H04L47/34
CPC classification number: H04L45/24 , H04L47/125 , H04L12/4633 , H04L47/34 , H04L49/1515
Abstract: A network system for a data center is described in which an access node sprays a data flow of packets over a logical tunnel to another access node. In one example, a method comprises establishing, by a plurality of access nodes, a logical tunnel over a plurality of data paths across a switch fabric between a source access node and a destination access node included within the plurality of access nodes, wherein the source access node is coupled to a source network device; and spraying, by the source access node, a data flow of packets over the logical tunnel to the destination access node, wherein the source access node receives the data flow of packets from the source network device, and wherein spraying the data flow of packets includes directing each of the packets within the data flow to a least loaded data path.
-
Publication number: US11809321B2
Publication date: 2023-11-07
Application number: US17806419
Filing date: 2022-06-10
Applicant: Microsoft Technology Licensing, LLC
Inventor: Wael Noureddine , Jean-Marc Frailong , Pradeep Sindhu , Bertrand Serlet
IPC: G06F12/0815 , G06F12/0804 , G06F15/173
CPC classification number: G06F12/0815 , G06F12/0804 , G06F15/17325 , G06F2212/1016 , G06F2212/1032
Abstract: Methods and apparatus for memory management are described. In one example, this disclosure describes a method that includes executing, by a first processing unit, first work unit operations specified by a first work unit message, wherein execution of the first work unit operations includes accessing data from shared memory included within the computing system, modifying the data, and storing the modified data in a first cache associated with the first processing unit; identifying, by the computing system, a second work unit message that specifies second work unit operations that access the shared memory; updating, by the computing system, the shared memory by storing the modified data in the shared memory; receiving, by the computing system, an indication that updating the shared memory with the modified data is complete; and enabling the second processing unit to execute the second work unit operations.
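The ordering this abstract describes, where a first work unit's cached modifications are written back to shared memory before a second work unit that touches the same memory is allowed to run, can be modeled in a few lines. The class and method names below are hypothetical stand-ins for illustration only.

```python
# Illustrative model of the described handoff: work unit 1 modifies a
# local cached copy, the cache is flushed to shared memory, and only
# then does work unit 2 read, so it observes the updated data.

class SharedMemoryPipeline:
    def __init__(self, shared):
        self.shared = shared   # stands in for the shared memory
        self.cache = {}        # stands in for the first unit's cache

    def run_first_work_unit(self, key, fn):
        self.cache[key] = fn(self.shared[key])  # modify in local cache only

    def flush(self):
        self.shared.update(self.cache)  # write back before the handoff
        self.cache.clear()

    def run_second_work_unit(self, key):
        return self.shared[key]  # enabled only after the flush completes
```

Without the `flush` step, the second work unit would read the stale value from shared memory, which is exactly the hazard the described method avoids.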
-
Publication number: US11777839B2
Publication date: 2023-10-03
Application number: US16901991
Filing date: 2020-06-15
Applicant: Microsoft Technology Licensing, LLC
Inventor: Pradeep Sindhu , Deepak Goel , Jean-Marc Frailong , Srihari Raju Vegesna , Wael Noureddine , Philip A. Thomas , Satish Deo , Sunil Mekad , Ayaskant Pani
IPC: H04L45/24 , H04L47/125 , H04L12/46 , H04L47/34 , H04L49/1515 , H04L49/25 , H04L45/00 , H04L49/15 , H04L49/10 , H04L45/64 , H04Q11/00 , H04L49/00
CPC classification number: H04L45/24 , H04L12/4633 , H04L47/125 , H04L47/34 , H04L45/22 , H04L45/64 , H04L49/10 , H04L49/1515 , H04L49/1584 , H04L49/25 , H04L49/70 , H04Q11/00
Abstract: A network system for a data center is described in which an access node sprays a data flow of packets over a logical tunnel to another access node. In one example, a method comprises establishing, by a plurality of access nodes, a logical tunnel over a plurality of data paths across a switch fabric between a source access node and a destination access node included within the plurality of access nodes, wherein the source access node is coupled to a source network device; and spraying, by the source access node, a data flow of packets over the logical tunnel to the destination access node, wherein the source access node receives the data flow of packets from the source network device, and wherein spraying the data flow of packets includes directing each of the packets within the data flow to a least loaded data path.
-
Publication number: US12231353B2
Publication date: 2025-02-18
Application number: US16774941
Filing date: 2020-01-28
Applicant: Microsoft Technology Licensing, LLC
Inventor: Deepak Goel , Narendra Jayawant Gathoo , Philip A. Thomas , Srihari Raju Vegesna , Pradeep Sindhu , Wael Noureddine , Robert William Bowdidge , Ayaskant Pani , Gopesh Goyal
Abstract: A fabric control protocol is described for use within a data center in which a switch fabric provides full mesh interconnectivity such that any of the servers may communicate packet data for a given packet flow to any other of the servers using any of a number of parallel data paths within the data center switch fabric. The fabric control protocol enables spraying of individual packets for a given packet flow across some or all of the multiple parallel data paths in the data center switch fabric and, optionally, reordering of the packets for delivery to the destination. The fabric control protocol may provide end-to-end bandwidth scaling and flow fairness within a single tunnel based on endpoint-controlled requests and grants for flows. In some examples, the fabric control protocol packet structure is carried over an underlying protocol, such as the User Datagram Protocol (UDP).
-
Publication number: US12143296B2
Publication date: 2024-11-12
Application number: US18446876
Filing date: 2023-08-09
Applicant: Microsoft Technology Licensing, LLC
Inventor: Pradeep Sindhu , Deepak Goel , Jean-Marc Frailong , Srihari Raju Vegesna , Wael Noureddine , Philip A. Thomas , Satish Deo , Sunil Mekad , Ayaskant Pani
IPC: H04L45/24 , H04L12/46 , H04L47/125 , H04L47/34 , H04L45/00 , H04L45/64 , H04L49/00 , H04L49/10 , H04L49/15 , H04L49/1515 , H04L49/25 , H04Q11/00
Abstract: A network system for a data center is described in which an access node sprays a data flow of packets over a logical tunnel to another access node. In one example, a method comprises establishing, by a plurality of access nodes, a logical tunnel over a plurality of data paths across a switch fabric between a source access node and a destination access node included within the plurality of access nodes, wherein the source access node is coupled to a source network device; and spraying, by the source access node, a data flow of packets over the logical tunnel to the destination access node, wherein the source access node receives the data flow of packets from the source network device, and wherein spraying the data flow of packets includes directing each of the packets within the data flow to a least loaded data path.
-
Publication number: US11842216B2
Publication date: 2023-12-12
Application number: US16939617
Filing date: 2020-07-27
Applicant: Microsoft Technology Licensing, LLC
Inventor: Pradeep Sindhu , Jean-Marc Frailong , Wael Noureddine , Felix A. Marti , Deepak Goel , Rajan Goyal , Bertrand Serlet
IPC: G06F9/50 , G06F15/173
CPC classification number: G06F9/5027 , G06F15/17337
Abstract: A new processing architecture is described that utilizes a data processing unit (DPU). Unlike conventional compute models that are centered around a central processing unit (CPU), the DPU is designed for a data-centric computing model in which the data processing tasks are centered around the DPU. The DPU may be viewed as a highly programmable, high-performance I/O and data-processing hub designed to aggregate and process network and storage I/O to and from other devices. The DPU comprises a network interface to connect to a network, one or more host interfaces to connect to one or more application processors or storage devices, and a multi-core processor with two or more processing cores executing a run-to-completion data plane operating system and one or more processing cores executing a multi-tasking control plane operating system. The data plane operating system is configured to support software functions for performing the data processing tasks.
-
Publication number: US11829295B2
Publication date: 2023-11-28
Application number: US18175362
Filing date: 2023-02-27
Applicant: Microsoft Technology Licensing, LLC
Inventor: Wael Noureddine , Jean-Marc Frailong , Felix A. Marti , Charles Edward Gray , Paul Kim
IPC: G06F12/08 , G06F12/0862 , G06F12/0891 , G06F12/0804 , G06F12/0855
CPC classification number: G06F12/0862 , G06F12/0804 , G06F12/0855 , G06F12/0891 , G06F2212/154 , G06F2212/6028 , G06F2212/62
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
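The overlap this abstract describes, prefetching the next work unit's data from non-coherent memory while the current work unit is still being processed, can be sketched with a background worker. The memory, handlers, and names below are illustrative assumptions, not the patent's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the described pipeline: while the current work unit is
# processed, the data for the identified next work unit is fetched in
# parallel, hiding the memory latency behind useful compute.

def process_pipeline(work_units, memory):
    results = []
    with ThreadPoolExecutor(max_workers=1) as prefetcher:
        data = memory[work_units[0]]  # initial fetch for the first unit
        for i, wu in enumerate(work_units):
            nxt = work_units[i + 1] if i + 1 < len(work_units) else None
            # Kick off the prefetch of the next unit's data before
            # processing the current one.
            future = prefetcher.submit(memory.__getitem__, nxt) if nxt else None
            results.append(data * 2)  # stand-in for "process" the unit
            if future:
                data = future.result()  # prefetched data ready for next loop
    return results
```

In real hardware the prefetch would target a reserved segment of the buffer cache rather than a Python thread, but the structure, identify the next work unit, then overlap its fetch with the current unit's processing, is the same.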
-