-
Publication No.: US20220014459A1
Publication Date: 2022-01-13
Application No.: US17486579
Application Date: 2021-09-27
Applicant: Intel Corporation
Inventor: Mrittika Ganguli , Anjali Jain , Reshma Lal , Edwin Verplanke , Priya Autee , Chih-Jen Chang , Abhirupa Layek , Nupur Jain
IPC: H04L12/751 , H04L12/715 , G06F13/28 , G06F13/16
Abstract: Examples described herein relate to network layer 7 (L7) offload to an infrastructure processing unit (IPU) for a service mesh. An apparatus described herein includes an IPU comprising an IPU memory to store a routing table for a service mesh, the routing table to map shared memory address spaces of the IPU and a host device executing one or more microservices, wherein the service mesh provides an infrastructure layer for the one or more microservices executing on the host device; and one or more IPU cores communicably coupled to the IPU memory, the one or more IPU cores to: host a network L7 proxy endpoint for the service mesh, and communicate messages between the network L7 proxy endpoint and an L7 interface device of the one or more microservices by copying data between the shared memory address spaces of the IPU and the host device based on the routing table.
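Read as an architecture, the abstract describes a routing table in IPU memory that pairs host and IPU shared memory windows so the L7 proxy can exchange microservice messages by copying between them. A minimal Python sketch of that indirection follows; ShmRegion, RoutingEntry, ServiceMeshRouter, and the service IDs are hypothetical names, and Python bytearrays stand in for what would really be DMA-mapped shared memory.

```python
# Minimal illustration of the routing-table-driven copy described above.
# All names are invented; a real IPU would move data by DMA, not bytearray slices.
from dataclasses import dataclass

@dataclass
class ShmRegion:
    buf: bytearray          # stand-in for a shared memory address space
    base: int               # base offset of the mapped window

@dataclass
class RoutingEntry:
    host_region: ShmRegion  # address space on the host running the microservice
    ipu_region: ShmRegion   # address space on the IPU hosting the L7 proxy

class ServiceMeshRouter:
    def __init__(self):
        self.table: dict[str, RoutingEntry] = {}   # routing table kept in IPU memory

    def map_service(self, service_id: str, host_region: ShmRegion, ipu_region: ShmRegion):
        self.table[service_id] = RoutingEntry(host_region, ipu_region)

    def host_to_proxy(self, service_id: str, offset: int, length: int) -> bytes:
        """Copy an L7 message from the microservice's shared memory into the
        IPU-side buffer used by the L7 proxy endpoint, per the routing table."""
        entry = self.table[service_id]
        data = entry.host_region.buf[offset:offset + length]
        entry.ipu_region.buf[offset:offset + length] = data
        return bytes(data)

# Usage: one microservice, one mapped pair of 4 KiB windows.
router = ServiceMeshRouter()
router.map_service("svc-a", ShmRegion(bytearray(4096), 0), ShmRegion(bytearray(4096), 0))
router.table["svc-a"].host_region.buf[0:5] = b"hello"
print(router.host_to_proxy("svc-a", 0, 5))   # b'hello'
```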
-
Publication No.: US20190044859A1
Publication Date: 2019-02-07
Application No.: US15859387
Application Date: 2017-12-30
Applicant: Intel Corporation
Inventor: Naru Sundar , Chih-Jen Chang , Robert Southworth , Hsi-Cheng Chu
IPC: H04L12/743 , G06F17/30
Abstract: Technologies for managing exact match hash table growth include a network computing device which includes a compute engine and a network interface controller (NIC). The NIC is configured to allocate a plurality of physical bucket addresses in non-contiguous chunks of memory of the compute engine, configure a bucket threshold value as a function of a hash size of the hash table, generate a plurality of virtual bucket addresses as a function of the bucket threshold value, and map each generated virtual bucket address to an allocated physical bucket address. Other embodiments are described herein.
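The growth mechanism described here hinges on addressing buckets through a virtual-to-physical map, so new physical buckets can be allocated in non-contiguous chunks. The Python sketch below illustrates only that indirection; the bucket-threshold formula tied to the hash size is not modeled, and the class and method names are invented for illustration.

```python
import hashlib

class GrowableExactMatchTable:
    """Exact-match table whose buckets live in non-contiguous 'physical' chunks,
    addressed through a virtual-to-physical bucket map so the table can grow."""

    def __init__(self, chunk_size: int = 8):
        self.chunk_size = chunk_size
        self.chunks: list[list[list]] = []              # non-contiguous chunks of buckets
        self.virt_to_phys: dict[int, tuple[int, int]] = {}
        self.grow()

    def grow(self):
        """Allocate another chunk of physical buckets, extend the virtual map,
        and rehash existing entries into the larger virtual address range."""
        entries = [(k, v) for c in self.chunks for b in c for k, v in b]
        chunk_idx = len(self.chunks)
        self.chunks.append([[] for _ in range(self.chunk_size)])
        for i in range(self.chunk_size):
            self.virt_to_phys[chunk_idx * self.chunk_size + i] = (chunk_idx, i)
        for c in self.chunks:
            for b in c:
                b.clear()
        for k, v in entries:
            self._bucket(k).append((k, v))

    def _bucket(self, key: bytes) -> list:
        h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
        chunk_idx, slot = self.virt_to_phys[h % len(self.virt_to_phys)]
        return self.chunks[chunk_idx][slot]

    def insert(self, key: bytes, value):
        self._bucket(key).append((key, value))

    def lookup(self, key: bytes):
        return next((v for k, v in self._bucket(key) if k == key), None)

table = GrowableExactMatchTable()
table.insert(b"flow-1", "forward")
table.grow()                       # capacity grows without a contiguous reallocation
print(table.lookup(b"flow-1"))     # forward
```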
-
Publication No.: US20190044867A1
Publication Date: 2019-02-07
Application No.: US15942023
Application Date: 2018-03-30
Applicant: Intel Corporation
Inventor: Chih-Jen Chang , Robert Southworth , Naru Dames Sundar , Yue Yang , Charles Michael Atkin , John Leshchuk
IPC: H04L12/819 , H04L12/26 , H04L12/813
Abstract: Technologies for controlling jitter at network packet egress at a source computing device include determining a switch time delta as a difference between a present switch time and a previously captured switch time upon receipt of a network packet scheduled for transmission to a target computing device, and determining a host scheduler time delta as a difference between a host scheduler timestamp associated with the received network packet and a previously captured host scheduler timestamp. The source computing device is additionally configured to determine an amount of previously captured tokens present in a token bucket, determine whether there are a sufficient number of tokens available in the token bucket to transmit the received packet as a function of the switch time delta, the host scheduler time delta, and the amount of previously captured tokens present in the token bucket, and schedule the received network packet for transmission upon a determination that sufficient tokens are present in the token bucket.
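The abstract describes a token bucket whose credit depends on two time deltas, one from the switch clock and one from host scheduler timestamps. A hedged sketch follows; the accrual rule (crediting from the smaller of the two deltas), the byte-per-token accounting, and the field names are assumptions for illustration, not the patented formula.

```python
# Hedged sketch of a dual-delta token bucket for egress jitter control.
from dataclasses import dataclass

@dataclass
class Packet:
    length: int                 # bytes; one token per byte in this sketch
    host_sched_ts: float        # timestamp assigned by the host scheduler

class EgressJitterLimiter:
    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.burst = burst_bytes
        self.tokens = burst_bytes           # previously captured tokens
        self.last_switch_ts = 0.0           # previously captured switch time
        self.last_host_ts = 0.0             # previously captured host scheduler timestamp

    def on_packet(self, pkt: Packet, switch_ts: float) -> bool:
        switch_delta = switch_ts - self.last_switch_ts
        host_delta = pkt.host_sched_ts - self.last_host_ts
        # Credit tokens from the smaller of the two deltas so a stalled pipeline
        # cannot accumulate a burst that would violate the shaped rate.
        credit = min(switch_delta, host_delta) * self.rate
        self.tokens = min(self.burst, self.tokens + max(0.0, credit))
        self.last_switch_ts, self.last_host_ts = switch_ts, pkt.host_sched_ts
        if self.tokens >= pkt.length:
            self.tokens -= pkt.length
            return True                     # schedule for transmission now
        return False                        # hold until enough tokens accrue

limiter = EgressJitterLimiter(rate_bytes_per_sec=1_000_000, burst_bytes=1500)
print(limiter.on_packet(Packet(length=1000, host_sched_ts=0.001), switch_ts=0.001))  # True
```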
-
Publication No.: US10812402B2
Publication Date: 2020-10-20
Application No.: US16236127
Application Date: 2018-12-28
Applicant: INTEL CORPORATION
Inventor: Robert Southworth , Ben-Zion Friedman , Robert Munoz , Sarig Livne , Chih-Jen Chang , Yue Yang , Patrick Fleming
IPC: H04L12/841 , H04L12/26 , H04L12/927 , H04L12/819 , H04L12/815 , H04L12/813 , H04J3/06
Abstract: Apparatuses and methods for managing jitter resulting from processing through a network interface pipeline are disclosed. In embodiments, a network traffic scheduler annotates packets to be transmitted over a bandwidth-limited network connection with time relationship information to ensure downstream bandwidth limitations are not violated. Following processing through a network interface pipeline, a jitter shaper inspects the annotated time relationship information and pipeline-imposed delays and, by imposing a variable delay, reestablishes bandwidth-compliant time relationships based upon the annotated time relationship information and configured tolerances.
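The core idea is that the traffic scheduler annotates each packet with its intended time relationship, and the jitter shaper adds a variable delay after the pipeline to restore that relationship within a configured tolerance. A small sketch of that delay computation follows, with invented field names and a simplified tolerance term.

```python
# Sketch of the jitter-shaper delay computation; field names are illustrative.
from dataclasses import dataclass

@dataclass
class AnnotatedPacket:
    payload: bytes
    intended_gap_ns: int     # time relationship stamped by the traffic scheduler

def shaper_delay_ns(pkt: AnnotatedPacket, prev_egress_ts_ns: int,
                    now_ns: int, tolerance_ns: int = 0) -> int:
    """Return the additional delay the jitter shaper should impose so the gap
    to the previously egressed packet is at least the annotated intended gap,
    within the configured tolerance."""
    elapsed_since_prev = now_ns - prev_egress_ts_ns     # includes pipeline-imposed delay
    shortfall = pkt.intended_gap_ns - elapsed_since_prev - tolerance_ns
    return max(0, shortfall)

pkt = AnnotatedPacket(b"...", intended_gap_ns=12_000)
print(shaper_delay_ns(pkt, prev_egress_ts_ns=100_000, now_ns=105_000))  # 7000 ns
```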
-
Publication No.: US20190042456A1
Publication Date: 2019-02-07
Application No.: US16021319
Application Date: 2018-06-28
Applicant: Intel Corporation
Inventor: Yakov Evgeni Ginzburg , Naru Dames Sundar , Chih-Jen Chang , Amir Keren , Ravi Tangirala
IPC: G06F12/0893 , G06F12/084 , G06F12/1018 , G06F9/455
Abstract: There is disclosed in one example a computing system, including: a processor including one or more computing cores; a cache having n discrete cache banks of the same cache level; and a cache controller including n discrete cache buses to communicatively couple the cache controller to the cache, wherein the cache buses are of width b, and a cache access controller configured to: receive an access request for an object of size s, wherein s>b; divide the object into k chunks of size b or smaller; and transfer the object to or from the cache in one or more iterations, the iterations including transferring n chunks of size b or smaller in parallel via the cache buses.
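The transfer scheme is mostly arithmetic: an object of size s is split into k = ceil(s/b) chunks no larger than the bus width b, and the chunks move n at a time, one per cache bank. The sketch below plans those iterations; the geometry numbers are arbitrary examples, not a specific cache design.

```python
# Plan the parallel transfer of an object across n banked cache buses of width b.
import math

def plan_transfer(s: int, b: int, n: int) -> list[list[tuple[int, int]]]:
    """Return a list of iterations; each iteration is up to n (offset, length)
    chunks that can be transferred in parallel over the n cache buses."""
    k = math.ceil(s / b)                               # number of chunks
    chunks = [(i * b, min(b, s - i * b)) for i in range(k)]
    return [chunks[i:i + n] for i in range(0, k, n)]   # n chunks per iteration

# A 100-byte object over 4 banks with 16-byte buses: 7 chunks, 2 iterations.
for it, group in enumerate(plan_transfer(s=100, b=16, n=4)):
    print(f"iteration {it}: {group}")
```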
-
Publication No.: US11687264B2
Publication Date: 2023-06-27
Application No.: US15721053
Application Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Chih-Jen Chang , Brad Burres , Jose Niell , Dan Biederman , Robert Cone , Pat Wang , Kenneth Keels , Patrick Fleming
IPC: H04L67/63 , G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L67/10 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , G06F9/455 , H05K7/14 , H04L61/5007 , H04L67/75 , H03M7/30 , H03M7/40 , H04L43/08 , H04L47/20 , H04L47/2441 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L41/044 , H04L49/104 , H04L43/04 , H04L43/06 , H04L43/0894 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , H04L67/1014 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H04L47/78 , G06F16/28 , H04Q11/00 , G06F11/14 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L9/40 , G06F15/80
CPC classification number: G06F3/0641 , G06F3/0604 , G06F3/065 , G06F3/067 , G06F3/0608 , G06F3/0611 , G06F3/0613 , G06F3/0617 , G06F3/0647 , G06F3/0653 , G06F7/06 , G06F8/65 , G06F8/654 , G06F8/656 , G06F8/658 , G06F9/3851 , G06F9/3891 , G06F9/4401 , G06F9/45533 , G06F9/4843 , G06F9/4881 , G06F9/5005 , G06F9/505 , G06F9/5038 , G06F9/5044 , G06F9/5083 , G06F9/544 , G06F11/0709 , G06F11/079 , G06F11/0751 , G06F11/3006 , G06F11/3034 , G06F11/3055 , G06F11/3079 , G06F11/3409 , G06F12/0284 , G06F12/0692 , G06F13/1652 , G06F16/1744 , G06F21/57 , G06F21/6218 , G06F21/73 , G06F21/76 , G06T1/20 , G06T1/60 , G06T9/005 , H01R13/453 , H01R13/4536 , H01R13/4538 , H01R13/631 , H03K19/1731 , H03M7/3084 , H03M7/40 , H03M7/42 , H03M7/60 , H03M7/6011 , H03M7/6017 , H03M7/6029 , H04L9/0822 , H04L12/2881 , H04L12/4633 , H04L41/044 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L43/04 , H04L43/06 , H04L43/08 , H04L43/0894 , H04L47/20 , H04L47/2441 , H04L49/104 , H04L61/5007 , H04L67/10 , H04L67/1014 , H04L67/63 , H04L67/75 , H05K7/1452 , H05K7/1487 , H05K7/1491 , G06F11/1453 , G06F12/023 , G06F15/80 , G06F16/285 , G06F2212/401 , G06F2212/402 , G06F2221/2107 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L47/78 , H04L63/1425 , H04Q11/0005 , H05K7/1447 , H05K7/1492
Abstract: Technologies for an accelerator interface over Ethernet are disclosed. In the illustrative embodiment, a network interface controller of a compute device may receive a data packet. If the network interface controller determines that the data packet should be pre-processed (e.g., decrypted) with a remote accelerator device, the network interface controller may encapsulate the data packet in an encapsulating network packet and send the encapsulating network packet to a remote accelerator device on a remote compute device. The remote accelerator device may pre-process the data packet (e.g., decrypt the data packet) and send it back to the network interface controller. The network interface controller may then send the pre-processed packet to a processor of the compute device.
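The flow in this abstract is an encapsulate, round-trip, unwrap sequence. The sketch below models it end to end in Python with an invented two-field outer header and an XOR stand-in for decryption; none of the opcodes or layouts are from the patent.

```python
# Conceptual sketch of sending a packet to a remote accelerator for pre-processing.
import struct

OP_DECRYPT = 1
HDR = struct.Struct("!HH")      # invented outer header: (opcode, inner length)

def encapsulate(inner: bytes, opcode: int) -> bytes:
    return HDR.pack(opcode, len(inner)) + inner

def remote_accelerator(frame: bytes, key: int = 0x5A) -> bytes:
    opcode, length = HDR.unpack_from(frame)
    inner = frame[HDR.size:HDR.size + length]
    if opcode == OP_DECRYPT:
        inner = bytes(b ^ key for b in inner)          # stand-in for decryption
    return encapsulate(inner, opcode)

def nic_receive(packet: bytes, needs_preprocessing: bool) -> bytes:
    """NIC-side decision: forward directly, or round-trip through the remote
    accelerator before handing the packet to the local processor."""
    if not needs_preprocessing:
        return packet
    reply = remote_accelerator(encapsulate(packet, OP_DECRYPT))
    _, length = HDR.unpack_from(reply)
    return reply[HDR.size:HDR.size + length]

ciphertext = bytes(b ^ 0x5A for b in b"payload")
print(nic_receive(ciphertext, needs_preprocessing=True))   # b'payload'
```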
-
Publication No.: US20180152366A1
Publication Date: 2018-05-31
Application No.: US15721817
Application Date: 2017-09-30
Applicant: Intel Corporation
Inventor: Linden Cornett , Chih-Jen Chang , Manasi Deval , Parthasarathy Sarangam , Naru D. Sundar , Padma Akkiraju , Alexander Nguyen
IPC: H04L12/26
Abstract: Technologies for managing network statistic counters include a network interface controller (NIC) of a computing device configured to identify a statistic counter of, and a software consumer associated with, a received network packet, and to identify an active counter page as a function of the identified software consumer. The NIC is further configured to read a value of the statistic counter stored at a counter memory address of a corresponding counter identifier entry of the identified active counter page, increment the read value of the statistic counter, and write the incremented value of the statistic counter back to the counter memory address. Additionally, in response to detecting a notification triggering event, the NIC is configured to generate a notification message that includes a present value of the statistic counter and a present value of each of the other statistic counters of the active counter page, and to transmit the generated notification message to the software consumer. Other embodiments are described herein.
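The counter path described here amounts to: pick the consumer's active counter page, read-modify-write the counter at its address, and on a triggering event ship the whole page to the consumer. The following sketch models that behavior in plain Python; the page layout, counter IDs, and the trigger are assumptions, and real counters would be maintained in NIC memory rather than Python dicts.

```python
# Simplified model of per-consumer statistic counter pages.
from collections import defaultdict

class NicStatCounters:
    def __init__(self):
        # one active counter page per software consumer: counter_id -> value
        self.pages: dict[str, dict[int, int]] = defaultdict(lambda: defaultdict(int))

    def on_packet(self, consumer: str, counter_id: int):
        page = self.pages[consumer]          # active counter page for this consumer
        page[counter_id] += 1                # read, increment, write back

    def notify(self, consumer: str) -> dict[int, int]:
        """On a notification-triggering event, snapshot every counter in the
        consumer's active page into a single notification message."""
        return dict(self.pages[consumer])

nic = NicStatCounters()
nic.on_packet("vswitch", counter_id=7)
nic.on_packet("vswitch", counter_id=7)
print(nic.notify("vswitch"))   # {7: 2}
```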
-
Publication No.: US12292842B2
Publication Date: 2025-05-06
Application No.: US17486579
Application Date: 2021-09-27
Applicant: Intel Corporation
Inventor: Mrittika Ganguli , Anjali Jain , Reshma Lal , Edwin Verplanke , Priya Autee , Chih-Jen Chang , Abhirupa Layek , Nupur Jain
IPC: G06F13/38 , G06F13/16 , G06F13/28 , H04L45/02 , H04L45/64 , H04L67/289 , H04L69/321
Abstract: Examples described herein relate to network layer 7 (L7) offload to an infrastructure processing unit (IPU) for a service mesh. An apparatus described herein includes an IPU comprising an IPU memory to store a routing table for a service mesh, the routing table to map shared memory address spaces of the IPU and a host device executing one or more microservices, wherein the service mesh provides an infrastructure layer for the one or more microservices executing on the host device; and one or more IPU cores communicably coupled to the IPU memory, the one or more IPU cores to: host a network L7 proxy endpoint for the service mesh, and communicate messages between the network L7 proxy endpoint and an L7 interface device of the one or more microservices by copying data between the shared memory address spaces of the IPU and the host device based on the routing table.
-
Publication No.: US12160369B2
Publication Date: 2024-12-03
Application No.: US16276979
Application Date: 2019-02-15
Applicant: Intel Corporation
Inventor: Chih-Jen Chang , Daniel Christian Biederman , Matthew James Webb , Wing Cheung , Jose Niell , Robert Hathaway
IPC: H04L47/80 , H04L41/042 , H04L45/00 , H04L47/2425 , H04L47/2483 , H04L47/62
Abstract: A compute device can access local or remote accelerator devices for use in processing a received packet. The received packet can be processed by any combination of local accelerator devices and remote accelerator devices. In some cases, the received packet can be encapsulated in an encapsulating packet and sent to a remote accelerator device for processing. The encapsulating packet can indicate a priority level for processing the received packet and its associated processing task. The priority level can override a priority level that would otherwise be assigned to the received packet and its associated processing task. The remote accelerator device can specify a fullness of an input queue to the compute device. Other information can be conveyed by packets transmitted between and among compute devices and remote accelerator devices, for example to assist in selecting an accelerator device to use.
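Two details stand out: the encapsulating packet's priority overrides the priority the receiver would otherwise assign, and the accelerator reports input-queue fullness back to the sender. The sketch below models just those two signals; the queue model, fullness metric, and field names are illustrative assumptions.

```python
# Sketch of priority override plus queue-fullness feedback from a remote accelerator.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class EncapsulatedTask:
    payload: bytes
    priority: int                 # carried in the encapsulating packet

@dataclass
class RemoteAccelerator:
    capacity: int = 8
    queue: deque = field(default_factory=deque)

    def submit(self, task: EncapsulatedTask) -> float:
        # The encapsulated priority overrides any priority the accelerator
        # would otherwise assign to the task locally.
        self.queue.append((task.priority, task.payload))
        return len(self.queue) / self.capacity   # fullness reported back to the compute device

acc = RemoteAccelerator()
fullness = acc.submit(EncapsulatedTask(b"pkt", priority=1))
print(f"queue fullness reported to compute device: {fullness:.2f}")   # 0.12
```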
-
Publication No.: US11108697B2
Publication Date: 2021-08-31
Application No.: US15942023
Application Date: 2018-03-30
Applicant: Intel Corporation
Inventor: Chih-Jen Chang , Robert Southworth , Naru Dames Sundar , Yue Yang , Charles Michael Atkin , John Leshchuk
IPC: H04L12/26 , H04L12/819 , H04L12/813 , H04L12/721
Abstract: Technologies for controlling jitter at network packet egress at a source computing device include determining a switch time delta as a difference between a present switch time and a previously captured switch time upon receipt of a network packet scheduled for transmission to a target computing device, and determining a host scheduler time delta as a difference between a host scheduler timestamp associated with the received network packet and a previously captured host scheduler timestamp. The source computing device is additionally configured to determine an amount of previously captured tokens present in a token bucket, determine whether there are a sufficient number of tokens available in the token bucket to transmit the received packet as a function of the switch time delta, the host scheduler time delta, and the amount of previously captured tokens present in the token bucket, and schedule the received network packet for transmission upon a determination that sufficient tokens are present in the token bucket.