-
Publication Number: US20170293765A1
Publication Date: 2017-10-12
Application Number: US15093200
Application Date: 2016-04-07
Applicant: Intel Corporation
Inventor: Chang Yong Kang , Pierre Laurent
CPC classification number: G06F21/602 , G06F9/30007 , G06F9/30101 , G06F9/3867 , G06F21/64
Abstract: A processing system implementing techniques for parallelized authentication encoding is provided. In one embodiment, the processing system includes an accumulator, a register representing a pipeline stage and a processing core coupled to the accumulator and to the register. The processing core is to split an input message into a first input stream and a second input stream. For each input stream, the processing core is further to add, to the accumulator, a data block from the input stream. Contents of the accumulator multiplied by a squared nonce value are stored in the register and a result of applying a modulo reduction operation to the contents of the register is stored in the accumulator. Thereupon, an authentication tag for the input message is generated based on the result stored in the accumulator and the contents of the register.
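As a rough illustration of the accumulate/multiply/reduce loop this abstract describes, the C sketch below runs two block streams through an accumulator, multiplies by a squared multiplier staged in a "register", and reduces modulo a toy prime. The modulus, the multiplier value, and the way the two lanes are combined into a tag are illustrative assumptions, not the patented construction.

/* Minimal two-lane accumulate/multiply/reduce sketch in the spirit of the
 * abstract; the modulus, multiplier, and lane-combining step are toy values. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define TOY_PRIME 0xFFFFFFFBULL            /* 2^32 - 5, stands in for the real modulus */

static uint64_t lane_absorb(const uint32_t *blocks, size_t n, uint64_t r_sq)
{
    uint64_t acc = 0;                      /* the "accumulator" */
    for (size_t i = 0; i < n; i++) {
        acc += blocks[i];                  /* add a data block from this input stream */
        uint64_t staged = acc * r_sq;      /* pipeline register: accumulator * r^2 */
        acc = staged % TOY_PRIME;          /* modulo reduction back into the accumulator */
    }
    return acc;
}

int main(void)
{
    /* split the input message into two interleaved streams (even/odd blocks) */
    uint32_t even[] = { 0x11111111u, 0x33333333u };
    uint32_t odd[]  = { 0x22222222u, 0x44444444u };
    uint64_t r = 12345, r_sq = r * r;      /* illustrative squared nonce-derived multiplier */

    uint64_t tag = (lane_absorb(even, 2, r_sq) + lane_absorb(odd, 2, r_sq)) % TOY_PRIME;
    printf("toy authentication tag: %llx\n", (unsigned long long)tag);
    return 0;
}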
-
Publication Number: US20170149678A1
Publication Date: 2017-05-25
Application Number: US15396488
Application Date: 2016-12-31
Applicant: Intel Corporation
Inventor: Cristian Florin Dumitrescu , Andrey Chilikin , Pierre Laurent , Kannan Babu Ramia , Sravanthi Tangeda
IPC: H04L12/869 , H04L12/803 , H04L12/819 , H04L12/813 , H04L12/815 , H04L12/851
CPC classification number: H04L47/60 , H04L12/1439 , H04L47/10 , H04L47/125 , H04L47/20 , H04L47/21 , H04L47/215 , H04L47/22 , H04L47/2408 , H04L47/2433 , H04L47/2441 , H04L47/39 , H04L47/50 , H04L47/527 , H04L47/623 , H04L47/6255 , H04L47/627 , H04L47/6275
Abstract: One embodiment provides a network device. The network device includes a processor including at least one processor core; a network interface configured to transmit and receive packets at a line rate; a memory configured to store a scheduler hierarchical data structure; and a scheduler module. The scheduler module is configured to prefetch a next active pipe structure, the next active pipe structure included in the hierarchical data structure, update credits for a current pipe and an associated subport, identify a next active traffic class within the current pipe based, at least in part, on a current pipe data structure, select a next queue associated with the identified next active traffic class, and schedule a next packet from the selected next queue for transmission by the network interface if available traffic shaping token bucket credits and available traffic class credits are greater than or equal to the credits required by the next packet.
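A minimal sketch of the credit check this abstract describes: the next packet is scheduled only when both the token-bucket and traffic-class budgets cover it. The structure layout and field names are assumptions, loosely in the style of DPDK-like hierarchical schedulers rather than taken from the claims.

/* Illustrative credit check only; field names and the four-class layout are assumptions. */
#include <stdbool.h>
#include <stdint.h>

struct pipe_state {
    uint32_t tb_credits;        /* traffic-shaping token bucket credits */
    uint32_t tc_credits[4];     /* per-traffic-class credits */
};

bool try_schedule_packet(struct pipe_state *p, uint32_t tc, uint32_t pkt_credits)
{
    /* schedule only if both budgets cover the next packet's credit cost */
    if (p->tb_credits < pkt_credits || p->tc_credits[tc] < pkt_credits)
        return false;
    p->tb_credits     -= pkt_credits;
    p->tc_credits[tc] -= pkt_credits;
    return true;                /* caller hands the packet to the network interface */
}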
-
Publication Number: US11080202B2
Publication Date: 2021-08-03
Application Number: US15721800
Application Date: 2017-09-30
Applicant: Intel Corporation
Inventor: Niall D. McDonnell , Christopher MacNamara , John J. Browne , Andrew Cunningham , Brendan Ryan , Patrick Fleming , Namakkal N. Venkatesan , Bruce Richardson , Tomasz Kantecki , Sean Harte , Pierre Laurent
IPC: G06F12/08 , G06F12/0888 , G06F12/0806 , G06F12/0817 , G06F12/0837 , G06F9/00
Abstract: A computing apparatus, including: a processor; a pointer to a counter memory location; and a lazy increment counter engine to: receive a stimulus to update the counter; and lazy increment the counter including issuing a weakly-ordered increment directive to the pointer.
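One plausible reading of the "weakly-ordered increment directive" is a relaxed atomic add, sketched below in C11 atomics; this is an interpretation for illustration, not the claimed hardware mechanism, and the counter and event names are assumptions.

/* One possible software rendering of a "lazy" counter update. */
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t stats_counter;               /* the counter memory location */

static inline void lazy_increment(_Atomic uint64_t *ctr, uint64_t delta)
{
    /* memory_order_relaxed: the update may become globally visible late,
       which is acceptable for statistics-style counters */
    atomic_fetch_add_explicit(ctr, delta, memory_order_relaxed);
}

void on_stimulus(void)                                /* e.g. a packet-drop event */
{
    lazy_increment(&stats_counter, 1);
}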
-
Publication Number: US10999209B2
Publication Date: 2021-05-04
Application Number: US15635581
Application Date: 2017-06-28
Applicant: Intel Corporation
Inventor: John J. Browne , Tomasz Kantecki , Chris Macnamara , Pierre Laurent , Sean Harte , Peter McCarthy , Jacqueline F. Jardim , Liang Ma
IPC: H04L12/873 , H04L12/883 , H04L12/879 , H04L12/927 , H04L12/863 , H04L12/869
Abstract: Technologies for network packet processing include a computing device that receives incoming network packets. The computing device adds the incoming network packets to an input lockless shared ring, and then classifies the network packets. After classification, the computing device adds the network packets to multiple lockless shared traffic class rings, with each ring associated with a traffic class and output port. The computing device may allocate bandwidth between network packets active during a scheduling quantum in the traffic class rings associated with an output port, schedule the network packets in the traffic class rings for transmission, and then transmit the network packets in response to scheduling. The computing device may perform traffic class separation in parallel with bandwidth allocation and traffic scheduling. In some embodiments, the computing device may perform bandwidth allocation and/or traffic scheduling on each traffic class ring in parallel. Other embodiments are described and claimed.
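The sketch below shows a single-producer/single-consumer lockless ring enqueue of the kind the abstract's "lockless shared ring" suggests; the size, the names, and the SPSC restriction are simplifications made to keep the example small, not the patented design.

/* Minimal single-producer/single-consumer lockless ring enqueue. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 1024                        /* must be a power of two */

struct tc_ring {
    void *slots[RING_SIZE];                   /* packets of one traffic class */
    _Atomic size_t head;                      /* producer index, monotonically increasing */
    _Atomic size_t tail;                      /* consumer index, monotonically increasing */
};

bool tc_ring_enqueue(struct tc_ring *r, void *pkt)
{
    size_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == RING_SIZE)
        return false;                         /* ring full */
    r->slots[h & (RING_SIZE - 1)] = pkt;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return true;
}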
-
Publication Number: US10606751B2
Publication Date: 2020-03-31
Application Number: US15201348
Application Date: 2016-07-01
Applicant: Intel Corporation
Inventor: Andrew Cunningham , Mark D. Gray , Alexander Leckey , Chris MacNamara , Stephen T. Palermo , Pierre Laurent , Niall D. McDonnell , Tomasz Kantecki , Patrick Fleming
IPC: G06F12/0811 , G06F12/0831
Abstract: An input/output (I/O) device arranged to receive an information element including a payload, determine control information from the information element, classify the information element based on the control information, and issue a write to one of a plurality of computer-readable media based on the classification of the information element, the write to cause the payload to be written to the one of the plurality of computer-readable media.
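A loose software analogy of the classify-then-steer write: control information (here, a priority field) selects one of two destination regions that stand in for the "plurality of computer-readable media". Every name and threshold in the sketch is an assumption made for illustration only.

/* Loose analogy only: classification of the information element steers the payload write. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint8_t fast_region[4096];    /* stand-in for e.g. a cache-resident medium */
static uint8_t bulk_region[65536];   /* stand-in for e.g. a DRAM-backed medium */

void steer_write(uint8_t prio, const void *payload, size_t len)
{
    /* classification: high-priority traffic goes to the fast medium */
    uint8_t *dst    = (prio >= 6) ? fast_region : bulk_region;
    size_t   dstlen = (prio >= 6) ? sizeof fast_region : sizeof bulk_region;
    if (len <= dstlen)
        memcpy(dst, payload, len);   /* issue the write to the chosen medium */
}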
-
Publication Number: US10158578B2
Publication Date: 2018-12-18
Application Number: US15269295
Application Date: 2016-09-19
Applicant: Intel Corporation
Inventor: Cristian Florin Dumitrescu , Andrey Chilikin , Pierre Laurent , Kannan Babu Ramia , Sravanthi Tangeda
IPC: H04L12/14 , H04L12/869 , H04L12/873 , H04L12/815 , H04L12/863 , H04L12/819 , H04L12/801 , H04L12/813 , H04L12/865 , H04L12/803 , H04L12/851
Abstract: One embodiment provides a network device. The network device includes a processor including at least one processor core; a network interface configured to transmit and receive packets at a line rate; a memory configured to store a scheduler hierarchical data structure; and a scheduler module. The scheduler module is configured to prefetch a next active pipe structure, the next active pipe structure included in the hierarchical data structure, update credits for a current pipe and an associated subport, identify a next active traffic class within the current pipe based, at least in part, on a current pipe data structure, select a next queue associated with the identified next active traffic class, and schedule a next packet from the selected next queue for transmission by the network interface if available traffic shaping token bucket credits and available traffic class credits are greater than or equal to the credits required by the next packet.
-
Publication Number: US09870339B2
Publication Date: 2018-01-16
Application Number: US14752047
Application Date: 2015-06-26
Applicant: Intel Corporation
Inventor: Chang Yong Kang , Pierre Laurent , Hari K. Tadepalli , Prasad M. Ghatigar , T. J. O'Dwyer , Serge Zhilyaev
CPC classification number: G06F15/8061 , G06F9/30036 , G06F9/3814 , G06F9/3834 , G06F9/3836 , G06F9/3838 , G06F9/3853 , G06F9/3867 , G06F9/3877 , G06F13/16 , G06F13/4059
Abstract: Methods and apparatuses relating to tightly-coupled heterogeneous computing are described. In one embodiment, a hardware processor includes a plurality of execution units in parallel, a switch to connect inputs of the plurality of execution units to outputs of a first buffer and a plurality of memory banks and connect inputs of the plurality of memory banks and a plurality of second buffers in parallel to outputs of the first buffer, the plurality of memory banks, and the plurality of execution units, and an offload engine with inputs connected to outputs of the plurality of second buffers.
-
Publication Number: US20180006970A1
Publication Date: 2018-01-04
Application Number: US15199110
Application Date: 2016-06-30
Applicant: Intel Corporation
Inventor: John J. Browne , Tomasz Kantecki , Chris MacNamara , Pierre Laurent , Sean Harte
IPC: H04L12/879 , H04L12/935 , H04L12/927 , H04L12/861 , H04L12/43
CPC classification number: H04L49/901 , H04L12/43 , H04L12/4625 , H04L47/803 , H04L49/3063 , H04L49/9042
Abstract: Technologies for scalable packet reception and transmission include a network device. The network device is to establish a ring that is defined as a circular buffer and includes a plurality of slots to store entries representative of packets. The network device is also to generate and assign receive descriptors to the slots in the ring. Each receive descriptor includes a pointer to a corresponding memory buffer to store packet data. The network device is further to determine whether the network interface controller (NIC) has received one or more packets and copy, with direct memory access (DMA) and in response to a determination that the NIC has received one or more packets, packet data of the received one or more packets from the NIC to the memory buffers associated with the receive descriptors assigned to the slots in the ring.
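A rough sketch of initializing a receive ring whose descriptors each point at a packet buffer for the NIC to DMA into; the descriptor layout and names are assumptions, and the hardware/DMA side is omitted, so only the software-side assignment of buffers to slots is shown.

/* Descriptor and ring layout are illustrative assumptions. */
#include <stdlib.h>
#include <stdint.h>

#define RX_RING_SLOTS 256
#define RX_BUF_SIZE   2048

struct rx_desc {
    void    *buf;        /* pointer to the memory buffer the NIC DMAs packet data into */
    uint16_t len;        /* filled in when a packet arrives */
};

struct rx_ring {
    struct rx_desc slots[RX_RING_SLOTS];   /* circular buffer of receive descriptors */
    uint32_t next_to_use;
};

int rx_ring_init(struct rx_ring *ring)
{
    for (int i = 0; i < RX_RING_SLOTS; i++) {
        ring->slots[i].buf = malloc(RX_BUF_SIZE);   /* per-slot packet buffer */
        if (ring->slots[i].buf == NULL)
            return -1;
        ring->slots[i].len = 0;
    }
    ring->next_to_use = 0;
    return 0;
}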
-
Publication Number: US09769290B2
Publication Date: 2017-09-19
Application Number: US14286975
Application Date: 2014-05-23
Applicant: Intel Corporation
Inventor: Cristian Florin F. Dumitrescu , Namakkal N. Venkatesan , Pierre Laurent , Bruce Richardson
IPC: H04L12/743 , H04L29/06 , H04L12/64
CPC classification number: H04L69/22 , H04L12/6418 , H04L45/7453
Abstract: Technologies for packet flow classification on a computing device include a hash table including a plurality of hash table buckets in which each hash table bucket maps a plurality of keys to corresponding traffic flows. The computing device performs packet flow classification on received data packets, where the packet flow classification includes a plurality of sequential classification stages and fetch classification operations and non-fetch classification operations are performed in each classification stage. The fetch classification operations include to prefetch a key of a first received data packet based on a set of packet fields of the first received data packet for use during a subsequent classification stage, prefetch a hash table bucket from the hash table based on a key signature of the prefetched key for use during another subsequent classification stage, and prefetch a traffic flow to be applied to the first received data packet based on the prefetched hash table bucket and the prefetched key. The computing device handles processing of received data packets such that a fetch classification operation is performed by the flow classification module on the first received data packet while a non-fetch classification operation is performed by the flow classification module on a second received data packet.
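The overlapped fetch/non-fetch pattern this abstract describes can be illustrated with a software prefetch of the next packet's hash bucket while the current packet is being classified, as in the sketch below (using the GCC/Clang __builtin_prefetch builtin); the bucket layout and the signature check are simplified stand-ins, not the patented hash table.

/* The point of the sketch is the overlap of a prefetch for packet i+1 with work on packet i. */
#include <stdint.h>
#include <stddef.h>

struct bucket {
    uint64_t sig[4];       /* key signatures stored in the bucket */
    uint32_t flow_id[4];   /* traffic flows the signatures map to */
};

uint32_t classify_burst(const uint64_t *keys, size_t n,
                        const struct bucket *table, size_t mask)
{
    uint32_t last_flow = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 1 < n)                                     /* fetch stage for the next packet */
            __builtin_prefetch(&table[keys[i + 1] & mask]);
        const struct bucket *b = &table[keys[i] & mask];   /* non-fetch stage for this packet */
        for (int j = 0; j < 4; j++)
            if (b->sig[j] == keys[i])
                last_flow = b->flow_id[j];                 /* traffic flow to apply */
    }
    return last_flow;
}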
-
Publication Number: US11671382B2
Publication Date: 2023-06-06
Application Number: US15185864
Application Date: 2016-06-17
Applicant: Intel Corporation
Inventor: John J. Browne , Seán Harte , Tomasz Kantecki , Pierre Laurent , Chris MacNamara
IPC: H04N21/443 , H04L49/9057 , H04N21/426 , H04L49/103 , H04L1/00 , H04L49/102 , H04L49/00 , H04L49/9005 , H04N21/232
CPC classification number: H04L49/9057 , H04L1/0016 , H04L49/102 , H04L49/103 , H04L49/3063 , H04N21/42692 , H04N21/4435 , H04L1/002 , H04L49/9005 , H04N21/2326
Abstract: Technologies for coordinating access to packets include a network device. The network device is to establish a ring in a memory of the network device. The ring includes a plurality of slots. The network device is also to allocate cores to each of an input stage, an output stage, and a worker stage. The worker stage is to process data in a data packet with an associated worker function. The network device is also to add, with the input stage, an entry to a slot in the ring representative of a data packet received with a network interface controller of the network device, access, with the worker stage, the entry in the ring to process at least a portion of the data packet, and provide, with the output stage, the processed data packet to the network interface controller for transmission.
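A compressed sketch of the input, worker, and output stage hand-off over ring slots; in the described design each stage runs on its own allocated core, which is omitted here, and the slot states and field names are illustrative assumptions.

/* Slot states and names are illustrative; per-core stage placement is not shown. */
#include <stdatomic.h>
#include <stdint.h>

enum slot_state { SLOT_EMPTY, SLOT_RECEIVED, SLOT_PROCESSED };

struct ring_entry {
    _Atomic int state;     /* which stage may act on this slot next */
    void *pkt;             /* packet received by the network interface controller */
};

#define SLOTS 512
static struct ring_entry ring[SLOTS];

void input_stage(uint32_t i, void *pkt)               /* adds an entry for a received packet */
{
    ring[i % SLOTS].pkt = pkt;
    atomic_store(&ring[i % SLOTS].state, SLOT_RECEIVED);
}

void worker_stage(uint32_t i, void (*worker_fn)(void *))
{
    struct ring_entry *e = &ring[i % SLOTS];
    if (atomic_load(&e->state) == SLOT_RECEIVED) {
        worker_fn(e->pkt);                            /* process at least part of the packet */
        atomic_store(&e->state, SLOT_PROCESSED);      /* output stage transmits it next */
    }
}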