Cryptographic Data Communication Apparatus

    Publication Number: US20230097439A1

    Publication Date: 2023-03-30

    Application Number: US18075460

    Application Date: 2022-12-06

    Abstract: In one embodiment, data communication apparatus includes packet processing circuitry to receive data from a memory responsively to a data transfer request, and cryptographically process the received data in units of data blocks using a block cipher so as to add corresponding cryptographically processed data blocks to a sequence of data packets, the sequence including respective ones of the cryptographically processed data blocks having block boundaries that are not aligned with payload boundaries of respective ones of the packets, such that respective ones of the cryptographically processed data blocks are divided into two respective segments, which are contained in successive respective ones of the packets in the sequence, and a network interface which includes one or more ports for connection to a packet data network and is configured to send the sequence of data packets to a remote device over the packet data network via the one or more ports.
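
    The block-boundary behavior described above can be shown in a minimal sketch, assuming a 16-byte block size and a payload size that is deliberately not a multiple of it. The toy `xor_block` cipher, the sizes, and the framing are illustrative stand-ins, not the apparatus's actual block cipher or packet format.

```python
BLOCK_SIZE = 16          # a block cipher such as AES operates on 16-byte blocks
PAYLOAD_SIZE = 1000      # payload length deliberately not a multiple of BLOCK_SIZE

def xor_block(block: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real block cipher: XOR each byte with the key."""
    return bytes(b ^ k for b, k in zip(block, key))

def encrypt_stream(data: bytes, key: bytes) -> bytes:
    """Cryptographically process the data in units of fixed-size blocks."""
    assert len(data) % BLOCK_SIZE == 0
    out = bytearray()
    for off in range(0, len(data), BLOCK_SIZE):
        out += xor_block(data[off:off + BLOCK_SIZE], key)
    return bytes(out)

def packetize(ciphertext: bytes) -> list[bytes]:
    """Split the cipher-block stream into packet payloads.

    Because PAYLOAD_SIZE is not a multiple of BLOCK_SIZE, some blocks are
    divided into two segments carried by successive packets in the sequence.
    """
    return [ciphertext[i:i + PAYLOAD_SIZE]
            for i in range(0, len(ciphertext), PAYLOAD_SIZE)]

if __name__ == "__main__":
    key = bytes(range(BLOCK_SIZE))
    data = bytes(4096)                          # data "read from memory"
    packets = packetize(encrypt_stream(data, key))
    # 1000 % 16 == 8, so each packet boundary falls in the middle of a block.
    print(len(packets), "packets;", PAYLOAD_SIZE % BLOCK_SIZE, "bytes of a block spill over")
```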

    COMPUTATIONAL ACCELERATOR FOR STORAGE OPERATIONS

    Publication Number: US20230046221A1

    Publication Date: 2023-02-16

    Application Number: US17973962

    Application Date: 2022-10-26

    Abstract: A method includes detecting, by an accelerator of a networking device, that a serial number of a first data packet is out of order with respect to a previous data packet within a first flow of data packets associated with a packet communication network, wherein the serial number is assigned to the first data packet according to a transport protocol. The method includes reconstructing context data associated with the first flow of data packets, wherein the context data comprises encoding information for encoding of data records containing data conveyed in payloads of data packets in the first flow of data packets according to a storage protocol. The method includes using, by the accelerator, the reconstructed context data in processing a data record associated with a second data packet within the first flow, wherein the second data packet is subsequent to the first data packet in the first flow of data packets.
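
    A rough sketch of the resynchronization step described here, assuming TCP-style byte sequence numbers and that record-boundary information can be looked up from a per-flow history; the `StorageContext` fields and the `flow_history` structure are hypothetical stand-ins for the encoding information the accelerator keeps.

```python
from dataclasses import dataclass

@dataclass
class StorageContext:
    """Hypothetical per-flow context: where the next record boundary falls."""
    next_seq: int             # next expected transport sequence number
    bytes_to_record_end: int  # bytes remaining in the current storage record

def reconstruct_context(flow_history: list[tuple[int, int]], seq: int) -> StorageContext:
    """Rebuild record-boundary state for a flow after a sequence gap.

    flow_history holds (record_start_seq, record_length) pairs, assumed to be
    available from the host or software path on demand.
    """
    for start, length in reversed(flow_history):
        if start <= seq < start + length:
            return StorageContext(next_seq=seq,
                                  bytes_to_record_end=start + length - seq)
    return StorageContext(next_seq=seq, bytes_to_record_end=0)

def on_packet(ctx: StorageContext, seq: int, payload: bytes,
              flow_history: list[tuple[int, int]]) -> StorageContext:
    """Accelerator's per-packet step: detect reordering, resync, then process."""
    if seq != ctx.next_seq:                        # serial number out of order
        ctx = reconstruct_context(flow_history, seq)
    # ... record data in `payload` would be processed here using ctx ...
    ctx.next_seq = seq + len(payload)
    ctx.bytes_to_record_end = max(0, ctx.bytes_to_record_end - len(payload))
    return ctx

if __name__ == "__main__":
    history = [(0, 4096), (4096, 4096)]               # two 4 KiB records in the flow
    ctx = StorageContext(next_seq=0, bytes_to_record_end=4096)
    ctx = on_packet(ctx, 0, bytes(1500), history)     # in order
    ctx = on_packet(ctx, 3000, bytes(1500), history)  # gap: context reconstructed
    print(ctx)
```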

    Computational accelerator for storage operations

    Publication Number: US20230034545A1

    Publication Date: 2023-02-02

    Application Number: US17963216

    Application Date: 2022-10-11

    Abstract: A system includes a host processor, which has a host memory and is coupled to store data in a non-volatile memory in accordance with a storage protocol. A network interface controller (NIC) receives data packets conveyed over a packet communication network from peer computers, the packets containing in their payloads data records that encode data in accordance with the storage protocol for storage in the non-volatile memory. The NIC processes the data records in the data packets that are received in order in each flow from a peer computer and extracts and writes the data to the host memory, and when a data packet arrives out of order, writes the data packet to the host memory without extracting the data and processes the data packets in the flow so as to recover context information for use in processing the data records in subsequent data packets in the flow.
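
    A small sketch contrasting the two receive paths this abstract distinguishes, under the assumption that storage records are fixed 64-byte chunks; the `FlowState` fields and the list standing in for host memory are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class FlowState:
    next_seq: int = 0
    in_sync: bool = True
    host_memory: list = field(default_factory=list)   # stand-in for host DMA buffers

def extract_records(payload: bytes) -> list[bytes]:
    """Hypothetical record parser: here, records are fixed 64-byte chunks."""
    return [payload[i:i + 64] for i in range(0, len(payload), 64)]

def receive(flow: FlowState, seq: int, payload: bytes) -> None:
    if flow.in_sync and seq == flow.next_seq:
        # Fast path: parse the storage records and place only their data.
        flow.host_memory.extend(extract_records(payload))
    else:
        # Out of order: place the raw packet without extraction; software
        # reprocesses it later to recover the per-flow context.
        flow.host_memory.append(payload)
        flow.in_sync = False
    flow.next_seq = seq + len(payload)

if __name__ == "__main__":
    f = FlowState()
    receive(f, 0, bytes(128))      # in order: two 64-byte records extracted
    receive(f, 256, bytes(128))    # gap: raw packet stored, flow marked out of sync
    print(len(f.host_memory), f.in_sync)
```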

    Multi-socket network interface controller with consistent transaction ordering

    Publication Number: US20220358063A1

    Publication Date: 2022-11-10

    Application Number: US17503392

    Application Date: 2021-10-18

    Abstract: Computing apparatus includes a host computer, including a host memory, a central processing unit (CPU), and at least first and second host bus interfaces. A network interface controller (NIC) includes a network port, for connection to a packet communication network, and first and second NIC bus interfaces, which communicate via first and second peripheral component buses with the first and second host bus interfaces, respectively. Packet processing logic, in response to packets received through the network port, writes data to the host memory concurrently via both the first and second NIC bus interfaces in a sequence of direct memory access (DMA) transactions, and after writing the data in any given DMA transaction, writes a completion report to the host memory with respect to the given DMA transaction while verifying that the completion report will be available to the CPU only after all the data in the given DMA transaction have been written to the host memory.
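
    A high-level sketch of the ordering guarantee, modeling the two peripheral buses as single-worker thread pools and using a wait-for-all fence before the completion write; the real NIC's flushing mechanism is not modeled, and the addresses below are arbitrary.

```python
from concurrent.futures import ThreadPoolExecutor, wait

class TwoSocketWriter:
    """Model of a NIC writing over two peripheral buses into one host memory."""

    def __init__(self):
        self.host_memory = {}
        self.bus0 = ThreadPoolExecutor(max_workers=1)   # one write channel per bus
        self.bus1 = ThreadPoolExecutor(max_workers=1)

    def _write(self, addr: int, data: bytes) -> None:
        self.host_memory[addr] = data

    def dma_transaction(self, writes: list[tuple[int, bytes]], cqe_addr: int) -> None:
        """Scatter the data over both buses, then post the completion report
        only after every data write of this transaction has landed."""
        futures = []
        for i, (addr, data) in enumerate(writes):
            bus = self.bus0 if i % 2 == 0 else self.bus1
            futures.append(bus.submit(self._write, addr, data))
        wait(futures)                          # fence: all data visible first
        self._write(cqe_addr, b"completion")   # only now may the CPU observe the report

if __name__ == "__main__":
    nic = TwoSocketWriter()
    nic.dma_transaction([(0x1000, b"a" * 64), (0x2000, b"b" * 64)], cqe_addr=0x3000)
    print(sorted(nic.host_memory))
```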

    Programmable congestion control communication scheme

    Publication Number: US11296988B2

    Publication Date: 2022-04-05

    Application Number: US16986428

    Application Date: 2020-08-06

    Abstract: A network adapter includes a receive (Rx) pipeline, a transmit (Tx) pipeline and congestion management circuitry. The Rx pipeline is configured to receive packets sent over a network by a peer network adapter, and to process the received packets. The Tx pipeline is configured to transmit packets to the peer network adapter over the network. The congestion management circuitry is configured to receive, from the Tx pipeline and from the Rx pipeline, Congestion-Control (CC) events derived from at least some of the packets exchanged with the peer network adapter, to exchange user-programmable congestion control packets with the peer network adapter, and to mitigate congestion affecting one or more of the packets responsively to the CC events and the user-programmable congestion control packets.
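
    A simplified sketch of the event-driven structure, assuming CC events from both pipelines are queued and handed to a user-programmable handler that returns an updated transmit rate; the event names and the rate-halving policy are illustrative, not the adapter's actual congestion-control algorithm.

```python
from collections import deque
from typing import Callable

# Illustrative event types that the Rx and Tx pipelines might report.
CC_EVENTS = {"ECN_MARKED_ACK", "TX_COMPLETION", "CC_PROBE_RESPONSE"}

def default_cc_handler(event: str, rate_mbps: float) -> float:
    """User-programmable policy: halve on congestion marks, probe up otherwise."""
    if event == "ECN_MARKED_ACK":
        return max(rate_mbps / 2, 100.0)
    return min(rate_mbps * 1.05, 100_000.0)

class CongestionManager:
    def __init__(self, handler: Callable[[str, float], float] = default_cc_handler):
        self.handler = handler
        self.events: deque[str] = deque()
        self.rate_mbps = 100_000.0             # start at line rate

    def post_event(self, event: str) -> None:
        """Called by the Rx or Tx pipeline when a CC event is derived from a packet."""
        if event in CC_EVENTS:
            self.events.append(event)

    def run_once(self) -> float:
        while self.events:
            self.rate_mbps = self.handler(self.events.popleft(), self.rate_mbps)
        return self.rate_mbps

if __name__ == "__main__":
    cm = CongestionManager()
    cm.post_event("ECN_MARKED_ACK")
    cm.post_event("TX_COMPLETION")
    print(cm.run_once())
```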

    Explicit notification of operative conditions along a network path

    Publication Number: US20210344782A1

    Publication Date: 2021-11-04

    Application Number: US17198292

    Application Date: 2021-03-11

    Abstract: A network element includes circuitry and multiple ports. The multiple ports are configured to connect to a communication network. The circuitry is configured to receive via one of the ports a packet that originated from a source node and is destined to a destination node, the packet including a mark that is indicative of a cumulative state derived from at least bandwidth utilization conditions of output ports that were traversed by the packet along a path, from the source node up to the network element, to select a port for forwarding the packet toward the destination node, to update the mark of the packet based at least on a value of the mark in the received packet and on a local bandwidth utilization condition of the selected port, and to transmit the packet having the updated mark to the destination node via the selected port.
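
    A toy sketch of the per-hop mark update, assuming the mark is a small quantized utilization level and that the cumulative state is simply the maximum observed along the path; the real encoding and combining function may differ.

```python
def utilization_level(tx_bytes_per_s: float, port_capacity_bytes_per_s: float,
                      levels: int = 8) -> int:
    """Quantize a port's bandwidth utilization into a small number of levels."""
    frac = min(tx_bytes_per_s / port_capacity_bytes_per_s, 1.0)
    return min(int(frac * levels), levels - 1)

def forward(packet_mark: int, selected_port_tx: float, port_capacity: float) -> int:
    """Per-hop update: combine the mark carried by the packet with the local
    condition of the port selected to forward it (here: take the maximum)."""
    local = utilization_level(selected_port_tx, port_capacity)
    return max(packet_mark, local)

if __name__ == "__main__":
    mark = 0
    # Three hops with different utilization on the selected output ports.
    for tx, cap in [(2e9, 10e9), (9e9, 10e9), (4e9, 10e9)]:
        mark = forward(mark, tx, cap)
    print("mark delivered to destination:", mark)   # reflects the busiest hop
```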

    Programmable Congestion Control Communication Scheme

    Publication Number: US20210152474A1

    Publication Date: 2021-05-20

    Application Number: US16986428

    Application Date: 2020-08-06

    Abstract: A network adapter includes a receive (Rx) pipeline, a transmit (Tx) pipeline and congestion management circuitry. The Rx pipeline is configured to receive packets sent over a network by a peer network adapter, and to process the received packets. The Tx pipeline is configured to transmit packets to the peer network adapter over the network. The congestion management circuitry is configured to receive, from the Tx pipeline and from the Rx pipeline, Congestion-Control (CC) events derived from at least some of the packets exchanged with the peer network adapter, to exchange user-programmable congestion control packets with the peer network adapter, and to mitigate congestion affecting one or more of the packets responsively to the CC events and the user-programmable congestion control packets.

    Hardware-based congestion control for TCP traffic

    Publication Number: US10237376B2

    Publication Date: 2019-03-19

    Application Number: US15278143

    Application Date: 2016-09-28

    Abstract: A method for congestion control includes receiving at a destination computer a packet transmitted on a given flow, in accordance with a predefined transport protocol, through a network by a transmitting network interface controller (NIC) of a source computer, and marked by an element in the network with a forward congestion notification. Upon receiving the marked packet in a receiving NIC of the destination computer, a congestion notification packet (CNP) indicating a flow to be throttled is immediately queued for transmission from the receiving NIC through the network to the source computer. Upon receiving the CNP in the transmitting NIC, transmission of further packets on at least the flow indicated by the CNP from the transmitting NIC to the network is immediately throttled, and an indication of the given flow is passed from the transmitting NIC to a protocol processing software stack running on the source computer.
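
    A schematic sketch of the hardware fast path described above, with hypothetical `send_cnp` and `notify_stack` hooks; the rate cut applied on a CNP is arbitrary and does not model the NIC's actual rate limiter.

```python
from dataclasses import dataclass

@dataclass
class FlowTxState:
    rate_mbps: float
    throttled: bool = False

def rx_nic_on_packet(flow_id: str, ecn_marked: bool, send_cnp) -> None:
    """Receiving NIC: on a forward-congestion-marked packet, queue a CNP for the
    source immediately, without waiting for the host protocol stack."""
    if ecn_marked:
        send_cnp(flow_id)

def tx_nic_on_cnp(flows: dict[str, FlowTxState], flow_id: str, notify_stack) -> None:
    """Transmitting NIC: throttle the indicated flow at once, then tell the
    protocol-processing software stack which flow was affected."""
    flow = flows[flow_id]
    flow.rate_mbps *= 0.5          # illustrative cut; real hardware policy differs
    flow.throttled = True
    notify_stack(flow_id)

if __name__ == "__main__":
    flows = {"flow-1": FlowTxState(rate_mbps=40_000.0)}
    cnp_queue: list[str] = []
    rx_nic_on_packet("flow-1", ecn_marked=True, send_cnp=cnp_queue.append)
    for fid in cnp_queue:
        tx_nic_on_cnp(flows, fid, notify_stack=lambda f: print("stack notified:", f))
    print(flows["flow-1"])
```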

    Direct access to local memory in a PCI-E device

    Publication Number: US10120832B2

    Publication Date: 2018-11-06

    Application Number: US14721009

    Application Date: 2015-05-26

    Abstract: A method includes communicating between at least first and second devices over a bus in accordance with a bus address space, including providing direct access over the bus to a local address space of the first device by mapping at least some of the addresses of the local address space to the bus address space. In response to indicating, by the first device or the second device, that the second device requires access to a local address in the local address space that is not currently mapped to the bus address space, the local address is mapped to the bus address space, and the local address is accessed directly, by the second device, using the mapping.
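
    A minimal sketch of mapping a device's local address space into the bus address space on demand, assuming page-granular mappings tracked in a dictionary; `BAR_BASE`, the page size, and the memory size are made up for illustration.

```python
PAGE = 4096
BAR_BASE = 0x9000_0000           # illustrative bus-address window for the device

class LocalMemoryWindow:
    """Maps pages of a device's local address space into the bus address space
    only when another device actually needs to access them."""

    def __init__(self):
        self.local_mem = bytearray(1 << 20)          # 1 MiB of device-local memory
        self.map: dict[int, int] = {}                # local page -> bus address
        self.next_slot = 0

    def _ensure_mapped(self, local_addr: int) -> int:
        page = local_addr // PAGE
        if page not in self.map:                     # not currently mapped: map it now
            self.map[page] = BAR_BASE + self.next_slot * PAGE
            self.next_slot += 1
        return self.map[page] + local_addr % PAGE

    def bus_read(self, local_addr: int, length: int) -> bytes:
        """Direct access by the peer device via the bus mapping."""
        self._ensure_mapped(local_addr)
        return bytes(self.local_mem[local_addr:local_addr + length])

if __name__ == "__main__":
    win = LocalMemoryWindow()
    win.local_mem[0x1234:0x1238] = b"\xde\xad\xbe\xef"
    print(win.bus_read(0x1234, 4), hex(win._ensure_mapped(0x1234)))
```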

    Receive queue with stride-based data scattering

    Publication Number: US20180267919A1

    Publication Date: 2018-09-20

    Application Number: US15460251

    Application Date: 2017-03-16

    Inventor: Idan Burstein

    CPC classification number: G06F13/4027 G06F9/546

    Abstract: A method for communication includes posting in a queue a sequence of work items pointing to buffers consisting of multiple strides of a common, fixed size in a memory. A NIC receives data packets from a network containing data to be pushed to the memory. The NIC reads from the queue a first work item pointing to a first buffer and writes data from a first packet to a first number of the strides in the first buffer without consuming all of the strides in the first buffer. The NIC then writes at least a part of the data from a second packet to the remaining strides in the first buffer. When all the strides in the first buffer have been consumed, the NIC reads from the queue a second work item pointing to a second buffer, and writes further data to the strides in the second buffer.
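
    A compact sketch of stride-based scattering, assuming each work item owns STRIDES_PER_WQE strides of STRIDE_SIZE bytes and that a packet consumes a whole number of strides; unlike the method above, this simplification does not split a single packet across two buffers.

```python
import math

STRIDE_SIZE = 256          # bytes per stride (common, fixed size)
STRIDES_PER_WQE = 16       # strides in each posted buffer

class StridedReceiveQueue:
    def __init__(self, num_wqes: int):
        # Each work item points to one buffer made up of equal-size strides.
        self.buffers = [bytearray(STRIDE_SIZE * STRIDES_PER_WQE) for _ in range(num_wqes)]
        self.current = 0           # index of the work item currently being filled
        self.next_stride = 0       # first free stride within that buffer

    def scatter(self, packet: bytes) -> tuple[int, int]:
        """Write a packet into the next free strides, consuming a new work item
        only when the current buffer's strides are exhausted."""
        needed = max(1, math.ceil(len(packet) / STRIDE_SIZE))
        if self.next_stride + needed > STRIDES_PER_WQE:
            self.current += 1                  # read the next work item from the queue
            self.next_stride = 0
        start = self.next_stride * STRIDE_SIZE
        self.buffers[self.current][start:start + len(packet)] = packet
        placed_at = (self.current, self.next_stride)
        self.next_stride += needed
        return placed_at

if __name__ == "__main__":
    rq = StridedReceiveQueue(num_wqes=2)
    print(rq.scatter(b"x" * 700))   # 3 strides of the first buffer
    print(rq.scatter(b"y" * 3000))  # 12 more strides, still in the first buffer
    print(rq.scatter(b"z" * 900))   # needs 4 strides -> moves to the second buffer
```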
