NIC with Programmable Pipeline
    Invention application

    Publication No.: US20190140979A1

    Publication Date: 2019-05-09

    Application No.: US16012826

    Application Date: 2018-06-20

    Abstract: A network interface controller is connected to a host and a packet communications network. The network interface controller includes electrical circuitry configured as a packet processing pipeline with a plurality of stages. At least a portion of the stages of the pipeline are determined, in the network interface controller, to be acceleration-defined stages. Packets are processed in the pipeline by transmitting data to an accelerator from the acceleration-defined stages, performing respective acceleration tasks on the transmitted data in the accelerator, and returning processed data from the accelerator to receiving stages of the pipeline.
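
    As a rough illustration of the control flow described above (not the patent's actual design), the following C sketch models a pipeline in which acceleration-defined stages hand their data to an accelerator and receive the processed result back, while other stages run locally; all type and function names here are invented.

        #include <stdio.h>

        /* Hypothetical types for illustration only; the patent does not define an API. */
        struct packet {
            unsigned char data[64];
            size_t len;
        };

        typedef void (*stage_fn)(struct packet *p);

        struct stage {
            const char *name;
            int acceleration_defined;   /* 1: offload this stage's work to the accelerator */
            stage_fn local_work;        /* used only when the stage runs on the NIC itself */
        };

        /* Stand-in for sending data to the accelerator and getting the result back. */
        static void accelerator_run(const char *stage_name, struct packet *p)
        {
            printf("accelerator: task for stage '%s' on %zu bytes\n", stage_name, p->len);
            /* ...processed data would be written back into p->data here... */
        }

        static void parse_headers(struct packet *p)   { printf("parse %zu bytes\n", p->len); }
        static void update_counters(struct packet *p) { printf("count %zu bytes\n", p->len); }

        static void run_pipeline(struct stage *stages, int nstages, struct packet *p)
        {
            for (int i = 0; i < nstages; i++) {
                if (stages[i].acceleration_defined)
                    accelerator_run(stages[i].name, p);  /* send out, get processed data back */
                else
                    stages[i].local_work(p);             /* ordinary in-pipeline stage */
            }
        }

        int main(void)
        {
            struct packet p = { .len = 48 };
            struct stage pipeline[] = {
                { "parse",    0, parse_headers },
                { "crypto",   1, NULL },           /* acceleration-defined stage */
                { "counters", 0, update_counters },
            };
            run_pipeline(pipeline, 3, &p);
            return 0;
        }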

    Computational accelerator for packet payload operations

    Publication No.: US20190116127A1

    Publication Date: 2019-04-18

    Application No.: US16159767

    Application Date: 2018-10-15

    Abstract: Packet processing apparatus includes a first interface coupled to a host processor and a second interface configured to transmit and receive data packets to and from a packet communication network. A memory holds context information with respect to one or more flows of the data packets conveyed between the host processor and the network in accordance with a reliable transport protocol and with respect to encoding, in accordance with a session-layer protocol, of data records that are conveyed in the payloads of the data packets in the one or more flows. Processing circuitry, coupled between the first and second interfaces, transmits and receives the data packets and includes acceleration logic, which encodes and decodes the data records in accordance with the session-layer protocol using the context information while updating the context information in accordance with the serial numbers and the data records of the transmitted data packets.
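
    The mechanism hinges on per-flow context that keeps the hardware's view of record encoding in step with the packets it transmits and receives. The C sketch below illustrates one plausible shape for such a context and its per-packet update; the field names and layout are assumptions, not taken from the patent.

        #include <stdint.h>
        #include <stdio.h>

        /* Illustrative per-flow context; field names are invented, not taken from the patent. */
        struct flow_context {
            uint32_t expected_seq;      /* next transport serial/sequence number we expect  */
            uint64_t record_number;     /* session-layer (e.g. TLS-like) record counter     */
            uint32_t bytes_into_record; /* how far we are into the current data record      */
            uint8_t  session_key[16];   /* keying material for the record encode/decode     */
        };

        /* Called per packet: encode or decode the payload records and keep the
         * context in step with the packet's serial number. */
        static int accel_process(struct flow_context *ctx, uint32_t seq,
                                 const uint8_t *payload, uint32_t len)
        {
            if (seq != ctx->expected_seq) {
                /* Out-of-order packet: real hardware would resynchronize or punt to software. */
                return -1;
            }
            /* ...record framing and cipher work on 'payload' would happen here... */
            ctx->expected_seq += len;
            ctx->bytes_into_record += len;
            printf("flow ctx: record %llu, %u bytes into record\n",
                   (unsigned long long)ctx->record_number, (unsigned)ctx->bytes_into_record);
            return 0;
        }

        int main(void)
        {
            struct flow_context ctx = { .expected_seq = 1000 };
            uint8_t payload[256] = {0};
            accel_process(&ctx, 1000, payload, sizeof payload);
            accel_process(&ctx, 1256, payload, sizeof payload);
            return 0;
        }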

    Efficient management of network traffic in a multi-CPU server

    Publication No.: US10164905B2

    Publication Date: 2018-12-25

    Application No.: US14608265

    Application Date: 2015-01-29

    Abstract: A Network Interface Controller (NIC) includes a network interface, a peer interface and steering logic. The network interface is configured to receive incoming packets from a communication network. The peer interface is configured to communicate with a peer NIC not via the communication network. The steering logic is configured to classify the packets received over the network interface into first incoming packets that are destined to a local Central Processing Unit (CPU) served by the NIC, and second incoming packets that are destined to a remote CPU served by the peer NIC, to forward the first incoming packets to the local CPU, and to forward the second incoming packets to the peer NIC over the peer interface not via the communication network.
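
    The steering decision itself is simple once the NIC knows which CPU owns a packet's destination; the following C sketch illustrates that classification and the two forwarding paths, with the lookup reduced to a stand-in socket-ID comparison (an assumption for illustration).

        #include <stdio.h>

        /* Illustrative steering decision; the classification key and names are assumptions. */
        enum destination { TO_LOCAL_CPU, TO_PEER_NIC };

        /* Classify an incoming packet by the CPU socket that owns its destination queue.
         * 'dest_cpu_socket' stands in for whatever lookup the NIC's steering logic performs. */
        static enum destination steer(int dest_cpu_socket, int local_socket)
        {
            return (dest_cpu_socket == local_socket) ? TO_LOCAL_CPU : TO_PEER_NIC;
        }

        int main(void)
        {
            int local_socket = 0;
            int packets[] = { 0, 1, 0, 1 };   /* destination socket per incoming packet */

            for (int i = 0; i < 4; i++) {
                if (steer(packets[i], local_socket) == TO_LOCAL_CPU)
                    printf("packet %d -> local CPU\n", i);
                else
                    printf("packet %d -> peer NIC over the peer interface, not the network\n", i);
            }
            return 0;
        }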

    Accelerating and offloading lock access over a network

    Publication No.: US09699110B2

    Publication Date: 2017-07-04

    Application No.: US14753159

    Application Date: 2015-06-29

    Abstract: Lock access is managed in a data network having an initiator node and a remote target by issuing a lock command from a first process to the remote target, via an initiator network interface controller, to establish a lock on a memory location, and, prior to receiving a reply to the lock command, communicating a data access request to the memory location from the initiator network interface controller. Prior to receiving a reply to the data access request, an unlock command is issued from the initiator network interface controller. The target network interface controller determines the lock content and, when permitted by the lock, accesses the memory location. After accessing the memory location, the target network interface controller executes the unlock command. When the lock prevents data access, the lock operation is retried a configurable number of times until data access is allowed or a threshold is exceeded.
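
    The saving comes from pipelining: the initiator sends the lock, the data access, and the unlock without waiting for replies, and the target NIC serializes them, retrying the lock up to a configurable limit. The C sketch below is a toy simulation of that target-side behaviour; all names and the retry-policy details are invented.

        #include <stdio.h>

        /* Toy model of the target side: the initiator has already queued
         * lock -> data access -> unlock without waiting for replies. */
        struct lock_word { int held_by_other; };

        #define MAX_LOCK_RETRIES 5   /* the "configurable number of times" in the abstract */

        static int try_lock(struct lock_word *l)
        {
            return l->held_by_other ? 0 : 1;   /* 0: refused this round, 1: acquired */
        }

        static void target_handle(struct lock_word *l, int *memory_location)
        {
            int attempts;

            for (attempts = 1; attempts <= MAX_LOCK_RETRIES; attempts++) {
                if (try_lock(l))
                    break;
                /* Lock currently prevents access: retry (a real NIC would back off here). */
                if (attempts == 3)
                    l->held_by_other = 0;   /* simulate the other holder releasing the lock */
            }
            if (attempts > MAX_LOCK_RETRIES) {
                printf("retry threshold exceeded, access not performed\n");
                return;
            }
            printf("lock acquired after %d attempt(s)\n", attempts);
            *memory_location += 1;          /* the pipelined data access */
            printf("data access done, executing unlock\n");
            l->held_by_other = 0;           /* the pipelined unlock command */
        }

        int main(void)
        {
            struct lock_word l = { .held_by_other = 1 };
            int mem = 0;
            target_handle(&l, &mem);
            printf("memory location = %d\n", mem);
            return 0;
        }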

Network-based computational accelerator
    Invention application, pending (published)

    Publication No.: US20160330112A1

    Publication Date: 2016-11-10

    Application No.: US15145983

    Application Date: 2016-05-04

    Abstract: A data processing device includes a first packet communication interface for communication with at least one host processor via a network interface controller (NIC) and a second packet communication interface for communication with a packet data network. A memory holds a flow state table containing context information with respect to multiple packet flows conveyed between the host processor and the network via the first and second packet communication interfaces. Acceleration logic, coupled between the first and second packet communication interfaces, performs computational operations on payloads of packets in the multiple packet flows using the context information in the flow state table.
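
    A plausible reading of the flow state table is a hash table keyed by the flow identity, consulted on every packet to recover the per-flow context for the payload operation. The C sketch below illustrates that lookup-and-update pattern; the table size, key fields, and names are assumptions.

        #include <stdint.h>
        #include <stdio.h>

        /* Illustrative flow state table; sizes and field names are assumptions, not the patent's. */
        #define TABLE_SIZE 1024

        struct flow_key   { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; uint8_t proto; };
        struct flow_state { struct flow_key key; int in_use; uint64_t bytes_seen; uint32_t op_state; };

        static struct flow_state table[TABLE_SIZE];

        static unsigned hash_key(const struct flow_key *k)
        {
            /* Trivial hash without collision handling, good enough for the sketch. */
            return (k->src_ip ^ k->dst_ip ^ (k->src_port << 16) ^ k->dst_port ^ k->proto) % TABLE_SIZE;
        }

        /* Look up (or install) the per-flow context, then run the payload operation with it. */
        static void accel_handle_packet(const struct flow_key *k, const uint8_t *payload, size_t len)
        {
            struct flow_state *st = &table[hash_key(k)];

            if (!st->in_use) {                 /* first packet of the flow: install context */
                st->key = *k;
                st->in_use = 1;
            }
            st->bytes_seen += len;
            /* ...computational operation on 'payload' using st->op_state would go here... */
            printf("flow %u: %llu bytes so far\n", hash_key(k), (unsigned long long)st->bytes_seen);
        }

        int main(void)
        {
            struct flow_key k = { .src_ip = 0x0a000001, .dst_ip = 0x0a000002,
                                  .src_port = 40000, .dst_port = 443, .proto = 6 };
            uint8_t payload[128] = {0};
            accel_handle_packet(&k, payload, sizeof payload);
            accel_handle_packet(&k, payload, sizeof payload);
            return 0;
        }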

ACCELERATING AND OFFLOADING LOCK ACCESS OVER A NETWORK
    Invention application, granted

    Publication No.: US20160043965A1

    Publication Date: 2016-02-11

    Application No.: US14753159

    Application Date: 2015-06-29

    Abstract: Lock access is managed in a data network having an initiator node and a remote target by issuing a lock command from a first process to the remote target, via an initiator network interface controller, to establish a lock on a memory location, and, prior to receiving a reply to the lock command, communicating a data access request to the memory location from the initiator network interface controller. Prior to receiving a reply to the data access request, an unlock command is issued from the initiator network interface controller. The target network interface controller determines the lock content and, when permitted by the lock, accesses the memory location. After accessing the memory location, the target network interface controller executes the unlock command. When the lock prevents data access, the lock operation is retried a configurable number of times until data access is allowed or a threshold is exceeded.

TRANSPORT-LEVEL BONDING
    Invention application, granted

    Publication No.: US20150280972A1

    Publication Date: 2015-10-01

    Application No.: US14666342

    Application Date: 2015-03-24

    CPC classification number: H04L47/125 H04L41/0668 H04L45/28 H04L45/38

    Abstract: A network node includes one or more network adapters and a bonding driver. The one or more network adapters are configured to communicate respective data flows over a communication network by applying a transport layer protocol that saves communication state information in a state of a respective network adapter. The bonding driver is configured to exchange traffic including the data flows of an application program that is executed in the network node, to communicate the data flows of the traffic via one or more physical links of the one or more network adapters, and, in response to a physical-transport failure, to switch a given data flow to a different physical link or a different network path, transparently to the application program.
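
    Conceptually, the bonding driver tracks which physical link carries each data flow and remaps flows to a surviving link when a physical-transport failure is reported, without the application program seeing a change. The C sketch below models that remapping step; the data structures are invented for illustration.

        #include <stdio.h>

        /* Illustrative bonding-driver state; structure and names are assumptions. */
        #define NUM_LINKS 2
        #define NUM_FLOWS 8

        struct bond {
            int link_up[NUM_LINKS];      /* physical-link health, e.g. from port events        */
            int flow_link[NUM_FLOWS];    /* which physical link each data flow currently uses  */
        };

        /* On a physical-transport failure, move affected flows to a surviving link.
         * The application keeps using the same flow handle and never notices. */
        static void handle_link_failure(struct bond *b, int failed_link)
        {
            b->link_up[failed_link] = 0;
            for (int f = 0; f < NUM_FLOWS; f++) {
                if (b->flow_link[f] != failed_link)
                    continue;
                for (int l = 0; l < NUM_LINKS; l++) {
                    if (b->link_up[l]) {
                        printf("flow %d: switched from link %d to link %d\n", f, failed_link, l);
                        b->flow_link[f] = l;
                        break;
                    }
                }
            }
        }

        int main(void)
        {
            struct bond b = { .link_up = {1, 1}, .flow_link = {0, 1, 0, 1, 0, 1, 0, 1} };
            handle_link_failure(&b, 0);
            return 0;
        }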

Direct IO access from a CPU's instruction stream
    Invention application, granted

    Publication No.: US20150212817A1

    Publication Date: 2015-07-30

    Application No.: US14608252

    Application Date: 2015-01-29

    Abstract: A method for network access of remote memory directly from a local instruction stream using conventional loads and stores. In cases where network IO access (a network phase) cannot overlap a compute phase, direct network access from the instruction stream greatly decreases latency in CPU processing. The network is treated as yet another memory that can be read from, or written to, directly by the CPU, using regular loads and stores in the instruction stream. Example scenarios where synchronous network access can be beneficial are SHMEM (symmetric hierarchical memory access) usages, where the program directly reads and writes remote memory, and scenarios where part of system memory (for example DDR) can reside over a network and be made accessible on demand to different CPUs.
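
    The essence is that a region of the local address space stands in for remote memory, so an ordinary load or store in the instruction stream becomes a network read or write. The C sketch below shows the access pattern only, using a local array as a stand-in for the mapped window.

        #include <stdint.h>
        #include <stdio.h>

        /* Stand-in for a window of the local address space that the NIC maps onto remote
         * memory; here it is just a local array, but the access pattern is the point. */
        static volatile uint64_t remote_window[1024];

        int main(void)
        {
            /* An ordinary store in the instruction stream: with a real mapping this would
             * become a network write to the remote node's memory, with no descriptors. */
            remote_window[42] = 0xdeadbeefULL;

            /* An ordinary load: with a real mapping the CPU waits until the remote read
             * completes, which is the synchronous behaviour the abstract targets. */
            uint64_t value = remote_window[42];

            printf("read back 0x%llx\n", (unsigned long long)value);
            return 0;
        }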

    SESSION SHARING WITH REMOTE DIRECT MEMORY ACCESS CONNECTIONS

    Publication No.: US20240380814A1

    Publication Date: 2024-11-14

    Application No.: US18545057

    Application Date: 2023-12-19

    Abstract: Systems and methods enable session sharing for session-based remote direct memory access (RDMA). Multiple queue pairs (QPs) can be added to a single session and/or session group where each of the QPs has a common remote. Systems and methods may query a session ID for an existing session group and then use the session ID with an add QP request to join additional QPs to an existing session. Newly added QPs may share one or more features with existing QPs of the session group, such as encryption parameters. Additionally, newly added QPs may be configured with different performance or quality of service requirements, thereby isolating performance, and permitting true scaling for high performance computing applications.
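
    The described flow is roughly: query the session ID of an existing session group, then issue an add-QP request so the new QP joins that session, inheriting shared parameters such as encryption while keeping its own quality-of-service setting. The C sketch below mimics that flow with invented stand-in types and calls; none of them are real verbs or driver APIs.

        #include <stdio.h>

        /* All types and calls here are invented stand-ins for whatever control interface
         * implements the described flow; they are not real RDMA verbs or driver APIs. */
        struct session_group { unsigned session_id; int num_qps; int shares_encryption; };
        struct qp            { int qpn; unsigned session_id; int high_priority; };

        static unsigned query_session_id(const struct session_group *g) { return g->session_id; }

        /* "Add QP" request: the new QP joins the existing session, inherits the group's
         * encryption parameters, but may carry its own performance/QoS setting. */
        static void add_qp_to_session(struct session_group *g, struct qp *q, int high_priority)
        {
            q->session_id    = query_session_id(g);
            q->high_priority = high_priority;
            g->num_qps++;
            printf("QP %d joined session %u (QPs in group: %d, hi-prio: %d)\n",
                   q->qpn, q->session_id, g->num_qps, q->high_priority);
        }

        int main(void)
        {
            struct session_group group = { .session_id = 7, .num_qps = 1, .shares_encryption = 1 };
            struct qp extra = { .qpn = 101 };
            add_qp_to_session(&group, &extra, /*high_priority=*/1);
            return 0;
        }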
