Zero-copy processing
    Invention application

    Publication No.: US20230099304A1

    Publication Date: 2023-03-30

    Application No.: US17488362

    Filing Date: 2021-09-29

    Abstract: In one embodiment, a system includes a peripheral device with a memory access interface that receives from a host device the headers of packets, while the corresponding payloads remain stored in a host memory of the host device, together with descriptors indicating the respective locations in the host memory at which the payloads are stored; a data processing unit memory that stores the received headers and descriptors without the payloads; and a data processing unit that processes the received headers. Upon completion of that processing, the peripheral device fetches the payloads over the memory access interface from the locations indicated by the descriptors, and packet processing circuitry receives the headers and payloads and processes the packets.
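The header/payload split described in this abstract can be illustrated with a small toy model (not the patented implementation): only headers and descriptors enter the device's memory, and payloads are fetched from host memory by descriptor once header processing finishes. All names here are illustrative, and a Python dict stands in for DMA-accessible host memory.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    addr: int      # location of the payload in host memory (illustrative)
    length: int    # payload length in bytes

class DPU:
    """Holds only headers and descriptors; payloads stay on the host."""
    def __init__(self, host_memory):
        self.host_memory = host_memory
        self.pending = []   # (header, descriptor) pairs awaiting processing

    def receive(self, header, desc):
        # Only the header and a small descriptor cross into DPU memory.
        self.pending.append((header, desc))

    def process_and_fetch(self):
        # After header processing completes, fetch each payload from the
        # host-memory location named by its descriptor (zero copies before
        # this point in the model).
        packets = []
        for header, desc in self.pending:
            payload = self.host_memory[desc.addr][:desc.length]
            packets.append((header, payload))
        self.pending.clear()
        return packets

host_memory = {0x1000: b"payload-A", 0x2000: b"payload-B"}
dpu = DPU(host_memory)
dpu.receive(b"hdr-A", Descriptor(0x1000, 9))
dpu.receive(b"hdr-B", Descriptor(0x2000, 9))
packets = dpu.process_and_fetch()
print(packets)
```

The point of the model is the ordering: no payload bytes move until the data processing unit has finished with the headers.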

    Communication with accelerator via RDMA-based network adapter

    Publication No.: US11184439B2

    Publication Date: 2021-11-23

    Application No.: US16827912

    Filing Date: 2020-03-24

    Abstract: A network node includes a bus switching element, and a network adapter, an accelerator and a host, all coupled to communicate via the bus switching element. The network adapter is configured to communicate with remote nodes over a communication network. The host is configured to establish an RDMA link between the accelerator and a remote RDMA endpoint by creating a Queue Pair (QP) to be used by the accelerator for communication with that endpoint over the link. The accelerator is configured to exchange data, via the network adapter, between a memory of the accelerator and a memory of the RDMA endpoint.
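As a rough sketch of the arrangement the abstract describes, the host sets up the connection once, and the accelerator then moves data directly to and from the remote endpoint's memory. This is an in-memory toy model, not the verbs API (real code would use libibverbs calls such as `ibv_create_qp`); the class and function names are assumptions for illustration.

```python
from collections import deque

class QueuePair:
    """Toy QP: a send queue and a receive queue."""
    def __init__(self):
        self.send_queue = deque()
        self.recv_queue = deque()

class Endpoint:
    def __init__(self, name):
        self.name = name
        self.memory = {}        # stand-in for registered memory regions
        self.qp = QueuePair()
        self.peer = None

def connect(a, b):
    # The host performs this setup step; afterwards the endpoints
    # exchange data directly, without staging through host memory.
    a.peer, b.peer = b, a

def rdma_write(src, local_key, remote_key):
    # One-sided write: copy a buffer from src's memory into the
    # peer's memory at remote_key, recording the work request.
    src.qp.send_queue.append((local_key, remote_key))
    src.peer.memory[remote_key] = src.memory[local_key]

accelerator = Endpoint("accelerator")
remote = Endpoint("remote")
connect(accelerator, remote)            # host's one-time role
accelerator.memory["buf"] = b"result"
rdma_write(accelerator, "buf", "dst")   # accelerator's data path
print(remote.memory["dst"])
```

The design point mirrored here is the split of control and data planes: the host only brokers the link, while the accelerator owns the data transfers.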

    Computational accelerator for packet payload operations

    Publication No.: US20210203610A1

    Publication Date: 2021-07-01

    Application No.: US17204968

    Filing Date: 2021-03-18

    Abstract: Apparatus including a first interface to a host processor; a second interface to transmit and receive data packets, having headers and payloads, to and from a packet communication network; a memory holding context information regarding a flow of the data packets and serial numbers assigned, according to a session-layer protocol, to data records conveyed in the payloads; and processing circuitry, coupled between the first and second interfaces and having acceleration logic, to decode the data records according to the session-layer protocol, using and updating the context information based on the serial numbers and the data records of the received packets, and to write the decoded data records through the first interface to a host memory. Upon receiving in a given flow a data packet containing a serial number that is out of order, the acceleration logic reconstructs the context information and applies it in decoding the data records in subsequent data packets in the flow.
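The resynchronization behavior in the last sentence can be sketched as follows: the decoder tracks the next expected record serial number, and an out-of-order packet triggers a context rebuild instead of aborting the flow. This is a minimal illustrative model; the field names and the `upper()` stand-in for real record decoding are assumptions.

```python
class FlowContext:
    def __init__(self):
        self.next_serial = 0
        self.resyncs = 0   # how many times context was reconstructed

def decode(ctx, packets):
    """Decode (serial, record) pairs, resyncing on out-of-order serials."""
    records = []
    for serial, record in packets:
        if serial != ctx.next_serial:
            # Out-of-order serial number: reconstruct the context from
            # the packet itself and continue decoding in hardware,
            # rather than falling back to software.
            ctx.resyncs += 1
            ctx.next_serial = serial
        records.append(record.upper())   # stand-in for real decoding
        ctx.next_serial += 1
    return records

ctx = FlowContext()
out = decode(ctx, [(0, "a"), (1, "b"), (5, "f"), (6, "g")])
print(out, ctx.resyncs)   # serials 2-4 were lost; one resync occurs
```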

Application-assisted handling of page faults in I/O operations
    Invention application (granted)

    Publication No.: US20140089451A1

    Publication Date: 2014-03-27

    Application No.: US13628155

    Filing Date: 2012-09-27

    CPC classification number: G06F12/08 G06F12/1081

    Abstract: A method for data transfer includes receiving, in an operating system of a host computer, an instruction initiated by a user application running on the host computer. The instruction identifies a page of virtual memory that is to be used in receiving data in a message that is to be transmitted over a network to the host computer but has not yet been received. In response to the instruction, the page is loaded into the memory, and upon receiving the message, the data are written to the loaded page.

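A toy model of the idea above: the application warns the OS which virtual page an incoming message will land in, so the page is resident before the I/O arrives and the device write never faults. On Linux, the real mechanism would resemble an `madvise(MADV_WILLNEED)`-style hint; everything below is an illustrative assumption.

```python
class HostMemory:
    def __init__(self):
        self.resident = set()      # pages currently swapped into RAM

    def advise_will_receive(self, page):
        # The application's hint: load (swap in) the page now,
        # ahead of the I/O operation that will target it.
        self.resident.add(page)

    def dma_write(self, page, data):
        # The device-side write cannot tolerate a page fault.
        if page not in self.resident:
            raise RuntimeError("page fault during I/O")
        return (page, data)

mem = HostMemory()
mem.advise_will_receive(0x42)   # hint issued before the message arrives
print(mem.dma_write(0x42, b"message"))
```

Without the hint, the `dma_write` in this model raises, which is the failure the application-assisted pre-loading avoids.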

Look-Ahead Handling of Page Faults in I/O Operations
    Invention application (granted)

    Publication No.: US20140089450A1

    Publication Date: 2014-03-27

    Application No.: US13628075

    Filing Date: 2012-09-27

    CPC classification number: G06F3/067 G06F3/061 G06F3/0656 G06F3/0659

    Abstract: A method for data transfer includes receiving in an input/output (I/O) operation a first segment of data to be written to a specified virtual address in a host memory. Upon receiving the first segment of the data, it is detected that a first page that contains the specified virtual address is swapped out of the host memory. At least one second page of the host memory is identified, to which a second segment of the data is expected to be written. Responsively to detecting that the first page is swapped out and to identifying the at least one second page, at least the first and second pages are swapped into the host memory. After swapping at least the first and second pages into the host memory, the data are written to the first and second pages.

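The look-ahead described above can be sketched simply: when the first segment's page turns out to be swapped out, the handler also predicts the next page the transfer will touch and swaps both in with a single operation. A minimal model, with illustrative names and a fixed next-page prediction:

```python
PAGE_SIZE = 4096

class Memory:
    def __init__(self):
        self.resident = set()
        self.swap_ins = 0     # count of swap-in operations, not pages

    def swap_in(self, pages):
        # One operation can bring in several pages at once.
        self.resident.update(pages)
        self.swap_ins += 1

def write_segments(mem, vaddr, segments):
    first_page = vaddr // PAGE_SIZE
    if first_page not in mem.resident:
        # Look ahead: the second segment is expected to land on the
        # following page, so bring both pages in together rather than
        # taking a second fault later.
        mem.swap_in({first_page, first_page + 1})
    return [(vaddr + i * PAGE_SIZE, seg) for i, seg in enumerate(segments)]

mem = Memory()
out = write_segments(mem, 0x10000, [b"seg0", b"seg1"])
print(mem.swap_ins, sorted(mem.resident))
```

The payoff in the model is a single swap-in operation where a naive handler would take two faults.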

    Communication with accelerator via RDMA-based network adapter

    Publication No.: US20200314181A1

    Publication Date: 2020-10-01

    Application No.: US16827912

    Filing Date: 2020-03-24

    Abstract: A network node includes a bus switching element, and a network adapter, an accelerator and a host, all coupled to communicate via the bus switching element. The network adapter is configured to communicate with remote nodes over a communication network. The host is configured to establish an RDMA link between the accelerator and a remote RDMA endpoint by creating a Queue Pair (QP) to be used by the accelerator for communication with that endpoint over the link. The accelerator is configured to exchange data, via the network adapter, between a memory of the accelerator and a memory of the RDMA endpoint.

LOW-LATENCY PROCESSING IN A NETWORK NODE
    Invention application (under examination, published)

    Publication No.: US20150288624A1

    Publication Date: 2015-10-08

    Application No.: US14247255

    Filing Date: 2014-04-08

    CPC classification number: H04L49/90 G06F13/382 H04L49/9047

    Abstract: A method in a network node that includes a host and an accelerator includes holding a work queue that stores work elements, a notifications queue that stores notifications of the work elements, and control indices for adding the work elements and the notifications to, and removing them from, the work queue and the notifications queue, respectively. The notifications queue resides on the accelerator, and at least some of the control indices reside on the host. Messages are exchanged between a network and the network node using the work queue, the notifications queue and the control indices.

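The queue split above can be modeled in a few lines: work elements live in a work queue, short notifications live in a queue on the accelerator, and the producer/consumer control indices live on the host. This is an illustrative ring-buffer sketch, not the patented design; real hardware would place these structures in device and host memory respectively.

```python
QUEUE_DEPTH = 8

class NetworkNode:
    def __init__(self):
        self.work_queue = [None] * QUEUE_DEPTH     # work elements (host side)
        self.notifications = [None] * QUEUE_DEPTH  # resides on the accelerator
        self.producer_index = 0                    # control indices on the host
        self.consumer_index = 0

    def post(self, work_element):
        slot = self.producer_index % QUEUE_DEPTH
        self.work_queue[slot] = work_element
        # A small notification tells the accelerator a new element exists,
        # so it reads local memory instead of polling across the bus.
        self.notifications[slot] = ("new", slot)
        self.producer_index += 1

    def consume(self):
        slot = self.consumer_index % QUEUE_DEPTH
        _kind, where = self.notifications[slot]
        self.consumer_index += 1
        return self.work_queue[where]

node = NetworkNode()
node.post("send pkt 1")
node.post("send pkt 2")
print(node.consume(), node.consume())
```

Keeping the notifications local to the accelerator while the indices stay on the host is what lets each side work mostly out of its own memory, which is the low-latency point of the abstract.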
