CROSS BUS MEMORY MAPPING
    21.
    Invention Application

    Publication Number: US20220391341A1

    Publication Date: 2022-12-08

    Application Number: US17338131

    Filing Date: 2021-06-03

    Abstract: A computerized system for efficient interaction between a host, the host having a first operating system, and a second operating system. The system comprises a subsystem on the second operating system which extracts data directly from a buffer that is local to the host. The system is operative for mapping memory from one bus, associated with the first operating system, to a different bus, associated with the second operating system, from which different bus the memory is accessed, thereby emulating a connection between the first and second operating systems by cross-bus memory mapping.
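
    The sketch below is only a rough illustration of the cross-bus mapping idea in this abstract, not the patented implementation: a window of host-local memory is exposed at a second address on another bus, and an offset into the host buffer is translated so that software under the second operating system can read the data directly. The struct, function names, and address values are assumptions made for illustration.

    /* Minimal sketch, assuming a window of host-local memory is exposed at a
       second address on another bus. All names and values are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* One cross-bus mapping: a host-bus buffer and the address at which the
       same buffer appears on the second bus. */
    struct cross_bus_map {
        uint64_t host_bus_addr;    /* buffer address on the host's bus        */
        uint64_t remote_bus_addr;  /* same buffer as seen from the second bus */
        size_t   length;           /* size of the mapped window in bytes      */
    };

    /* Translate an offset within the host buffer into an address on the second
       bus, so software under the second operating system can access the data
       directly. Returns 0 if the offset falls outside the mapped window. */
    static uint64_t cross_bus_translate(const struct cross_bus_map *map,
                                        uint64_t offset)
    {
        if (offset >= map->length)
            return 0;
        return map->remote_bus_addr + offset;
    }

    int main(void)
    {
        struct cross_bus_map map = {
            .host_bus_addr   = 0x100000000ULL,   /* made-up example values */
            .remote_bus_addr = 0x2000000000ULL,
            .length          = 4096,
        };
        printf("offset 0x80 -> remote bus address 0x%llx\n",
               (unsigned long long)cross_bus_translate(&map, 0x80));
        return 0;
    }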

    NIC with Programmable Pipeline
    22.
    Invention Application

    Publication Number: US20190140979A1

    Publication Date: 2019-05-09

    Application Number: US16012826

    Filing Date: 2018-06-20

    Abstract: A network interface controller is connected to a host and a packet communications network. The network interface controller includes electrical circuitry configured as a packet processing pipeline with a plurality of stages. It is determined in the network interface controller that at least a portion of the stages of the pipeline are acceleration-defined stages. Packets are processed in the pipeline by transmitting data to an accelerator from the acceleration-defined stages, performing respective acceleration tasks on the transmitted data in the accelerator, and returning processed data from the accelerator to receiving stages of the pipeline.
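
    As a rough sketch of the pipeline described in this abstract (not the patented design), the example below runs a packet through a fixed sequence of stages, hands the data to a stand-in accelerate() routine at the stages flagged as acceleration-defined, and feeds the processed data back into the next stage. The stage layout, packet format, and accelerate() transform are all assumptions made for illustration.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define NUM_STAGES 4

    struct packet {
        uint8_t data[256];
        size_t  len;
    };

    /* Stand-in for an acceleration task performed outside the pipeline;
       here it just flips every payload byte. */
    static void accelerate(struct packet *pkt)
    {
        for (size_t i = 0; i < pkt->len; i++)
            pkt->data[i] ^= 0xFF;
    }

    /* Ordinary stage that processes the packet inside the pipeline. */
    static void plain_stage(struct packet *pkt)
    {
        (void)pkt;  /* e.g. parse headers, update counters */
    }

    /* 1 marks an acceleration-defined stage, 0 an ordinary one. */
    static const int accel_defined[NUM_STAGES] = { 0, 1, 0, 1 };

    static void run_pipeline(struct packet *pkt)
    {
        for (int stage = 0; stage < NUM_STAGES; stage++) {
            if (accel_defined[stage])
                accelerate(pkt);    /* send data out, get processed data back */
            else
                plain_stage(pkt);
        }
    }

    int main(void)
    {
        struct packet pkt;
        memcpy(pkt.data, "example payload", 15);
        pkt.len = 15;
        run_pipeline(&pkt);
        printf("pipeline finished, %zu bytes processed\n", pkt.len);
        return 0;
    }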

    Computational accelerator for packet payload operations

    Publication Number: US20190116127A1

    Publication Date: 2019-04-18

    Application Number: US16159767

    Filing Date: 2018-10-15

    Abstract: Packet processing apparatus includes a first interface coupled to a host processor and a second interface configured to transmit and receive data packets to and from a packet communication network. A memory holds context information with respect to one or more flows of the data packets conveyed between the host processor and the network in accordance with a reliable transport protocol and with respect to encoding, in accordance with a session-layer protocol, of data records that are conveyed in the payloads of the data packets in the one or more flows. Processing circuitry, coupled between the first and second interfaces, transmits and receives the data packets and includes acceleration logic, which encodes and decodes the data records in accordance with the session-layer protocol using the context information while updating the context information in accordance with the serial numbers and the data records of the transmitted data packets.
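
    To make the role of the per-flow context concrete, the sketch below keeps a transport sequence number alongside the position within the current session-layer data record and advances both as payload bytes are consumed, roughly as the acceleration logic in the abstract would. The field names, record handling, and example numbers are assumptions for illustration, not the patented format.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Per-flow context: transport state plus session-layer record state. */
    struct flow_context {
        uint32_t next_seq;       /* next expected transport sequence number  */
        uint32_t record_offset;  /* bytes already consumed of current record */
        uint32_t record_length;  /* total length of the current data record  */
    };

    /* Consume one in-order packet payload for this flow, advancing both the
       transport state and the session-layer record state in the context. */
    static void process_payload(struct flow_context *ctx,
                                uint32_t seq, size_t payload_len)
    {
        if (seq != ctx->next_seq)
            return;  /* out of order: a real offload would resynchronize */
        ctx->next_seq += (uint32_t)payload_len;

        size_t remaining = payload_len;
        while (remaining > 0) {
            uint32_t left_in_record = ctx->record_length - ctx->record_offset;
            uint32_t chunk = remaining < left_in_record
                                 ? (uint32_t)remaining : left_in_record;
            /* ... encode or decode 'chunk' bytes of the record here ... */
            ctx->record_offset += chunk;
            remaining -= chunk;
            if (ctx->record_offset == ctx->record_length)
                ctx->record_offset = 0;  /* record done, next one starts */
        }
    }

    int main(void)
    {
        struct flow_context ctx = { .next_seq = 1000,
                                    .record_offset = 0,
                                    .record_length = 512 };
        process_payload(&ctx, 1000, 1460);  /* one full-size segment */
        printf("next_seq=%u record_offset=%u\n",
               (unsigned)ctx.next_seq, (unsigned)ctx.record_offset);
        return 0;
    }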

    Low-latency processing in a network node

    Publication Number: US10218645B2

    Publication Date: 2019-02-26

    Application Number: US14247255

    Filing Date: 2014-04-08

    Abstract: A method, in a network node that includes a host and an accelerator, includes holding a work queue that stores work elements, a notifications queue that stores notifications of the work elements, and control indices for adding and removing the work elements and the notifications to and from the work queue and the notifications queue, respectively. The notifications queue resides on the accelerator, and at least some of the control indices reside on the host. Messages are exchanged between a network and the network node using the work queue, the notifications queue, and the control indices.
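
    The sketch below mirrors the split described in this abstract using plain C arrays in a single process: one ring holds the work elements, a second ring holds the notifications (which per the abstract would reside on the accelerator), and the producer/consumer control indices are kept separately (at least partly on the host). Names, sizes, and the polling scheme are assumptions made for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define QUEUE_DEPTH 8   /* power of two, so the index mask works */

    struct work_element { uint64_t addr; uint32_t len; };
    struct notification { uint32_t work_index; };

    /* The work queue would live on the host; the notifications queue
       would reside on the accelerator (both are plain arrays here). */
    static struct work_element work_queue[QUEUE_DEPTH];
    static struct notification notif_queue[QUEUE_DEPTH];

    /* Control indices for adding/removing elements; per the abstract,
       at least some of these are kept on the host. */
    static uint32_t work_prod, work_cons;
    static uint32_t notif_prod, notif_cons;

    /* Host side: post a work element and a matching notification. */
    static void post_work(uint64_t addr, uint32_t len)
    {
        uint32_t idx = work_prod & (QUEUE_DEPTH - 1);
        work_queue[idx] = (struct work_element){ .addr = addr, .len = len };
        notif_queue[notif_prod & (QUEUE_DEPTH - 1)] =
            (struct notification){ .work_index = idx };
        work_prod++;
        notif_prod++;
    }

    /* Accelerator side: consume the next notification and fetch the
       corresponding work element. Returns 0 when nothing is pending. */
    static int poll_work(struct work_element *out)
    {
        if (notif_cons == notif_prod)
            return 0;
        uint32_t idx = notif_queue[notif_cons & (QUEUE_DEPTH - 1)].work_index;
        *out = work_queue[idx];
        notif_cons++;
        work_cons++;  /* the work element is consumed as well */
        return 1;
    }

    int main(void)
    {
        post_work(0x1000, 64);
        struct work_element we;
        if (poll_work(&we))
            printf("work element: addr=0x%llx len=%u\n",
                   (unsigned long long)we.addr, (unsigned)we.len);
        return 0;
    }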
