Reordering avoidance for flows during transition between slow-path handling and fast-path handling

    Publication No.: US20200167192A1

    Publication Date: 2020-05-28

    Application No.: US16202132

    Application Date: 2018-11-28

    Abstract: A computer system includes one or more processors, one or more hardware accelerators, and control circuitry. The processors are configured to run software that executes tasks in a normal mode. The accelerators are configured to execute the tasks in an accelerated mode. The control circuitry is configured to receive one or more flows of tasks for execution by the processors and the accelerators, assign one or more initial tasks of each flow for execution by the processors, assign subsequent tasks of each flow for execution by the accelerators, and verify, for each flow, that the accelerators do not execute the subsequent tasks of the flow until the processors have fully executed the initial tasks of the flow.
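
    The ordering guarantee this abstract describes — hold a flow's fast-path tasks back until the flow's initial slow-path tasks have completed — can be sketched as follows. The `FlowDispatcher` class and its names are illustrative assumptions, not the patented implementation.

```python
from collections import defaultdict

class FlowDispatcher:
    """Sketch of the described ordering guarantee: per flow, the first N
    tasks go to the slow path (software on the processors), and the
    accelerators may not run later tasks until those have completed."""

    def __init__(self, initial_slow_tasks=1):
        self.initial_slow_tasks = initial_slow_tasks
        self.assigned = defaultdict(int)    # tasks assigned so far, per flow
        self.slow_done = defaultdict(int)   # slow-path tasks completed, per flow
        self.pending_fast = defaultdict(list)

    def submit(self, flow_id, task):
        idx = self.assigned[flow_id]
        self.assigned[flow_id] += 1
        if idx < self.initial_slow_tasks:
            return ("slow", task)           # initial tasks: processors
        if self.slow_done[flow_id] < self.initial_slow_tasks:
            # Slow path not yet drained: queue the task to avoid reordering.
            self.pending_fast[flow_id].append(task)
            return ("queued", task)
        return ("fast", task)               # safe to hand to an accelerator

    def slow_completed(self, flow_id):
        """Called when a slow-path task finishes; releases queued tasks in order."""
        self.slow_done[flow_id] += 1
        if self.slow_done[flow_id] >= self.initial_slow_tasks:
            released, self.pending_fast[flow_id] = self.pending_fast[flow_id], []
            return [("fast", t) for t in released]
        return []
```

    Once the transition completes, later submissions bypass the queue entirely, so the fast path pays the holding cost only during the handover window.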

    FACILITATING VIRTUAL FUNCTIONS USING MEMORY ALLOCATION IN A VIRTUALIZATION ENVIRONMENT

    Publication No.: US20190332291A1

    Publication Date: 2019-10-31

    Application No.: US15963236

    Application Date: 2018-04-26

    Abstract: Apparatuses and methods are described that provide a mechanism for allocating physical device memory to one or more virtual functions. In particular, a memory-allocating framework is provided that utilizes device memory more efficiently by mapping, in response to an allocation request, at least one target location of the physical memory, selected from a plurality of available target locations, into a Base Address Register (BAR) associated with the virtual function. The framework is further configured to compare an identifier associated with the requesting virtual function against an identifier of the requested target location. Moreover, the framework allows multiple virtual functions to be used simultaneously while providing isolation between them.
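
    The identifier comparison that isolates virtual functions from one another can be sketched as follows; `BarAllocator` and its data structures are illustrative assumptions, not the patent's framework.

```python
class BarAllocator:
    """Sketch: map physical-memory target locations into a virtual
    function's BAR window, comparing the requester's identifier with the
    requested location's owner so one VF cannot claim another's memory."""

    def __init__(self, locations):
        # locations: {address: owning VF id, or None if unassigned}
        self.locations = dict(locations)
        self.bar_map = {}  # vf_id -> addresses mapped into that VF's BAR

    def allocate(self, vf_id, addr):
        if addr not in self.locations:
            raise ValueError("no such target location")
        owner = self.locations[addr]
        if owner is not None and owner != vf_id:
            # Identifier mismatch: the location belongs to another VF.
            raise PermissionError("target location owned by another VF")
        self.locations[addr] = vf_id
        self.bar_map.setdefault(vf_id, []).append(addr)
        return addr
```

    Because ownership is checked per target location rather than per device, two virtual functions can hold mappings concurrently without seeing each other's memory.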

    Network adapter with a common queue for both networking and data manipulation work requests

    Publication No.: US20190171612A1

    Publication Date: 2019-06-06

    Application No.: US16224834

    Application Date: 2018-12-19

    Abstract: A network adapter includes a network interface that communicates packets over a network, a host interface connected locally to a host processor and to a host memory, and processing circuitry coupled between the network interface and the host interface. The processing circuitry is configured to receive in a common queue, via the host interface, (i) a processing work item specifying a source buffer in the host memory, a data processing operation, and a first address in the host memory, and (ii) an RDMA write work item specifying the first address and a second address in a remote memory. In response to the processing work item, the processing circuitry reads data from the source buffer, applies the data processing operation, and stores the processed data at the first address. In response to the RDMA write work item, the processing circuitry transmits the processed data over the network for storage at the second address.
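
    The two work-item types sharing one queue can be sketched as a small interpreter; the dictionaries below are illustrative stand-ins for work-queue entries, not the adapter's actual descriptor format.

```python
def run_queue(queue, host_mem, remote_mem):
    """Sketch of the common-queue flow in the abstract: a 'process' work
    item transforms a source buffer and stores the result at addr1 in
    host memory; a later 'rdma_write' work item sends the data at addr1
    to addr2 in remote memory."""
    for item in queue:
        if item["op"] == "process":
            data = host_mem[item["src"]]
            host_mem[item["dst"]] = item["fn"](data)   # e.g. compress/encrypt
        elif item["op"] == "rdma_write":
            remote_mem[item["remote_dst"]] = host_mem[item["src"]]
```

    Keeping both item types in the same queue preserves their order, so the RDMA write always observes the already-processed data at the first address.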

    Network operation offloading for collective operations

    Publication No.: US10158702B2

    Publication Date: 2018-12-18

    Application No.: US14937907

    Application Date: 2015-11-11

    Abstract: A Network Interface (NI) includes a host interface, which is configured to receive from a host processor of a node one or more work requests that are derived from an operation to be executed by the node. The NI maintains a plurality of work queues for carrying out transport channels to one or more peer nodes over a network. The NI further includes control circuitry, which is configured to accept the work requests via the host interface, and to execute the work requests using the work queues by controlling an advance of at least a given work queue according to an advancing condition, which depends on a completion status of one or more other work queues, so as to carry out the operation.
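
    The advancing condition — a work queue makes progress only when a predicate over other queues' completion status holds — can be sketched as below. The data structures and the reduce-style example in the test are illustrative assumptions, not the NI's design.

```python
def advance(queues, conditions):
    """Sketch of condition-gated queue advance: each queue holds pending
    work requests and a completion counter; a queue may execute its next
    request only when its condition over all queues' completion counts
    holds (e.g. a reduce step waits for every child's send)."""
    progressed = True
    while progressed:
        progressed = False
        for name, q in queues.items():
            cond = conditions.get(name, lambda done: True)
            done = {n: qq["completed"] for n, qq in queues.items()}
            if q["pending"] and cond(done):
                q["pending"].pop(0)()      # execute the next work request
                q["completed"] += 1
                progressed = True
```

    Expressing the dependency as a condition on completion counters, rather than as host-side synchronization, is what lets the collective operation run to completion without processor involvement.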

    PCI-EXPRESS DEVICE SERVING MULTIPLE HOSTS
    Type: Invention application; Status: Pending (published)

    Publication No.: US20140129741A1

    Publication Date: 2014-05-08

    Application No.: US13670485

    Application Date: 2012-11-07

    CPC classification number: G06F13/382

    Abstract: A method includes establishing in a peripheral device at least first and second communication links with respective first and second hosts. The first communication link is presented to the first host as the only communication link with the peripheral device, and the second communication link is presented to the second host as the only communication link with the peripheral device. The first and second hosts are served simultaneously by the peripheral device over the respective first and second communication links.
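
    A minimal sketch of the per-host view: the device keeps independent per-link state, so each host interacts with what appears to be a dedicated peripheral while both are served concurrently. The class and names are illustrative, not from the patent.

```python
class DualHostDevice:
    """Sketch: a peripheral that establishes one communication link per
    host and keeps per-link state, so each host sees its link as the
    only one while the device serves all links simultaneously."""

    def __init__(self):
        self.links = {}                    # link handle -> that link's request log

    def attach(self, host_id):
        self.links[host_id] = []           # private link state for this host
        return host_id                     # the host sees only this handle

    def request(self, link, msg):
        self.links[link].append(msg)       # served per link; links never mix
        return f"{link}:{msg}"
```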

    Secure and efficient distributed processing

    Publication No.: US20250148103A1

    Publication Date: 2025-05-08

    Application No.: US19017665

    Application Date: 2025-01-12

    Abstract: In one embodiment, a secure distributed processing system includes a plurality of nodes connected over a network, and configured to process a plurality of tasks, each one of the nodes including a processor to process task-specific data, and a network interface controller (NIC) to connect to other ones of the nodes over the network, compute task-and-node-specific communication keys for securing communication with ones of the nodes over the network based on task-specific master keys and node-specific data, and securely communicate the processed task-specific data with the ones of the nodes over the network based on the task-and-node-specific communication keys.
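
    The key-derivation step — combining a task-specific master key with node-specific data to get a per-task, per-peer communication key — can be sketched with a standard KDF. HMAC-SHA256 and the sorted-pair encoding below are illustrative choices, not taken from the abstract.

```python
import hashlib
import hmac

def pair_key(task_master_key: bytes, node_a: bytes, node_b: bytes) -> bytes:
    """Sketch: derive a task-and-node-specific communication key from a
    task-specific master key plus node-specific data. Sorting the node
    identifiers makes the derivation symmetric, so both peers holding
    the same task master key compute the same key independently."""
    node_data = b"|".join(sorted([node_a, node_b]))
    return hmac.new(task_master_key, node_data, hashlib.sha256).digest()
```

    Because the master key is task-specific, keys for different tasks between the same pair of nodes differ, which keeps one task's traffic unreadable to another even on shared nodes.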
