Techniques for Moving Data between a Network Input/Output Device and a Storage Device

    Publication No.: US20190272124A1

    Publication Date: 2019-09-05

    Application No.: US16418405

    Filing Date: 2019-05-21

    Abstract: Examples are disclosed for moving data between a network input/output (I/O) device and a storage subsystem and/or storage device. In some examples, a network I/O device coupled to a host device may receive a data frame including a request to access a storage subsystem or storage device. The storage subsystem and/or storage device may be located with the network I/O device or separately coupled to the host device through a storage controller. One or more buffers maintained in a cache for processor circuitry may be used to exchange control information or stage data associated with the data frame to avoid or eliminate use of system memory to move data to or from the storage subsystem and/or storage device. Other examples are described and claimed.
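
    The sketch below illustrates, in C, the general idea of staging a received frame's payload in a small cache-resident buffer instead of routing it through a system-memory receive ring. It is only a minimal illustration of the staging step: all names (cache_buf, stage_frame, storage_submit) and sizes are assumptions for the example, and the real mechanism would rely on hardware/cache behavior this user-space code cannot reproduce.

        /* Hypothetical names throughout; the abstract does not publish source code. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define CACHE_BUF_SIZE 4096          /* one small staging slot */

        struct cache_buf {                   /* buffer that a real design would keep */
            uint8_t  data[CACHE_BUF_SIZE];   /* resident in the processor cache      */
            uint32_t len;
            uint8_t  in_use;
        };

        /* Stand-in for handing the staged payload to the storage controller. */
        static int storage_submit(const uint8_t *payload, uint32_t len)
        {
            printf("submitting %u bytes to the storage controller\n", (unsigned)len);
            return 0;
        }

        /* Copy the payload of a received frame into the cached staging buffer and
         * submit it, so the payload need not take a round trip through a
         * system-memory receive ring. */
        static int stage_frame(struct cache_buf *buf, const uint8_t *frame, uint32_t frame_len)
        {
            if (buf->in_use || frame_len > CACHE_BUF_SIZE)
                return -1;                   /* fall back to the normal DMA path */

            buf->in_use = 1;
            memcpy(buf->data, frame, frame_len);
            buf->len = frame_len;

            int rc = storage_submit(buf->data, buf->len);
            buf->in_use = 0;
            return rc;
        }

        int main(void)
        {
            static struct cache_buf slot;    /* zero-initialized staging buffer     */
            uint8_t frame[128] = { 0 };      /* stands in for a received data frame */

            return stage_frame(&slot, frame, sizeof frame);
        }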

    Extension of OpenvSwitch megaflow offloads to hardware to address hardware pipeline limitations

    Publication No.: US12231339B2

    Publication Date: 2025-02-18

    Application No.: US17093394

    Filing Date: 2020-11-09

    Abstract: Methods and apparatus for extending OpenvSwitch (OVS) megaflow offloads to hardware to address hardware pipeline limitations. Under a method implemented on a compute platform that includes a Network Interface Controller (NIC) with one or more ports and that runs software including OVS software and a Linux operating system whose kernel includes a TC-flower module and a NIC driver, a new megaflow is created in the OVS software with a mask employing a subset of the microflow fields for a packet. The microflow fields and the megaflow mask are sent to the NIC driver. The NIC driver implements the new megaflow using a subset of the microflow fields and creates a new hardware flow on the NIC employing a packet match scheme that uses all of the microflow fields. The NIC hardware pipeline is programmed with the new hardware flow using a match scheme that may depend on the available hardware resources, such as the size of a TCAM.
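
    As a rough illustration of the two match schemes the abstract contrasts, the C sketch below compares a masked megaflow lookup (a subset of fields, as in the OVS software path) with an exact match over all microflow fields (as a small hardware exact-match table might use when TCAM space is limited). The field layout and every identifier are assumptions for the example, not code from the patent.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct flow_key {                /* simplified microflow fields */
            uint32_t src_ip, dst_ip;
            uint16_t src_port, dst_port;
            uint8_t  ip_proto;
        };

        struct megaflow {                /* software entry: key plus wildcard mask */
            struct flow_key key;
            struct flow_key mask;        /* set bits mark the fields that must match */
        };

        /* Megaflow match: compare only the bits the mask cares about. */
        static bool megaflow_match(const struct megaflow *mf, const struct flow_key *pkt)
        {
            return ((pkt->src_ip   & mf->mask.src_ip)   == (mf->key.src_ip   & mf->mask.src_ip))   &&
                   ((pkt->dst_ip   & mf->mask.dst_ip)   == (mf->key.dst_ip   & mf->mask.dst_ip))   &&
                   ((pkt->src_port & mf->mask.src_port) == (mf->key.src_port & mf->mask.src_port)) &&
                   ((pkt->dst_port & mf->mask.dst_port) == (mf->key.dst_port & mf->mask.dst_port)) &&
                   ((pkt->ip_proto & mf->mask.ip_proto) == (mf->key.ip_proto & mf->mask.ip_proto));
        }

        /* Hardware flow match: exact comparison on every microflow field. */
        static bool hw_flow_match(const struct flow_key *hw, const struct flow_key *pkt)
        {
            return hw->src_ip == pkt->src_ip && hw->dst_ip == pkt->dst_ip &&
                   hw->src_port == pkt->src_port && hw->dst_port == pkt->dst_port &&
                   hw->ip_proto == pkt->ip_proto;
        }

        int main(void)
        {
            struct flow_key pkt = { 0x0a000001, 0x0a000002, 1234, 80, 6 };
            struct megaflow mf  = { .key  = { 0, 0x0a000002, 0, 80, 6 },
                                    .mask = { 0, 0xffffffff, 0, 0xffff, 0xff } };
            struct flow_key hw_flow = pkt;   /* hardware entry keeps all microflow fields */

            printf("megaflow match: %d, hardware exact match: %d\n",
                   megaflow_match(&mf, &pkt), hw_flow_match(&hw_flow, &pkt));
            return 0;
        }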

    Efficient receive interrupt signaling

    Publication No.: US11797333B2

    Publication Date: 2023-10-24

    Application No.: US16710556

    Filing Date: 2019-12-11

    CPC classification number: G06F9/4812 G06F9/5077 G06F2209/5011

    Abstract: Methods for performing efficient receive interrupt signaling and associated apparatus, computing platform, software, and firmware. Receive (RX) queues in which descriptors associated with packets are enqueued are implemented in host memory and logically partitioned into pools, with each RX queue pool associated with a respective interrupt vector. Receive event queues (REQs) associated with respective RX queue pools and interrupt vectors are also implemented in host memory. Event generation is selectively enabled for some RX queues, while it is masked for others. In response to event causes for RX queues that are event generation-enabled, associated events are generated and enqueued in the REQs, and interrupts are asserted on the associated interrupt vectors. The events are serviced by accessing them in the REQs; each event identifies the RX queue for the event and a next activity location at which the next descriptor to be processed is located. After an interrupt is asserted, the RX queue may be auto-masked to prevent generation of additional events when new descriptors are enqueued in that RX queue.
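
    A minimal C sketch of the event-generation and auto-mask behavior described above is shown below: an event-enabled RX queue posts one event to its receive event queue (REQ), asserts its pool's interrupt vector, and then masks itself until software re-arms it. All structure and function names are illustrative assumptions, not the claimed implementation.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define REQ_DEPTH 64

        struct rx_event {            /* entry in a receive event queue (REQ)   */
            uint16_t rxq_id;         /* which RX queue in the pool caused it   */
            uint32_t next_index;     /* next descriptor location to process    */
        };

        struct rx_queue {
            uint16_t id;
            uint32_t tail;           /* index of next descriptor to be written */
            bool     events_enabled; /* cleared by auto-mask after an event    */
        };

        struct req {                 /* one REQ per RX queue pool / vector     */
            struct rx_event ring[REQ_DEPTH];
            uint32_t head;
            int      irq_vector;
        };

        static void assert_interrupt(int vector)
        {
            printf("interrupt asserted on vector %d\n", vector);
        }

        /* Called when a new descriptor is enqueued for an RX queue. */
        static void on_descriptor_enqueued(struct rx_queue *q, struct req *eq)
        {
            q->tail++;
            if (!q->events_enabled)
                return;                          /* masked: no event, no interrupt */

            struct rx_event *ev = &eq->ring[eq->head++ % REQ_DEPTH];
            ev->rxq_id = q->id;
            ev->next_index = q->tail;            /* where servicing should resume  */

            assert_interrupt(eq->irq_vector);
            q->events_enabled = false;           /* auto-mask until re-armed       */
        }

        int main(void)
        {
            struct req eq = { .irq_vector = 3 };
            struct rx_queue q = { .id = 7, .events_enabled = true };

            on_descriptor_enqueued(&q, &eq);     /* generates event and interrupt  */
            on_descriptor_enqueued(&q, &eq);     /* auto-masked: no second event   */
            return 0;
        }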
