PREVENTING AUDIO DROPOUT
    Patent Application

    Publication No.: US20220086209A1

    Publication Date: 2022-03-17

    Application No.: US17022429

    Application Date: 2020-09-16

    Applicant: Kyndryl, Inc.

    Abstract: Embodiments of the present invention provide methods, computer program products, and systems. Embodiments of the present invention detect an audio stream comprising one or more voice packets from a first computing system. In response to detecting the audio stream, embodiments of the present invention can dynamically prevent audio dropout on a second computing system using circular buffers, based on network consistency.
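    The circular-buffer idea behind this abstract can be sketched as follows. This is an illustrative simplification, not the patented method: all names (`JitterRing`, `push`, `pop`) are assumptions, and the buffer size would in practice be tuned to observed network consistency.

```python
# Hypothetical sketch: voice packets enter a ring buffer so playback can
# ride out short network gaps; on overflow the oldest packet is dropped.
class JitterRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0   # next slot to play
        self.count = 0  # packets currently buffered

    def push(self, packet):
        """Store an incoming voice packet, overwriting the oldest if full."""
        tail = (self.head + self.count) % len(self.buf)
        self.buf[tail] = packet
        if self.count < len(self.buf):
            self.count += 1
        else:
            self.head = (self.head + 1) % len(self.buf)  # drop oldest

    def pop(self):
        """Return the next packet for playback, or None on underrun."""
        if self.count == 0:
            return None  # a dropout would occur here; caller may conceal it
        pkt = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return pkt
```

    A deeper ring tolerates longer network stalls at the cost of added playback latency, which is why the abstract ties buffering to network consistency.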

    HYBRID PACKET MEMORY FOR BUFFERING PACKETS IN NETWORK DEVICES

    Publication No.: US20220038384A1

    Publication Date: 2022-02-03

    Application No.: US17503035

    Application Date: 2021-10-15

    Abstract: A network device processes received packets to determine the port or ports of the network device via which to transmit the packets. The network device classifies the packets into packet flows and selects, based at least in part on one or more characteristics of data being transmitted in the respective packet flows, a first packet memory having a first memory access bandwidth or a second packet memory having a second memory access bandwidth, and buffers the packets in the selected first or second packet memory while the packets are being processed by the network device. After processing the packets, the network device retrieves the packets from the first packet memory or the second packet memory in which the packets are buffered, and forwards the packets to the determined one or more ports for transmission of the packets.
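    The memory-selection step can be illustrated with a toy model. Everything here is an assumption for illustration: the flow characteristic (rate), the threshold value, and the two memory names are not taken from the patent.

```python
# Hypothetical sketch: each flow is steered to a high-bandwidth packet
# memory or a lower-bandwidth one based on a characteristic of its traffic.
FAST, SLOW = "high-bw", "low-bw"

def select_memory(flow_rate_bps, threshold_bps=1_000_000_000):
    """Pick the packet memory for a flow; high-rate flows get the fast one."""
    return FAST if flow_rate_bps >= threshold_bps else SLOW

def buffer_packet(memories, flow_rate_bps, packet):
    """Buffer a packet in the memory selected for its flow; return which."""
    mem = select_memory(flow_rate_bps)
    memories[mem].append(packet)
    return mem
```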

    Guaranteed delivery in receiver side overcommitted communication adapters

    Publication No.: US11115340B2

    Publication Date: 2021-09-07

    Application No.: US16018205

    Application Date: 2018-06-26

    Abstract: Aspects of the invention include receiving an input/output (I/O) request that includes a data stream from a host processor. The receiving is at a network adapter of a storage controller that manages storage for the host processor. The storage controller includes a storage buffer to store data received from the host processor before migrating it to the storage. The storage controller also includes a data cache. It is determined whether the storage buffer has enough free space to store the received data stream. Based at least in part on determining that the storage buffer has enough free space to store the received data stream, the received data stream is stored by the network adapter in the storage buffer. Based at least in part on determining that the storage buffer does not have enough free space to store the received data stream, the received data stream is stored in the data cache.
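    The buffer-or-cache fallback reads naturally as a small decision function. This is a minimal sketch under assumed names (`store_stream`, byte-length capacity); the real controller tracks free space in hardware rather than summing list contents.

```python
# Hypothetical sketch: a received data stream lands in the storage buffer
# when it fits, otherwise in the data cache, so delivery is guaranteed.
def store_stream(storage_buffer, buffer_capacity, data_cache, stream):
    """Place the received data stream; return where it was stored."""
    used = sum(len(s) for s in storage_buffer)
    if used + len(stream) <= buffer_capacity:
        storage_buffer.append(stream)  # normal path: buffer has free space
        return "buffer"
    data_cache.append(stream)          # overcommitted path: spill to cache
    return "cache"
```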

    Technologies for scalable network packet processing with lock-free rings

    Publication No.: US10999209B2

    Publication Date: 2021-05-04

    Application No.: US15635581

    Application Date: 2017-06-28

    Applicant: Intel Corporation

    Abstract: Technologies for network packet processing include a computing device that receives incoming network packets. The computing device adds the incoming network packets to an input lockless shared ring, and then classifies the network packets. After classification, the computing device adds the network packets to multiple lockless shared traffic class rings, with each ring associated with a traffic class and output port. The computing device may allocate bandwidth between network packets active during a scheduling quantum in the traffic class rings associated with an output port, schedule the network packets in the traffic class rings for transmission, and then transmit the network packets in response to scheduling. The computing device may perform traffic class separation in parallel with bandwidth allocation and traffic scheduling. In some embodiments, the computing device may perform bandwidth allocation and/or traffic scheduling on each traffic class ring in parallel. Other embodiments are described and claimed.
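    A lock-free ring of the general kind this abstract relies on can be sketched in its simplest single-producer/single-consumer form. This is a simplification for illustration only (the patent's rings are shared across stages and cores): the producer advances only the tail index and the consumer only the head index, so neither side takes a lock.

```python
# Hypothetical sketch of a single-producer/single-consumer lockless ring.
# One slot is kept empty so "full" and "empty" are distinguishable.
class SPSCRing:
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)
        self.head = 0  # consumer index
        self.tail = 0  # producer index

    def enqueue(self, item):
        """Producer side: returns False instead of blocking when full."""
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False  # ring full
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def dequeue(self):
        """Consumer side: returns None instead of blocking when empty."""
        if self.head == self.tail:
            return None  # ring empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

    Because each index is written by exactly one side, this structure needs no mutual exclusion; multi-producer/multi-consumer variants, as used between parallel pipeline stages, add atomic compare-and-swap on the indices.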

    Efficient Scatter-Gather Over an Uplink
    Patent Application

    Publication No.: US20190149486A1

    Publication Date: 2019-05-16

    Application No.: US16181376

    Application Date: 2018-11-06

    Abstract: A network interface device is connected by an uplink to a host computer having a memory controller and a scatter-gather offload engine linked to the memory controller. The network interface device prepares a descriptor including a plurality of specified memory locations in the host computer, incorporates the descriptor in exactly one upload packet, transmits the upload packet to the scatter-gather offload engine via the uplink, invokes the scatter-gather offload engine to perform memory access operations cooperatively with the memory controller at the specified memory locations of the descriptor, and accepts results of the memory access operations.
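    The key trick, packing many memory locations into one descriptor carried by a single packet, can be sketched concretely. The wire layout below (a count followed by little-endian address/length pairs) is purely an assumed format for illustration, not the patent's encoding.

```python
import struct

def pack_descriptor(entries):
    """Pack (address, length) pairs into one descriptor blob for one packet."""
    blob = struct.pack("<I", len(entries))           # entry count
    for addr, length in entries:
        blob += struct.pack("<QI", addr, length)     # 8-byte addr, 4-byte len
    return blob

def gather(memory, blob):
    """Offload-engine side: read each listed region from host memory."""
    (count,) = struct.unpack_from("<I", blob, 0)
    out, off = [], 4
    for _ in range(count):
        addr, length = struct.unpack_from("<QI", blob, off)
        out.append(memory[addr:addr + length])
        off += 12
    return out
```

    Sending one packet per descriptor, rather than one per memory location, is what makes the scheme efficient over the uplink.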

    Network interface
    Granted Patent

    Publication No.: US10284672B2

    Publication Date: 2019-05-07

    Application No.: US15026946

    Application Date: 2014-10-17

    Applicant: ZOMOJO PTY LTD

    Inventor: Matthew Chapman

    摘要: A low-latency network interface and complementary data management protocols are disclosed in this specification. The data management protocols reduce dedicated control exchanges between the network interface and a corresponding host computing system by consolidating control data with network data. The network interface may also facilitate port forwarding and data logging without an external network switch.

    Packet descriptor storage in packet memory with cache

    Publication No.: US10200313B2

    Publication Date: 2019-02-05

    Application No.: US15610909

    Application Date: 2017-06-01

    Abstract: A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in an order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate storage of the moved LL elements changing from the second memory device to the first memory device.
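    The head-refill behaviour can be sketched with two queues standing in for the two memory devices. This is a deliberately reduced model (the tail part and the LL parameter updates are omitted), and all names are assumptions.

```python
from collections import deque

# Hypothetical sketch: head elements live in fast memory, the middle
# overflows to slow memory, and dequeues refill the head before it drains.
class SplitFifo:
    def __init__(self, head_slots):
        self.head = deque()       # fast memory: elements about to leave
        self.middle = deque()     # slow memory: overflow in FIFO order
        self.head_slots = head_slots

    def enqueue(self, elem):
        # Once anything sits in the middle, later elements must follow it
        # there, or FIFO order would break.
        if len(self.head) < self.head_slots and not self.middle:
            self.head.append(elem)
        else:
            self.middle.append(elem)

    def dequeue(self):
        if not self.head:
            return None
        elem = self.head.popleft()
        if self.middle:           # move one element up before head empties
            self.head.append(self.middle.popleft())
        return elem
```

    Keeping the head in the faster memory means dequeues, the latency-critical path, never wait on the slower device.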

    High-speed packet processing system and control method thereof

    Publication No.: US10187330B2

    Publication Date: 2019-01-22

    Application No.: US15595150

    Application Date: 2017-05-15

    Abstract: A high-speed packet processing system and a method of controlling the system are disclosed. The high-speed packet processing system includes: a network interface card configured to receive or transmit packets; a memory which is accessible by an operating system, and which includes at least one or more data buffers and a single dedicated head (dedicated skb) decoupled from the data buffers, where the data buffers are pre-allocated in correspondence to the packets to allow storing of the packets, and the single dedicated head is connected to the data buffers sequentially in correspondence to the packets; and a packet processing unit configured to sequentially connect the single dedicated head with the data buffers and, when the packets are received, store the packets sequentially in the data buffers corresponding to the reception (Rx) descriptors designated for the packets.
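    The pre-allocation idea can be modelled in miniature. This sketch is an assumption-laden illustration (buffer size, ring length, and all names are invented): buffers are allocated once up front, and a single reusable head object is attached to each buffer in turn, avoiding a per-packet allocation.

```python
# Hypothetical sketch: pre-allocated Rx buffers plus one reusable head.
class RxRing:
    def __init__(self, num_buffers):
        # All data buffers allocated once, before any packet arrives.
        self.buffers = [bytearray(2048) for _ in range(num_buffers)]
        self.dedicated_head = {"buffer": None}  # single reusable head
        self.next_desc = 0

    def receive(self, packet):
        """Store a packet via its Rx descriptor; reuse the dedicated head."""
        idx = self.next_desc % len(self.buffers)
        self.buffers[idx][:len(packet)] = packet
        self.dedicated_head["buffer"] = idx  # point the head at this buffer
        self.next_desc += 1
        return idx
```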

    METHOD FOR TRANSFERRING TRANSMISSION DATA FROM A TRANSMITTER TO A RECEIVER FOR PROCESSING THE TRANSMISSION DATA AND MEANS FOR CARRYING OUT THE METHOD

    公开(公告)号:US20190007348A1

    公开(公告)日:2019-01-03

    申请号:US16064100

    申请日:2016-12-20

    发明人: Wolfgang RÖHRL

    摘要: A method involves transferring a transmittal data block from a transmitting device via an Ethernet connection to a receiving device which has a storage for storing a transferred transmittal data block, and a processor for at least partially processing the transferred transmittal data block stored in the storage. The transmitting device forms from the data of the transmittal data block a sequence of Ethernet packets, comprising respectively management data and a transmittal data sub-block. The receiving device receives the Ethernet packets of the respective sequence and, while employing at least a part of the management data, writes the transmittal data sub-blocks of the received Ethernet packets of the sequence of Ethernet packets for the transmittal data block to the storage, wherein not upon or after the writing each of the transmittal data sub-blocks an interrupt is sent to the processor.