COMMUNICATION EQUIPMENT, COMMUNICATION METHODS AND PROGRAMS

    Publication number: US20220360534A1

    Publication date: 2022-11-10

    Application number: US17624464

    Application date: 2019-07-04

    Abstract: An object is to provide a communication apparatus, a communication method, and a program capable of avoiding both an increase in network load when input traffic remains large and a communication delay when input traffic is very small. A communication apparatus according to the present invention prepares three token buckets and can transfer, discard, or hold a packet in accordance with the amount of tokens in each token bucket. This enables the communication apparatus, when large traffic is received, to perform delay-guaranteed shaping without exceeding a set maximum bandwidth. Further, when the maximum bandwidth is exceeded, the communication apparatus can select whether to discard a packet to prioritize the delay guarantee or to hold a packet to prioritize lossless delivery. Furthermore, the communication apparatus can immediately transmit a packet without increasing communication delay when input traffic is very small.
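    The abstract describes a shaper in which three token buckets jointly decide whether a packet is transferred, held, or discarded. The sketch below shows one plausible arrangement; the roles assigned to the three buckets, the `drop_mode` flag, and all names are assumptions for illustration, not the patented design.

```python
import time

class TokenBucket:
    """Tokens accrue at `rate` bytes/sec up to `burst`; take() consumes if possible."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def take(self, amount):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False

def classify_packet(pkt_len, committed, peak, holding, drop_mode=True):
    """Decide whether to transfer, hold, or discard a packet of pkt_len bytes.
    The roles given to the three buckets here are assumptions; the abstract only
    states that three buckets govern transfer/discard/hold decisions."""
    if committed.take(pkt_len):
        return "transfer"            # light traffic is sent immediately, no added delay
    if peak.take(pkt_len):
        return "transfer"            # still within the configured maximum bandwidth
    if not drop_mode and holding.take(pkt_len):
        return "hold"                # prioritise losslessness over the delay guarantee
    return "discard"                 # prioritise the delay guarantee
```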

    Processing packets in an electronic device

    Publication number: US11489785B1

    Publication date: 2022-11-01

    Application number: US16943666

    Application date: 2020-07-30

    Applicant: Innovium, Inc.

    Abstract: A network traffic manager receives, from an ingress port in a group of ingress ports, a cell of a packet destined for an egress port. Upon determining that a number of cells of the packet stored in a buffer queue meets a threshold value, the manager checks whether the group of ingress ports has been assigned a token for the queue. Upon determining that the group of ingress ports has been assigned the token, the manager determines that other cells of the packet are stored in the buffer, and accordingly stores the received cell in the buffer, and stores linking information for the received cell in a receive context for the packet. When all cells of the packet have been received, the manager copies linking information for the packet cells to the buffer queue or a copy generator queue, and releases the token from the group of ingress ports.
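    The following is a loose sketch of the token-gated cell buffering described above, assuming simple Python data structures for the buffer queues, receive contexts, and per-queue tokens; the exact checks and structure names are not taken from the patent.

```python
from collections import defaultdict

class TrafficManager:
    """Loose sketch of token-gated cell buffering; names and checks are assumptions."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.queue_tokens = {}                   # queue -> ingress port group holding the token
        self.buffer_queues = defaultdict(list)   # queue -> linking info of completed packets
        self.rx_contexts = defaultdict(list)     # (group, packet_id) -> per-cell linking info

    def on_cell(self, group, queue, packet_id, cell_link, is_last):
        ctx = self.rx_contexts[(group, packet_id)]
        if len(ctx) >= self.threshold:
            # Enough cells are already buffered: the group must hold the queue's token.
            holder = self.queue_tokens.setdefault(queue, group)
            if holder != group:
                return "drop"
        ctx.append(cell_link)                    # store linking info for the received cell
        if is_last:
            # All cells received: publish linking info to the queue, release the token.
            self.buffer_queues[queue].append(list(ctx))
            del self.rx_contexts[(group, packet_id)]
            if self.queue_tokens.get(queue) == group:
                del self.queue_tokens[queue]
        return "stored"
```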

    Systems and methods for predictive scheduling and rate limiting

    Publication number: US11431646B2

    Publication date: 2022-08-30

    Application number: US17093200

    Application date: 2020-11-09

    Applicant: Intel Corporation

    Abstract: Systems and methods are disclosed for enhancing network performance by using modified traffic control (e.g., rate limiting and/or scheduling) techniques to control a rate of packet (e.g., data packet) traffic to a queue scheduled by a Quality of Service (QoS) engine for reading and transmission. In particular, the QoS engine schedules packets using estimated packet sizes before an actual packet size is known by a direct memory access (DMA) engine coupled to the QoS engine. The QoS engine subsequently compensates for discrepancies between the estimated packet sizes and actual packet sizes (e.g., when the DMA engine has received an actual packet size of the scheduled packet). Using these modified traffic control techniques that leverage estimating packet sizes may reduce and/or eliminate latency introduced due to determining actual packet sizes.
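    A minimal sketch of the idea of scheduling on an estimated packet size and compensating once the actual size is known. The credit-based scheme and all field names are assumptions for illustration, not Intel's implementation.

```python
class EstimatingRateLimiter:
    """Charge an estimated packet size at schedule time, then compensate with the
    actual size reported later (e.g., by a DMA engine). Names are assumptions."""

    def __init__(self, rate_bps, est_size):
        self.rate = rate_bps      # permitted rate in bytes per second
        self.est_size = est_size  # size assumed when a packet is scheduled
        self.credit = 0.0         # bytes the queue is currently allowed to send

    def add_credit(self, elapsed_s):
        self.credit += elapsed_s * self.rate

    def try_schedule(self):
        # Schedule using the estimate; the real size is not yet known.
        if self.credit >= self.est_size:
            self.credit -= self.est_size
            return True
        return False

    def on_actual_size(self, actual_size):
        # Compensate: refund if the estimate was too large, debit if too small,
        # so long-run throughput tracks the actual packet sizes.
        self.credit += self.est_size - actual_size
```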

    Redundant Media Packet Streams

    Publication number: US20220210210A1

    Publication date: 2022-06-30

    Application number: US17573115

    Application date: 2022-01-11

    Abstract: This invention concerns the transmitting and receiving of digital media packets, such as audio and video channels and lighting instructions. In particular, the invention concerns the transmitting and receiving of redundant media packet streams. Samples are extracted from a first and a second media packet stream. The extracted samples are written to a buffer based on the output time of each sample. Extracted samples having the same output time are written to the same location in the buffer. Both media packet streams are simply processed all the way to the buffer without any particular knowledge that one of the packet streams is actually redundant. This simplifies the management of the redundant packet streams, for example by eliminating the need for a "fail-over" switch and the concept of an "active stream". The location is the storage space allocated to store one sample. The extracted sample written to the location may be written over another extracted sample from a different packet stream previously written to the location. These extracted samples written to the same location may be identical.
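    A minimal sketch of the buffer behaviour described above, assuming a fixed-size ring of sample slots indexed by output time; the class name, slot count, and byte samples are illustrative assumptions.

```python
class TimeIndexedBuffer:
    """Both the primary and the redundant stream write into the same time-indexed
    buffer, so the redundant stream needs no special handling."""

    def __init__(self, slots=64):
        self.slots = [None] * slots        # one location per output time

    def write(self, output_time, sample):
        # Samples with the same output time share one location; a duplicate from the
        # redundant stream simply overwrites the (identical) sample already there.
        self.slots[output_time % len(self.slots)] = sample

    def read(self, output_time):
        return self.slots[output_time % len(self.slots)]

buf = TimeIndexedBuffer()
buf.write(10, b"\x01\x02")    # sample extracted from the first stream
buf.write(10, b"\x01\x02")    # identical sample extracted from the redundant stream
assert buf.read(10) == b"\x01\x02"
```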

    OUT OF ORDER PACKET SCHEDULER

    Publication number: US20220210088A1

    Publication date: 2022-06-30

    Application number: US17697555

    Application date: 2022-03-17

    Abstract: An example method may include identifying a first transmit identifier (TID) associated with a first node of a wireless network as ready to transmit and adding the first TID to a ready to transmit queue at a first point in time. The method may also include identifying a second TID associated with a second node of the wireless network as ready to transmit, and adding the second TID to the ready to transmit queue at a second point in time later than the first point in time. The method may additionally include selecting the second TID from the ready to transmit queue before selecting the first TID based on a projected increased overall throughput of packets within the wireless network when communicating with the second node before communicating with the first node.
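    A minimal sketch of out-of-order selection from a ready-to-transmit queue, assuming each TID arrives with a projected-throughput figure; how that projection is computed is not specified here, and the names are illustrative.

```python
import heapq, itertools

class OutOfOrderScheduler:
    """TIDs are marked ready in arrival order, but a later TID may be selected first
    if its projected throughput is higher. FIFO order breaks ties."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # preserves arrival order among equals

    def mark_ready(self, tid, projected_throughput):
        # Max-heap on projected throughput (negated), arrival-order tie-break.
        heapq.heappush(self._heap, (-projected_throughput, next(self._counter), tid))

    def select(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = OutOfOrderScheduler()
sched.mark_ready("tid-node1", projected_throughput=40.0)   # added first
sched.mark_ready("tid-node2", projected_throughput=95.0)   # added later, faster node
assert sched.select() == "tid-node2"                       # selected out of order
```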

    Network congestion notification method, agent node, and computer device

    Publication number: US11374870B2

    Publication date: 2022-06-28

    Application number: US16786461

    Application date: 2020-02-10

    Inventor: Wei Zhang

    Abstract: This disclosure relates to the field of data communication, and provides a network congestion notification method, an agent node, and a computer device. When receiving a first data packet, an agent node adds a source queue pair number to a first data packet to obtain a second data packet, and sends the second data packet to a receive end by using a network node. In a process of forwarding the second data packet, if the network node detects network congestion, the network node generates a first congestion notification packet carrying the source queue pair number, and sends the first congestion notification packet to the agent node. Further, the agent node sends the first congestion notification packet to a transmit end, so that the transmit end decreases a sending rate of a data flow to which the first data packet belongs.
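    A minimal sketch of the notification path: the agent tags the packet with the source queue pair number, the network node echoes that number back in a congestion notification, and the agent relays it toward the transmit end. Header field names such as "source_qpn" are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: bytes
    headers: dict = field(default_factory=dict)

def agent_add_source_qpn(packet, source_qpn):
    """Agent node: annotate the first data packet with the source queue pair number."""
    packet.headers["source_qpn"] = source_qpn
    return packet

def network_node_forward(packet, congested):
    """Network node: forward the packet; on congestion, emit a congestion
    notification packet (CNP) carrying the source queue pair number."""
    cnp = None
    if congested:
        cnp = Packet(payload=b"",
                     headers={"type": "CNP",
                              "source_qpn": packet.headers["source_qpn"]})
    return packet, cnp

def agent_relay_cnp(cnp):
    """Agent node: relay the CNP toward the transmit end, which then reduces the
    sending rate of the flow identified by the source queue pair number."""
    return cnp.headers["source_qpn"]

pkt = agent_add_source_qpn(Packet(b"data"), source_qpn=7)
_, cnp = network_node_forward(pkt, congested=True)
assert agent_relay_cnp(cnp) == 7
```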

    Packet-flow message-distribution system

    Publication number: US11343197B2

    Publication date: 2022-05-24

    Application number: US16948402

    Application date: 2020-09-16

    Abstract: Switchless interconnect fabric message distribution includes end-to-end partitioning of message pathways and multiple priority levels with interrupt capability. A switchless interconnect fabric message distribution system includes a data distribution module and at least two host-bus adapters connected to the data distribution module. The data distribution module includes partition first-in-first-out (FIFO) buffers. Each of the host-bus adapters includes an input manager connected to input priority FIFO buffers and an output manager connected to priority FIFO buffers.
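    A minimal sketch of a host-bus adapter's priority FIFOs, assuming four priority levels and strict highest-priority-first draining; the priority count and the strict-priority policy are assumptions, not taken from the patent.

```python
from collections import deque

class HostBusAdapter:
    """Input manager feeds per-priority FIFOs; output manager drains them
    highest priority first, so urgent messages effectively interrupt bulk traffic."""

    def __init__(self, priorities=4):
        self.priority_fifos = [deque() for _ in range(priorities)]

    def input_manager(self, message, priority):
        self.priority_fifos[priority].append(message)

    def output_manager(self):
        # Always check the highest-priority FIFO first.
        for fifo in self.priority_fifos:
            if fifo:
                return fifo.popleft()
        return None

hba = HostBusAdapter()
hba.input_manager("bulk transfer", priority=3)
hba.input_manager("control interrupt", priority=0)
assert hba.output_manager() == "control interrupt"
```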