Abstract:
A method of handling data packets in a series of network switches is disclosed. An incoming data packet is received at a data port of a first lower capacity switch of the series of network switches and a stack tag is resolved from a header of the incoming data packet. The incoming data packet is forwarded to a first higher capacity switch, on a first stacked connection operating at a first data rate, based on the stack tag. A destination address of the incoming data packet is resolved by the first higher capacity switch and the header of the incoming data packet is modified. The incoming data packet is forwarded to a second higher capacity switch, on a second stacked connection operating at a second data rate, based on the resolved destination address, where the header of the incoming data packet is modified again and the incoming data packet is forwarded to a second lower capacity switch on a third stacked connection operating at the first data rate. Lastly, an egress port of the second lower capacity switch is determined based on the stack tag and the incoming data packet is forwarded to the egress port. A network switch configured to perform the above method of handling data packets is also disclosed.
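To make the forwarding flow concrete, the following is a minimal Python sketch of the two switch roles described above. The dictionary-based header, the stack_tag and dest_mac field names, and the table structure are illustrative assumptions; the abstract does not specify the actual header format.

```python
# Minimal sketch of stack-tag forwarding across a stacked switch pair.
# Header layout and forwarding-table structure are assumptions for illustration.

def lower_switch_ingress(packet, stacked_uplink):
    """Lower capacity switch: resolve the stack tag and forward on the stacked link."""
    stack_tag = packet["header"]["stack_tag"]      # resolved from the packet header
    return stacked_uplink, stack_tag               # forward based on the stack tag

def higher_switch_forward(packet, forwarding_table):
    """Higher capacity switch: resolve the destination and rewrite the header."""
    dest = packet["header"]["dest_mac"]
    next_hop = forwarding_table.get(dest)          # resolve the destination address
    packet["header"]["stack_tag"] = next_hop       # modify the header before forwarding
    return packet, next_hop
```

At the second lower capacity switch, the egress port would then be looked up from the rewritten stack tag, mirroring the last step of the method.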
Abstract:
A method of forwarding data in a network switch fabric is disclosed. An incoming data packet is received at a first port of the fabric and a first packet portion, less than a full packet length, is read to determine particular packet information, the particular packet information including a source address and a destination address. An egress port bitmap is determined based on a lookup in a forwarding table and it is determined if the destination address belongs to a trunk group of trunked ports. The incoming data packet is forwarded based on the egress port bitmap, when the destination address does not belong to the trunk group. When the destination address does belong to the trunk group, a particular trunked port of the trunk group is determined and the incoming data packet is forwarded thereto. More specifically, the particular trunked port of the trunk group may be determined by calculating a hash value based on the source address and the destination address and selecting the particular trunked port based on the hash value. Additionally, a class of service for the incoming data packet is also determined from the particular packet information and a priority for forwarding is set based on the class of service.
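As a rough illustration of the trunk selection step, the sketch below hashes the source and destination addresses with CRC32 and uses the result to pick a trunk member; the abstract does not name a hash function, so CRC32 and the bitmap width are assumptions.

```python
import zlib

def select_egress(src_mac, dst_mac, egress_bitmap, trunk_ports=None, num_ports=32):
    """Return the forwarding decision: the ports set in the egress bitmap, or a
    single trunk member chosen by hashing the source and destination addresses."""
    if not trunk_ports:                               # destination not on a trunk group
        return [p for p in range(num_ports) if egress_bitmap & (1 << p)]
    h = zlib.crc32(src_mac + dst_mac)                 # hash over source + destination address
    return trunk_ports[h % len(trunk_ports)]          # pick one particular trunked port
```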
Abstract:
An adaptive weighted round robin scheduling apparatus and method schedules variable-length frame transmissions from a plurality of output queues having different transmission priorities by first allocating, for each queue, a number of bandwidth segments for a bandwidth cycle and a number of transmission opportunities for a round robin cycle, and then processing the queues consecutively in a round-robin fashion, beginning with a highest priority queue, until none of the queues has any bandwidth remaining. More specifically, during each iteration of a round robin cycle, a queue is permitted to transmit a frame if the queue has at least one remaining transmission opportunity, the queue has a frame ready for transmission, and the queue has at least one remaining bandwidth segment, and furthermore the number of transmission opportunities for the queue is decremented by at least one. Upon transmitting a frame, the number of bandwidth segments for the queue is decreased by the number of bandwidth segments in the frame. If a queue has no frame ready for transmission, then the queue may be either penalized, in which case the number of bandwidth segments for the queue is reduced, or forced to forfeit its bandwidth segments, in which case any remaining bandwidth segments are reallocated to other queues and the number of bandwidth segments and the number of transmission opportunities for the queue are set to zero.
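The cycle described above can be summarized in the following sketch. The one-segment penalty and the termination check are illustrative assumptions; the abstract leaves those details open.

```python
def awrr_cycle(queues):
    """queues: highest priority first; each dict has 'frames' (frame lengths in
    bandwidth segments), 'bw' (remaining segments), 'opps' (remaining
    transmission opportunities) and a 'penalize' policy flag."""
    sent = []
    while any(q["bw"] > 0 for q in queues):
        progressed = False
        for q in queues:                        # consecutive, round-robin order
            if q["opps"] <= 0 or q["bw"] <= 0:
                continue                        # no opportunity or no bandwidth left
            q["opps"] -= 1                      # consume a transmission opportunity
            progressed = True
            if q["frames"]:
                seglen = q["frames"].pop(0)
                q["bw"] -= seglen               # charge the frame's bandwidth segments
                sent.append(seglen)
            elif q["penalize"]:
                q["bw"] -= 1                    # penalty: assumed to be one segment
            else:
                q["bw"], q["opps"] = 0, 0       # forfeit remaining bandwidth segments
        if not progressed:
            break                               # no queue can use its remaining bandwidth
    return sent
```

In a fuller implementation, forfeited segments would be reallocated to the other queues rather than simply discarded, as the abstract describes.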
Abstract:
A method and a system for improving communication performance between nodes in a network are disclosed. In one embodiment, the system includes routers, switches, and a communication interface. The communication interface detects a communication flow between a source and a destination in response to a flow criterion. Upon detecting the communication flow, the communication interface issues a resolution request for identifying a data path. After receipt of a response to the resolution request, multiple connections between switches are established in response to levels of quality of service (QoS).
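By way of illustration only, the sketch below uses a per-pair packet-count threshold as the flow criterion; the actual criteria, the resolution protocol, and the QoS signaling are not detailed in the abstract.

```python
from collections import defaultdict

class FlowDetector:
    """Detects a communication flow between a source and a destination.
    The packet-count threshold is an assumed, illustrative flow criterion."""

    def __init__(self, threshold=10):
        self.counts = defaultdict(int)
        self.threshold = threshold

    def observe(self, src, dst):
        """Count packets per (source, destination) pair; return the flow key
        once the criterion is met, at which point a resolution request would
        be issued for that flow."""
        key = (src, dst)
        self.counts[key] += 1
        return key if self.counts[key] == self.threshold else None
```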
Abstract:
An apparatus and method for more precise control of congestion on a network provide for remote control, by a local station, of a remote station on the network so as to configure the remote station into a remote loopback configuration. With the remote station thus configured, the local station is then able to determine the latency of the link, during auto-negotiation, for example. Provided with the link latency, a congestion control algorithm in the local station may be adjusted to account for the link latency and thereby better control the input data streams by controlling when a congestion-relieving control signal, such as a PAUSE frame, is transmitted to the remote station to inhibit transmission and relieve congestion.
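One way to fold the measured link latency into a PAUSE-based scheme is sketched below. The buffer model, the headroom value, and the exact adjustment are assumptions made for illustration; the abstract only states that the congestion control algorithm accounts for the latency.

```python
def pause_threshold(buffer_bytes, link_rate_bps, link_latency_s, headroom_bytes=2048):
    """Buffer fill level at which a PAUSE frame should be sent so that data
    still arriving during one round trip does not overflow the buffer."""
    in_flight = (link_rate_bps / 8.0) * (2.0 * link_latency_s)  # bytes in flight after PAUSE
    return buffer_bytes - in_flight - headroom_bytes

def should_send_pause(fill_bytes, buffer_bytes, link_rate_bps, link_latency_s):
    """True when the current fill level has reached the latency-adjusted threshold."""
    return fill_bytes >= pause_threshold(buffer_bytes, link_rate_bps, link_latency_s)
```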
Abstract:
A network interface transmits data packets between a host computer and a network and includes a first in first out (FIFO) buffer memory with an adaptive transmit start point determined for each data packet. The network interface receives data packets from the host computer via a peripheral component interconnect (PCI) bus. A FIFO control determines the byte length of each data packet, measures the minimum fill time indicating the time necessary to fill the FIFO buffer memory with a predetermined minimum amount of data necessary before transmission by the FIFO buffer memory, and calculates the time to fill the FIFO buffer memory with each packet based on the determined length and the measured minimum fill time. The time to empty the packet from the FIFO buffer memory is also calculated based upon the length of the packet and predetermined network transmission rates. If the time to empty the packet from the FIFO buffer memory is greater than or equal to the time to fill the FIFO buffer memory, the transmit start point is set to the predetermined minimum amount; otherwise the transmit start point is adjusted in accordance with the difference in time between filling and emptying the FIFO buffer memory with the packet, a FIFO fill rate based on the measured minimum fill time, and a coefficient that accounts for latencies in the PCI bus. The network interface thus provides an optimal transmit start point for each data packet, minimizing latency and underflow conditions during network transmission.
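The comparison and adjustment can be written out roughly as follows. The exact way the PCI-latency coefficient enters the adjustment is an assumption, since the abstract names the inputs but not the formula.

```python
def transmit_start_point(pkt_len_bytes, min_fill_bytes, min_fill_time_s,
                         net_rate_bps, pci_coeff=1.0):
    """Return the FIFO fill level (bytes) at which transmission may start."""
    fill_rate = min_fill_bytes / min_fill_time_s        # bytes/s arriving over the PCI bus
    time_to_fill = pkt_len_bytes / fill_rate            # time to load the whole packet
    time_to_empty = (pkt_len_bytes * 8) / net_rate_bps  # time to send it on the wire
    if time_to_empty >= time_to_fill:
        return min_fill_bytes                           # the minimum start point is safe
    # Otherwise delay the start so the FIFO cannot underflow mid-packet.
    extra = (time_to_fill - time_to_empty) * fill_rate * pci_coeff
    return min(pkt_len_bytes, min_fill_bytes + extra)
```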
Abstract:
Interpacket delay times are modified in full-duplex Ethernet network devices by calculating for each network station a delay interval based on a time to transmit a data packet at the network rate and a calculated time to transmit the data packet at a desired transmission rate. The network station waits the calculated delay time following a packet transmission before transmitting the next data packet, ensuring that the overall output transmission rate of the network station corresponds to the assigned desired transmission rate. The desired transmission rate is received as a media access control (MAC) control frame from a network management entity, such as a switched hub. Hence, each network station operates at the desired transmission rate, minimizing the occurrence of congestion and eliminating the necessity of PAUSE frames.
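A compact sketch of the delay calculation, assuming the delay is simply the difference between the packet's transmission time at the assigned rate and at the full network rate:

```python
def interpacket_delay(pkt_len_bytes, network_rate_bps, desired_rate_bps):
    """Seconds to wait after a packet so that the average output rate matches
    the desired transmission rate assigned by the network management entity."""
    t_network = (pkt_len_bytes * 8) / network_rate_bps   # actual time on the wire
    t_desired = (pkt_len_bytes * 8) / desired_rate_bps   # time budget at the assigned rate
    return max(0.0, t_desired - t_network)

# Example: a 1500-byte frame on a 100 Mb/s link with a 10 Mb/s assigned rate
# yields a wait of roughly 1.08 ms (1.2 ms budget minus 0.12 ms wire time).
print(interpacket_delay(1500, 100e6, 10e6))
```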
Abstract:
Delay times are modified in Ethernet network devices by adding a randomized time interval generated in accordance with a propagation delay between two network stations. A server in a client-server arrangement is given priority access over clients by adding to the clients' InterPacket Gap (IPG) interval a random time delay between one and two times the cable delay between the server and the corresponding client. The server can access the network media after the IPG interval, whereas clients must wait the additional random time delay before accessing the media, thereby improving server throughput and overall network throughput. Collision mediation is improved by adding a randomly selected integer multiple of a propagation delay between two stations, where the integer multiplier is randomly selected from a predetermined range of integers. The randomly selected integer multiple of the propagation delay provides a second dimension of random selection to minimize subsequent collisions and minimize the occurrence of capture effects in losing stations.
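The two delay mechanisms might be sketched as below; the integer range for collision mediation is left as a parameter starting at zero, which is an assumption, since the abstract only calls it a predetermined range.

```python
import random

def client_ipg_delay(ipg_s, cable_delay_s):
    """Clients wait the IPG plus a random one-to-two times the cable delay;
    the server waits only the IPG and thereby gains priority access."""
    return ipg_s + random.uniform(1.0, 2.0) * cable_delay_s

def collision_mediation_delay(propagation_delay_s, max_multiplier):
    """Back off by a randomly selected integer multiple of the propagation delay."""
    return random.randint(0, max_multiplier) * propagation_delay_s
```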
Abstract:
Delay times are modified in Ethernet network devices by adding an integer multiple of a delay interval to the minimum interpacket gap (IPG) interval, and decrementing a deferral counter storing the integer in each network station in response to detected activity on the media. Each station independently determines the number of stations active on the network media by counting the number of successful packet receptions following a corresponding detected collision. Once the number of detected collisions equals the number of stations (N) minus one, each station independently establishes a unique integer value from the range of zero to the number of detected collisions, i.e., up to the number of stations (N) minus one, by resetting the deferral counter to (N-1) after a successful transmission, and by decrementing the deferral counter upon detection of a successful transmission without collision by another station. The unique integer value ensures that each station has a different delay interval in accessing the media after sensing deassertion of the receive carrier. Each network station also includes a deferral timer that counts the maximum delay interval of (N-1) delay intervals plus the minimum IPG value, and thus establishes a bounded access latency for a half-duplex shared network.
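The per-station counter behavior can be sketched as follows, with N taken as the station count the station has inferred from observed traffic; the class and method names are illustrative.

```python
class DeferralCounter:
    """Tracks one station's deferral value on a half-duplex shared segment."""

    def __init__(self, n_stations):
        self.n = n_stations
        self.counter = n_stations - 1        # reset to N-1 after own transmission

    def extra_delay_intervals(self):
        """Delay intervals added to the minimum IPG after carrier deassertion."""
        return self.counter

    def on_own_transmission(self):
        self.counter = self.n - 1            # move to the back of the implicit order

    def on_other_transmission(self):
        if self.counter > 0:
            self.counter -= 1                # another station transmitted without collision
```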
Abstract:
A network includes a combination of carrier-sense stations and Universal Multiple Access (UMA) stations using a time slot multiple access protocol. The network is configured to include assigned time slots for the respective UMA stations and unassigned time slots reserved for the carrier-sense stations to access the shared network media. Each of the UMA stations is provided with a corresponding assigned time slot and the total number of time slots. Since the UMA stations access the media only during the assigned time slot, the carrier-sense stations can contend for access to the media after waiting a minimum interpacket gap (IPG) after sensing deassertion of the receive carrier on the media. The UMA stations may also be modified to attempt access of the media using Ethernet-compliant, carrier-sense multiple-access with collision detection (CSMA/CD) protocol when a current time slot corresponds to a mixed-use time slot.
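The slot-access decision for a UMA station might look like the sketch below; the slot numbering and the mixed-use flag are assumptions made for illustration.

```python
def uma_access_decision(current_slot, assigned_slot, total_slots, mixed_use_slots=()):
    """Return what a UMA station may do in the current time slot."""
    slot = current_slot % total_slots
    if slot == assigned_slot:
        return "transmit"        # the station's own assigned time slot
    if slot in mixed_use_slots:
        return "contend"         # fall back to CSMA/CD contention in a mixed-use slot
    return "defer"               # slot reserved for another station or carrier-sense traffic
```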