Abstract:
A system for minimizing congestion in a communication system is disclosed. The system comprises at least one ingress system for providing data. The ingress system includes a first free queue and a first flow queue. The system also includes a first congestion adjustment module for receiving congestion indications from the free queue and the flow queue. The first congestion adjustment module generates and stores transmit probabilities and performs per packet flow control actions. The system further includes a switch fabric for receiving data from the ingress system and for providing a congestion indication to the ingress system. The system further includes at least one egress system for receiving the data from the switch fabric. The egress system includes a second free queue and a second flow queue. The system also includes a second congestion adjustment module for receiving congestion indications from the second free queue and the second flow queue. The second congestion adjustment module generates and stores transmit probabilities and performs per packet flow control actions. Finally, the system includes a scheduler for determining the order and timing of transmission of packets out of the egress system and to another node or destination. A method and system in accordance with the present invention provides for a unified method and system for logical connection of congestion with the appropriate flow control responses. The method and system utilize congestion indicators within the ingress system, egress system, and the switch fabric in conjunction with a coarse adjustment system and fine adjustment system within the ingress device and the egress device to intelligently manage the system.
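The coarse/fine adjustment of per-flow transmit probabilities described above can be illustrated with a minimal sketch. The class name, the multiplicative/additive constants, and the queue-indication interface are all illustrative assumptions, not the patented design:

```python
import random

class CongestionAdjuster:
    """Sketch of a congestion adjustment module (illustrative names).

    A transmit probability is kept per flow. A congestion indication from
    the free queue or flow queue cuts it multiplicatively (coarse
    adjustment); an all-clear lets it recover additively (fine
    adjustment). Each packet is then transmitted or discarded by
    comparing a random draw against the flow's probability.
    """

    def __init__(self, decrease=0.5, increase=0.05):
        self.decrease = decrease  # multiplicative cut on congestion
        self.increase = increase  # additive recovery when clear
        self.tx_prob = {}         # per-flow transmit probability

    def on_congestion_indication(self, flow, congested):
        p = self.tx_prob.get(flow, 1.0)
        if congested:
            p *= self.decrease                   # coarse: back off fast
        else:
            p = min(1.0, p + self.increase)      # fine: recover slowly
        self.tx_prob[flow] = p
        return p

    def admit(self, flow, rng=random.random):
        # Per-packet flow control action: transmit with probability p.
        return rng() < self.tx_prob.get(flow, 1.0)
```

The multiplicative-decrease/additive-increase shape mirrors classic probabilistic flow control (e.g. RED-style discard), which is one plausible reading of the coarse/fine split.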
Abstract:
Network processors commonly utilize DRAM chips for the storage of data. Each DRAM chip contains multiple banks for quick storage of data and access to that data. Latency in the transfer or the ‘write’ of data into memory can occur because of a phenomenon referred to as memory bank polarization. By a procedure called quadword rotation, this latency effect is effectively eliminated. Data frames received by the network processor are transferred to a receive queue (FIFO). The frames are divided into segments that are written into the memory of the DRAM in accordance with a formula that rotates the distribution of each segment into the memory banks of the DRAM.
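The rotation idea can be sketched with a simple placement formula: consecutive segments of a frame land in consecutive banks, and the starting bank rotates per frame, so back-to-back frames do not polarize their writes onto the same bank sequence. The exact formula here is an illustrative assumption, not the patented one:

```python
def rotated_bank(frame_index, segment_index, num_banks=4):
    # Rotate the starting bank per frame; walk banks per segment.
    return (frame_index + segment_index) % num_banks

def distribute(frame_index, segments, num_banks=4):
    """Map each segment of a received frame to a DRAM bank.

    Returns a list of (segment, bank) pairs showing how the rotation
    spreads writes evenly across the banks.
    """
    return [(seg, rotated_bank(frame_index, i, num_banks))
            for i, seg in enumerate(segments)]
```

For example, frame 0's segments go to banks 0, 1, 2, ... while frame 1's start at bank 1, so no bank is hit first by every frame.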
Abstract:
A Network Processor includes a Fat Pipe Port and a memory sub-system that provides sufficient data to satisfy the Bandwidth requirements of the Fat Pipe Port. The memory sub-system includes a plurality of DDR DRAMs controlled so that data is extracted from one DDR DRAM or simultaneously from a plurality of the DDR DRAMs. By controlling the DDR DRAMs so that the outputs provide data serially or in parallel, the data Bandwidth is adjustable over a wide range. Similarly, data is written serially into one DDR DRAM or simultaneously into multiple DDR DRAMs. As a consequence, buffers with data from the same frame are written into or read from different DDR DRAMs.
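A small sketch of the serial-vs-parallel placement, under the assumption that buffers are striped round-robin across the DRAMs (the function name and the round-robin policy are illustrative, not taken from the patent):

```python
def assign_buffers(num_drams, buffers, parallel=1):
    """Spread a frame's buffers across DDR DRAMs.

    With parallel == 1, buffers are written serially, one DRAM at a time
    (round-robin). With parallel == num_drams, each write touches all
    DRAMs at once, multiplying the usable bandwidth. Either way,
    consecutive buffers of the same frame land in different DRAMs.
    Returns a list of (buffer, [dram indices]) pairs.
    """
    placement = []
    for i, buf in enumerate(buffers):
        start = (i * parallel) % num_drams
        drams = [(start + j) % num_drams for j in range(parallel)]
        placement.append((buf, drams))
    return placement
```

With four DRAMs and `parallel=2`, buffer 0 is written to DRAMs 0 and 1 simultaneously, buffer 1 to DRAMs 2 and 3, and so on, doubling the write bandwidth per buffer.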
Abstract:
A method for packet reordering in a network processor, including the steps of processing packets, dividing the processed packets into a plurality of tiers, reordering the tiers independently from each other, and collecting eligible packets from the plurality of tiers in a collector for forwarding. The method further includes, during the processing, determining the nominal packet processing time of each packet. The processed packets are divided into the plurality of tiers depending on the nominal packet processing time.
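The tiering step can be sketched as follows. The threshold-based tier assignment and the restore-by-sequence-number reordering are illustrative assumptions about how "reordering the tiers independently" might be realized:

```python
def reorder_in_tiers(done_packets, thresholds):
    """Sketch of tiered packet reordering.

    done_packets: list of (seq, nominal_time) tuples in completion order.
    thresholds:   ascending tier boundaries on nominal processing time;
                  a packet goes to the first tier whose bound exceeds
                  its nominal time, else to the last tier.

    Each tier is reordered independently (restoring original sequence
    order), then the collector gathers the packets tier by tier.
    """
    tiers = [[] for _ in range(len(thresholds) + 1)]
    for seq, t in done_packets:
        for i, bound in enumerate(thresholds):
            if t < bound:
                tiers[i].append((seq, t))
                break
        else:
            tiers[-1].append((seq, t))

    collected = []
    for tier in tiers:
        collected.extend(sorted(tier))  # independent per-tier reorder
    return collected
```

Because slow packets only ever block other packets in their own tier, fast packets are not held back behind a slow one from a different tier.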
Abstract:
A Network Processor (NP) includes a controller that allows maximum utilization of the memory. The controller includes a memory arbiter that monitors memory access requests from requesters in the NP and awards high priority requesters all the memory bandwidth requested per access to the memory. If the memory bandwidth requested by the high priority requester is less than the full memory bandwidth, the difference between the requested bandwidth and full memory bandwidth is assigned to lower priority requesters. By so doing every memory access utilizes the full memory bandwidth.
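The arbitration rule can be sketched in a few lines. The representation of requests as (name, priority, bandwidth) tuples is an assumption of this sketch:

```python
def arbitrate(full_bw, requests):
    """Sketch of the memory arbiter.

    requests: list of (requester, priority, bandwidth) tuples, where a
    lower priority number means higher priority. The highest-priority
    requester is granted everything it asked for; any difference between
    its request and the full memory bandwidth is handed down to lower
    priority requesters, so the access uses the full bandwidth whenever
    enough demand exists. Returns a {requester: granted_bw} dict.
    """
    grants = {}
    remaining = full_bw
    for name, _prio, bw in sorted(requests, key=lambda r: r[1]):
        grant = min(bw, remaining)
        if grant:
            grants[name] = grant
            remaining -= grant
    return grants
```

For instance, if the memory is 64 bytes wide per access and the high-priority requester asks for 48, the remaining 16 bytes of that same access go to the next requester instead of being wasted.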
Abstract:
Verifying subscriber host connectivity is disclosed. In some embodiments, a unicast address resolution protocol (ARP) request is sent to a subscriber host, and based at least in part on whether a response to the request is received from the subscriber host, it is determined whether the subscriber host remains connected to a network.
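A minimal sketch of the liveness-check loop. The `send_arp` callable stands in for the actual unicast ARP transmission and timeout handling (a raw-socket implementation would go there); the retry count is an illustrative assumption:

```python
def verify_connectivity(host_ip, send_arp, retries=3):
    """Sketch: probe a subscriber host with unicast ARP requests.

    send_arp(host_ip) is assumed to send one unicast ARP request and
    return True iff a reply arrives before a timeout. After `retries`
    unanswered requests the host is deemed disconnected.
    """
    for _ in range(retries):
        if send_arp(host_ip):
            return True   # host replied: still connected
    return False          # no reply: host may be aged out / removed
```

Using unicast (rather than broadcast) ARP keeps the probe from disturbing other hosts on the same segment.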
Abstract:
Managing subscriber host information is disclosed. New or updated information about a subscriber host is received. It is determined whether the subscriber host is associated with a multi-chassis peering. If it is determined that the subscriber host is associated with a multi-chassis peering, the new or updated information is propagated to a peer chassis associated with the multi-chassis peering.
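The decision-and-propagate step can be sketched as below. The data shapes (a host-to-peer map, a local store dict, a `send_to_peer` transport callable) are assumptions of this sketch, not the patented interfaces:

```python
def handle_host_update(host, update, peers, local_store, send_to_peer):
    """Sketch: apply a new/updated subscriber-host record locally and,
    if the host belongs to a multi-chassis peering, forward the update
    to the peer chassis over the inter-chassis path.

    peers: {host: peer_chassis or absent} association table.
    Returns True iff the update was propagated to a peer.
    """
    local_store[host] = update          # always apply locally
    peer = peers.get(host)
    if peer is not None:
        send_to_peer(peer, host, update)  # propagate to the peering peer
        return True
    return False
```

Keeping the peer chassis synchronized this way lets it take over the subscriber state if the primary chassis fails.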
Abstract:
Associating hosts with subscriber and service based requirements is disclosed. An identifier is extracted from a DHCP or other network address lease communication associated with a subscriber host. The identifier is used to associate the subscriber host with a requirement that is subscriber based, service based, or both subscriber and service based. The subscriber host is included in a set of one or more subscriber hosts associated with the subscriber, the service, or both, as applicable to the requirement, and the requirement is enforced collectively across the one or more subscriber hosts comprising the set.
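The grouping and collective enforcement can be sketched as follows. The class name and the choice of an aggregate rate limit as the example requirement are purely illustrative assumptions:

```python
class SubscriberSets:
    """Sketch: group hosts by an identifier extracted from DHCP lease
    traffic, then enforce one requirement over the whole set (here an
    aggregate usage limit, as an illustrative requirement).
    """

    def __init__(self):
        self.sets = {}  # identifier -> set of subscriber hosts

    def add_host(self, identifier, host):
        # Identifier would come from e.g. DHCP option 82 circuit-id
        # parsing (an assumption of this sketch).
        self.sets.setdefault(identifier, set()).add(host)

    def enforce(self, identifier, shared_limit, usage):
        """Collective check: the set's total usage against one shared
        limit, rather than a per-host limit."""
        hosts = self.sets.get(identifier, ())
        total = sum(usage.get(h, 0) for h in hosts)
        return total <= shared_limit
```

The key point the abstract makes is that the limit binds the set as a whole: two hosts of one subscriber share a single budget instead of each receiving their own.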
Abstract:
Aggregating links across multiple chassis is disclosed. An indication that one or more local links are to be aggregated with one or more links on another chassis is received. Coordination with the other chassis is performed, via an inter-chassis control path, to present the one or more local links and the one or more links on the other chassis to downstream equipment as a single aggregated group of links.
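A minimal sketch of assembling the presented group. The `exchange` callable stands in for the inter-chassis control path and is assumed to return the peer's confirmed link set; all names here are illustrative:

```python
def aggregate_group(local_links, peer_links, exchange):
    """Sketch: build the link set presented downstream as one
    aggregated group (LAG) spanning two chassis.

    exchange(peer_links) models the inter-chassis control path and
    returns the links the peer chassis confirms for aggregation.
    """
    confirmed = exchange(peer_links)       # coordinate over control path
    return sorted(set(local_links) | set(confirmed))
```

Downstream equipment sees only the merged group, so it can load-balance across both chassis as if they were one device.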