Abstract:
A packet forwarding node for a computer network comprises at least one receiving module and at least one output module including a packet list (21) for maintaining a list of packets to be transmitted therefrom. The time for which a packet remains in the node is determined by grouping the packets into groups or "buckets" which are created at regular intervals, each bucket containing the packets arriving within the same time interval, and keeping track of the age of each bucket. A bucket counter (33) counts the total number of buckets in existence, thus indicating the age of the oldest packet. This counter is incremented by 1 at regular intervals and decremented by 1 each time the oldest bucket is emptied (or found to be empty). A bucket list shift register (30) has its contents shifted at each change of time interval; its bottom stage accumulates the number of packets arriving in a time interval, and an overflow accumulator (31) accumulates counts shifted out of its top end. The bucket list shift register may comprise a plurality of sections, each of which is shifted at an exact submultiple of the shifting rate of the previous section, the bottom stage of each section accumulating counts shifted out of the previous section. In an alternative embodiment, bucket boundary markers are inserted into the packet list at each change of time interval.
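The bucket scheme above can be sketched in a few lines. This is a minimal illustrative model, not the patented implementation: the class and method names are invented, and a deque stands in for the hardware shift register and overflow accumulator.

```python
from collections import deque

class BucketAgeTracker:
    """Sketch of the bucket scheme: packets arriving in the same time
    interval share one bucket, and the number of live buckets bounds
    the age of the oldest queued packet."""

    def __init__(self):
        self.buckets = deque([0])   # buckets[0] is oldest, buckets[-1] is current
        self.bucket_counter = 1     # total number of buckets in existence

    def tick(self):
        """A new time interval starts: open a fresh (bottom) bucket."""
        self.buckets.append(0)
        self.bucket_counter += 1
        self._drop_empty_oldest()

    def packet_arrived(self):
        # accumulate arrivals in the current interval's bucket
        self.buckets[-1] += 1

    def packet_sent(self):
        """The oldest queued packet leaves the node."""
        self._drop_empty_oldest()
        self.buckets[0] -= 1
        self._drop_empty_oldest()

    def _drop_empty_oldest(self):
        # Decrement the counter each time the oldest bucket is emptied
        # (or found to be empty), always keeping the current bucket.
        while len(self.buckets) > 1 and self.buckets[0] == 0:
            self.buckets.popleft()
            self.bucket_counter -= 1

    def oldest_age_intervals(self):
        """Upper bound, in tick intervals, on the oldest packet's age."""
        return self.bucket_counter
```

Because only a per-bucket count is kept, the cost of tracking packet age is independent of the number of packets queued.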
Abstract:
A scheme for efficient implementation of workload partitioning between separate receive and transmit processors is provided so that a message can be effectively moved through a multiprocessor router. Generally, each receiving processor collects, into a digest, information relating to network protocol processing of a particular message, obtained via sequential byte processing of the message at the time of reception of the message. The information placed into the digest is information that is necessary for the completion of the processing tasks to be performed by the processor of the transmitting line card. The digest is passed to the transmit processor through a buffer exchange between the receive and transmit processors. The transmit processor reads the digest before processing the related message for transmission and uses the information in the network protocol processing of the message. Thus, the transmit processor does not have to "look ahead" to bytes of the message needed to complete certain processing functions already completed by the receive processor, and does not require extra buffering and/or memory bandwidth to make the modifications to the message.
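The receive/transmit split can be illustrated with a toy sketch. The header layout, field names, and digest keys below are all assumptions chosen for illustration, not the format described in the patent.

```python
def receive_process(packet: bytes) -> dict:
    """Receive side: parse the message once, byte by byte, and collect
    into a digest the fields the transmit processor will later need.
    (Header layout here is hypothetical: 4-byte destination, then a
    hop-count byte, then a flags byte, then the payload.)"""
    return {
        "dst_addr": packet[0:4],    # assumed 4-byte destination field
        "ttl_offset": 4,            # where the hop-count byte lives
        "payload_offset": 6,        # first byte after the assumed header
    }

def transmit_process(packet: bytearray, digest: dict) -> bytearray:
    """Transmit side: use the digest instead of re-scanning ("looking
    ahead" into) the message; here, decrement the hop count at the
    offset the receive processor recorded."""
    packet[digest["ttl_offset"]] -= 1
    return packet
```

The point of the sketch is that the transmit side touches only the bytes the digest directs it to, rather than re-parsing the header sequentially a second time.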
Abstract:
A known congestion avoidance system for computer networks detects congestion at a node output port if the average queue length (integral) over the last congestion cycle plus the current (incomplete) cycle exceeds a fixed constant (taken as 1). (A congestion cycle is a period for which the queue length is 1 or more, plus the following period for which the queue length is 0.) The time of arrival or departure of a message is stored at 21, the interval from the previous event is calculated at 22 and 23, the length of the current cycle is incremented at 25 by adding in the interval just determined, and the queue length at 26 is incremented or decremented by 1. The running integral for the current cycle is updated by adding into it the product, formed at 27, of the interval since the last event (stored at 23) and the current queue length. The integrals for the current and previous cycles (stored at 24 and 30) are added, the lengths of those two cycles (stored at 29 and 31) are added, and the first sum is divided at 34 by the second to obtain a grand average queue length. If that exceeds a preset value, a congestion bit is set in messages leaving that node output port.

In the present system, the running queue length average (in 29') is maintained by adding (at 28') the queue length (in 26') into the average at regular intervals determined by timer ticks (from 60), thus using integer addition instead of integer multiplication. The grand average is compared with the preset value by comparing (at 61) the total of the queue length averages with the total of the cycle periods, thus using integer addition and comparison instead of floating point operations.
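The integer-only test can be sketched as follows. This is an illustrative model with invented names, taking the threshold as an average queue length of 1 (as in the known scheme), so that "average exceeds 1" reduces to "sum of samples exceeds number of ticks":

```python
class CongestionMonitor:
    """Sketch: sample the queue length at timer ticks (integer adds
    only) and compare totals instead of dividing."""

    def __init__(self):
        self.queue_len = 0
        self.sum_prev = 0    # sum of queue-length samples, previous cycle
        self.ticks_prev = 0  # number of timer ticks in the previous cycle
        self.sum_cur = 0
        self.ticks_cur = 0

    def arrival(self):
        self.queue_len += 1

    def departure(self):
        self.queue_len -= 1

    def tick(self):
        """Timer tick: integer-add the current queue length into the
        running sum -- no multiplication by an interval length."""
        self.sum_cur += self.queue_len
        self.ticks_cur += 1
        if self.queue_len == 0:
            # end of a congestion cycle: roll current into previous
            self.sum_prev, self.ticks_prev = self.sum_cur, self.ticks_cur
            self.sum_cur = self.ticks_cur = 0

    def congested(self) -> bool:
        """avg > 1  <=>  total samples > total ticks (integer compare,
        no floating point division)."""
        return (self.sum_prev + self.sum_cur) > (self.ticks_prev + self.ticks_cur)
```

Sampling at fixed ticks replaces the interval-times-queue-length product of the known scheme with a plain addition, and cross-multiplying away the division leaves only an integer comparison.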
Abstract:
The present invention provides an interlock scheme for use between a line card and an address recognition apparatus. The interlock scheme reduces the total number of read/write operations, over a backplane bus coupling the line card to the address recognition apparatus, required to complete a request/response transfer. Thus, the line card and address recognition apparatus are able to perform a large number of request/response transfers with a high level of system efficiency. Generally, the interlock scheme according to the present invention merges each ownership information storage location into the location of the request/response memory utilized to store the corresponding request/response pair, to reduce data transfer traffic over the backplane bus. According to another feature of the interlock scheme of the present invention, each of the line card and the address recognition engine includes a table for storing information relating to a plurality of database specifiers. Each of the database specifiers contains control information for the traversal of a lookup database used by the address recognition apparatus. At the time the processor of a line card generates a request for the address recognition apparatus, it will analyze the protocol type information contained in the header of a data packet. The processor will utilize the protocol type information as a look-up index to its table of database specifiers for selection of one of the database specifiers. The processor will then insert an identification of the selected database specifier into the request with the network address extracted from the data packet.
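The ownership-merging idea can be sketched with a toy slot model. All names here are invented for illustration; the point is that depositing data and passing ownership happen in one write to the same memory location, instead of one write for the payload and a second for a separate ownership flag.

```python
LINE_CARD, ENGINE = 0, 1  # who currently owns the slot

class Slot:
    """Sketch of a request/response memory slot with the ownership
    flag merged into the slot itself."""

    def __init__(self):
        self.owner = LINE_CARD
        self.data = None

    def post_request(self, request):
        """Line card side: a single write deposits the request and
        hands ownership to the engine."""
        assert self.owner == LINE_CARD, "slot still busy"
        self.data = request
        self.owner = ENGINE   # ownership travels with the data

    def post_response(self, response):
        """Engine side: a single write deposits the response and
        returns ownership to the line card."""
        assert self.owner == ENGINE, "no pending request"
        self.data = response
        self.owner = LINE_CARD
```

In hardware the flag would share a word with the slot contents so the handoff is one bus write; the Python attributes merely mimic that pairing.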
Abstract:
The present invention is directed to an address recognition apparatus including an address recognition engine coupled to a look-up database. The look-up database is arranged to store network information relating to network addresses. The look-up database includes a primary database and a secondary database. The address recognition engine accepts as an input a network address for which network information is required. The address recognition engine uses the network address as an index to the primary database. The primary database comprises a multiway tree node structure (TRIE) arranged for traversal of the nodes as a function of preselected segments of the network address and in a fixed sequence of the segments to locate a pointer to an entry in the secondary database. The entry in the secondary database pointed to by the primary database pointer contains the network information corresponding to the network address. The address recognition engine includes a table for storing a plurality of database specifiers. Each of the database specifiers contains control information for the traversal of the primary and secondary databases. In addition, each of the nodes in the primary database and each of the entries in the secondary database is provided with control data structures that are programmable to control the traversal of the database.
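A minimal sketch of the two-level lookup follows, with dictionaries standing in for trie nodes and a list for the secondary database; the 4-bit segment width and the structure are illustrative assumptions, not the patented node format.

```python
def insert(trie: dict, secondary: list, address: bytes, info) -> None:
    """Walk (creating as needed) the multiway trie over 4-bit segments
    of the address, in fixed high-to-low order, and leave a pointer to
    a new secondary-database entry at the final node."""
    node = trie
    for byte in address:
        for seg in ((byte >> 4) & 0xF, byte & 0xF):
            node = node.setdefault(seg, {})
    node["ptr"] = len(secondary)        # primary -> secondary pointer
    secondary.append(info)

def lookup(trie: dict, secondary: list, address: bytes):
    """Traverse the trie with the same fixed segment sequence; follow
    the pointer into the secondary database to fetch the network
    information for the address."""
    node = trie
    for byte in address:
        for seg in ((byte >> 4) & 0xF, byte & 0xF):
            node = node.get(seg)
            if node is None:
                return None             # no entry for this address
    ptr = node.get("ptr")
    return secondary[ptr] if ptr is not None else None
```

Splitting the lookup into a compact primary trie and a separate secondary table keeps the per-node fan-out small while allowing arbitrarily large per-address records.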
Abstract:
A method and apparatus for aliasing an address for a location in a memory system. The aliasing permits an address generating unit to access a memory block of variable size based upon an address space of fixed size so that the size of the memory block can be changed without changing the address generating software of the address generating unit. The invention provides an address aliasing device arranged to receive an address from the address generating unit. The address aliasing device includes a register that stores memory block size information. The memory block size information is read by the address aliasing device and decoded to provide bit information representative of the size of the memory block. The address aliasing device logically combines the bit information with appropriate corresponding bits of the input address to provide an alias address that is consistent with the size of the memory block.
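The aliasing logic reduces to decoding the size register into a mask and combining it with the address. The sketch below assumes, purely for illustration, that the register holds the block size as a power-of-two exponent:

```python
def alias(address: int, size_register: int) -> int:
    """Sketch of the address aliasing device: decode the stored size
    code into a bit mask and AND it with the incoming address, so
    software written against a fixed address space wraps within the
    actual memory block."""
    block_size = 1 << size_register   # e.g. code 16 -> a 64 KiB block
    mask = block_size - 1             # decoded bit information
    return address & mask             # logical combination with the address
```

Changing the block size then means rewriting one register, not the address-generating software: any address the software emits is folded into the block that is actually present.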
Abstract:
The present invention is directed to a buffer swapping scheme to communicate a message from a first device to a second device, wherein a pointer to a free buffer is returned to the first device by the second device as a condition for the first device to pass a pointer to a buffer containing a message intended for the second device.
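The one-for-one exchange can be sketched as below; the queue names are invented, and pointers are modeled as plain strings.

```python
from collections import deque

def exchange(msg_queue: deque, free_list: deque, returned_free):
    """Sketch of the buffer swap: the free-buffer pointer handed back
    by the consumer is the condition for releasing a pointer to a
    buffer holding a message. Returns the message-buffer pointer, or
    None (keeping the free buffer with the caller) if nothing is
    queued."""
    if not msg_queue:
        return None
    free_list.append(returned_free)   # producer regains a buffer to fill
    return msg_queue.popleft()        # consumer takes the message buffer
```

Because every buffer taken is paid for with a buffer returned, neither device's pool can be drained by the other.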
Abstract:
An apparatus and method are described for constructing a repair path for use in the event of failure of an inter-routing domain connection between respective components in first and second routing domains of a data communications network. The apparatus is arranged to assign a propagatable repair address for use in the event of failure of the inter-routing domain connection and to propagate the repair address via data communications network components other than the inter-routing domain connection.
Abstract:
An apparatus is provided for providing reachability, for a destination address external to the routing domain, in a routing domain of a data communications network having as components nodes and links therebetween. The apparatus is arranged to advertise destination address reachability internally to nodes in the routing domain and to associate a reachability category with the internal advertisement of the destination address reachability.