Abstract:
This invention is a host channel adapter, and a corresponding method, for transferring packet data over a network. When packets are distributed by a packet-switching system, a control unit and a plurality of header buffers allow packet transmission to be carried out with fewer read and move operations. By reducing repeated reading and moving of packets, the control unit enables the host channel adapter to use memory bandwidth efficiently.
Abstract:
A network switch for network communications includes a first data port interface. The first data port interface supports a plurality of data ports transmitting and receiving data at a first data rate. A second data port interface is provided; the second data port interface supports a plurality of data ports transmitting and receiving data at a second data rate. A CPU interface is provided, with the CPU interface configured to communicate with a CPU. An internal memory is provided, and communicates with the first data port interface and the second data port interface. A memory management unit is provided, and includes an external memory interface for communicating data between at least one of the first and second data port interfaces and an external memory. A communication channel is provided, with the communication channel communicating data and messaging information between the first data port interface, the second data port interface, the internal memory, and the memory management unit. A plurality of semiconductor-implemented lookup tables are provided, with the lookup tables including an address resolution lookup/layer three lookup, rules tables, and VLAN tables. One of the data port interfaces is configured to update the address resolution table based on newly learned layer two addresses. An update to an address table associated with an initial data port interface of the first and second data port interfaces results in the initial data port interface sending a synchronization signal to the other address resolution tables in the network switch. As a result, all address resolution tables on the network switch are synchronized on a per-entry basis.
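The per-entry synchronization described above can be sketched as follows. This is an illustrative software model, not the patented circuit, and all names (`Switch`, `PortInterface`, `learn`) are invented for the example.

```python
class PortInterface:
    """One data port interface holding its own address resolution table."""

    def __init__(self, switch):
        self.switch = switch
        self.arl_table = {}  # MAC address -> port number

    def learn(self, mac, port):
        """Update the local table, then signal the other interfaces."""
        self.arl_table[mac] = port
        self.switch.synchronize(self, mac, port)


class Switch:
    """Switch with several interfaces whose tables stay synchronized."""

    def __init__(self, n_interfaces):
        self.interfaces = [PortInterface(self) for _ in range(n_interfaces)]

    def synchronize(self, origin, mac, port):
        """Per-entry sync: push only the changed entry to the other tables."""
        for iface in self.interfaces:
            if iface is not origin:
                iface.arl_table[mac] = port
```

Because only the changed entry travels to the other tables, a single learned address costs one small update per interface rather than a full table copy.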
Abstract:
A packet processing apparatus comprises a programmable hardware discriminator for receiving incoming packets, and selecting bits from any part of the incoming packets, a decision table for storing information relating to how the packets are to be processed, programmable hardware searching logic for accessing the information in the table according to the selected bits, and a packet handler for processing the packets according to the result of the access. Since many networking processing tasks can be broken down into bit selection and table searching, this generic type of arrangement will suit a wide variety of applications. It facilitates developing logic directly in hardware which can reduce the effort needed to convert a working prototype into a product ready for use in the field, e.g. for handling new protocol components.
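Since the abstract reduces many networking tasks to bit selection plus table search, a minimal software model is easy to sketch. The function names, the byte-level packet representation, and the default action are assumptions for illustration, not details from the patent.

```python
def select_bits(packet: bytes, bit_positions):
    """Build a lookup key from arbitrary bit offsets into the packet
    (bit 0 is the most significant bit of the first byte)."""
    key = 0
    for pos in bit_positions:
        byte, bit = divmod(pos, 8)
        key = (key << 1) | ((packet[byte] >> (7 - bit)) & 1)
    return key


def process(packet, bit_positions, decision_table, default="drop"):
    """Discriminator + search: selected bits form the key into the
    decision table, whose entry tells the handler what to do."""
    key = select_bits(packet, bit_positions)
    return decision_table.get(key, default)
```

Changing the protocol handled by such an apparatus amounts to reprogramming `bit_positions` and the decision table, which is what makes the arrangement generic.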
Abstract:
A flow control method is provided for an Ethernet switch that is a downstream device using full duplex mode in a packet-switched network, the network having a plurality of input ports connected to a plurality of upstream Ethernet switches and a common memory for storing packet data received from each input port and for transmitting packet data read from the common memory to a destination upstream device. In this flow control method, the buffer state of the common memory is first determined. If the buffer state is buffer-full, a pause frame including a predetermined pause time is transmitted to the upstream Ethernet switches and an expected pause time for the upstream devices is counted. When the expected pause time expires, the buffer state of the common memory is determined again. If the buffer state is still buffer-full, the pause frame including the predetermined pause time is re-transmitted to the upstream Ethernet switches and the expected pause time of the upstream devices is restarted.
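The pause-and-recheck loop described above can be sketched as follows, with time abstracted into counted rounds. `PAUSE_TIME`, the callback interfaces, and the round limit are assumptions for illustration, not values from the patent.

```python
PAUSE_TIME = 4  # assumed predetermined pause time, in abstract ticks


def flow_control(buffer_is_full, send_pause, max_rounds=10):
    """Repeat the described cycle: while the common memory is buffer-full,
    send a pause frame to all upstream switches, wait out the expected
    pause time, then check the buffer state again.  Returns the number of
    pause frames sent."""
    rounds = 0
    while rounds < max_rounds and buffer_is_full():
        send_pause(PAUSE_TIME)        # pause frame to every upstream device
        for _tick in range(PAUSE_TIME):
            pass                      # stand-in for counting the pause time
        rounds += 1
    return rounds
```

A usage sketch: with a buffer that reports full twice and then drains, the loop sends exactly two pause frames and stops.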
Abstract:
A high-speed memory is provided, the memory having a write port and a read port and comprising the following: a plurality of N memory modules for storing fixed-size cells, which are segments of a variable-size packet divided into X cells, the X cells being grouped into ⌈X/N⌉ groups of cells; a read-write control block comprising a means for receiving cells from the write port and storing each cell belonging to the same group in a selected different one of the N memory modules at the same memory address (the group address); and a multi-cell pointer (MCP) storage for storing an MCP for said group of cells (an associated MCP) at an MCP address, the MCP having N memory module identifiers to record the order in which cells of said group of cells are stored in the N memory modules, the MCP address being the same as the group address. Corresponding methods for storing and retrieving cells, single-cell packets, and segmented variable-size packets in such memory are also provided.
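The group-striping and multi-cell-pointer scheme can be modeled in a few lines. Here each memory module is a dictionary keyed by address, an illustrative stand-in for the hardware modules; the function names are invented for the example.

```python
def store_group(modules, cells, group_addr, module_order):
    """Write each cell of one group into a distinct module at the same
    address (the group address) and return the MCP: the module
    identifiers in storage order."""
    mcp = []
    for cell, m in zip(cells, module_order):
        modules[m][group_addr] = cell
        mcp.append(m)
    return mcp


def retrieve_group(modules, group_addr, mcp):
    """Read the cells back in their original order using the MCP, which
    is stored at the same address as the group itself."""
    return [modules[m][group_addr] for m in mcp]
```

Because the N cells of a group land in N different modules, they can be written or read in parallel, which is the point of the striping; the MCP preserves the ordering that the parallel placement would otherwise lose.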
Abstract:
A high-speed packet memory is provided, the memory having a write port and a read port and comprising the following: a plurality of N memory modules for storing fixed-size cells, which are segments of a variable-size packet divided into X cells, the X cells being grouped into ⌈X/N⌉ groups of cells; a read-write control block comprising a means for receiving cells from the write port and storing each cell belonging to the same group in a selected different one of the N memory modules at the same memory address (the group address); and a multi-cell pointer (MCP) storage for storing an MCP for said group of cells (an associated MCP) at an MCP address, the MCP having N memory module identifiers to record the order in which cells of said group of cells are stored in the N memory modules, the MCP address being the same as the group address. Corresponding methods for storing cells and/or storing and retrieving variable-size packets in such memory are also provided.
Abstract:
The present invention discloses a scalable flow-control mechanism. In accordance with the present invention, there is provided a switching device for transporting packets of data, the packets being received at the switching device based on flow-control information, the device comprising a memory for storing the packets, a credit counter coupled to the memory for counting a credit number of packets departing from the memory, and a scheduler unit coupled to the credit counter for deriving the flow-control information in response to the credit number. Moreover, a switching apparatus and a method for generating flow-control information are disclosed.
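A minimal sketch of the credit-counting idea, assuming a FIFO memory and a scheduler that converts accumulated credits into a grant; the class and method names are invented for illustration.

```python
class SwitchingDevice:
    """Toy model: departures accumulate credits, and the scheduler turns
    those credits into flow-control information for the sender."""

    def __init__(self):
        self.memory = []   # stored packets (FIFO order)
        self.credits = 0   # packets departed since the last grant

    def receive(self, packet):
        self.memory.append(packet)

    def depart(self):
        """Forward the oldest packet; each departure frees one slot."""
        packet = self.memory.pop(0)
        self.credits += 1
        return packet

    def grant(self):
        """Scheduler unit: report how many new packets the upstream side
        may send, then reset the credit counter."""
        g, self.credits = self.credits, 0
        return g
```

Because grants are derived only from counted departures, the mechanism needs no global view of buffer occupancy, which is what makes it scale.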
Abstract:
A central queue-based packet switch contains multiple input ports and multiple output ports coupled by a central queue path and a bypass path. The central queue has only shared memory and processing dynamically switches from transfer of message portions across the central queue path to the bypass path whenever a next message portion is identified by an output port as a critical portion. Upon transfer of message forwarding to the bypass path, subsequent message portions are forwarded across the bypass path unless the output port signals for transfer of the message portions back through the central queue path. Dynamic switching of message transfer from the central queue path to the bypass path is accomplished irrespective of whether contention exists for the output port.
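The path-selection rule described above (switch to the bypass path when the output port marks a portion critical, return to the central queue only when the output port signals for it) can be sketched as a small state machine. The list-based interface is an assumption for illustration.

```python
def route_portions(portions, critical_flags, return_signals):
    """For each message portion, choose 'central' or 'bypass' per the
    described rule.  critical_flags[i] is True when the output port
    identifies portion i as critical; return_signals[i] is True when the
    port signals a transfer back to the central queue path."""
    path = "central"
    routes = []
    for _portion, critical, back in zip(portions, critical_flags,
                                        return_signals):
        if path == "central" and critical:
            path = "bypass"       # dynamic switch to the bypass path
        elif path == "bypass" and back:
            path = "central"      # output port signals a return
        routes.append(path)
    return routes
```

Note that in this model the switch to the bypass path depends only on the critical flag, mirroring the abstract's statement that it happens irrespective of contention for the output port.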
Abstract:
A device and method for filtering network traffic is provided utilizing multiple filter tables. A first filter table is maintained in the form of a balanced binary tree that is manipulated by two processors in order to filter traffic between network segments. By initially filtering traffic based upon information contained within the balanced binary tree table, the processing and resource load on the system is significantly reduced. Traffic whose source or destination is not contained within the balanced binary tree table is forwarded to a second processor for filtering based upon a second filter table.
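The two-stage filtering can be sketched with a sorted list plus binary search standing in for the balanced binary tree; the class name and decision strings are invented for the example.

```python
import bisect


class TwoStageFilter:
    """First stage: fast membership test over known addresses (stand-in
    for the balanced binary tree).  Second stage: a fallback filter table
    consulted only on a first-stage miss, modeling the hand-off to the
    second processor."""

    def __init__(self, fast_entries, slow_table):
        self.fast = sorted(fast_entries)  # tree stand-in: sorted list
        self.slow = slow_table            # second filter table

    def _in_fast(self, addr):
        i = bisect.bisect_left(self.fast, addr)
        return i < len(self.fast) and self.fast[i] == addr

    def decide(self, src, dst):
        if self._in_fast(src) or self._in_fast(dst):
            return "filtered-stage1"
        # source and destination unknown to stage 1: second-stage lookup
        return self.slow.get((src, dst), "forward")
```

The load reduction claimed in the abstract follows directly: traffic matched in the O(log n) first stage never reaches the second table at all.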
Abstract:
A switch and a process of operating a switch are described where a received data frame is stored into memory in a systematic way. In other words, a location is selected in the memory to store the received data frame using a non-random method. By storing the received data frame in this way, switches that employ this system and method increase bandwidth by avoiding delays incurred in randomly guessing at vacant spaces in the memory. The received data frame is stored until a port that is to transmit the received data frame is available. Throughput is further improved by allowing the received data frames to be stored in either contiguous or non-contiguous memory locations.
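The systematic, non-random slot selection can be modeled with an explicit free list, so storing a frame never involves guessing at vacant locations. This is a sketch under assumed names, not the patented design; note the free list naturally yields non-contiguous locations once slots are released and reused.

```python
class FrameMemory:
    """Frame store with deterministic slot selection via a free list."""

    def __init__(self, n_slots):
        self.slots = [None] * n_slots
        self.free = list(range(n_slots))  # known-vacant locations

    def store(self, frame):
        """Place the frame in the next known-free slot; no random probing."""
        if not self.free:
            raise MemoryError("no vacant slot")
        i = self.free.pop(0)
        self.slots[i] = frame
        return i

    def release(self, i):
        """Return a slot to the free list once the frame is transmitted."""
        self.slots[i] = None
        self.free.append(i)
```

Usage sketch: two frames occupy slots 0 and 1; after slot 0 is released, the next store still completes in one step, taking the next listed free slot.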