Abstract:
In one embodiment, a method for efficiently classifying packets in a multi-processor/multi-thread environment is provided. The method initiates with receiving a packet. Then, header information is extracted from the received packet. Next, a first hash value is calculated. Then, a field of interest in a lookup table is determined from the first hash value. Next, a second hash value is calculated. Then, the second hash value is compared to stored hash values in the field of interest of the lookup table to determine a match between the second hash value and one of those stored values. If there is a match, the received packet is transmitted to the processor corresponding to the matching value in the lookup table. A network interface card and a system for efficiently classifying packets in a multicore/multithread environment are also provided.
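The two-stage lookup described above can be sketched as follows (a minimal illustration; the hash functions, table layout, and names are assumptions, since the abstract does not specify them):

```python
# Sketch of the two-stage classification: the first hash selects a row
# ("field of interest") of the lookup table; the second hash is matched
# against the stored values in that row to pick a destination processor.

def classify(header, lookup_table, num_rows):
    first_hash = hash(header) % num_rows       # selects the field of interest
    row = lookup_table[first_hash]
    second_hash = hash((header, "second"))     # a second, different hash
    for stored_hash, processor_id in row:
        if stored_hash == second_hash:
            return processor_id                # forward to this processor
    return None                                # no match: packet unclassified
```

Because only one row is searched per packet, the classification cost stays small even for large tables.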
Abstract:
A full duplex communication processor (20) simultaneously sends and receives frames of data and commands. Separate transmit (32) and receive (30) protocol engines are controlled by separate sequencers. This enables frames of data to be received and transmitted simultaneously without involving the CPU (40) on a frame-by-frame basis.
Abstract:
A network node (5) including a line card (20) for packet-based data communications is disclosed. The line card (20) includes a transmit FIFO buffer (24T) and a receive FIFO buffer (24R), for buffering communications within the line card (20). Each of the buffers (24T, 24R) operates in a dual-port fashion, receiving asynchronous read and write requests for reading data words from and writing data words to the buffers (24T, 24R). The buffers (24T, 24R) each include a memory array (45) of conventional single-port random access memory cells, for example static RAM cells. Clock cycles are assigned by the buffers (24T, 24R) as internal read and internal write cycles, in alternating fashion. A write buffer (42) receives input data words and, upon receiving a pair of input data words, schedules a double-data-word write to the memory array (45) in the next internal write cycle. A read request buffer (44) receives read strobes, or read enable signals, from a downstream function and, upon receiving two such strobes, schedules the read of a double data word from the memory array (45). By converting the asynchronous read and write requests into scheduled reads and writes, respectively, the buffers (24T, 24R) operate as dual-port FIFO buffers.
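The pairing scheme above can be modeled in a few lines (a simplified sketch; real hardware schedules the paired accesses into fixed alternating internal read/write clock cycles, which this model omits):

```python
# Minimal model of the write-pairing and strobe-pairing behavior:
# two input words become one double-word write, and two read strobes
# become one double-word read from the single-port array.

class PairedFifo:
    def __init__(self):
        self.array = []        # models the single-port memory array
        self.write_buf = []    # holds one input word awaiting its pair
        self.read_strobes = 0  # pending read-enable strobes
        self.read_out = []     # words delivered to the downstream function

    def write(self, word):
        self.write_buf.append(word)
        if len(self.write_buf) == 2:       # a pair: one double-word write
            self.array.extend(self.write_buf)
            self.write_buf = []

    def read_strobe(self):
        self.read_strobes += 1
        if self.read_strobes == 2:         # two strobes: one double-word read
            self.read_out.extend(self.array[:2])
            self.array = self.array[2:]
            self.read_strobes = 0
```

Halving the number of array accesses is what lets the alternating internal cycles keep up with independent read and write request streams.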
Abstract:
Disclosed herein is a method and apparatus for dynamically controlling data flow on a bi-directional data bus (120). Windows of time on the bus are divided between input, output, and pointer transactions. The number of input transactions relative to the number of output transactions is dynamically determined as a function of an input/output bias factor. Input transactions are written to a plurality of input queues (IQs) over the bus. The IQ receiving an input transaction is selected at least in part according to the occupancies of the IQs relative to a threshold occupancy. The number of output transactions allocated to an output queue (OQ) during a window is determined as a function of that OQ's occupancy. Pointer transactions comprise reading or writing two copies of the pertinent pointers, to prevent pointer corruption resulting from simultaneous pointer read/write accesses.
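One plausible reading of the occupancy-based policies is sketched below (hypothetical: the abstract does not give the exact selection rule or allocation formula, so this picks the least-occupied IQ, preferring those below the threshold, and allocates output slots in proportion to OQ occupancy):

```python
# Hypothetical occupancy-based queue policies for the bus scheduler.

def select_input_queue(iq_occupancies, threshold):
    # Prefer IQs below the threshold occupancy; among candidates,
    # take the least occupied.
    below = [i for i, occ in enumerate(iq_occupancies) if occ < threshold]
    candidates = below if below else range(len(iq_occupancies))
    return min(candidates, key=lambda i: iq_occupancies[i])

def output_slots(oq_occupancy, window_size, total_occupancy):
    # Allocate output transactions in proportion to this OQ's occupancy.
    if total_occupancy == 0:
        return 0
    return (oq_occupancy * window_size) // total_occupancy
```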
Abstract:
The IEEE 1394 bus communication protocol has three layers: the physical layer, the link layer, and the transaction layer. A link layer IC implements the interface to an external application and either prepares data for sending on the bus or interprets incoming data packets from the IEEE 1394 bus. A physical layer IC implements the direct electrical connection to the bus and controls many functions, including arbitration for sending data on the bus. According to the invention, the capacity of the on-chip memory is assigned flexibly in order to meet the requirements of any specific service. Further, the on-chip memory is prevented from storing data packets containing transmission errors by CRC checking of header data and other data on the fly. This is performed for asynchronous data packets as well as isochronous data packets, and allows the on-chip memory capacity to be kept to a minimum.
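The on-the-fly CRC idea can be sketched as follows (CRC-32 via `zlib` stands in for the actual IEEE 1394 CRC, an assumption for illustration; the point is that a packet is committed to the small on-chip buffer only if its CRC checks out):

```python
import zlib

# Sketch of "CRC on the fly": the CRC accumulates while words stream in,
# and the packet reaches the buffer only if the final CRC matches, so
# corrupted packets never occupy on-chip memory.

def receive_packet(words, expected_crc, buffer):
    crc = 0
    staged = []
    for w in words:
        crc = zlib.crc32(w, crc)   # incremental CRC update as data arrives
        staged.append(w)
    if crc == expected_crc:
        buffer.extend(staged)      # only error-free packets are stored
        return True
    return False                   # errored packet discarded, memory untouched
```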
Abstract:
The invention relates to a method and a device for accessing data of a message memory (300) of a communication component (100) by inputting data into or outputting data from the message memory (300). The message memory is linked with a buffer memory unit (201 and 202), and the data are transmitted to the message memory in a first direction of transmission and from the message memory in a second direction of transmission. The buffer memory unit comprises an input buffer memory (201) for the first direction of transmission and an output buffer memory (202) for the second direction of transmission. The input buffer memory and the output buffer memory are each divided into a partial buffer memory (400, 701) and a shadow memory (401, 700) for the partial buffer memory. The inventive method is characterized by the following steps: inputting data into the respective partial buffer memory, and exchanging access to the partial buffer memory and the shadow memory so that subsequent data can be input into the shadow memory while the previously input data are output from the partial buffer memory in the designated direction of transmission.
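The partial-buffer/shadow-memory exchange is a form of double buffering, sketched below (simplified; in the real component the swap is performed by hardware access control, not by software):

```python
# Double-buffering sketch: new data fills one half while the other half
# drains toward the message memory; a swap exchanges the two roles.

class SwappingBuffer:
    def __init__(self):
        self.halves = [[], []]   # partial buffer memory and its shadow
        self.fill = 0            # index of the half currently being filled

    def write(self, data):
        self.halves[self.fill].append(data)

    def swap_and_drain(self, message_memory):
        drain = self.fill
        self.fill = 1 - self.fill     # subsequent writes go to the shadow
        message_memory.extend(self.halves[drain])
        self.halves[drain] = []
```

The swap means the data source never has to wait for the transfer to the message memory to complete.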
Abstract:
A Digital Subscriber Line [DSL] telecommunication device comprising at least one common memory (CM) that is shared between circuits (FFT, Demapper) of the downstream path and corresponding circuits (Mapper, IFFT) of the upstream path. The shared or common memory (CM) advantageously replaces the two distinct memories (DM, UM) generally used, one for the downstream path and the other for the upstream path. The single common memory (CM) is particularly adapted to Very High Speed Digital Subscriber Line [VDSL-, VDSL or VDSL+] devices, where the two downstream frequency ranges (DF1, DF2) are separated by an upstream frequency range (UF1), and where a second upstream frequency range (UF2) may exist. The size of the common memory shared by the four circuits is slightly larger (2800 carriers of 16 bits) than that (2048 carriers of 16 bits) of one of the two known memories, each of which interfaces only the two circuits of a same path. However, the size of this common memory is smaller than the sum (2 x 2048 carriers of 16 bits) of these two distinct memories.
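The sizing claim works out numerically as follows (sizes in carriers of 16 bits, taken directly from the figures above):

```python
# Memory-size comparison from the abstract.
separate = 2 * 2048          # two distinct memories, one per path
common = 2800                # single shared memory for all four circuits
savings = separate - common  # carriers saved by sharing

assert common > 2048         # slightly larger than either single memory
assert common < separate     # but well below the two memories combined
```

The shared memory thus saves 1296 carriers of 16 bits versus the two-memory design.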
Abstract:
A switch apparatus (10) for optimizing the transfer of data packets between a plurality of local area networks (LANs (12, 14, 16)). The apparatus of the present invention comprises multiple controllers (23), e.g., a receive controller (24) and a transmit controller (25), which share common resources including a first memory (a packet memory (20)) which stores the data packets, a second memory (a descriptor memory (22)) which stores pointers to the stored data packets, and buffered data paths (preferably using FIFO buffers (178, 108)). The independent controllers (25, 24) operate essentially concurrently for most tasks while interleaving their use of the shared resources (20, 22). Consequently, embodiments of the present invention can simultaneously receive and transmit data across multiple LAN data ports (18a, 18b, 18N) (e.g., 28 Ethernet ports comprised of 10/100 and/or 10 Mbps ports).
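The interleaved use of shared resources can be illustrated with a toy round-robin grant (hypothetical: the abstract does not name the arbitration scheme, so simple alternation between the two controllers is assumed):

```python
# Toy sketch of interleaved shared-resource access: the receive and
# transmit controllers alternate turns on the shared packet memory.

def interleave(rx_tasks, tx_tasks):
    order = []
    rx, tx = list(rx_tasks), list(tx_tasks)
    while rx or tx:
        if rx:
            order.append(("rx", rx.pop(0)))
        if tx:
            order.append(("tx", tx.pop(0)))
    return order
```

Neither controller ever starves, which is what allows simultaneous receive and transmit across the ports.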
Abstract:
A communication controller (111) for handling and processing data packets received from a large number of communication channels (181 - 188). The communication controller (111) comprises: a processor (160) for processing data; a serial interface (28) coupled to the communication channels (181 - 188); and a multi-channel controller (100, 100') coupled to the serial interface (28) and the processor (160), for interfacing between the communication channels (181 - 188) and the processor (160). The communication channels (181 - 188) and the serial interface (28) send and receive data packets. The processor (160) sends, receives and processes data words. The multi-channel controller (100) receives data packets from the serial interface (28), concatenates data packets, and sends data words to the processor (160). The multi-channel controller (100) receives data words from the processor (160) and transmits data packets to the serial interface (28).
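The packet-to-word concatenation can be sketched as follows (a hypothetical illustration; the word width and handling of leftover bytes are assumptions not given in the abstract):

```python
# Sketch: bytes from several channel packets are joined into a single
# stream and delivered to the processor as fixed-width data words.

WORD_BYTES = 4  # assumed processor word width

def concatenate(channel_packets):
    stream = b"".join(channel_packets)
    full = len(stream) - len(stream) % WORD_BYTES
    words = [stream[i:i + WORD_BYTES] for i in range(0, full, WORD_BYTES)]
    remainder = stream[full:]   # bytes held back until a full word forms
    return words, remainder
```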
Abstract:
A communication processor (22) sends and receives frames of data and commands. Transmit and receive protocol engines (32 and 30) are controlled by host driver software (38), which utilizes predetermined bits to indicate which frame is the last frame in a series of frames. This information is then placed in the transmit frame before it is sent.
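The last-frame marking can be sketched as follows (the bit position and frame layout are assumptions for illustration; the abstract only says predetermined bits mark the final frame of a series):

```python
# Sketch: the driver sets a last-frame bit in the control byte of the
# final frame before handing the series to the transmit engine.

LAST_FRAME_BIT = 0x80  # assumed bit position

def prepare_frames(payloads):
    frames = []
    for i, payload in enumerate(payloads):
        control = LAST_FRAME_BIT if i == len(payloads) - 1 else 0
        frames.append(bytes([control]) + payload)
    return frames
```

Marking the series boundary in the frame itself is what frees the CPU from per-frame involvement on the receive side.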