Abstract:
A network device includes a main storage memory and a queue handling component. The main storage memory includes multiple memory banks which store a plurality of packets for multiple output queues. The queue handling component controls write operations to the multiple memory banks and controls read operations from the multiple memory banks, where the read operations for at least one of the multiple output queues alternate sequentially among the multiple memory banks, and where the read operations and the write operations occur during a same clock period on different ones of the multiple memory banks.
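As a rough illustration of the alternating-bank scheme described above, the following Python sketch models one output queue striped across memory banks, where each clock period reads from one bank while a concurrent write lands on a different bank. All names here (BankedQueue, clock) are assumptions for illustration, not the patented design.

from collections import deque

class BankedQueue:
    """Hypothetical model of one output queue striped across memory banks."""
    def __init__(self, num_banks=2):
        self.banks = [deque() for _ in range(num_banks)]
        self.read_bank = 0   # reads alternate sequentially across the banks
        self.write_bank = 0

    def clock(self, incoming=None):
        """One clock period: the read and the write hit different banks."""
        out = None
        if self.banks[self.read_bank]:
            out = self.banks[self.read_bank].popleft()
        if incoming is not None:
            wb = self.write_bank
            if wb == self.read_bank:          # steer the write off the read bank
                wb = (wb + 1) % len(self.banks)
            self.banks[wb].append(incoming)
            self.write_bank = (wb + 1) % len(self.banks)
        self.read_bank = (self.read_bank + 1) % len(self.banks)
        return out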
Abstract:
According to some embodiments, it may be determined, at a first processing element of a device with a plurality of processing elements, that first data is to be transmitted in association with a first network connection. A first entry associated with the first data may then be stored into a first of a plurality of transmit queues. It may subsequently be determined, at a second processing element of the device, that second data is to be transmitted in association with the first network connection. A second entry associated with the second data may then be stored into a second of the plurality of transmit queues.
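A minimal sketch of this idea, assuming one transmit queue per processing element and a simple modulo assignment; the names (enqueue_for_send, transmit_queues) and the assignment rule are illustrative, not the claimed mechanism.

from collections import deque

NUM_QUEUES = 4
transmit_queues = [deque() for _ in range(NUM_QUEUES)]

def enqueue_for_send(processing_element_id, connection_id, data):
    # an entry for the data lands in the queue associated with the element,
    # so two elements can queue data for the same connection independently
    queue_index = processing_element_id % NUM_QUEUES
    transmit_queues[queue_index].append((connection_id, data))

enqueue_for_send(0, 7, b"first")    # first element, first data, connection 7
enqueue_for_send(1, 7, b"second")   # second element, second data, same connection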
Abstract:
Described herein are a method and system for directing outgoing data packets from packet engines to a transmit queue of a NIC in a multi-core system, and a method and system for directing incoming data packets from a receive queue of the NIC to the packet engines. Packet engines store outgoing traffic in their logical transmit queues. An interface module obtains the outgoing traffic and stores it in a transmit queue of the NIC, after which the NIC transmits the traffic from the multi-core system over a network. The NIC receives incoming traffic and stores it in a NIC receive queue. The interface module obtains the incoming traffic and applies a hash to a tuple of each obtained data packet. The interface module then stores each data packet in the logical receive queue of a packet engine on the core identified by the result of the hash.
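The receive-side steering can be sketched as follows, assuming a 4-tuple (source and destination address and port) and an ordinary cryptographic hash standing in for whatever hash the system actually applies; the names are illustrative.

import hashlib

NUM_CORES = 8
logical_receive_queues = [[] for _ in range(NUM_CORES)]

def steer_packet(src_ip, src_port, dst_ip, dst_port, packet):
    tuple_bytes = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(tuple_bytes).digest()
    core = digest[0] % NUM_CORES    # core identified by the hash result
    logical_receive_queues[core].append(packet)
    return core

Because the hash is computed over the connection tuple, all packets of one connection land in the same packet engine's logical receive queue.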
Abstract:
A queue selection method is provided for controlling the selection of a large number of queues without increasing the circuit scale. Queues are organized into groups, each group is arranged as a tree structure with a plurality of steps, and a queue is selected by selecting a group at each step. As a result, even if the number of queues is enormous, registers for managing the presence of packets are needed only for the groups selected at each step, not for every queue, so the growth in registers can be suppressed as the number of queues increases. Preferably, group selection at each step is performed in parallel, independently of the pipeline processing, so as to maintain high-speed operation.
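A two-step sketch of this hierarchy, assuming 64 queues arranged as 8 groups of 8: the upper step keeps only one occupancy bit per group, so no register is needed per individual queue at that step. The sizes and names are assumptions.

GROUPS, QUEUES_PER_GROUP = 8, 8
group_nonempty = [0] * GROUPS    # step-1 registers: one bit per group
queue_nonempty = [[0] * QUEUES_PER_GROUP for _ in range(GROUPS)]  # step-2 bits

def mark_packet(queue_id):
    g, q = divmod(queue_id, QUEUES_PER_GROUP)
    queue_nonempty[g][q] = 1
    group_nonempty[g] = 1

def select_queue():
    # step 1: pick a non-empty group; step 2: pick a queue within that group
    for g in range(GROUPS):
        if group_nonempty[g]:
            for q in range(QUEUES_PER_GROUP):
                if queue_nonempty[g][q]:
                    return g * QUEUES_PER_GROUP + q
    return None

In hardware, the per-step selections would run in parallel rather than as nested loops; the loops above only show the selection order.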
Abstract:
A memory bank has a plurality of memories. In an embodiment, a forward unit applies logical memory addresses to the memory bank in a forward twofold access order, a backward unit applies logical memory addresses to the memory bank in a backward twofold access order, and a butterfly network (at least a half butterfly network, plus barrel shifters in 8-tuple embodiments) is disposed between the memory bank and the forward and backward units. A set of control signals is generated and applied to the half or more butterfly network (and to the barrel shifters, where present) so as to access the memory bank with n-tuple parallelism in a linear order in a first instance and in a quadratic polynomial order in a second instance, where n = 2, 4, 8, 16, 32, and so on. This access holds for any n-tuple of the logical addresses and is without memory access conflict. In this manner, memory access may be controlled during data decoding.
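A worked sketch of the two access orders named above, using a quadratic permutation polynomial pi(x) = (f1*x + f2*x**2) mod K of the kind used in turbo decoding. The coefficients (f1 = 3, f2 = 10 for K = 40, from the LTE QPP family) and the window-based bank mapping addr // W are illustrative assumptions consistent with contention-free access, not necessarily the patent's exact scheme.

K, n = 40, 4          # block length and n-tuple parallelism
W = K // n            # window size handled by each memory
f1, f2 = 3, 10        # QPP coefficients for K = 40

def linear_order(x):
    return x % K

def quadratic_order(x):
    return (f1 * x + f2 * x * x) % K

for order in (linear_order, quadratic_order):
    for step in range(W):
        tuple_addrs = [order(step + t * W) for t in range(n)]
        banks = [a // W for a in tuple_addrs]   # which memory holds the address
        assert len(set(banks)) == n, "memory access conflict"

The assertion passes for every step in both orders: each n-tuple of logical addresses falls into n distinct memories, which is the conflict-free property the abstract describes.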
Abstract:
A method and apparatus for in-line processing of a data packet while routing the packet through a router in a system transmitting data packets between a source and a destination over a network including the router. The method includes receiving the data packet and pre-processing layer header data for the data packet as the data packet is received, prior to transferring any portion of the data packet to packet memory. The data packet is thereafter stored in the packet memory. A route through the router is determined, including a next hop index describing the next connection in the network. The data packet is retrieved from the packet memory, and a new layer header for the data packet is constructed from the next hop index while the data packet is being retrieved from memory. The new layer header is coupled to the data packet prior to transfer from the router.
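A high-level sketch of this flow, with all names (packet_memory, next_hop_table, receive, forward) hypothetical: the layer header is examined on arrival before the packet is stored, and the replacement header is built from the next hop entry as the packet is read back out.

packet_memory = {}
next_hop_table = {"10.0.0.0/8": {"index": 3, "mac": "aa:bb:cc:dd:ee:03"}}

def receive(packet_id, raw_packet):
    header_info = raw_packet[:14]           # pre-processed as the packet arrives
    packet_memory[packet_id] = raw_packet   # stored only after pre-processing
    return header_info

def forward(packet_id, route_key):
    hop = next_hop_table[route_key]         # next hop index from the routing step
    new_header = f"dst={hop['mac']}".encode()
    payload = packet_memory.pop(packet_id)  # header built during retrieval
    return new_header + payload[14:]        # new layer header coupled to packet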
Abstract:
A shared memory switch is provided for storing and retrieving data from the BlockRAM (BRAM) memory of a PLD. A set of class queues maintains a group of pointers that record the locations of the incoming “cells” or “packets” stored in the switch memory, ordered by their time of storage in the BRAM. A non-blocking memory architecture is implemented that allows a scalable N×N memory structure to be created (N = the number of input and output ports). A write controller stripes the data across this N×N memory to prevent data collisions as data is read in or read out. Data is scheduled for read-out from this N×N shared memory buffer based on the priorities or classes in the class queues, with priorities set by a user, and is then read out from the BRAM.
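One common way to realize such striping is a diagonal placement, sketched below: a cell from port i at timeslot t lands in memory (i + t) mod N, so N simultaneous writes (or reads) for distinct ports always hit N distinct memories. The placement rule and names here are assumed illustrations, not the patented controller.

N = 4
memories = [dict() for _ in range(N)]    # N independent single-port memories

def write_stripe(timeslot, cells):
    """cells[i] arrives on input port i; each lands in a distinct memory."""
    for port, cell in enumerate(cells):
        bank = (port + timeslot) % N     # diagonal stripe: no write collision
        memories[bank][(port, timeslot)] = cell

def read_stripe(timeslot, ports):
    """reads for distinct ports in one cycle also hit distinct memories"""
    return [memories[(p + timeslot) % N][(p, timeslot)] for p in ports]

write_stripe(0, ["a0", "a1", "a2", "a3"])
print(read_stripe(0, [0, 1, 2, 3]))      # -> ['a0', 'a1', 'a2', 'a3']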
Abstract:
An assignment constraint matrix method and apparatus used in assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. The assignment constraint matrix is implemented as a plurality of qualifier matrixes adapted to operate simultaneously in parallel. Each of the plurality of qualifier matrixes is adapted to determine which sources in a subset of supported sources are qualified to provide work to a set of sinks based on assignment constraints. The determination of qualified sources may be based on sink availability information, which may be provided for a set of sinks on a single chip or distributed across multiple chips.
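A small sketch of one qualifier matrix, assuming a boolean constraint matrix that records which sources may feed which sinks; a source is "qualified" when at least one of its permitted sinks is currently available. The sizes and names are assumptions, and a real design would evaluate several such matrixes simultaneously over subsets of the sources.

allowed = [  # allowed[source][sink]: may this queue feed this thread?
    [True,  True,  False, False],
    [False, True,  True,  False],
    [False, False, True,  True ],
]

def qualified_sources(sink_available):
    """sink_available[j] is True when processor thread j can accept work."""
    return [
        any(allowed[s][j] and sink_available[j]
            for j in range(len(sink_available)))
        for s in range(len(allowed))
    ]

print(qualified_sources([False, True, False, False]))  # -> [True, True, False]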
Abstract:
The disclosure is a network interface controller (NIC) capable of sharing buffers, which is coupled to a host and a network to establish the network connection. The NIC includes a transmitting buffer, a transmitting controller, a receiving buffer, and a receiving controller. The transmitting controller controls the transmitting buffer to transmit the transmission data provided by the host to the network. The receiving controller controls the receiving buffer to transmit the reception data received from the network to the host, and determines the remaining storage capacity of the receiving buffer. When the storage capacity is smaller than a set value, the receiving controller transmits a request signal to the transmitting controller; the transmitting controller generates a response signal according to the request signal and a status signal corresponding to the transmitting buffer; and the receiving controller controls, according to the response signal, whether the reception data is stored in the transmitting buffer.
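A hypothetical sketch of that request/response handshake: when receive-buffer headroom drops below the set value, the receiving side asks whether the transmitting buffer can absorb the overflow, and the answer depends on the transmit buffer's own status. All thresholds and names are assumptions.

RX_CAPACITY, TX_CAPACITY, THRESHOLD = 8, 8, 2
rx_buffer, tx_buffer = [], []

def request_tx_space():
    # response signal derived from the transmitting buffer's status
    return len(tx_buffer) < TX_CAPACITY // 2

def on_receive(data):
    if RX_CAPACITY - len(rx_buffer) < THRESHOLD:   # capacity below set value
        if request_tx_space():                     # request signal granted
            tx_buffer.append(data)                 # spill into transmit buffer
            return
    rx_buffer.append(data)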
Abstract:
A method of transmitting content to a wireless display device is disclosed. The method may include receiving multimedia data, encoding the multimedia data, and writing encoded multimedia data into a first predetermined memory location of a shared memory. Further, the method may include encapsulating the encoded multimedia data and writing encapsulation data into a second predetermined memory location of the shared memory. The method may also include calculating error control encoding and writing the error control encoding into a third predetermined memory location of the shared memory. Further, the method may include transmitting the encoded multimedia data, the encapsulation data, and the error control encoding to the wireless display device.
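The shared-memory layout this implies can be sketched with a flat buffer and three fixed offsets; the region sizes and offsets below are chosen purely for illustration.

SHARED_SIZE = 64 * 1024
ENCODED_OFFSET, ENCAP_OFFSET, FEC_OFFSET = 0x0000, 0x8000, 0xC000
shared = bytearray(SHARED_SIZE)

def write_region(offset, payload):
    shared[offset:offset + len(payload)] = payload

write_region(ENCODED_OFFSET, b"\x00" * 100)   # first location: encoded media
write_region(ENCAP_OFFSET, b"RTP-header")     # second location: encapsulation
write_region(FEC_OFFSET, b"parity-bytes")     # third location: error control

Keeping each stage's output at a predetermined offset lets the encoder, encapsulator, and error-control stages hand data forward without copying it between buffers.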