Abstract:
A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates send work queue entries (SWQEs) for writing to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry in the SWQ and subsequently reads the entry, which contains pointers to the data to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). The flow control between the ULP and the IPSOE is credit-based; the passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
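A minimal sketch in C of the SWQE posting step described above. The ring layout and all identifiers (ulp_post_send, ipsoe_doorbell) are illustrative assumptions, not names taken from the abstract:

    #include <stdint.h>
    #include <stdio.h>

    #define SWQ_DEPTH 64

    struct swqe {                     /* send work queue entry */
        uint64_t data_addr;           /* pointer to the data to transmit */
        uint32_t data_len;            /* length of that data in bytes */
    };

    struct send_wq {
        struct swqe entries[SWQ_DEPTH];
        uint32_t head;                /* next slot the ULP writes */
    };

    /* Stubbed doorbell: in hardware this would notify the IPSOE that a
     * new SWQE is ready, prompting it to read the entry and transmit
     * the pointed-to data. */
    static void ipsoe_doorbell(struct send_wq *wq, uint32_t slot)
    {
        printf("doorbell: slot %u, %u bytes\n",
               slot, wq->entries[slot].data_len);
    }

    /* ULP-side post: write the entry, then notify the offload engine. */
    void ulp_post_send(struct send_wq *wq, uint64_t addr, uint32_t len)
    {
        uint32_t slot = wq->head % SWQ_DEPTH;

        wq->entries[slot].data_addr = addr;
        wq->entries[slot].data_len  = len;
        ipsoe_doorbell(wq, slot);
        wq->head++;
    }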
Abstract:
A system and method in accordance with the present invention allows an adapter to be utilized in a server environment that can accommodate both a 10 G and a 1 G source utilizing the same pins. This is accomplished through the use of a high-speed serializer/deserializer (high-speed SerDes) which can accommodate both data sources. The high-speed SerDes allows a relatively low reference clock speed on the NIC to provide the proper clocking for both data sources, and allows different modes to be set to accommodate each source. Finally, the system allows the adapter to use the same pins for multiple data sources.
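A sketch in C of the per-mode configuration. The structure and register fields are invented for illustration; the ratios shown (a 156.25 MHz reference multiplied by 66 for the 10.3125 Gb/s 10G line rate, or by 8 for the 1.25 Gb/s 1G line rate) merely illustrate how one low-speed reference clock can serve both sources:

    #include <stdint.h>

    enum serdes_mode { SERDES_10G, SERDES_1G };

    struct serdes_cfg {
        uint32_t refclk_khz;      /* low-frequency reference clock on the NIC */
        uint32_t pll_mult;        /* PLL multiplier selected per mode */
    };

    void serdes_set_mode(struct serdes_cfg *cfg, enum serdes_mode mode)
    {
        cfg->refclk_khz = 156250;             /* one shared reference clock */
        /* The PLL scales the same reference up to the line rate required
         * by whichever source is driving the shared pins. */
        cfg->pll_mult = (mode == SERDES_10G) ? 66 : 8;
    }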
Abstract:
Providing communications between operating system partitions and a computer network. In one aspect, an apparatus for distributing network communications among multiple operating system partitions includes a physical port allowing communications between the network and the computer system, and logical ports associated with the physical port, where each logical port is associated with one of the operating system partitions. Each logical port enables communication between the physical port and its associated operating system partition and allows configurability of the system's network resources. Other aspects include a logical switch for the logical and physical ports, and packet queues for each connection and for each logical port.
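A minimal sketch in C, with hypothetical names, of how a logical switch could steer an incoming frame on one physical port to the logical port (and hence the partition) it belongs to. Demultiplexing by destination MAC is an assumption, not a detail stated in the abstract:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define MAX_LPORTS 16

    struct logical_port {
        uint8_t partition_id;         /* owning operating system partition */
        uint8_t mac[6];               /* address used to demultiplex frames */
    };

    struct physical_port {
        struct logical_port lports[MAX_LPORTS];
        size_t n_lports;
    };

    /* Logical switch: return the logical port whose MAC address matches
     * the frame's destination, or NULL if no logical port claims it. */
    struct logical_port *lswitch_demux(struct physical_port *pp,
                                       const uint8_t dst_mac[6])
    {
        for (size_t i = 0; i < pp->n_lports; i++)
            if (memcmp(pp->lports[i].mac, dst_mac, 6) == 0)
                return &pp->lports[i];
        return NULL;
    }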
Abstract:
Systems and methods for implementing multi-frame control blocks in a network processor are disclosed. Embodiments reduce the frequency of long-latency accesses to less expensive memory such as DRAM. As a network processor receives packets of data, it forms a frame control block for each packet. The frame control block contains a pointer to the memory location where the packet data is stored and is thereby associated with the packet. The network processor associates a plurality of frame control blocks together in a table control block that is stored in a control store. Each table control block comprises a pointer to the memory location of the next table control block in a chain of table control blocks. Because frame control blocks are stored and accessed in table control blocks, fewer memory accesses may be needed to keep up with the frame rate of packet transmission.
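A sketch in C of the chained structure just described, with invented field names. Each iteration of the walk models one long-latency memory access that yields several frames' worth of control information at once:

    #include <stdint.h>
    #include <stddef.h>

    #define FCBS_PER_TCB 4

    struct fcb {                      /* frame control block */
        uint64_t pkt_addr;            /* where the packet data is stored */
        uint32_t pkt_len;
    };

    struct tcb {                      /* table control block */
        struct fcb  fcbs[FCBS_PER_TCB];
        struct tcb *next;             /* next table control block in chain */
    };

    /* Walk the chain: one fetch per TCB covers FCBS_PER_TCB packets,
     * instead of one fetch per frame control block. */
    uint64_t total_bytes(const struct tcb *head)
    {
        uint64_t sum = 0;
        for (const struct tcb *t = head; t != NULL; t = t->next)
            for (size_t i = 0; i < FCBS_PER_TCB; i++)
                sum += t->fcbs[i].pkt_len;
        return sum;
    }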
Abstract:
A network processor dataflow chip and method for flexible dataflow are provided. The dataflow chip comprises a plurality of on-chip data transmission and scheduling circuit structures, which are selected responsive to indicators. Data transmission circuit structures may comprise selectable frame processing and data transmission functions. Selectable frame processing may comprise cut-and-paste, full-dispatch, and store-and-dispatch frame processing. Scheduling functions include full internal scheduling, calendar scheduling in communication with an external scheduler, and external calendar scheduling. In another aspect of the present invention, data transmission functions may comprise low-latency and normal-latency external processor interfaces for selectively providing privileged access to dataflow chip resources.
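A sketch in C of selecting among these circuit structures by indicator bits. The bit layout and names are wholly invented for illustration; the abstract does not specify how the indicators are encoded:

    #include <stdint.h>

    enum frame_mode { CUT_AND_PASTE, FULL_DISPATCH, STORE_AND_DISPATCH };
    enum sched_mode { SCHED_INTERNAL, SCHED_EXT_CALENDAR, SCHED_EXTERNAL };

    struct dataflow_cfg {
        enum frame_mode frame;
        enum sched_mode sched;
    };

    /* Indicators (e.g. configuration register bits) select which of the
     * on-chip frame processing and scheduling structures are active. */
    void dataflow_configure(struct dataflow_cfg *cfg, uint32_t indicators)
    {
        cfg->frame = (enum frame_mode)(indicators & 0x3);        /* bits 0-1 */
        cfg->sched = (enum sched_mode)((indicators >> 2) & 0x3); /* bits 2-3 */
    }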
Abstract:
Apparatus and method for storing network frame data which is to be modified. A plurality of buffers stores the network data, which is arranged in a data structure identified by a frame control block and buffer control blocks. A buffer control block associated with each buffer storing the frame data establishes a sequence of the buffers; each buffer control block has data identifying the subsequent buffer within the sequence. The first buffer is identified by a field of the frame control block, as are the beginning and ending addresses of the frame data. The frame data can be modified by altering the buffer control block and/or frame control block contents, without having to copy or rewrite the data itself.
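A minimal sketch in C of one such edit, with hypothetical field names: stripping bytes from the front of a frame (for example, an outer header) moves the start pointer in the control blocks rather than copying the packet bytes:

    #include <stdint.h>

    struct bcb {                   /* buffer control block */
        struct bcb *next;          /* subsequent buffer in the sequence */
        uint16_t start;            /* first valid byte within the buffer */
        uint16_t end;              /* last valid byte within the buffer */
    };

    struct fcb {                   /* frame control block */
        struct bcb *first;         /* field identifying the first buffer */
        uint32_t frame_len;
    };

    /* Remove n leading bytes without touching the stored data; assumes
     * the stripped bytes lie within the first buffer. Only control block
     * contents change; the frame data is never copied or rewritten. */
    void frame_strip_head(struct fcb *f, uint16_t n)
    {
        f->first->start += n;
        f->frame_len    -= n;
    }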
Abstract:
A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. As consumers initiate send operations, send work queue entries (SWQEs) are created by an Upper Layer Protocol (ULP) and written to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry in the SWQ and subsequently reads the entry, which contains pointers to the data to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP processes the entry and removes it from the CQ, freeing up a space in both the SWQ and the CQ. The number of entries available in the SWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ, so as not to overwrite valid entries in the CQ. The flow control between the ULP and the IPSOE is credit-based; the passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
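The credit accounting can be sketched as below. The assumption that each posted SWQE consumes one CQ credit, and the names flow_ctrl, ulp_may_post, and ulp_reap_cqe, are illustrative rather than taken from the abstract:

    #include <stdint.h>
    #include <stdbool.h>

    struct flow_ctrl {
        uint32_t cq_credits;      /* credits the ULP currently holds */
    };

    /* ULP side: post only if a CQ slot is guaranteed for the completion
     * that this SWQE will eventually produce. */
    bool ulp_may_post(struct flow_ctrl *fc)
    {
        if (fc->cq_credits == 0)
            return false;         /* would risk overwriting a valid CQE */
        fc->cq_credits--;         /* one SWQE will yield one CQE */
        return true;
    }

    /* ULP side: after processing a CQE and removing it from the CQ, the
     * freed slot becomes a credit returned to the IPSOE. Passing this
     * credit is the only explicit flow-control message needed to cover
     * both the SWQ and the CQ. */
    void ulp_reap_cqe(struct flow_ctrl *fc)
    {
        fc->cq_credits++;
    }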
Abstract:
Systems and methods for adaptively mapping system memory address bits into an instruction tag and an index into the cache are disclosed. More particularly, hardware and software are disclosed for observing collisions that occur for a given mapping of system memory bits into a tag and an index. Based on the observations, an optimal mapping may be determined that minimizes collisions.
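A sketch in C of observing collisions under one candidate mapping. Representing a mapping as a shift amount, and the direct-mapped cache model, are simplifying assumptions:

    #include <stdint.h>

    #define SETS 256                  /* direct-mapped cache, 256 sets */
    #define SET_BITS 8                /* log2(SETS) */

    struct mapping {
        unsigned shift;               /* which address bits form the index */
    };

    struct cache {
        int      valid[SETS];
        uint64_t tags[SETS];
        uint64_t collisions;          /* observed under this mapping */
    };

    /* Replay one access and record whether it collides with a different
     * line that the current mapping sent to the same set. */
    void observe(struct cache *c, const struct mapping *m, uint64_t addr)
    {
        uint64_t index = (addr >> m->shift) & (SETS - 1);
        uint64_t tag   = addr >> (m->shift + SET_BITS);

        if (c->valid[index] && c->tags[index] != tag)
            c->collisions++;
        c->valid[index] = 1;
        c->tags[index]  = tag;
    }

Replaying the same address trace under several candidate shift values and keeping the mapping with the fewest recorded collisions models the optimization the abstract describes.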
Abstract:
A pipeline configuration is described for use in network traffic management for the hardware scheduling of events arranged in a hierarchical linkage. The configuration reduces costs by minimizing the use of external SRAM memory devices; as a result, some external memory devices are shared by different types of control blocks, such as flow queue control blocks, frame control blocks, and hierarchy control blocks. Both SRAM and DRAM memory devices are used, depending on whether a control block requires Read-Modify-Write or read-only access at enqueue and dequeue, or Read-Modify-Write solely at dequeue. The scheduler utilizes time-based calendars and weighted fair queueing calendars in the egress calendar design. Control blocks that are accessed infrequently are stored in DRAM memory while those accessed frequently are stored in SRAM.
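The placement rule can be sketched as below; the access-pattern classification and names are assumptions made for illustration:

    enum access_pattern {
        RMW_ENQUEUE_AND_DEQUEUE,   /* touched on every scheduler event */
        RMW_DEQUEUE_ONLY,          /* modified only when dequeuing */
        READ_ONLY                  /* consulted but never rewritten */
    };

    enum memory_kind { MEM_SRAM, MEM_DRAM };

    /* Frequently rewritten blocks must sustain the pipeline rate, so they
     * stay in SRAM; infrequently accessed blocks go to cheaper DRAM. */
    enum memory_kind place_control_block(enum access_pattern p)
    {
        return (p == RMW_ENQUEUE_AND_DEQUEUE) ? MEM_SRAM : MEM_DRAM;
    }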
Abstract:
A system for performing a lookup for a packet in a computer network is disclosed. The packet includes a header. The system includes a parser, a lookup engine coupled with the parser, and a processor coupled with the lookup engine. The parser parses the packet for the header before receipt of the packet is complete. The lookup engine performs a lookup for the header and returns a resultant. In one aspect, the lookup includes a local lookup of a cache that holds the resultants of previous lookups. The processor processes the resultant.
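A minimal sketch in C of the cached lookup path, assuming a hypothetical 5-tuple key and a direct-mapped resultant cache; the key fields, hash, and full_lookup placeholder are illustrative, not details from the abstract:

    #include <stdint.h>
    #include <string.h>

    #define CACHE_SLOTS 1024

    struct hdr_key {                 /* fields parsed from the header */
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  proto;
    };

    struct cache_entry {
        struct hdr_key key;
        uint32_t resultant;          /* result of a previous full lookup */
        int      valid;
    };

    static struct cache_entry cache[CACHE_SLOTS];

    static uint32_t key_hash(const struct hdr_key *k)
    {
        return (k->src_ip ^ k->dst_ip ^ ((uint32_t)k->src_port << 16) ^
                k->dst_port ^ k->proto) % CACHE_SLOTS;
    }

    /* Placeholder standing in for the lookup engine's full lookup. */
    static uint32_t full_lookup(const struct hdr_key *k)
    {
        (void)k;
        return 0;
    }

    uint32_t lookup(const struct hdr_key *k)
    {
        struct cache_entry *e = &cache[key_hash(k)];
        if (e->valid && memcmp(&e->key, k, sizeof *k) == 0)
            return e->resultant;     /* local hit: skip the slow lookup */
        e->key       = *k;           /* miss: do full lookup, cache result */
        e->resultant = full_lookup(k);
        e->valid     = 1;
        return e->resultant;
    }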