Abstract:
Mechanisms for processing communications between data processing devices are provided. The mechanisms of the illustrative embodiments provide a set of techniques that sustains media speed by distributing transmit-side and receive-side processing over multiple processing cores. These techniques also enable the design of multi-threaded network interface controller (NIC) hardware that efficiently hides the latency of direct memory access (DMA) operations associated with data packet transfers over an input/output (I/O) bus. Multiple processing cores may operate concurrently, each using a separate instance of the communication protocol stack and device driver to prepare data packets for transmission, while separate hardware-implemented send queue managers in the network adapter process those packets. Multiple hardware receive packet processors in the network adapter may be used, along with a flow classification engine, to route received data packets to the appropriate receive queues and processing cores for processing.
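As a rough illustration of the receive-side steering described above, the following C sketch hashes a connection 5-tuple and maps it to one of several receive queues, one per processing core. The structure, the hash function, and the queue count are assumptions for illustration, not details taken from the embodiment.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_RX_QUEUES 4   /* one receive queue per processing core (assumed) */

/* Hypothetical connection 5-tuple extracted by the flow classification engine. */
struct flow_key {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;
};

/* Illustrative FNV-1a style mix over the tuple fields; the real engine
 * would use its own classification function. */
static uint32_t flow_hash(const struct flow_key *k)
{
    uint32_t words[3] = { k->src_ip, k->dst_ip,
                          ((uint32_t)k->src_port << 16) | k->dst_port };
    uint32_t h = 2166136261u;
    for (int i = 0; i < 3; i++) {
        h ^= words[i];
        h *= 16777619u;
    }
    h ^= k->protocol;
    h *= 16777619u;
    return h;
}

/* Steer a received packet to a receive queue, and thus to a processing core. */
static unsigned select_rx_queue(const struct flow_key *k)
{
    return flow_hash(k) % NUM_RX_QUEUES;
}

int main(void)
{
    struct flow_key k = { 0x0a000001, 0x0a000002, 49152, 80, 6 /* TCP */ };
    printf("packet steered to RX queue %u\n", select_rx_queue(&k));
    return 0;
}
```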
Abstract:
An Ethernet adapter is disclosed. The Ethernet adapter comprises a plurality of layers for allowing the adapter to receive and transmit packets from and to a processor. The plurality of layers includes a demultiplexing mechanism to allow for partitioning of the processor. A Host Ethernet Adapter (HEA) is an integrated Ethernet adapter providing a new approach to Ethernet and TCP acceleration. A set of TCP/IP acceleration features has been introduced in a toolkit approach: server TCP/IP stacks use these accelerators when and as required. The interface between the server and the network interface controller has been streamlined by bypassing the PCI bus. The HEA supports network virtualization. The HEA can be shared by multiple operating systems, providing the essential isolation and protection without affecting performance.
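Sharing the adapter among multiple operating systems implies that each received frame must be demultiplexed to the partition that owns it. The sketch below shows one plausible way to do that by destination MAC address; the lookup, the structures, and the partition count are assumptions, not the HEA's actual mechanism.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_PARTITIONS 4   /* number of OS partitions sharing the adapter (assumed) */

/* Each partition owns a logical port identified by its own MAC address. */
struct logical_port {
    uint8_t mac[6];
    int     partition_id;
};

static struct logical_port ports[NUM_PARTITIONS] = {
    { {0x02,0x00,0x00,0x00,0x00,0x01}, 0 },
    { {0x02,0x00,0x00,0x00,0x00,0x02}, 1 },
    { {0x02,0x00,0x00,0x00,0x00,0x03}, 2 },
    { {0x02,0x00,0x00,0x00,0x00,0x04}, 3 },
};

/* Demultiplex an incoming frame to the partition owning the destination MAC.
 * Returns -1 if no logical port claims the frame. */
static int demux_frame(const uint8_t *dst_mac)
{
    for (int i = 0; i < NUM_PARTITIONS; i++)
        if (memcmp(ports[i].mac, dst_mac, 6) == 0)
            return ports[i].partition_id;
    return -1;
}

int main(void)
{
    uint8_t dst[6] = {0x02,0x00,0x00,0x00,0x00,0x03};
    printf("frame delivered to partition %d\n", demux_frame(dst));
    return 0;
}
```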
Abstract:
Method and apparatus for implementing use of a network connection table. In one aspect, searching for network connections includes receiving a packet and zeroing particular fields of the connection information from the packet if a new connection is to be established. The connection information is converted to an address of a location in a direct table using a table access process. The direct table stores patterns and reference information for new and existing connections. The connection information is compared with at least one pattern stored in the direct table at that address to find reference information for the received packet.
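A minimal sketch of the lookup flow described above, assuming the table access process is a simple hash of the connection fields (the hash, the field layout, and the table size here are illustrative, not the patented ones): the connection information is folded into a direct-table address, and the pattern stored at that address is compared with the incoming key.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 1024   /* number of direct-table slots (assumed) */

/* Connection information extracted from a received packet. */
struct conn_info {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

/* One direct-table entry: a stored pattern plus reference information
 * (represented here as an opaque connection id). */
struct dt_entry {
    bool             valid;
    struct conn_info pattern;
    uint32_t         conn_id;
};

static struct dt_entry direct_table[TABLE_SIZE];

/* Illustrative table access process: fold the tuple into a table address. */
static uint32_t dt_index(const struct conn_info *c)
{
    uint32_t h = c->src_ip ^ c->dst_ip;
    h ^= ((uint32_t)c->src_port << 16) | c->dst_port;
    h ^= h >> 16;
    return h % TABLE_SIZE;
}

/* For a new connection the abstract zeroes particular fields before the
 * lookup; here the remote side is zeroed so listening entries can match. */
static struct conn_info make_key(struct conn_info c, bool new_connection)
{
    if (new_connection) {
        c.src_ip   = 0;
        c.src_port = 0;
    }
    return c;
}

/* Compare the key with the pattern stored at the computed address. */
static bool dt_lookup(const struct conn_info *key, uint32_t *conn_id_out)
{
    const struct dt_entry *e = &direct_table[dt_index(key)];
    if (e->valid && memcmp(&e->pattern, key, sizeof(*key)) == 0) {
        *conn_id_out = e->conn_id;
        return true;
    }
    return false;
}

int main(void)
{
    /* Install a listening entry (remote fields zeroed), then look up a
     * packet that would establish a new connection against it. */
    struct conn_info listen = { 0, 0x0a000001, 0, 80 };
    direct_table[dt_index(&listen)] = (struct dt_entry){ true, listen, 42 };

    struct conn_info pkt = { 0xc0a80005, 0x0a000001, 33000, 80 };
    struct conn_info key = make_key(pkt, true);
    uint32_t id;
    if (dt_lookup(&key, &id))
        printf("matched entry, reference id %u\n", id);
    return 0;
}
```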
Abstract:
A mechanism is provided for sharing the communication path used by a parser (the parser path) in a network adapter of a network processor for sending requests for a process to be executed by an external coprocessor. The parser path is shared by the processors of the network processor (the software path) to send requests to the external coprocessor. For the software path, the mechanism uses a request mailbox, comprising a control address and a data field accessed by MMIO, for sending two types of messages: one message type to read or write resources, and one message type to trigger an external process in the coprocessor. A response mailbox, comprising a data field and a flag field, receives the response from the external coprocessor. The processors of the network processor poll the flag until it is set and then read the coprocessor result from the data field.
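A hedged sketch of the software-path flow: write a control word and a data field to the request mailbox over MMIO, then poll the response-mailbox flag and read the result. The register layout, the message encoding, and the mmio_write64/mmio_read64 helpers are assumptions for illustration only; only the overall flow follows the abstract.

```c
#include <stdint.h>

/* Minimal MMIO accessors; on real hardware these map to loads and stores
 * in the adapter's MMIO window (ordering barriers omitted for brevity). */
static inline void     mmio_write64(volatile uint64_t *addr, uint64_t val) { *addr = val; }
static inline uint64_t mmio_read64(volatile uint64_t *addr)                { return *addr; }

/* Hypothetical request-mailbox layout: a control word and a data field. */
struct request_mailbox {
    volatile uint64_t control;   /* message type plus target resource address */
    volatile uint64_t data;      /* request payload */
};

/* Hypothetical response-mailbox layout: a data field and a done flag. */
struct response_mailbox {
    volatile uint64_t data;      /* coprocessor result */
    volatile uint64_t flag;      /* set by the coprocessor when the result is valid */
};

/* The two message types named in the abstract. */
enum msg_type {
    MSG_RW_RESOURCE  = 0,   /* read or write a coprocessor resource */
    MSG_TRIGGER_PROC = 1,   /* trigger an external process in the coprocessor */
};

/* Software path: post a request over the shared parser path, then poll the
 * response flag until the coprocessor has written the result. */
uint64_t coproc_request(struct request_mailbox *req, struct response_mailbox *rsp,
                        enum msg_type type, uint64_t resource_addr, uint64_t payload)
{
    mmio_write64(&rsp->flag, 0);                              /* clear the flag */
    mmio_write64(&req->data, payload);
    mmio_write64(&req->control,
                 ((uint64_t)type << 56) | resource_addr);     /* illustrative encoding */

    while (mmio_read64(&rsp->flag) == 0)
        ;                                                     /* poll until set */

    return mmio_read64(&rsp->data);
}
```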
Abstract:
A system and method in accordance with the present invention allows an adapter to be utilized in a server environment that can accommodate both a 10 G and a 1 G source utilizing the same pins. This is accomplished through the use of a high-speed serializer/deserializer (high-speed serdes) which can accommodate both data sources. The high-speed serdes allows the use of a relatively low reference clock speed on the NIC to provide the proper clocking of the data sources and also allows different modes to be set to accommodate the different data sources. Finally, the system allows the adapter to use the same pins for multiple data sources.
Abstract:
A system and method for computing a blind checksum includes a host Ethernet adapter (HEA) with a system for receiving a packet. The system determines whether or not the packet is in Internet Protocol version 4 (IPv4). If the packet is not in IPv4, the system computes the checksum of the packet. If the packet is in IPv4, the system determines whether the packet is in Transmission Control Protocol (TCP) or User Datagram Protocol (UDP). If the packet is not in either TCP or UDP, the system attaches a pseudo-header to the packet and computes the checksum of the packet based on the pseudo-header and the IPv4 standard.
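The checksum arithmetic underlying both branches is the standard 16-bit one's-complement Internet checksum; a blind checksum simply runs it over the packet bytes, while the pseudo-header case seeds it with the IPv4 pseudo-header fields. The sketch below shows only that arithmetic; the HEA's branching and hardware pipeline are not reproduced, and the function names are illustrative.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Standard Internet checksum: 16-bit one's-complement sum of 16-bit words.
 * A "blind" checksum runs this over the bytes without interpreting headers. */
static uint16_t inet_checksum(const uint8_t *data, size_t len, uint32_t initial)
{
    uint32_t sum = initial;
    while (len > 1) {
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len == 1)
        sum += (uint32_t)data[0] << 8;          /* pad the odd trailing byte */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);     /* fold carries back in */
    return (uint16_t)~sum;
}

/* IPv4 pseudo-header contribution used when the checksum covers TCP or UDP:
 * source address, destination address, protocol, and L4 length. */
static uint32_t pseudo_header_sum(uint32_t src_ip, uint32_t dst_ip,
                                  uint8_t protocol, uint16_t l4_len)
{
    uint32_t sum = 0;
    sum += (src_ip >> 16) + (src_ip & 0xffff);
    sum += (dst_ip >> 16) + (dst_ip & 0xffff);
    sum += protocol;
    sum += l4_len;
    return sum;   /* used as the initial value for inet_checksum() */
}

int main(void)
{
    uint8_t payload[] = { 0xde, 0xad, 0xbe, 0xef, 0x01 };
    uint32_t ph = pseudo_header_sum(0x0a000001, 0x0a000002, 17 /* UDP */,
                                    (uint16_t)sizeof(payload));
    printf("checksum with pseudo-header: 0x%04x\n",
           inet_checksum(payload, sizeof(payload), ph));
    printf("blind checksum:              0x%04x\n",
           inet_checksum(payload, sizeof(payload), 0));
    return 0;
}
```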
Abstract:
Providing communications between operating system partitions and a computer network. In one aspect, an apparatus for distributing network communications among multiple operating system partitions includes a physical port allowing communications between the network and the computer system, and logical ports associated with the physical port, where each logical port is associated with one of the operating system partitions. Each logical port enables communication between the physical port and its associated operating system partition and allows configurability of the system's network resources. Other aspects include a logical switch for the logical and physical ports, and packet queues for each connection and for each logical port.
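The logical switch mentioned in the last sentence can be pictured roughly as follows: an outbound frame destined for another logical port on the same physical port is looped back internally to that partition's queue, while everything else goes out on the wire. The structures, MAC addresses, and queue representation below are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_LOGICAL_PORTS 4

/* Each logical port belongs to one OS partition and has its own MAC address
 * and its own receive queue (represented here by a counter). */
struct lport {
    uint8_t  mac[6];
    unsigned rx_queued;
};

static struct lport lports[NUM_LOGICAL_PORTS] = {
    { {0x02,0,0,0,0,0x11}, 0 },
    { {0x02,0,0,0,0,0x12}, 0 },
    { {0x02,0,0,0,0,0x13}, 0 },
    { {0x02,0,0,0,0,0x14}, 0 },
};

static unsigned wire_tx;   /* frames sent out the physical port */

/* Logical switch: deliver partition-to-partition traffic internally,
 * send everything else to the physical port. */
static void lswitch_transmit(const uint8_t *dst_mac)
{
    for (int i = 0; i < NUM_LOGICAL_PORTS; i++) {
        if (memcmp(lports[i].mac, dst_mac, 6) == 0) {
            lports[i].rx_queued++;     /* internal loopback to the target partition */
            return;
        }
    }
    wire_tx++;                         /* external destination: use the physical port */
}

int main(void)
{
    uint8_t internal[6] = {0x02,0,0,0,0,0x13};
    uint8_t external[6] = {0x00,0x1b,0x21,0x33,0x44,0x55};
    lswitch_transmit(internal);
    lswitch_transmit(external);
    printf("lport 2 queued %u, wire tx %u\n", lports[2].rx_queued, wire_tx);
    return 0;
}
```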
Abstract:
A system and method for parsing, filtering, and computing the checksum in a host Ethernet adapter (HEA) that is coupled to a host. The method includes receiving a part of a frame, wherein a plurality of parts of a frame constitute an entire frame. The HEA parses the part of the frame before receiving the entire frame and computes a checksum of that part. The HEA filters the part of the frame based on a logical, port-specific policy and transmits the checksum to the host.
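Because the HEA operates on parts of a frame before the whole frame has arrived, the checksum has to be accumulated incrementally across the parts. The sketch below shows one straightforward way to carry the running sum (and a possible dangling odd byte) from one part to the next; the state layout and function names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Running checksum state kept between parts of one frame. */
struct frame_csum {
    uint32_t sum;        /* partial one's-complement sum */
    bool     odd;        /* true if the previous part ended on an odd byte */
    uint8_t  leftover;   /* the dangling byte from the previous part */
};

/* Fold another part of the frame into the running checksum. */
static void csum_add_part(struct frame_csum *cs, const uint8_t *part, size_t len)
{
    size_t i = 0;
    if (cs->odd && len > 0) {                     /* pair the dangling byte first */
        cs->sum += ((uint32_t)cs->leftover << 8) | part[0];
        cs->odd = false;
        i = 1;
    }
    for (; i + 1 < len; i += 2)
        cs->sum += ((uint32_t)part[i] << 8) | part[i + 1];
    if (i < len) {                                /* carry an odd byte to the next part */
        cs->leftover = part[i];
        cs->odd = true;
    }
}

/* Finish once the last part has arrived. */
static uint16_t csum_finish(struct frame_csum *cs)
{
    if (cs->odd)
        cs->sum += (uint32_t)cs->leftover << 8;
    while (cs->sum >> 16)
        cs->sum = (cs->sum & 0xffff) + (cs->sum >> 16);
    return (uint16_t)~cs->sum;
}

int main(void)
{
    /* Two parts of the same frame, checksummed as they arrive. */
    uint8_t part1[] = { 0x12, 0x34, 0x56 };
    uint8_t part2[] = { 0x78, 0x9a };
    struct frame_csum cs = { 0, false, 0 };
    csum_add_part(&cs, part1, sizeof(part1));
    csum_add_part(&cs, part2, sizeof(part2));
    printf("frame checksum: 0x%04x\n", csum_finish(&cs));
    return 0;
}
```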
Abstract:
A host Ethernet adapter (HEA) and method of managing network communications are provided. The HEA includes a host interface configured for communication with a multi-core processor over a processor bus. The host interface comprises a receive processing element including a receive processor, a receive buffer and a scheduler for dispatching packets from the receive buffer to the receive processor; a send processing element including a send processor and a send buffer; and a completion queue scheduler (CQS) for dispatching completion queue elements (CQEs) from the head of the completion queue (CQ) to threads of the multi-core processor in a network node mode. The method comprises operatively coupling an Ethernet adapter to a multi-core processor system via a processor bus; selectively assigning a first plurality of packets to a first queue pair for servicing in an endpoint mode; running a device driver on the multi-core processor system, the device driver controlling the servicing of the first queue pair by dispatching the first plurality of packets to only one processor core of the multi-core processor system; selectively assigning a second plurality of packets to a second queue pair for servicing in a network node mode; and the Ethernet adapter controlling the servicing of the second queue pair by dispatching the second plurality of packets to multiple processor threads.
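A rough sketch of the dispatch difference between the two modes in the method: in endpoint mode every packet of the queue pair is serviced by one designated core, while in network node mode completion queue elements are spread across the processor's threads. The structures and the round-robin policy shown are assumptions, not the adapter's actual scheduler.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_THREADS 8   /* hardware threads of the multi-core processor (assumed) */

enum qp_mode { ENDPOINT_MODE, NETWORK_NODE_MODE };

/* A queue pair together with its servicing mode. */
struct queue_pair {
    enum qp_mode mode;
    unsigned     owner_core;   /* used only in endpoint mode */
    unsigned     next_thread;  /* round-robin cursor for network node mode */
};

/* Decide which thread services the next completion queue element (CQE). */
static unsigned dispatch_cqe(struct queue_pair *qp)
{
    if (qp->mode == ENDPOINT_MODE)
        return qp->owner_core;                 /* device driver: one core only */
    unsigned t = qp->next_thread;              /* adapter (CQS): spread over threads */
    qp->next_thread = (qp->next_thread + 1) % NUM_THREADS;
    return t;
}

int main(void)
{
    struct queue_pair ep = { ENDPOINT_MODE, 2, 0 };
    struct queue_pair nn = { NETWORK_NODE_MODE, 0, 0 };

    for (int i = 0; i < 4; i++)
        printf("endpoint QP -> core %u, network-node QP -> thread %u\n",
               dispatch_cqe(&ep), dispatch_cqe(&nn));
    return 0;
}
```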