Abstract:
A Network Interface device (NI device) coupled to a host computer receives a multi-packet message from a network (for example, the Internet) and DMAs the data portions of the various packets directly into a destination in application memory on the host computer. The address of the destination is determined by supplying a first part of the first packet to an application program such that the application program returns the address of the destination. The address is supplied by the host computer to the NI device so that the NI device can DMA the data portions of the various packets directly into the destination. In some embodiments the NI device is an expansion card added to the host computer, whereas in other embodiments the NI device is a part of the host computer.
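A minimal C sketch of the handshake described above, assuming illustrative names: the host hands the first part of the first packet to the application (app_get_buffer), receives the destination buffer back, and programs that destination into the NI device (ni_set_destination) so later payloads can be DMAed straight into application memory. None of these structures or calls are the patent's actual interfaces.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor the host writes to the NI device so the device
 * can DMA packet payloads directly into application memory. */
struct ni_dest_descriptor {
    uint64_t phys_addr;   /* DMA address of the application buffer     */
    uint32_t length;      /* bytes available in the destination buffer */
    uint32_t connection;  /* identifies the multi-packet message/flow  */
};

/* Supplied by the application layer: examines the first part of the first
 * packet and returns the buffer the full message should land in (assumed). */
extern void *app_get_buffer(const void *first_part, size_t len, size_t *buf_len);

/* Assumed driver call that programs the descriptor into the NI device. */
extern int ni_set_destination(const struct ni_dest_descriptor *desc);

/* Host-side step: given the first part of the first packet, obtain the
 * destination from the application and hand it to the NI device. */
int host_setup_fastpath_destination(uint32_t connection,
                                    const void *first_part, size_t len)
{
    size_t buf_len = 0;
    void *buf = app_get_buffer(first_part, len, &buf_len);
    if (buf == NULL)
        return -1;  /* application declined; fall back to slow path */

    struct ni_dest_descriptor desc = {
        .phys_addr  = (uint64_t)(uintptr_t)buf,  /* sketch: real code would map to a DMA address */
        .length     = (uint32_t)buf_len,
        .connection = connection,
    };
    return ni_set_destination(&desc);
}
```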
Abstract:
A network interface device provides a fast-path that avoids most host TCP and IP protocol processing for most messages. The host retains a fallback slow-path processing capability. In one embodiment, generation of a response to a TCP/IP packet received onto the network interface device is accelerated by determining the TCP and IP source and destination information from the incoming packet, retrieving an appropriate template header, using a finite state machine to fill in the TCP and IP fields in the template header without sequential TCP and IP protocol processing, combining the filled-in template header with a data payload to form a packet, and then outputting the packet from the network interface device by pushing a pointer to the packet onto a transmit queue. A transmit sequencer retrieves the pointer from the transmit queue and causes the corresponding packet to be output from the network interface device.
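The following is a rough C sketch, under assumed names, of the template-header mechanism just described: a per-connection header whose fixed fields are pre-built, a routine that writes only the per-packet TCP and IP fields in one pass (rather than running the layers sequentially), and a push of a packet pointer onto the transmit queue for the transmit sequencer to pop.

```c
#include <stdint.h>

/* Sketch of a per-connection template header: most Ethernet/IP/TCP fields
 * are pre-filled at connection setup, and only per-packet fields are
 * written for each transmission. Layout and names are illustrative. */
struct template_header {
    uint8_t  eth_ip_fixed[34];  /* pre-built Ethernet + fixed IP fields */
    uint16_t ip_total_len;
    uint32_t tcp_seq;
    uint32_t tcp_ack;
    uint16_t tcp_window;
    uint16_t tcp_checksum;
};

struct tx_packet {
    struct template_header hdr;
    const void *payload;
    uint16_t payload_len;
};

/* Assumed transmit-queue primitive: the sequencer later pops this pointer
 * and causes the packet to be output on the wire. */
extern void tx_queue_push(struct tx_packet *pkt);

/* Fill the per-packet fields of the template in one pass -- no layered,
 * sequential IP-then-TCP protocol processing -- then queue the packet. */
void fastpath_send(struct tx_packet *pkt, uint32_t seq, uint32_t ack,
                   uint16_t window, const void *payload, uint16_t len)
{
    pkt->hdr.ip_total_len = (uint16_t)(40 + len); /* 20B IP + 20B TCP + data */
    pkt->hdr.tcp_seq      = seq;
    pkt->hdr.tcp_ack      = ack;
    pkt->hdr.tcp_window   = window;
    pkt->hdr.tcp_checksum = 0;   /* assumed to be computed by hardware on the way out */
    pkt->payload          = payload;
    pkt->payload_len      = len;
    tx_queue_push(pkt);          /* hand a pointer, not a copy, to the sequencer */
}
```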
Abstract:
An intelligent network interface card (INIC) or communication processing device (CPD) works with a host computer for data communication. The device provides a fast-path that avoids protocol processing for most messages, greatly accelerating data transfer and offloading time-intensive processing tasks from the host CPU. Each message is selected for either fast-path or slow-path processing; the host retains a fallback processing capability for messages that do not fit the fast-path criteria, with the device providing assistance, such as validation, even for slow-path messages. A context for a connection is defined that allows the device to move data, free of headers, directly to or from a destination or source in the host. The context can be passed back to the host for message processing by the host. The device contains specialized hardware circuits that are much faster at their specific tasks than a general-purpose CPU. A preferred embodiment includes a trio of pipelined processors devoted to transmit, receive and utility processing, providing full duplex communication for four Fast Ethernet nodes.
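One plausible reading of the fast-path/slow-path selection, sketched in C with assumed structure and field names; the actual criteria belong to the device's implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative summary of a received packet's headers after the device's
 * hardware parsing; names are assumptions, not the patent's structures. */
struct parsed_packet {
    bool     is_ipv4;
    bool     is_tcp;
    bool     has_ip_options;
    bool     has_tcp_options;
    bool     is_fragment;
    bool     checksum_ok;
    uint32_t connection_id;     /* index of a matching context, or NO_CONTEXT */
};

#define NO_CONTEXT 0xFFFFFFFFu

/* Assumed criteria: an error-free TCP/IPv4 packet, unfragmented, without
 * exceptional options, matching a connection context already held by the
 * device. Everything else falls back to the host's slow path (which the
 * device may still have assisted, e.g. by validating checksums). */
static bool eligible_for_fast_path(const struct parsed_packet *p)
{
    return p->is_ipv4 && p->is_tcp && p->checksum_ok &&
           !p->is_fragment && !p->has_ip_options && !p->has_tcp_options &&
           p->connection_id != NO_CONTEXT;
}
```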
Abstract:
A host CPU runs a network protocol processing stack that provides instructions not only to process network messages but also to allocate processing of certain network messages to a specialized network communication device, offloading some of the most time-consuming protocol processing from the host CPU to the network communication device. Because common and time-consuming network processes are allocated to the device, while less time-intensive and more varied processing is retained on the host stack, the network communication device can be relatively simple and cost-effective. The host CPU, operating according to instructions from the stack, and the network communication device together determine whether and to what extent a given message is processed by the host CPU or by the network communication device.
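A hedged sketch of the host-side allocation decision the abstract describes, with all names (maybe_offload, offload_connection, OFFLOAD_THRESHOLD) being assumptions: the stack keeps short or exceptional traffic and hands established, bulk-data connections to the network communication device.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal sketch: the stack keeps the varied, infrequent work (connection
 * setup, exceptions) and hands established bulk-data connections to the
 * device. Types and the offload_connection() call are assumptions. */
struct host_connection {
    uint32_t state;          /* e.g. TCP_ESTABLISHED                     */
    uint64_t bytes_expected; /* size of the pending transfer, if known   */
    bool     has_exceptions; /* out-of-order data, retransmits in flight */
};

#define TCP_ESTABLISHED     4u
#define OFFLOAD_THRESHOLD   (64u * 1024u)   /* assumed cutoff for "time-consuming" transfers */

extern int offload_connection(uint32_t conn_id);  /* assumed device interface */

/* Return nonzero if the connection was handed to the device. */
int maybe_offload(uint32_t conn_id, const struct host_connection *c)
{
    if (c->state != TCP_ESTABLISHED || c->has_exceptions)
        return 0;                         /* host keeps the varied cases  */
    if (c->bytes_expected < OFFLOAD_THRESHOLD)
        return 0;                         /* not worth the handoff cost   */
    return offload_connection(conn_id) == 0;
}
```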
Abstract:
A system for protocol processing in a computer network has a TCP/IP Offload Network Interface Device (TONID) associated with a host computer. The TONID provides a fast-path that avoids protocol processing for most large multi-packet messages, greatly accelerating data communication. The TONID also assists the host for those message packets that are chosen for processing by host software layers. A context for a message is defined that allows DMA controllers of the TONID to move data, free of headers, directly to or from a destination or source in the host. The context is stored in the TONID as a communication control block (CCB) that can be passed back to the host for message processing by the host. The TONID contains specialized hardware circuits that are much faster at their specific tasks than a general-purpose CPU. A preferred embodiment includes a trio of pipelined processors, with separate processors devoted to transmit, receive and management processing, providing full duplex communication for four Fast Ethernet nodes.
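An illustrative C layout of what a communication control block might hold so that the DMA controllers can place header-free data at the right host address; the field names and sizes are assumptions, not the patent's definition.

```c
#include <stdint.h>

/* Illustrative CCB layout: enough TCP/IP state for the device to continue
 * the connection on its own, plus the host destination that its DMA
 * controllers fill with header-free payload data. */
struct ccb {
    /* Connection identity */
    uint32_t local_ip, remote_ip;
    uint16_t local_port, remote_port;

    /* Transport state the device maintains while it owns the CCB */
    uint32_t snd_nxt;          /* next sequence number to send         */
    uint32_t rcv_nxt;          /* next sequence number expected        */
    uint16_t snd_window;
    uint16_t rcv_window;

    /* Where the DMA controllers place received payload in the host */
    uint64_t dest_addr;        /* host buffer (DMA address)            */
    uint32_t dest_len;         /* bytes remaining in that buffer       */

    uint8_t  owner;            /* 0 = host, 1 = device; the CCB can be
                                  passed back to the host for slow-path
                                  processing                            */
};
```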
Abstract:
A system for protocol processing in a computer network has an intelligent network interface card (INIC) or communication processing device (CPD) associated with a host computer. The CPD provides a fast-path that avoids protocol processing for most large multi-packet messages, greatly accelerating data communication. The CPD also assists the host CPU for those message packets that are chosen for processing by host software layers. A context for a message is defined that allows DMA controllers of the CPD to move data, free of headers, directly to or from a destination or source in the host. The context can be stored as a communication control block (CCB) that is controlled by either the CPD or the host CPU. The CPD contains specialized hardware circuits that process media access control, network and transport layer headers of a packet received from the network, saving the host CPU from that processing for fast-path messages.
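A small sketch of the ownership handoff implied here, assuming a CCB like the one sketched above and a hypothetical dma_to_host() primitive: when a connection falls off the fast path, the CPD copies the block back to host memory and marks the host CPU as its controller.

```c
#include <stdint.h>

enum ccb_owner { CCB_OWNED_BY_HOST = 0, CCB_OWNED_BY_CPD = 1 };

struct ccb;                                   /* as sketched above (illustrative) */
extern void dma_to_host(uint64_t host_addr, const void *src, uint32_t len);

struct ccb_slot {
    struct ccb *block;                        /* device-resident copy of the CCB  */
    uint8_t     owner;                        /* enum ccb_owner                   */
    uint64_t    host_copy_addr;               /* where the host keeps its copy    */
    uint32_t    ccb_size;
};

/* Flush the connection state back to the host and mark the host as owner,
 * so the host CPU takes over protocol processing for that connection. */
void cpd_release_ccb(struct ccb_slot *slot)
{
    dma_to_host(slot->host_copy_addr, slot->block, slot->ccb_size);
    slot->owner = CCB_OWNED_BY_HOST;
}
```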
Abstract:
A system for protocol processing in a computer network has an intelligent network interface card (INIC) or communication processing device (CPD) associated with a host computer. The INIC provides a fast-path that avoids protocol processing for most large multi-packet messages, greatly accelerating data communication. The INIC also assists the host for those message packets that are chosen for processing by host software layers. A context for a message is defined that allows DMA controllers of the INIC to move data, free of headers, directly to or from a destination or source in the host. The context is stored in the INIC as a communication control block (CCB) that can be passed back to the host for message processing by the host. The INIC contains specialized hardware circuits that are much faster at their specific tasks than a general-purpose CPU. A preferred embodiment includes a trio of pipelined processors, with separate processors devoted to transmit, receive and management processing, providing full duplex communication for four Fast Ethernet nodes.
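To illustrate the receive side in the same vein, a sketch (with assumed names such as fastpath_receive and dma_write) of placing only the payload, free of headers, at the next offset of the host destination tracked for the connection, and bailing out to the slow path on out-of-order data.

```c
#include <stdint.h>

/* Per-connection receive state the device tracks for fast-path placement;
 * names are illustrative and dma_write() is an assumed DMA primitive. */
struct rx_state {
    uint64_t dest_addr;    /* host destination for this message        */
    uint32_t dest_offset;  /* bytes already placed                     */
    uint32_t rcv_nxt;      /* next TCP sequence number expected        */
};

extern void dma_write(uint64_t host_addr, const void *payload, uint32_t len);

/* Returns 0 on success, -1 if the packet is out of order and must be
 * handed to the host's slow path. Headers were already parsed and
 * validated by the device, so only payload is moved. */
int fastpath_receive(struct rx_state *rx, uint32_t seq,
                     const void *payload, uint32_t len)
{
    if (seq != rx->rcv_nxt)
        return -1;                               /* slow path takes over   */

    dma_write(rx->dest_addr + rx->dest_offset, payload, len);
    rx->dest_offset += len;                      /* data, free of headers  */
    rx->rcv_nxt     += len;
    return 0;
}
```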
Abstract:
A network interface device has a fast-path ACK generating and transmitting mechanism. ACKs are generated using a finite state machine (FSM). The FSM retrieves a template header and fills in TCP and IP fields in the template. The FSM is not a stack, but rather fills in the TCP and IP fields without performing transport layer processing and network layer processing sequentially as separate tasks. The filled-in template is placed into a buffer and a pointer to the buffer is pushed onto a high-priority transmit queue. Pointers for ordinary data packets are pushed onto a low-priority transmit queue. A transmit sequencer outputs a packet by popping a transmit queue, obtaining a pointer, and causing information pointed to by the pointer to be output from the network interface device as a packet. The sequencer pops the high-priority queue in preference to the low-priority queue, thereby accelerating ACK generation and transmission.
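A compact sketch, with assumed queue and function names, of the two-priority transmit scheme: ACK buffers go on a high-priority queue, data packets on a low-priority queue, and the sequencer always pops the high-priority queue first.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative ring-buffer queue of packet pointers; the real transmit
 * queues are hardware structures, so this is only a model. */
#define QUEUE_DEPTH 64

struct ptr_queue {
    void    *slots[QUEUE_DEPTH];
    uint32_t head, tail;            /* head == tail means empty */
};

static int queue_pop(struct ptr_queue *q, void **out)
{
    if (q->head == q->tail)
        return 0;
    *out = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    return 1;
}

static struct ptr_queue ack_queue;   /* high priority: filled-in ACK templates */
static struct ptr_queue data_queue;  /* low priority: ordinary data packets    */

/* One scheduling step of the transmit sequencer: returns the next packet
 * pointer to put on the wire, preferring pending ACKs. */
void *tx_sequencer_next(void)
{
    void *pkt;
    if (queue_pop(&ack_queue, &pkt))
        return pkt;                  /* ACKs preempt data, accelerating ACK transmission */
    if (queue_pop(&data_queue, &pkt))
        return pkt;
    return NULL;                     /* nothing queued */
}
```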