Abstract:
A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates send work queue entries (SWQEs) and writes them to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry to the SWQ and subsequently reads this entry, which contains pointers to the data to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). The flow control between the ULP and the IPSOE is credit based. The passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
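A minimal C sketch of the posting step described above, assuming a ring-buffer SWQ and a memory-mapped doorbell register; the names (swqe_t, swq_t, ulp_post_send, the doorbell field) are hypothetical illustrations, not the patent's actual interface.

#include <stdint.h>

#define SWQ_DEPTH 64

typedef struct {
    uint64_t data_addr;   /* pointer to the data to be transmitted */
    uint32_t data_len;    /* length of that data segment           */
} swqe_t;

typedef struct {
    swqe_t entries[SWQ_DEPTH];
    uint32_t head;                 /* next slot the ULP writes              */
    volatile uint32_t *doorbell;   /* MMIO register that notifies the IPSOE */
} swq_t;

/* ULP side: build an SWQE pointing at the payload, then ring the
 * doorbell so the IPSOE knows a new entry is ready to be read. */
void ulp_post_send(swq_t *swq, uint64_t addr, uint32_t len)
{
    swqe_t *e = &swq->entries[swq->head % SWQ_DEPTH];
    e->data_addr = addr;
    e->data_len  = len;
    swq->head++;
    *swq->doorbell = swq->head;    /* IPSOE reads the entry and transmits */
}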
Abstract:
A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. As consumers initiate send operations, send work queue entries (SWQEs) are created by an Upper Layer Protocol (ULP) and written to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry to the SWQ and subsequently reads this entry, which contains pointers to the data to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP processes the entry and removes it from the CQ, freeing up space in both the SWQ and the CQ. The number of entries available in the SWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ, so as not to overwrite the CQ. The flow control between the ULP and the IPSOE is credit based. The passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
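The credit accounting might look like the following C sketch, assuming an SWQ and CQ of equal depth; the counters and function names are invented. The key property from the abstract is that retiring a CQE frees a slot in both queues, so returning the CQ credit is the only explicit flow-control exchange between the two sides.

#include <stdint.h>

typedef struct {
    uint32_t swq_free;    /* ULP-side count of free SWQ slots  */
    uint32_t cq_credits;  /* IPSOE-side count of free CQ slots */
} flow_ctl_t;

/* ULP: post only while an SWQ slot is free, so no valid SWQE is
 * overwritten. Returns -1 to signal back-pressure. */
int ulp_try_post(flow_ctl_t *fc)
{
    if (fc->swq_free == 0)
        return -1;
    fc->swq_free--;
    return 0;
}

/* IPSOE: after transmit and acknowledgment, write a CQE only while a
 * CQ credit is held, so the CQ is never overwritten either. */
int ipsoe_try_complete(flow_ctl_t *fc)
{
    if (fc->cq_credits == 0)
        return -1;
    fc->cq_credits--;
    return 0;
}

/* ULP: retiring a processed CQE frees one slot in both queues; passing
 * the CQ credit back is the single explicit flow-control message. */
void ulp_retire_cqe(flow_ctl_t *fc)
{
    fc->swq_free++;
    fc->cq_credits++;
}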
Abstract:
A mechanism for offloading the management of receive queues in a split (e.g., split socket, split iSCSI, split DAFS) stack environment, including efficient queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates receive work queues and completion queues that are utilized by an Internet Protocol Suite Offload Engine (IPSOE) and the ULP to transfer information and carry out receive operations. As consumers initiate receive operations, receive work queue entries (RWQEs) are created by the ULP and written to the receive work queue (RWQ). The IPSOE is notified of a new entry to the RWQ and subsequently reads this entry, which contains pointers to the data to be received. After the data is received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP processes the entry and removes it from the CQ, freeing up space in both the RWQ and the CQ. The number of entries available in the RWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ, so as not to overwrite the CQ.
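A hypothetical C sketch of the receive-side flow under the same ring-buffer assumptions; the structure and function names are invented. The ULP checks the free RWQ entries before posting, and retiring a CQE frees a slot in both queues.

#include <stdint.h>

#define RWQ_DEPTH 64

typedef struct { uint64_t buf_addr; uint32_t buf_len; } rwqe_t;
typedef struct { uint32_t wqe_index; uint32_t byte_count; } cqe_t;

typedef struct {
    rwqe_t   rwq[RWQ_DEPTH];
    cqe_t    cq[RWQ_DEPTH];
    uint32_t rwq_head, rwq_tail;   /* ULP writes head, IPSOE consumes tail */
    uint32_t cq_head,  cq_tail;    /* IPSOE writes head, ULP consumes tail */
} split_rq_t;

/* ULP: post a receive buffer, checking available entries first so no
 * valid RWQE is overwritten. */
int ulp_post_recv(split_rq_t *q, uint64_t addr, uint32_t len)
{
    if (q->rwq_head - q->rwq_tail == RWQ_DEPTH)
        return -1;                       /* RWQ full */
    rwqe_t *e = &q->rwq[q->rwq_head % RWQ_DEPTH];
    e->buf_addr = addr;
    e->buf_len  = len;
    q->rwq_head++;
    return 0;
}

/* ULP: process one completion, freeing a slot in both the RWQ and CQ. */
int ulp_poll_cq(split_rq_t *q, cqe_t *out)
{
    if (q->cq_tail == q->cq_head)
        return -1;                       /* nothing completed yet */
    *out = q->cq[q->cq_tail % RWQ_DEPTH];
    q->cq_tail++;                        /* CQ slot free again    */
    q->rwq_tail++;                       /* matching RWQE retired */
    return 0;
}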
Abstract:
Mechanisms are provided for a network processor comprising a parser, the parser being operable to work in a normal operation mode or in a repeat operation mode. In normal operation mode the parser loads and executes at least one rule in a first and a second working cycle, respectively; in repeat operation mode it repeatedly executes a repeat-instruction, the execution of each repeat corresponding to one working cycle.
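The cycle-cost difference between the two modes can be illustrated with the following C sketch; the two-cycle load/execute split is taken from the abstract, while the types and function are invented for illustration.

#include <stdint.h>

typedef struct { int opcode; int operand; } rule_t;

typedef enum { MODE_NORMAL, MODE_REPEAT } parser_mode_t;

/* Returns the number of working cycles consumed. */
uint32_t parser_run(parser_mode_t mode, const rule_t *rules,
                    uint32_t n_rules, uint32_t n_repeats)
{
    uint32_t cycles = 0;
    if (mode == MODE_NORMAL) {
        for (uint32_t i = 0; i < n_rules; i++) {
            (void)rules[i];
            cycles++;            /* first cycle: load the rule     */
            cycles++;            /* second cycle: execute the rule */
        }
    } else {
        /* Repeat mode: the repeat-instruction is already resident, so
         * each execution corresponds to exactly one working cycle. */
        for (uint32_t r = 0; r < n_repeats; r++)
            cycles++;
    }
    return cycles;
}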
Abstract:
A pipeline configuration is described for use in network traffic management for the hardware scheduling of events arranged in a hierarchical linkage. The configuration reduces costs by minimizing the use of external SRAM memory devices. This results in some external memory devices being shared by different types of control blocks, such as flow queue control blocks, frame control blocks and hierarchy control blocks. Both SRAM and DRAM memory devices are used, depending on whether the control block is Read-Modify-Write or read-only at enqueue and dequeue, or Read-Modify-Write solely at dequeue. The scheduler utilizes time-based calendars and weighted fair queueing calendars in the egress calendar design. Control blocks that are accessed infrequently are stored in DRAM memory, while those accessed frequently are stored in SRAM.
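One way to read the placement rule is as a mapping from access pattern to memory type, sketched below in C. The enums are invented, and treating Read-Modify-Write at both enqueue and dequeue as the frequently accessed case is an interpretation of the abstract, not a claim from the patent.

typedef enum { ACC_RMW_ENQ_DEQ, ACC_READ_ONLY, ACC_RMW_DEQ_ONLY } cb_access_t;
typedef enum { MEM_SRAM, MEM_DRAM } mem_t;

/* Frequently accessed control blocks go to fast SRAM; infrequently
 * accessed ones can share cheaper external DRAM. */
mem_t place_control_block(cb_access_t access)
{
    switch (access) {
    case ACC_RMW_ENQ_DEQ:   /* touched at both enqueue and dequeue */
        return MEM_SRAM;
    case ACC_READ_ONLY:     /* read-only at enqueue and dequeue    */
    case ACC_RMW_DEQ_ONLY:  /* modified solely at dequeue          */
    default:
        return MEM_DRAM;
    }
}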
Abstract:
A method and structure are provided for buffering data packets having a header and a remainder in a network processor system. The network processor system has a processor on a chip and at least one buffer on the chip. Each buffer on the chip is configured to buffer the headers of packets in a preselected order before execution in the processor, while the remainder of each packet is stored in an external buffer apart from the chip. The method comprises utilizing the header information to identify the location and extent of the remainder of the packet. The entire selected packet is stored in the external buffer when the on-chip buffer for its header is full, and only the header of a selected packet stored in the external buffer is moved to the buffer on the chip when the buffer on the chip has space therefor.
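A C sketch of the split-buffering policy under stated assumptions: a fixed-size on-chip header FIFO, invented structure names, and a header that records the location and extent of the off-chip remainder.

#include <stdint.h>

#define HDR_SLOTS 32
#define HDR_SIZE  64

typedef struct {
    uint8_t  bytes[HDR_SIZE];
    uint64_t remainder_addr;  /* where the rest of the packet sits off-chip */
    uint32_t remainder_len;   /* extent of the remainder                    */
} header_t;

typedef struct {
    header_t slots[HDR_SLOTS];
    uint32_t head, tail;      /* FIFO: preserves the preselected order      */
} onchip_hdr_buf_t;

/* Try to admit a header on chip; on failure the caller stores the entire
 * packet, header included, in the external buffer instead. */
int admit_header(onchip_hdr_buf_t *b, const header_t *h)
{
    if (b->head - b->tail == HDR_SLOTS)
        return -1;                     /* on-chip buffer full */
    b->slots[b->head % HDR_SLOTS] = *h;
    b->head++;
    return 0;
}

/* When a slot frees, move only the header of an externally parked packet
 * back on chip; the remainder stays in external memory. */
int reclaim_header(onchip_hdr_buf_t *b, const header_t *parked_hdr)
{
    return admit_header(b, parked_hdr);
}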
Abstract:
A mechanism is provided for sharing a communication path used by a parser (the parser path) in a network adapter of a network processor for sending requests for a process to be executed by an external coprocessor. The parser path is shared by processors of the network processor (the software path) to send requests to the external coprocessor. For the software path, the mechanism uses a request mailbox, comprising a control address and a data field accessed by MMIO, for sending two types of messages: one message type to read or write resources and one message type to trigger an external process in the coprocessor. A response mailbox comprising a data field and a flag field receives the response from the external coprocessor. The processors of the network processor poll the flag until it is set and then read the coprocessor result from the data field.
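The mailbox exchange could look like the following C sketch; the register layout, bit packing, and message-type encodings are hypothetical, and only the control/data request pair and the flag-polled response follow the abstract.

#include <stdint.h>

#define MSG_ACCESS_RESOURCE 0u  /* read or write a coprocessor resource */
#define MSG_TRIGGER_PROCESS 1u  /* trigger an external process          */

typedef struct {
    volatile uint32_t control;  /* message type plus target address     */
    volatile uint64_t data;     /* request payload                      */
} req_mailbox_t;

typedef struct {
    volatile uint64_t data;     /* coprocessor result                   */
    volatile uint32_t flag;     /* set by coprocessor when data valid   */
} rsp_mailbox_t;

/* Software path: send one request through the shared parser path, then
 * spin on the response flag until the coprocessor answers. */
uint64_t coproc_request(req_mailbox_t *req, rsp_mailbox_t *rsp,
                        uint32_t type, uint32_t addr, uint64_t payload)
{
    rsp->flag = 0;
    req->data    = payload;              /* MMIO store: data field           */
    req->control = (type << 28) | addr;  /* MMIO store: control address;
                                            bit packing is invented          */
    while (rsp->flag == 0)
        ;                                /* poll the flag until it is set    */
    return rsp->data;                    /* result arrives in the data field */
}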
Abstract:
Assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device, is provided. In a given processing period, sinks that are available to receive work are identified, and sources qualified to send work to the available sinks are determined, taking into account any assignment constraints. A single source is selected from the overlap of the qualified sources and the sources having work available. This selection may be made using a hierarchical source scheduler that processes subsets of the supported sources simultaneously in parallel. A sink to which work from the selected source may be assigned is then selected from the available sinks qualified to receive work from the selected source.
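A bitmask-based C sketch of one scheduling period; the 64-wide masks are invented, and the first-set-bit choice is a stand-in for the hierarchical selection policy the abstract describes.

#include <stdint.h>

typedef struct {
    uint64_t sink_available;   /* sinks ready to receive work in this period */
    uint64_t source_has_work;  /* sources with queued packets                */
    uint64_t qualified[64];    /* qualified[s] = sinks source s may feed,
                                  encoding the assignment constraints        */
} sched_state_t;

static int first_set(uint64_t m) { return m ? __builtin_ctzll(m) : -1; }

/* One processing period: returns 0 and fills (*src, *snk) on success. */
int schedule_once(const sched_state_t *st, int *src, int *snk)
{
    /* Sources qualified to send to at least one available sink. */
    uint64_t qualified_sources = 0;
    for (int s = 0; s < 64; s++)
        if (st->qualified[s] & st->sink_available)
            qualified_sources |= 1ULL << s;

    /* Overlap of qualified sources and sources holding work. */
    uint64_t eligible = qualified_sources & st->source_has_work;
    *src = first_set(eligible);
    if (*src < 0)
        return -1;

    /* Pick an available sink qualified for the selected source. */
    *snk = first_set(st->qualified[*src] & st->sink_available);
    return (*snk >= 0) ? 0 : -1;
}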