Abstract:
A system and computer readable medium for oversubscribing bandwidth in a communication network are disclosed. The system and computer readable medium include policing a first data flow and outputting a first output data flow from a first meter of an oversubscription module, in relation to a first Committed Information Rate (CIR) and a first Peak Information Rate (PIR); policing a second data flow and outputting a second output data flow from a second meter in relation to a second CIR and a second PIR; and policing an aggregated output data flow of the first output data flow and the second output data flow through a third meter of the oversubscription module, where the aggregated output data flow is policed in relation to a third CIR and a third PIR.
Abstract:
A method for oversubscribing bandwidth in a communication network is disclosed. The method includes policing a first data flow and outputting a first output data flow from a first meter of an oversubscription module, in relation to a first Committed Information Rate (CIR) and a first Peak Information Rate (PIR); policing a second data flow and outputting a second output data flow from a second meter in relation to a second CIR and a second PIR; and policing an aggregated output data flow of the first output data flow and the second output data flow through a third meter of the oversubscription module, where the aggregated output data flow is policed in relation to a third CIR and a third PIR.
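The three-meter policing described in the system and method abstracts above can be sketched in a few lines of C. This is a minimal model, assuming each meter is a two-rate token-bucket policer with a committed bucket (CIR/CBS) and a peak bucket (PIR/PBS); the meter_t fields, the color names, and the burst-size parameters are illustrative assumptions, not the claimed implementation. The point of the cascade is that the per-flow CIRs and PIRs may sum to more than the third meter's rates, which is what makes the bandwidth oversubscribed.

    #include <stdint.h>

    /* Illustrative two-rate meter: buckets refill at CIR and PIR (bytes/s),
     * bounded by committed and peak burst sizes (bytes). */
    typedef enum { GREEN, YELLOW, RED } color_t;

    typedef struct {
        double cir, pir;            /* committed / peak information rates */
        double cbs, pbs;            /* committed / peak burst sizes       */
        double c_tokens, p_tokens;  /* current bucket fill levels         */
        double last_time;           /* time of the previous packet, s     */
    } meter_t;

    static void meter_refill(meter_t *m, double now)
    {
        double dt = now - m->last_time;
        m->last_time = now;
        m->c_tokens += m->cir * dt; if (m->c_tokens > m->cbs) m->c_tokens = m->cbs;
        m->p_tokens += m->pir * dt; if (m->p_tokens > m->pbs) m->p_tokens = m->pbs;
    }

    /* Police one packet: GREEN within CIR, YELLOW within PIR only, RED otherwise. */
    static color_t meter_police(meter_t *m, uint32_t len, double now)
    {
        meter_refill(m, now);
        if (m->p_tokens < len) return RED;      /* exceeds the peak rate      */
        m->p_tokens -= len;
        if (m->c_tokens < len) return YELLOW;   /* exceeds the committed rate */
        m->c_tokens -= len;
        return GREEN;
    }

    /* Oversubscription: a packet that survives its per-flow meter (first or
     * second meter) is re-policed by the shared third meter on the aggregate. */
    color_t oversub_police(meter_t *flow_meter, meter_t *agg_meter,
                           uint32_t len, double now)
    {
        color_t c = meter_police(flow_meter, len, now);
        if (c == RED)
            return RED;                         /* dropped before aggregation */
        return meter_police(agg_meter, len, now);
    }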
Abstract:
A mechanism is provided for sharing the communication path used by a parser (the parser path) in a network adapter of a network processor for sending requests for a process to be executed by an external coprocessor. The parser path is shared by processors of the network processor (the software path) to send requests to the external coprocessor. For the software path, the mechanism uses a request mailbox, comprising a control address and a data field accessed by MMIO, for sending two types of messages: one message type to read or write resources and one message type to trigger an external process in the coprocessor. A response mailbox, comprising a data field and a flag field, receives the response from the external coprocessor. The other processors of the network processor poll the flag until it is set and then obtain the coprocessor result from the data field.
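A minimal sketch of that software-path exchange, assuming the request and response mailboxes appear to the processors as memory-mapped register pairs; the register layout, the message-type codes (MSG_RESOURCE_ACCESS, MSG_TRIGGER_PROCESS), and the single done flag are illustrative assumptions rather than the adapter's actual encoding.

    #include <stdint.h>

    /* Hypothetical register layout for the software-path mailboxes; the real
     * control-word encoding and offsets are implementation specific. */
    typedef volatile struct {
        uint64_t control;   /* written last: message type + target resource address */
        uint64_t data;      /* request payload (resource value or process argument) */
    } req_mailbox_t;

    typedef volatile struct {
        uint64_t data;      /* coprocessor result                                   */
        uint64_t flags;     /* bit 0 set by the coprocessor when data is valid      */
    } rsp_mailbox_t;

    #define MSG_RESOURCE_ACCESS  0x1ULL   /* read or write a resource   */
    #define MSG_TRIGGER_PROCESS  0x2ULL   /* start an external process  */
    #define RSP_DONE             0x1ULL

    /* Send a request over the shared parser path via MMIO and busy-poll the
     * response flag until the coprocessor result arrives. */
    uint64_t coproc_request(req_mailbox_t *req, rsp_mailbox_t *rsp,
                            uint64_t msg_type, uint64_t payload)
    {
        rsp->flags = 0;                    /* clear the completion flag           */
        req->data = payload;               /* payload first ...                   */
        req->control = msg_type;           /* ... control write triggers the send */

        while ((rsp->flags & RSP_DONE) == 0)
            ;                              /* poll until the coprocessor responds */

        return rsp->data;                  /* coprocessor result                  */
    }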
Abstract:
A mechanism is provided for merging, in a network processor, results from a parser with results from an external coprocessor that provides processing support requested by the parser. The mechanism enqueues in a result queue both parser results that need to be merged with a coprocessor result and parser results that do not. An additional queue is used to enqueue the addresses of the result queue where the parser results are stored. The result from the coprocessor is received in a simple response register. The coprocessor result is read from the response register by the result queue management logic and merged with the corresponding incomplete parser result read from the result queue at the address enqueued in the additional queue.
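A rough sketch of that result-queue management, assuming both the result queue and the additional address queue are simple circular buffers and the coprocessor response lands in a single register; the result_entry_t layout, the queue depth, and the function names are illustrative assumptions.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define RESULT_Q_DEPTH 64

    /* One parser result; pending marks entries still waiting for a coprocessor result. */
    typedef struct {
        uint64_t parser_result;
        uint64_t coproc_result;   /* filled in by the merge step when pending */
        bool     pending;
    } result_entry_t;

    typedef struct {
        result_entry_t q[RESULT_Q_DEPTH];
        size_t head, tail;

        /* Additional queue: addresses (indices) of result-queue slots that need merging. */
        size_t pending_addr[RESULT_Q_DEPTH];
        size_t pa_head, pa_tail;
    } result_queue_t;

    /* Enqueue a parser result; if it awaits a coprocessor result, remember its slot. */
    void enqueue_parser_result(result_queue_t *rq, uint64_t res, bool needs_merge)
    {
        size_t slot = rq->tail++ % RESULT_Q_DEPTH;
        rq->q[slot] = (result_entry_t){ .parser_result = res, .pending = needs_merge };
        if (needs_merge)
            rq->pending_addr[rq->pa_tail++ % RESULT_Q_DEPTH] = slot;
    }

    /* Merge step: the queue-management logic reads the response register and
     * completes the oldest incomplete entry recorded in the additional queue. */
    void merge_coproc_response(result_queue_t *rq, volatile uint64_t *response_reg)
    {
        size_t slot = rq->pending_addr[rq->pa_head++ % RESULT_Q_DEPTH];
        rq->q[slot].coproc_result = *response_reg;
        rq->q[slot].pending = false;
    }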
Abstract:
A host Ethernet adapter (HEA) and a method of managing network communications are provided. The HEA includes a host interface configured for communication with a multi-core processor over a processor bus. The host interface comprises a receive processing element including a receive processor, a receive buffer, and a scheduler for dispatching packets from the receive buffer to the receive processor; a send processing element including a send processor and a send buffer; and a completion queue scheduler (CQS) for dispatching completion queue elements (CQEs) from the head of the completion queue (CQ) to threads of the multi-core processor in a network node mode. The method comprises operatively coupling an Ethernet adapter to a multi-core processor system via a processor bus; selectively assigning a first plurality of packets to a first queue pair for servicing in an endpoint mode; running a device driver on the multi-core processor system, the device driver controlling the servicing of the first queue pair by dispatching the first plurality of packets to only one processor core of the multi-core processor system; selectively assigning a second plurality of packets to a second queue pair for servicing in a network node mode; and the Ethernet adapter controlling the servicing of the second queue pair by dispatching the second plurality of packets to multiple processor threads.
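The two servicing modes reduce to a per-queue-pair dispatch decision, sketched below. The CQE and queue-pair types, the delivery hooks, and the thread-selection policy are illustrative assumptions, not the HEA's actual interfaces; in particular, the abstract does not say how the completion queue scheduler picks a thread in network node mode.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical types; real CQE and queue-pair layouts are adapter specific. */
    typedef struct { uint32_t qp_id; uint64_t packet_ref; } cqe_t;

    typedef enum { QP_ENDPOINT_MODE, QP_NETWORK_NODE_MODE } qp_mode_t;

    typedef struct {
        uint32_t  id;
        qp_mode_t mode;
        unsigned  owner_core;   /* endpoint mode: the single core serviced by the driver */
        unsigned  num_threads;  /* network node mode: threads the CQS fans CQEs out to   */
    } queue_pair_t;

    /* Stand-in delivery hooks (the real hand-off happens over the processor bus). */
    static void deliver_to_core(unsigned core, const cqe_t *cqe)
    {
        printf("driver: CQE for QP %u -> core %u\n", cqe->qp_id, core);
    }

    static void deliver_to_thread(unsigned thread, const cqe_t *cqe)
    {
        printf("CQS: CQE for QP %u -> thread %u\n", cqe->qp_id, thread);
    }

    /* Endpoint-mode QPs are serviced by one core via the device driver;
     * network-node-mode QPs have their CQEs spread across processor threads. */
    void dispatch_cqe(const queue_pair_t *qp, const cqe_t *cqe)
    {
        if (qp->mode == QP_ENDPOINT_MODE)
            deliver_to_core(qp->owner_core, cqe);
        else
            /* Simple spreading policy for illustration; the HEA's real policy is
             * not specified by the abstract. */
            deliver_to_thread((unsigned)(cqe->packet_ref % qp->num_threads), cqe);
    }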
Abstract:
Mechanisms for processing communications between data processing devices are provided. With the mechanisms of the illustrative embodiments, a set of techniques is provided that enables sustaining media speed by distributing transmit-side and receive-side processing over multiple processing cores. In addition, these techniques enable designing multi-threaded network interface controller (NIC) hardware that efficiently hides the latency of direct memory access (DMA) operations associated with data packet transfers over an input/output (I/O) bus. Multiple processing cores may operate concurrently, using separate instances of a communication protocol stack and device drivers to process data packets for transmission, with separate hardware-implemented send queue managers in a network adapter processing these data packets for transmission. Multiple hardware receive packet processors in the network adapter may be used, along with a flow classification engine, to route received data packets to appropriate receive queues and processing cores for processing.
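The transmit side of this scheme can be sketched as one private send descriptor ring per core, filled by that core's driver instance and drained by a hardware send queue manager, so no cross-core locking is needed on the fast path. The descriptor fields, the queue depth, and the ring discipline below are illustrative assumptions, not a real NIC descriptor format.

    #include <stdint.h>

    #define NUM_CORES      4      /* assumed: one send queue per processing core */
    #define SEND_Q_DEPTH 256

    /* Descriptor handed from a core's driver instance to its hardware send
     * queue manager; the fields are illustrative. */
    typedef struct {
        uint64_t buf_addr;        /* DMA address of the packet buffer */
        uint32_t len;
    } send_desc_t;

    typedef struct {
        send_desc_t ring[SEND_Q_DEPTH];
        volatile uint32_t head;   /* consumed by the hardware send queue manager */
        volatile uint32_t tail;   /* produced by the core's device driver        */
    } send_queue_t;

    static send_queue_t send_queues[NUM_CORES];   /* one per core, no shared locking */

    /* Driver instance on core posts a packet to its private send queue; because
     * each core owns its queue, no cross-core synchronization is required. */
    int post_tx(unsigned core, uint64_t buf_addr, uint32_t len)
    {
        send_queue_t *q = &send_queues[core];
        uint32_t next = (q->tail + 1) % SEND_Q_DEPTH;
        if (next == q->head)
            return -1;                             /* queue full, caller retries   */
        q->ring[q->tail] = (send_desc_t){ .buf_addr = buf_addr, .len = len };
        q->tail = next;                            /* hardware manager picks it up */
        return 0;
    }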
Abstract:
A communication network used to link information handling systems together utilizes a switching network to transmit data among senders and receivers. Each individual packet of data is described and controlled by an FCB. The bandwidth associated with storing and distributing data is optimized by chaining the data packets in different types of queues, or by operating without chaining outside a queue. When a frame is in an output queue, the third word of the FCB contains an RFCBA for egress of the frame to a line port and an MCID for ingress from an output queue to a switch port. The RFCBA and the MCID have multicast capabilities. The format does not require a third word when a frame is in an input queue.
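A hypothetical rendering of the FCB words named above; only the existence of a third word in output queues and the RFCBA/MCID names come from the abstract, while the word widths, the contents of the first two words, and the union-style interpretation are assumptions made for illustration.

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative only: field names (RFCBA, MCID) and the conditional third
     * word follow the abstract; widths and other word contents are assumed. */
    typedef struct {
        uint32_t word0;            /* assumed: chaining / control information        */
        uint32_t word1;            /* assumed: frame buffer reference                */
        bool     has_word2;        /* third word present only while in an output queue */
        union {
            uint32_t rfcba;        /* egress of the frame to a line port             */
            uint32_t mcid;         /* ingress from an output queue to a switch port  */
        } word2;
    } fcb_t;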
Abstract:
A system and method for multicore processing of communications between data processing devices are provided. With the mechanisms of the illustrative embodiments, a set of techniques is provided that enables sustaining media speed by distributing transmit-side and receive-side processing over multiple processing cores. In addition, these techniques enable designing multi-threaded network interface controller (NIC) hardware that efficiently hides the latency of direct memory access (DMA) operations associated with data packet transfers over an input/output (I/O) bus. Multiple processing cores may operate concurrently, using separate instances of a communication protocol stack and device drivers to process data packets for transmission, with separate hardware-implemented send queue managers in a network adapter processing these data packets for transmission. Multiple hardware receive packet processors in the network adapter may be used, along with a flow classification engine, to route received data packets to appropriate receive queues and processing cores for processing.
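The receive side of the two multicore NIC abstracts can be sketched as a flow classification step that hashes a packet's flow key to pick a per-core receive queue, so all packets of a flow stay on one core and arrive in order. The 5-tuple key, the FNV-style hash, and the queue count are illustrative stand-ins for the hardware flow classification engine.

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_RX_QUEUES 8   /* assumed: one receive queue per processing core */

    /* Minimal 5-tuple; the real flow classification engine may key on more fields. */
    typedef struct {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  protocol;
    } flow_key_t;

    /* FNV-1a over a byte range, used here as a toy stand-in for the hardware hash. */
    static uint32_t fnv1a_bytes(uint32_t h, const void *data, size_t len)
    {
        const uint8_t *p = data;
        while (len--)
            h = (h ^ *p++) * 16777619u;
        return h;
    }

    static uint32_t flow_hash(const flow_key_t *k)
    {
        uint32_t h = 2166136261u;
        h = fnv1a_bytes(h, &k->src_ip,   sizeof k->src_ip);
        h = fnv1a_bytes(h, &k->dst_ip,   sizeof k->dst_ip);
        h = fnv1a_bytes(h, &k->src_port, sizeof k->src_port);
        h = fnv1a_bytes(h, &k->dst_port, sizeof k->dst_port);
        h = fnv1a_bytes(h, &k->protocol, sizeof k->protocol);
        return h;
    }

    /* Route a received packet to a receive queue; packets of the same flow always
     * land on the same queue, so one core sees a flow's packets in order. */
    unsigned classify_to_rx_queue(const flow_key_t *k)
    {
        return flow_hash(k) % NUM_RX_QUEUES;
    }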