Abstract:
The present invention keeps track of available elements in a list of elements available to a given device for processing using an index and count mechanism. The index indicates a starting element in the list of elements that is available to the given device for processing, and the count indicates the number of subsequent elements, from the starting element, that are available to the given device for processing. A first index register and a second index register alternately keep track of the last available element in the list of elements until that last available element is the very last element in the list. The first index register and the second index register also alternately keep track of the current element that is being processed by the given device until the very last element in the list has been processed. By alternating between the first and second index registers for tracking the last available element, and likewise for tracking the currently processed element, processing through multiple cycles of the list of elements may be tracked in a simple manner. The present invention may be used to particular advantage when the given device is a computer network peripheral device that processes descriptors within a shared memory of a host computer system.
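By way of illustration only, the following C sketch models the alternating index registers described above. The list size, structure names, and field layout are assumptions made for this sketch and are not drawn from the abstract itself.

#include <stdint.h>

#define LIST_SIZE 256u        /* illustrative number of elements in the list */

/* Hypothetical model of a pair of alternating index registers.  One
 * register tracks its target (the last available element, or the element
 * currently being processed) for the current pass through the list; when
 * the very last element of the list is reached, tracking alternates to
 * the other register for the next pass. */
struct alt_index {
    uint32_t reg[2];          /* first and second index registers        */
    unsigned sel;             /* which register is active for this pass  */
};

void advance(struct alt_index *a, uint32_t n)
{
    a->reg[a->sel] += n;
    if (a->reg[a->sel] >= LIST_SIZE - 1u) {   /* very last element reached */
        a->reg[a->sel] = LIST_SIZE - 1u;
        a->reg[a->sel ^ 1u] = 0;              /* other register starts the next pass */
        a->sel ^= 1u;                         /* alternate index registers */
    }
}

/* The index-and-count pair seen by the device: 'index' is the starting
 * available element, 'count' the number of elements available from it. */
struct avail_window {
    uint32_t index;
    uint32_t count;
    struct alt_index last_avail;   /* tracks the last available element   */
    struct alt_index current;      /* tracks the currently processed one  */
};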
Abstract:
Method of managing interaction between a host subsystem and a peripheral device. Roughly described, the peripheral device writes an event into an individual event queue, and in conjunction therewith, also writes a wakeup event into an intermediary event queue. The wakeup event identifies the individual event queue. The host subsystem, in response to retrieval of the wakeup event from the intermediary event queue, activates an individual event handler to consume events from the individual event queue.
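As an illustration of the wakeup-event flow, a minimal C sketch of the host side is given below. The event layouts, queue structure, and per-queue handler registration are assumptions for the sketch, not details taken from the abstract.

#include <stdint.h>

/* Hypothetical event layouts: an ordinary event on an individual queue and
 * a wakeup event on the intermediary queue naming an individual queue.    */
struct event        { uint32_t type; uint32_t data; };
struct wakeup_event { uint32_t evq_id; };

struct evq {
    struct event *ring;                      /* individual event queue      */
    uint32_t      read_idx, mask;
    void        (*handler)(struct evq *q);   /* per-queue event handler     */
};

/* Host side: retrieve wakeup events from the intermediary queue and, for
 * each one, activate the handler of the individual event queue it
 * identifies so the events the peripheral wrote there are consumed.        */
void service_intermediary(struct wakeup_event *iring, uint32_t imask,
                          uint32_t *iread, uint32_t n_wakeups,
                          struct evq *evqs)
{
    for (uint32_t i = 0; i < n_wakeups; i++) {
        struct wakeup_event *w = &iring[(*iread)++ & imask];
        struct evq *q = &evqs[w->evq_id];
        q->handler(q);          /* consume events from the individual queue */
    }
}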
Abstract:
A system and method of a pace engine for governing different transmission rates tailored to different connections by rate-pacing a plurality of queues are described. Roughly described, the pace engine includes a binning controller that receives queues from a transmit DMA queue manager and determines the earliest allowed transmission time for a particular queue, which is stored and paced in a Work Bin, a Fast Bin, or a Slow Bin. A pace table, coupled to the transmit DMA queue manager, stores information about the minimum inter-packet gap for each connection. A timer coupled to the binning controller provides a multi-bit continuous counter that increments at a predetermined time unit and wraps around after a predetermined amount of time.
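The binning decision might look as follows in a software model; the bin threshold, the earliest-allowed-time arithmetic, and the structure names are assumptions made only to illustrate the idea of classifying queues against a wrapping timer.

#include <stdint.h>

/* Hypothetical bins used by the binning controller. */
enum pace_bin { WORK_BIN, FAST_BIN, SLOW_BIN };

/* Per-connection pacing state: the minimum inter-packet gap comes from the
 * pace table; 'last_tx' is when the connection last transmitted.          */
struct pace_entry {
    uint32_t min_ipg;     /* minimum inter-packet gap, in timer units */
    uint32_t last_tx;     /* timestamp of the previous packet         */
};

#define FAST_THRESHOLD 16u   /* illustrative boundary between fast and slow */

/* Decide which bin a queue belongs in, given the current value of the
 * free-running, wrapping counter.  Queues whose earliest allowed time has
 * already passed go to the work bin; the rest are parked in the fast or
 * slow bin depending on how long they still have to wait.                 */
enum pace_bin classify_queue(const struct pace_entry *e, uint32_t now)
{
    uint32_t earliest = e->last_tx + e->min_ipg;   /* wraps with the counter */
    uint32_t wait     = earliest - now;            /* modular arithmetic     */

    if ((int32_t)wait <= 0)
        return WORK_BIN;                /* already allowed to transmit */
    return (wait < FAST_THRESHOLD) ? FAST_BIN : SLOW_BIN;
}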
Abstract:
Method for managing a data transmit queue, for use with a host and a network interface device. Roughly described, the host writes data buffer descriptors into a transmit descriptor queue, and the network interface device writes transmit completion events to notify the host when it has completed processing of transmit data buffers. Each transmit completion event notifies the host of completion of a plurality of the transmit data buffers.
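A hedged C sketch of how a batched completion event might be applied on the host side follows; the event layout and the descriptor-queue bookkeeping are illustrative assumptions.

#include <stdint.h>

/* Hypothetical layout of a transmit completion event: rather than one
 * event per buffer, it carries the index of the last transmit descriptor
 * the device has finished with, implicitly completing every buffer up to
 * and including that index.                                               */
struct tx_completion_event {
    uint32_t last_done_idx;
};

struct tx_queue {
    uint32_t completed_idx;   /* last descriptor the host has reclaimed */
    uint32_t mask;            /* ring size minus one                    */
};

/* Host side: one completion event retires a plurality of transmit data
 * buffers in a single pass.                                               */
void handle_tx_completion(struct tx_queue *q,
                          const struct tx_completion_event *ev,
                          void (*free_buffer)(uint32_t idx))
{
    while (q->completed_idx != ev->last_done_idx) {
        q->completed_idx = (q->completed_idx + 1u) & q->mask;
        free_buffer(q->completed_idx);   /* release the transmit data buffer */
    }
}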
Abstract:
Method and apparatus for retrieving buffer descriptors from a host memory for use by a peripheral device. In an embodiment, a peripheral device such as a NIC includes a plurality of buffer descriptor caches each corresponding to a respective one of a plurality of host memory descriptor queues, and a plurality of queue descriptors each corresponding to a respective one of the host memory descriptor queues. Each of the queue descriptors includes a host memory read address pointer for the corresponding descriptor queue, and this same read pointer is used to derive algorithmically the descriptor cache write addresses at which to write buffer descriptors retrieved from the corresponding host memory descriptor queue.
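A small C sketch of the address derivation is given below; the cache geometry, descriptor size, and masking are assumptions, but the sketch shows the idea of deriving the descriptor cache write address algorithmically from the same host-memory read pointer.

#include <stdint.h>

#define CACHE_ENTRIES 64u     /* illustrative per-queue descriptor cache size */

/* Hypothetical queue descriptor kept on the peripheral for each host
 * descriptor queue: only the host-memory read pointer is stored; no
 * separate cache write pointer is needed.                                  */
struct queue_desc {
    uint64_t host_base;       /* base address of the descriptor queue in host memory */
    uint32_t host_read_ptr;   /* next descriptor to fetch from host memory           */
};

/* Derive the descriptor-cache write address algorithmically from the host
 * read pointer: the low bits of the read pointer select the cache slot.    */
static inline uint32_t cache_write_index(const struct queue_desc *qd)
{
    return qd->host_read_ptr & (CACHE_ENTRIES - 1u);
}

/* Fetch one buffer descriptor: read from host memory at the read pointer,
 * write it into the cache slot derived from that same pointer, then
 * advance the pointer.                                                      */
void fetch_descriptor(struct queue_desc *qd, uint64_t *cache,
                      uint64_t (*host_read)(uint64_t addr))
{
    uint64_t desc = host_read(qd->host_base + 8ull * qd->host_read_ptr);
    cache[cache_write_index(qd)] = desc;
    qd->host_read_ptr++;
}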
Abstract:
A switching system includes a multiport switch module having an address table for storing network addresses, and a host processor configured for selectively swapping the stored network addresses between the address table and an external memory that serves as an overflow address table for the multiport switch module. The address table internal to the multiport switch module is configured for storing a prescribed number of network addresses for high-speed access, for example the most frequently used network addresses. The host processor, configured for controlling the storage of network addresses between the address table and the external memory, uses the external memory as the overflow address table for storage of less frequently used network addresses, for example addresses of network devices that transmit little more than periodic “keep-alive” frames. Hence, a large number of addresses may be managed by the switching system without the necessity of an unusually large on-chip address table.
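One possible swap policy the host processor might apply is sketched below in C; the lookup structures, the usage counter, and the swap criterion are assumptions made purely for illustration.

#include <stdint.h>

#define ONCHIP_ENTRIES 512u      /* illustrative on-chip address table size */

struct addr_entry {
    uint8_t  mac[6];
    uint16_t port;
    uint32_t hits;               /* how often the address has been looked up */
};

/* On-chip table inside the multiport switch module and the overflow table
 * kept by the host processor in external memory (set up elsewhere).        */
static struct addr_entry onchip[ONCHIP_ENTRIES];
static struct addr_entry *overflow;

/* Host policy: when an overflow entry is being used more often than the
 * least-used on-chip entry, swap the two so frequently used addresses stay
 * in the on-chip table for high-speed access.                              */
void maybe_swap(uint32_t ovf_idx)
{
    uint32_t coldest = 0;
    for (uint32_t i = 1; i < ONCHIP_ENTRIES; i++)
        if (onchip[i].hits < onchip[coldest].hits)
            coldest = i;

    if (overflow[ovf_idx].hits > onchip[coldest].hits) {
        struct addr_entry tmp = onchip[coldest];
        onchip[coldest] = overflow[ovf_idx];   /* promote to on-chip table  */
        overflow[ovf_idx] = tmp;               /* demote to external memory */
    }
}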
Abstract:
A network switch configured for switching data packets across multiple ports and for supporting trunked data paths uses an address table to generate frame forwarding information. When a link in a trunked data path experiences a change in its operating status, the trunked data path is reconfigured to reflect the current operating conditions, without reprogramming the address table or powering down the switch.
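A brief C sketch of one way such a reconfiguration could work: the address table keeps mapping addresses to a trunk, and only a per-trunk membership mask consulted after the lookup is updated when a link changes state. The structures, port count, and flow hash are assumptions of this sketch.

#include <stdint.h>

/* Hypothetical trunk state: the address table maps an address to a trunk;
 * the egress port is then chosen from this membership mask, so a link
 * status change only touches the mask, not the address table.             */
struct trunk {
    uint16_t member_mask;    /* bit per port currently active in the trunk */
};

/* Called when a trunk link goes up or down. */
void trunk_link_change(struct trunk *t, unsigned port, int up)
{
    if (up)
        t->member_mask |=  (uint16_t)(1u << port);
    else
        t->member_mask &= ~(uint16_t)(1u << port);
}

/* Pick the egress port for a frame: distribute by a flow hash over the
 * ports that are currently members, so forwarding adapts to the current
 * operating conditions without reprogramming the address table.           */
unsigned trunk_select_port(const struct trunk *t, uint32_t flow_hash)
{
    unsigned n_active = 0, ports[16];
    for (unsigned p = 0; p < 16; p++)
        if (t->member_mask & (1u << p))
            ports[n_active++] = p;
    return n_active ? ports[flow_hash % n_active] : 0;
}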
Abstract:
An output driver that may be configured to operate as a totem-pole driver or as an open-drain driver. The output driver comprises a totem-pole driver coupled to an output pin. A control circuit is coupled to the output enable input of the totem-pole driver. The control circuit is supplied with an open-drain control signal controlled by the user interface. When the open-drain control signal is at a first logic level, the output driver operates as an open-drain driver. When the open-drain control signal is at a second logic level, the output driver operates as a totem-pole driver.
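The circuit itself is hardware, but its truth table can be summarized with a small behavioral model in C; the signal names and pin states below are assumptions chosen only to illustrate the two operating modes.

#include <stdbool.h>

/* Possible states of the output pin in this behavioral model. */
enum pin_state { DRIVE_LOW, DRIVE_HIGH, HIGH_Z };

/* Behavioral model of the driver control: in totem-pole mode the pin is
 * actively driven high or low; in open-drain mode a logic 1 releases the
 * pin (high impedance) and a logic 0 pulls it low.                        */
enum pin_state output_pin(bool open_drain_ctrl, bool output_enable, bool data)
{
    if (!output_enable)
        return HIGH_Z;                       /* driver disabled  */
    if (open_drain_ctrl)                     /* open-drain mode  */
        return data ? HIGH_Z : DRIVE_LOW;
    return data ? DRIVE_HIGH : DRIVE_LOW;    /* totem-pole mode  */
}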
Abstract:
A system for tracing read and write accesses to selected registers is provided in a network interface. The system has a read trace register containing a separate bit for each register to be monitored for read access by an external CPU. A write trace register is provided with a bit for each register to be monitored for write access by the CPU. When the CPU performs read or write access to a monitored register, a decoder that decodes an address signal from the CPU produces a trace select signal supplied to the read trace register and/or write trace register. In response to the trace select signal, the bit representing the monitored register is set to a predetermined logic state indicating that the monitored register was accessed by the CPU for reading and/or writing.
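A C sketch modeling the trace-register behavior follows; the register map, the address decode, and the bit assignments are illustrative assumptions rather than details of the described system.

#include <stdint.h>

#define NUM_MONITORED 32u            /* illustrative number of monitored registers */

static uint32_t read_trace_reg;      /* one bit per register monitored for reads  */
static uint32_t write_trace_reg;     /* one bit per register monitored for writes */

/* Model of the address decoder: map a CPU address to the index of the
 * monitored register, or return -1 if the address is not monitored.        */
static int decode_trace_select(uint32_t addr)
{
    uint32_t idx = addr >> 2;                /* word-aligned register index */
    return idx < NUM_MONITORED ? (int)idx : -1;
}

/* Called on every CPU access: when the decoder asserts the trace select,
 * set the corresponding bit so software can later tell that the monitored
 * register was read and/or written.                                        */
void trace_access(uint32_t addr, int is_write)
{
    int sel = decode_trace_select(addr);
    if (sel < 0)
        return;
    if (is_write)
        write_trace_reg |= 1u << sel;
    else
        read_trace_reg  |= 1u << sel;
}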