Abstract:
A novel method is provided for initializing a data processing system having registers programmable with configuration data read from a non-volatile memory at power-up. The method includes segmenting the non-volatile memory into a first portion for storing first data, and a second portion for storing second data having lower priority than the first data. The first portion is smaller than the second portion. The first data are read from the first portion to program a first group of registers. Thereafter, the second data are read from the second portion to program a second group of registers. As a result, a host is enabled to access the first group of registers, while the second data are being read from the second memory portion.
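As an illustration, the two-phase initialization described above might look roughly like the following C sketch. All names (read_nvm_word, program_register, signal_host_ready) and the portion sizes are hypothetical stand-ins, not elements of the disclosed system.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical hardware hooks; an actual implementation would supply these. */
    extern uint32_t read_nvm_word(size_t offset);      /* read one word from non-volatile memory */
    extern void     program_register(size_t reg, uint32_t value);
    extern void     signal_host_ready(void);           /* tell the host the first group is usable */

    #define FIRST_PORTION_WORDS   16   /* small, high-priority portion (assumed size) */
    #define SECOND_PORTION_WORDS  240  /* larger, lower-priority portion (assumed size) */

    void init_registers_from_nvm(void)
    {
        size_t i;

        /* Phase 1: program the first (high-priority) group of registers. */
        for (i = 0; i < FIRST_PORTION_WORDS; i++)
            program_register(i, read_nvm_word(i));

        /* The host can now access the first group of registers. */
        signal_host_ready();

        /* Phase 2: continue programming the second (lower-priority) group
         * while the host may already be accessing the first group. */
        for (i = 0; i < SECOND_PORTION_WORDS; i++)
            program_register(FIRST_PORTION_WORDS + i,
                             read_nvm_word(FIRST_PORTION_WORDS + i));
    }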
Abstract:
A network switch in a packet switched network includes a plurality of network switch ports, each configured for sending and receiving data packets between a medium interface and the network switch. Each network switch port includes an IEEE 802.3 compliant transmit state machine and receive state machine configured for transmitting network data to, and receiving network data from, a medium interface such as a reduced medium independent interface. The network switch port also includes a memory management unit configured for selectively transferring the network data between the transmit and receive state machines and a random access transmit buffer and a random access receive buffer, respectively. The transmit state machine outputs a flush transmit buffer signal to the transmit memory management unit in response to a detected error in transmitting the transmit data. The transmit memory management unit, in response to the flush transmit buffer signal, sets an incremented transmit buffer pointer value to a buffer pointer value corresponding to the next transmit data stored in the transmit buffer.
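A minimal C sketch of how the transmit memory management unit might respond to the flush transmit buffer signal. The structure fields and function names are assumptions for illustration only.

    #include <stdint.h>

    /* Illustrative transmit MMU state; field names are assumptions, not the patent's terms. */
    struct tx_mmu {
        uint32_t tx_ptr;          /* incremented transmit buffer pointer (current read position) */
        uint32_t next_frame_ptr;  /* buffer pointer of the next transmit data stored in the buffer */
    };

    /* Normal operation: advance word by word through the frame being transmitted. */
    void tx_advance(struct tx_mmu *mmu)
    {
        mmu->tx_ptr++;
    }

    /* Flush transmit buffer signal: abandon the errored frame by setting the
     * incremented pointer to the pointer of the next stored transmit data. */
    void tx_flush(struct tx_mmu *mmu)
    {
        mmu->tx_ptr = mmu->next_frame_ptr;
    }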
Abstract:
A method and apparatus are disclosed for reclaiming frame buffers used to store data frames received by a network switch. The apparatus includes a multicopy queue for queuing entries corresponding to received data frames that must be transmitted by multiple output ports of the network switch, a free buffer queue for queuing frame pointers that identify locations in an external memory where reclaimed frame buffers are located, and a multicopy circuit that retrieves entries from the multicopy queue and determines whether all copies of a received data frame have been transmitted by the specified output ports. The multicopy circuit also reclaims one or more frame buffers, based on the size of the received data frame. The present invention allows efficient reclaiming of frame buffers regardless of whether the received data frame is stored in a single frame buffer or in multiple frame buffers.
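The reclaiming decision might be sketched in C as follows, assuming for simplicity that a frame occupies contiguously numbered frame buffers of a fixed size; all names, the buffer size, and the contiguity assumption are illustrative rather than taken from the disclosure.

    #include <stdint.h>

    #define FRAME_BUFFER_SIZE 256  /* assumed fixed frame buffer size in bytes */

    /* Hypothetical free buffer queue interface. */
    extern void free_queue_push(uint32_t frame_pointer);

    /* Entry retrieved from the multicopy queue; field names are illustrative. */
    struct multicopy_entry {
        uint32_t first_frame_pointer;  /* identifies the first frame buffer in external memory */
        uint32_t frame_size;           /* size of the received data frame in bytes */
        int      copies_remaining;     /* copies still to be transmitted by output ports */
    };

    /* When the last copy has been transmitted, reclaim every buffer the frame
     * occupies (one or several, depending on frame size) by returning its
     * frame pointer to the free buffer queue. Assumes contiguous buffers. */
    void multicopy_reclaim(struct multicopy_entry *e)
    {
        if (e->copies_remaining > 0)
            return;  /* not all output ports have transmitted the frame yet */

        uint32_t buffers = (e->frame_size + FRAME_BUFFER_SIZE - 1) / FRAME_BUFFER_SIZE;
        for (uint32_t i = 0; i < buffers; i++)
            free_queue_push(e->first_frame_pointer + i);
    }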
Abstract:
A method and apparatus are disclosed for interfacing a central processing unit to a network switch having an external memory that transfers data to the network switch at a clock speed different from that of data transfers to the central processing unit. An interlocking mechanism prevents data from being overwritten and prevents underflows from occurring. The interlocking of the state machines, accomplished by the idling and advancing of a processor state machine and an external memory state machine, prevents either state machine from outrunning the other.
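One way to picture the interlocking is the following C sketch, in which each state machine idles until a shared single-entry slot permits it to advance. The flag name and handshake details are assumptions for illustration.

    #include <stdbool.h>

    /* Illustrative handshake shared by the processor-side and external-memory-side
     * state machines; the field name is an assumption. */
    struct interlock {
        volatile bool slot_full;  /* data produced by the CPU side, not yet consumed by the memory side */
    };

    /* Processor state machine step: idle (return false) until the slot is free,
     * so it cannot overwrite data the memory side has not yet taken. */
    bool cpu_sm_try_advance(struct interlock *il)
    {
        if (il->slot_full)
            return false;        /* idle this cycle */
        /* ... place the next word destined for the external memory here ... */
        il->slot_full = true;
        return true;             /* advance */
    }

    /* External memory state machine step: idle until data is available,
     * so it cannot underflow by consuming a slot that was never filled. */
    bool mem_sm_try_advance(struct interlock *il)
    {
        if (!il->slot_full)
            return false;        /* idle this cycle */
        /* ... transfer the word to the external memory here ... */
        il->slot_full = false;
        return true;             /* advance */
    }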
Abstract:
A novel method of power management is provided in a computer system having a network interface module including a buffer memory and a MAC block. The method includes determining whether the system is inactive during a predetermined time period. If so, activity of the MAC block is checked. If the MAC block is idle, the status of the buffer memory is determined. The system is placed into a power-down mode if the buffer memory is empty.
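The decision sequence can be summarized by a small C sketch; the status hooks and the predetermined time period are hypothetical placeholders.

    #include <stdbool.h>

    /* Hypothetical status hooks supplied by the network interface module. */
    extern bool system_inactive_for(unsigned seconds);  /* no system activity for the period */
    extern bool mac_block_idle(void);                   /* MAC block neither transmitting nor receiving */
    extern bool buffer_memory_empty(void);              /* no frames pending in the buffer memory */
    extern void enter_power_down_mode(void);

    #define INACTIVITY_PERIOD_S 30  /* assumed predetermined time period */

    /* Power down only when all three conditions hold, checked in the order described:
     * system inactivity, then MAC idleness, then an empty buffer memory. */
    void power_management_poll(void)
    {
        if (!system_inactive_for(INACTIVITY_PERIOD_S))
            return;
        if (!mac_block_idle())
            return;
        if (!buffer_memory_empty())
            return;
        enter_power_down_mode();
    }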
Abstract:
The present invention keeps track of available elements in a list of elements available to a given device for processing, using an index and count mechanism. The index indicates a starting element in the list that is available to the given device for processing, and the count indicates the number of subsequent elements, from the starting element, that are also available to the given device for processing. A first index register and a second index register alternately keep track of the last available element in the list until the last available element is the very last element in the list. The first and second index registers also alternately keep track of the element currently being processed by the given device until the very last element in the list has been processed. By alternating between the first and second index registers for tracking both the last available element and the currently processed element, processing through multiple cycles of the list of elements can be tracked in a simple manner. The present invention may be used to particular advantage when the given device is a computer network peripheral device that processes descriptors within a shared memory of a host computer system.
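A much-simplified C sketch of the index-and-count bookkeeping follows. It assumes a fixed-size list treated as a ring and an alternation policy that switches registers whenever a pass reaches the very last element; the names, the ring size, and the exact alternation policy are assumptions made for illustration.

    #include <stdint.h>

    #define RING_SIZE 64  /* assumed number of elements (e.g. descriptors) in the list */

    /* Illustrative device-side bookkeeping; all names are assumptions. */
    struct avail_tracker {
        uint32_t last_avail[2];  /* first/second index registers: last available element, per pass */
        uint32_t current[2];     /* first/second index registers: element currently processed, per pass */
        int      post_sel;       /* which register tracks "last available" in the current pass */
        int      proc_sel;       /* which register tracks the current element in the current pass */
    };

    /* Host posts "starting at 'index', 'count' further elements are available". */
    void post_available(struct avail_tracker *t, uint32_t index, uint32_t count)
    {
        uint32_t last = (index + count) % RING_SIZE;
        t->last_avail[t->post_sel] = last;
        if (last < index)          /* the posted range ran past the very last element */
            t->post_sel ^= 1;      /* track the next pass in the other index register */
    }

    /* Device processes the next element, or returns -1 if none is available. */
    int process_next(struct avail_tracker *t)
    {
        uint32_t cur = t->current[t->proc_sel];
        if (cur == t->last_avail[t->proc_sel])
            return -1;                                  /* caught up with the last available element */
        t->current[t->proc_sel] = (cur + 1) % RING_SIZE;
        if (t->current[t->proc_sel] == 0)               /* processed past the very last element */
            t->proc_sel ^= 1;                           /* continue the next pass in the other register */
        return (int)cur;
    }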
Abstract:
Method of managing interaction between a host subsystem and a peripheral device. Roughly described, the peripheral device writes an event into an individual event queue, and in conjunction therewith, also writes a wakeup event into an intermediary event queue. The wakeup event identifies the individual event queue. The host subsystem, in response to retrieval of the wakeup event from the intermediary event queue, activates an individual event handler to consume events from the individual event queue.
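A host-side C sketch of this arrangement might look as follows; the queue primitives and handler names are hypothetical.

    /* Hypothetical event types; real structures would be device-specific. */
    struct event        { int type; int data; };
    struct wakeup_event { unsigned queue_id; };  /* identifies the individual event queue */

    extern int  pop_wakeup_event(struct wakeup_event *out);          /* from the intermediary event queue */
    extern int  pop_individual_event(unsigned qid, struct event *out);
    extern void handle_event(unsigned qid, const struct event *ev);

    /* Host-side loop: a wakeup event retrieved from the intermediary queue activates
     * the handler for the individual queue it identifies, which then drains that queue. */
    void service_intermediary_queue(void)
    {
        struct wakeup_event wk;
        while (pop_wakeup_event(&wk)) {
            struct event ev;
            while (pop_individual_event(wk.queue_id, &ev))  /* consume events from that queue */
                handle_event(wk.queue_id, &ev);
        }
    }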
Abstract:
A system and method are described for a pace engine that governs the different transmission rates tailored to different connections by rate pacing a plurality of queues. Roughly described, the pace engine includes a binning controller that receives queues from a transmit DMA queue manager and determines the earliest allowed time for a particular queue, which is stored and paced in a Work Bin, a Fast Bin, or a Slow Bin. A pace table, coupled to the transmit DMA queue manager, stores information about the minimum inter-packet gap for each connection. A timer, coupled to the binning controller, provides a multi-bit continuous counter that increments at a predetermined time unit and wraps around after a predetermined amount of time.
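The binning decision might be sketched in C as follows, assuming the pace table is indexed by connection and the bin is selected by comparing the earliest allowed time against the wrapping timer; the threshold and all names are assumptions.

    #include <stdint.h>

    #define MAX_CONNECTIONS 256
    static uint32_t pace_table[MAX_CONNECTIONS];  /* minimum inter-packet gap per connection, in timer units */

    /* Bins follow the abstract; the threshold is an assumption. */
    enum bin { WORK_BIN, FAST_BIN, SLOW_BIN };

    #define FAST_THRESHOLD 64   /* assumed: waits up to this many timer units go to the Fast Bin */

    extern uint32_t timer_now(void);  /* multi-bit counter that increments each time unit and wraps around */

    /* Decide which bin a queue belongs in, given the connection it serves and
     * the time its previous packet was sent. */
    enum bin bin_queue(unsigned connection_id, uint32_t last_sent)
    {
        uint32_t earliest_allowed = last_sent + pace_table[connection_id];
        uint32_t wait = earliest_allowed - timer_now();   /* wrap-safe unsigned difference */

        if ((int32_t)wait <= 0)
            return WORK_BIN;          /* already allowed to transmit */
        if (wait <= FAST_THRESHOLD)
            return FAST_BIN;          /* allowed soon */
        return SLOW_BIN;              /* long wait before the earliest allowed time */
    }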
Abstract:
Method for managing a data transmit queue, for use with a host and a network interface device. Roughly described, the host writes data buffer descriptors into a transmit descriptor queue, and the network interface device writes events to notify the host when it has completed processing of transmit data buffers. Each of the transmit completion event descriptors notifies the host of the completion of a plurality of the transmit data buffers.
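A host-side C sketch of consuming such batched completion events follows, assuming each event carries the index of the last completed descriptor; the names and ring size are assumptions.

    #include <stdint.h>

    #define TX_RING_SIZE 512  /* assumed size of the transmit descriptor queue */

    /* Hypothetical host-side transmit state; names are illustrative. */
    struct tx_queue {
        uint32_t completed;               /* index of the next descriptor awaiting completion */
        void    *buffers[TX_RING_SIZE];   /* transmit data buffers owned by pending descriptors */
    };

    extern void release_buffer(void *buf);  /* return a completed transmit data buffer to the host */

    /* One transmit completion event covers every descriptor up to and including
     * 'last_done', so a single event can retire a plurality of buffers. */
    void on_tx_completion_event(struct tx_queue *q, uint32_t last_done)
    {
        while (q->completed != ((last_done + 1) % TX_RING_SIZE)) {
            release_buffer(q->buffers[q->completed]);
            q->completed = (q->completed + 1) % TX_RING_SIZE;
        }
    }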