Abstract:
A communication processor sends and receives frames of data and commands. A transmit and receive protocol engine is controlled by host driver software, which uses predetermined bits to indicate which frame is the last frame in a series of frames. This information is then placed in the transmit frame before it is sent.
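A minimal C sketch of the idea, using hypothetical frame-layout and bit names not taken from the abstract: the driver sets a predetermined control bit in the transmit frame when it is the last of a series.

#include <stdint.h>
#include <string.h>

#define FRAME_CTL_LAST  (1u << 0)   /* hypothetical "last frame in series" bit */

struct tx_frame {
    uint16_t control;        /* control bits examined by the protocol engine */
    uint16_t length;
    uint8_t  payload[2048];
};

/* Driver marks the frame as the last of a series before handing it
 * to the transmit protocol engine. */
static void queue_frame(struct tx_frame *f, const void *data,
                        uint16_t len, int is_last)
{
    f->control = is_last ? FRAME_CTL_LAST : 0;
    f->length  = len;
    memcpy(f->payload, data, len);
    /* ...hand off to the transmit engine... */
}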
Abstract:
A method of validation and host buffer allocation for unmapped Fiber Channel frames. More particularly, the invention encompasses a method of validating unmapped frames, each including a header and a payload, the method including: receiving a frame as a current frame; determining if the current frame is a first frame in a sequence, and if so, saving the header and payload of the current frame in a buffer, and otherwise determining if the current frame is the next expected frame in the sequence; if the current frame is the next expected frame in the sequence, then saving the payload of the current frame in the buffer after the payload of the prior frame; determining if the current frame is the last frame in the sequence, and if so, sending a message to a host indicating receipt of the complete sequence; if the current frame is not the next expected frame in the sequence, then saving the header and payload of the current frame in the buffer, and sending a message to the host indicating receipt of a partial sequence. The host CPU is interrupted when either a complete sequence is received, or a partial sequence is received followed by a frame from a different sequence. The host CPU may then process the concatenated payload of the sequence. The invention is particularly useful for processing TCP/IP frames in a Fiber Channel network.
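A minimal C sketch of the receive-side validation described above, with hypothetical header fields and buffer layout (the abstract does not specify formats):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fc_header { uint32_t seq_id; uint32_t seq_cnt; uint32_t flags; };
#define F_FIRST 0x1u
#define F_LAST  0x2u

struct frame { struct fc_header hdr; uint8_t payload[2048]; uint32_t payload_len; };

struct host_buffer {
    struct fc_header hdr;         /* header of the first (or out-of-sequence) frame */
    uint8_t  data[64 * 1024];     /* concatenated payloads */
    uint32_t len;
    uint32_t expected_cnt;        /* next expected sequence count */
    uint32_t seq_id;
};

static void notify_host(const char *msg) { printf("%s\n", msg); }

static void validate_frame(struct host_buffer *buf, const struct frame *f)
{
    if (f->hdr.flags & F_FIRST) {
        /* First frame of a sequence: save header and payload. */
        buf->hdr = f->hdr;
        buf->seq_id = f->hdr.seq_id;
        memcpy(buf->data, f->payload, f->payload_len);
        buf->len = f->payload_len;
        buf->expected_cnt = f->hdr.seq_cnt + 1;
    } else if (f->hdr.seq_id == buf->seq_id &&
               f->hdr.seq_cnt == buf->expected_cnt) {
        /* Next expected frame: append payload after the prior payload. */
        memcpy(buf->data + buf->len, f->payload, f->payload_len);
        buf->len += f->payload_len;
        buf->expected_cnt++;
    } else {
        /* Frame from a different sequence: report the partial sequence,
         * then save the new frame's header and payload. */
        notify_host("partial sequence received");
        buf->hdr = f->hdr;
        buf->seq_id = f->hdr.seq_id;
        memcpy(buf->data, f->payload, f->payload_len);
        buf->len = f->payload_len;
        buf->expected_cnt = f->hdr.seq_cnt + 1;
        return;
    }
    if (f->hdr.flags & F_LAST)
        notify_host("complete sequence received");
}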
Abstract:
Storing frames of data in frame buffers sized to match the frame size when the frame size is not a power-of-two number of bytes is disclosed. The buffer size is chosen to be the largest power of two that is less than the frame size. When a frame of data is to be stored, the buffer number of a free buffer is effectively multiplied by the buffer size to obtain a partial frame buffer address Q. The buffer size subtracted from the frame size is referred to as the residual buffer size, and the buffer number is effectively multiplied by the residual buffer size to obtain a residual frame buffer address R. The full frame buffer starting address is then S = Q + R. For implementations where the difference between the frame size and the buffer size is a power of two, binary shifts and addition can be used instead of a multiplier.
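As a worked example (assuming, for illustration, a 2112-byte frame), the buffer size is 2048 bytes and the residual is 64 bytes; both are powers of two, so the starting address S = Q + R can be computed with shifts and an add:

#include <stdint.h>
#include <stdio.h>

#define FRAME_SIZE    2112u                     /* not a power of two */
#define BUF_SIZE      2048u                     /* largest power of two below it */
#define RESIDUAL_SIZE (FRAME_SIZE - BUF_SIZE)   /* 64 */

static uint32_t frame_buffer_addr(uint32_t buf_num)
{
    uint32_t q = buf_num << 11;   /* buf_num * BUF_SIZE (2048) */
    uint32_t r = buf_num << 6;    /* buf_num * RESIDUAL_SIZE (64) */
    return q + r;                 /* S = Q + R = buf_num * FRAME_SIZE (2112) */
}

int main(void)
{
    for (uint32_t n = 0; n < 4; n++)
        printf("buffer %u starts at offset %u\n", n, frame_buffer_addr(n));
    return 0;
}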
Abstract:
A DMA (Direct Memory Access) Exchange Block (DXB) processor may include a receive processor for writing data from a local memory to a host memory over a bus, e.g., a Peripheral Component Interconnect Extended (PCI/X) bus, and a transmit processor for writing data retrieved from the host memory over the bus to the local memory. Each processor may include a high-priority queue and a normal-priority queue. A controlling program generates DXBs, each of which includes a tag assigned by the controlling program and memory descriptors corresponding to a direct memory access operation. The memory descriptors may include a host memory descriptor (address/length) and one or more local memory descriptors. The controlling program writes a DXB to one of the queues in a cache line spill operation. The transmit processor may include two channel registers, enabling the processor to perform two PCI/X data transfers simultaneously.
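A hedged C sketch of what a DXB and its queues might look like; all structure and field names here are illustrative, not from the patent:

#include <stdint.h>

struct mem_desc {
    uint64_t addr;
    uint32_t len;
};

struct dxb {
    uint32_t        tag;          /* assigned by the controlling program */
    struct mem_desc host;         /* host memory address/length */
    struct mem_desc local[4];     /* one or more local memory descriptors */
    uint32_t        local_count;
};

/* Each receive/transmit processor owns a high-priority and a
 * normal-priority queue of DXBs; the controlling program writes a DXB
 * into one of them (e.g., as a cache-line spill). */
struct dxb_queue {
    struct dxb entries[64];
    uint32_t   head, tail;
};

struct dxb_processor {
    struct dxb_queue high_prio;
    struct dxb_queue normal_prio;
};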
Abstract:
A method and apparatus is disclosed for improving on the MSI and MSI-X specifications by implementing an efficient delivery and clearing mechanism for interrupt conditions, increasing performance across the driver and hardware/firmware interface while ensuring that no interrupts are lost in the process. In particular, an auto-clear function is employed to eliminate the need for drivers in the host to send writes over the PCI-based bus to deassert and assert attention enable register bits and to clear attention register bits, and a fail-safe mechanism is utilized to prevent lost interrupts.
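An illustrative C sketch, under the assumption that the hardware auto-clears the attention bits it reports at MSI/MSI-X delivery time, so the handler never writes back across the PCI bus:

#include <stdint.h>

struct msix_event {
    uint32_t attention_bits;   /* conditions captured at delivery time */
};

static void service_condition(uint32_t bit) { (void)bit; /* ... */ }

/* Interrupt handler: consumes the delivered condition image only; no
 * writes to attention or attention-enable registers are required. */
static void msix_handler(const struct msix_event *ev)
{
    uint32_t bits = ev->attention_bits;
    while (bits) {
        uint32_t bit = bits & -bits;   /* lowest pending condition */
        service_condition(bit);
        bits &= ~bit;
    }
}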
Abstract:
An interrupt notification block stored in host memory is disclosed that contains an image of the interrupt condition contents that may be stored in a host attention register in a host interface port. The interrupt notification block is written by the host interface port and prefixed to the host's port pointer array at the time the host interface port updates the pointers stored in the port pointer array in host memory. The host may then read the interrupt notification block to determine how to process a response or an interrupt, rather than having to read the host attention register in the host interface port across the host bus.
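A hypothetical C layout showing how the host might consume the interrupt notification block from host memory instead of reading the attention register across the bus (names are illustrative):

#include <stdint.h>

/* The port writes an image of its host-attention register into host
 * memory, prefixed to the port pointer array it already updates. */
struct port_pointer_array {
    uint32_t intr_notification;   /* image of the host attention register */
    uint32_t put_index;           /* queue pointers updated by the port */
    uint32_t get_index;
};

/* Host-side processing: decide how to handle a response or interrupt
 * from the copy in host memory, with no PCI read of the port register. */
static uint32_t pending_conditions(const volatile struct port_pointer_array *ppa)
{
    return ppa->intr_notification;
}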
Abstract:
Generalized queues and specialized registers associated with the generalized queues are disclosed for coordinating the passing of information between two tightly coupled processors. The capacity of the queues can be adjusted to match the current environment, with the entry size agreed upon between the sending and receiving processors and not otherwise limited, and with no practical limit on the number of entries or restrictions on where the entries appear. In addition, the specialized registers allow for immediate notification of queue and other conditions, selectivity in receiving and generating conditions, and the ability to combine data transfer and particular condition notifications in the same attention register.
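A minimal C sketch of one such generalized queue and its associated registers; sizes, field names, and the doorbell/attention semantics shown here are assumptions for illustration:

#include <stdint.h>

struct gen_queue {
    volatile uint32_t put_index;      /* advanced by the producer */
    volatile uint32_t get_index;      /* advanced by the consumer */
    uint32_t          entry_size;     /* agreed between the processors */
    uint32_t          num_entries;    /* sized to the current environment */
    uint8_t          *entries;        /* entries may live anywhere in memory */
};

struct queue_regs {
    volatile uint32_t attention;        /* condition bits, incl. data arrival */
    volatile uint32_t attention_enable; /* selectivity in generating conditions */
    volatile uint32_t doorbell;         /* immediate notification to the peer */
};

static int enqueue(struct gen_queue *q, struct queue_regs *r, const void *entry)
{
    uint32_t next = (q->put_index + 1) % q->num_entries;
    if (next == q->get_index)
        return -1;                            /* queue full */
    uint8_t *slot = q->entries + (uint64_t)q->put_index * q->entry_size;
    for (uint32_t i = 0; i < q->entry_size; i++)
        slot[i] = ((const uint8_t *)entry)[i];
    q->put_index = next;
    r->doorbell = 1;                          /* notify the receiving processor */
    return 0;
}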
Abstract:
A method and system for allowing a host device (e.g., a server) to perform programmed direct accesses to peripheral memory (e.g., flash) located on a peripheral device (e.g., an HBA), without the assistance of a microprocessor located on the peripheral device. In a preferred embodiment, new host registers are implemented within controller circuitry of the peripheral device, the host registers being configured to be recognized by host software executed by the host. The host device reads and writes these host registers, which causes the controller hardware to access the peripheral nonvolatile memory accordingly. Implementing the new host registers yields an enhanced controller that allows a host device to directly access peripheral memory without peripheral processor assistance.
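A hedged C sketch of the host-side access path, assuming a hypothetical register block (address/data/control/status) exposed by the controller; the abstract does not define the actual registers:

#include <stdint.h>

struct flash_access_regs {
    volatile uint32_t address;   /* flash byte address */
    volatile uint32_t data;      /* data to write / data read back */
    volatile uint32_t control;   /* command: read or write */
    volatile uint32_t status;    /* busy/done indication */
};

#define CTL_READ  0x1u
#define CTL_WRITE 0x2u
#define STS_BUSY  0x1u

/* Host software drives the registers; the controller hardware performs
 * the flash access with no peripheral microprocessor involvement. */
static uint32_t flash_read(struct flash_access_regs *regs, uint32_t addr)
{
    regs->address = addr;
    regs->control = CTL_READ;
    while (regs->status & STS_BUSY)
        ;                        /* wait for the controller to finish */
    return regs->data;
}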
Abstract:
The preferred embodiment of the present invention is directed to an improved method and system for buffering incoming/unsolicited data received by a host computer that is connected to a network such as a storage area network. Specifically, in a host computer system in which the main memory of the host server maintains an I/O control block command ring, and in which a connective port (e.g., a host bus adapter) is operatively coupled to the main memory for handling I/O commands received by and transmitted from the host server, a host buffer queue (HBQ) is maintained for storing a series of buffer descriptors retrievable by the port for writing incoming/unsolicited data to specific address locations within the main memory. In an alternative embodiment of the present invention, multiple HBQs are maintained for storing buffer entries dedicated to different types and/or lengths of data, where each HBQ can be separately configured to contain a selection profile describing the specific type of data that the HBQ is dedicated to service.
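A minimal C sketch of a host buffer queue entry and queue, with hypothetical fields for the selection profile described above:

#include <stdint.h>

/* One HBQ entry: a descriptor pointing at a host memory buffer that the
 * port can claim for incoming/unsolicited data. */
struct hbq_entry {
    uint64_t buffer_addr;    /* physical address in host main memory */
    uint32_t buffer_len;
    uint32_t tag;            /* lets the host match the buffer on completion */
};

/* Each HBQ may carry a selection profile describing which kind of
 * incoming data it services (illustrative fields only). */
struct hbq {
    struct hbq_entry entries[256];
    uint32_t put_index;      /* host posts free buffer descriptors here */
    uint32_t get_index;      /* port consumes descriptors from here */
    uint32_t profile_type;   /* type of data this HBQ is dedicated to */
    uint32_t profile_max_len;
};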
Abstract:
A method and system for allowing flexible control of access to a shared memory by multiple requesters. In a preferred embodiment, the invention arbitrates access to flash memory on an HBA between multiple host channels and HBA microprocessors, and eliminates the possibility of contention for the flash during write cycles by allowing a grant to be locked for a period defined by the flash write protocol and timing.
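A minimal C sketch of the arbitration idea, with hypothetical state and function names: the grant can be locked for the duration of a flash write cycle so other requesters cannot interleave accesses:

#include <stdbool.h>

/* One grant at a time; the grant may be locked for the period defined
 * by the flash write protocol and timing. */
struct flash_arbiter {
    int  owner;          /* -1 when free, otherwise requester id */
    bool locked;         /* true while a write cycle holds the grant */
};

static bool request_flash(struct flash_arbiter *a, int requester, bool lock)
{
    if (a->owner != -1 && a->owner != requester)
        return false;            /* another requester holds the grant */
    a->owner  = requester;
    a->locked = lock;            /* lock spans the flash write timing */
    return true;
}

static void unlock_flash(struct flash_arbiter *a, int requester)
{
    if (a->owner == requester)
        a->locked = false;       /* write cycle complete */
}

static void release_flash(struct flash_arbiter *a, int requester)
{
    if (a->owner == requester && !a->locked)
        a->owner = -1;           /* grant released only when not locked */
}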