Abstract:
File-system-independent techniques and mechanisms are provided for replicating files on multiple devices, migrating files from one device to another (for purposes of reliability, increased bandwidth, load balancing, capacity expansion, or reduced cost), and propagating updates from a master copy to remote replicas. The mechanisms involve work queues and asynchronous file migration daemons that operate independently from, and in parallel with, the primary data paths running from the client-server and network protocols to on-disk storage.
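A minimal sketch in C of how such a work queue drained by an asynchronous migration daemon might be structured, using POSIX threads. The names (migrate_job, enqueue_migration, migration_daemon) and the queue layout are illustrative assumptions, not the patented design:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct migrate_job {
    char path[256];           /* file to replicate or migrate */
    char target_device[64];   /* destination device or server */
    struct migrate_job *next;
};

static struct migrate_job *queue_head;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_nonempty = PTHREAD_COND_INITIALIZER;

/* Called from the primary data path: enqueue work and return immediately. */
void enqueue_migration(const char *path, const char *target)
{
    struct migrate_job *job = calloc(1, sizeof(*job));
    if (!job)
        return;
    snprintf(job->path, sizeof(job->path), "%s", path);
    snprintf(job->target_device, sizeof(job->target_device), "%s", target);

    pthread_mutex_lock(&queue_lock);
    job->next = queue_head;
    queue_head = job;
    pthread_cond_signal(&queue_nonempty);
    pthread_mutex_unlock(&queue_lock);
}

/* Daemon thread: copies files in the background, in parallel with client I/O. */
static void *migration_daemon(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (queue_head == NULL)
            pthread_cond_wait(&queue_nonempty, &queue_lock);
        struct migrate_job *job = queue_head;
        queue_head = job->next;
        pthread_mutex_unlock(&queue_lock);

        /* Placeholder for the file-system-independent copy/update step. */
        printf("migrating %s -> %s\n", job->path, job->target_device);
        free(job);
    }
    return NULL;
}

/* Spawned once at startup; several daemons could drain the queue in parallel. */
int start_migration_daemon(void)
{
    pthread_t tid;
    return pthread_create(&tid, NULL, migration_daemon, NULL);
}
```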
Abstract:
A storage system includes a plurality of storage servers that store a plurality of files, a monitor module, and a redirector module. The monitor module monitors usage information associated with the plurality of storage servers. The redirector module determines, based on the monitored usage information, a storage server in the plurality of storage servers to service a session from a client. The redirector module then instructs the client to establish the session with the determined storage server.
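A small illustrative sketch, in C, of the redirection decision under an assumed load metric (weighted session count plus I/O rate). The server_usage structure and redirect_session function are hypothetical names, not the patent's interfaces:

```c
#include <stddef.h>

struct server_usage {
    const char *address;       /* storage server network address  */
    unsigned    open_sessions; /* figure refreshed by the monitor */
    unsigned    io_per_sec;    /* figure refreshed by the monitor */
};

/* Return the server the client should establish its session with. */
const struct server_usage *
redirect_session(const struct server_usage *servers, size_t n)
{
    const struct server_usage *best = NULL;
    for (size_t i = 0; i < n; i++) {
        /* Illustrative metric: weight open sessions and I/O rate. */
        unsigned load = servers[i].open_sessions * 100 + servers[i].io_per_sec;
        unsigned best_load =
            best ? best->open_sessions * 100 + best->io_per_sec : ~0u;
        if (load < best_load)
            best = &servers[i];
    }
    return best;   /* the redirector sends this address back to the client */
}
```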
Abstract:
A method and apparatus for reporting the status of data transfer between software and hardware in a computer system is disclosed. Software provides empty descriptors to the hardware for posting completion updates of transfers. More particularly, the software provides the number of the last available descriptor to a first storage field in a storage location which is accessible to the hardware. The hardware accounts for the number of the descriptors it has used for reporting completion updates by posting the number of used descriptors to a second storage field in the storage location. To determine if more descriptors are available, the hardware compares the contents of the first storage field to that of the second storage field. If the contents of the first and second storage fields are equal, the hardware has reached the last descriptor in the completion ring. If the fields are not equal, one or more descriptors are available for the hardware to use.
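The comparison between the two storage fields can be sketched as follows; the completion_status structure and its field names are assumptions used only to illustrate the equal/not-equal test described above:

```c
#include <stdbool.h>
#include <stdint.h>

struct completion_status {
    volatile uint32_t last_avail;   /* first field: written by software  */
    volatile uint32_t used_count;   /* second field: written by hardware */
};

/* Software side: hand a batch of empty completion descriptors to hardware. */
void sw_post_descriptors(struct completion_status *st, uint32_t last_descriptor)
{
    st->last_avail = last_descriptor;
}

/* Hardware side (modelled in C): may another descriptor be consumed? */
bool hw_descriptor_available(const struct completion_status *st)
{
    /* Equal fields mean the hardware has reached the last descriptor
     * made available in the completion ring. */
    return st->used_count != st->last_avail;
}

/* Hardware side: record that one more descriptor was used for a completion. */
void hw_consume_descriptor(struct completion_status *st)
{
    st->used_count = st->used_count + 1;
}
```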
Abstract:
A network interface card (NIC) is provided with a transmit unload block for asynchronously segmenting packet data into ATM cells for packets of multiple channels. The transmit unload block comprises a cellification block and a cellification and DMA scheduler. The cellification block is used to perform the actual cellification of the packet data into ATM cells, one ATM cell at a time, and management of the packet control data associated with the ATM cell's packet as well as management of the buffer control data associated with the ATM cell's channel. The construction of the current ATM cell is overlapped with the management of the packet and buffer control data associated with the immediately preceding ATM cell. The cellification and DMA scheduler is used to control the operation of the cellification block. The cellification and DMA scheduler is also used to schedule DMAs to obtain additional packet data for the channels.
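A sequential C sketch of the described overlap, modelling cellification of the current cell alongside retirement of the previous cell's packet/buffer control data. The structures, the round-robin scheduling, and the simplified 5-byte header handling are assumptions; only the 53-byte cell with a 48-byte payload follows standard ATM:

```c
#include <stdint.h>
#include <string.h>

#define ATM_CELL_SIZE    53
#define ATM_PAYLOAD_SIZE 48

struct channel_buf {
    uint8_t  data[4096];   /* packet data DMA'd from the host   */
    uint32_t length;       /* valid bytes in the buffer         */
    uint32_t offset;       /* next byte to cellify              */
    uint32_t vci;          /* channel identifier for the header */
};

/* Stage 1: carve the next 48-byte payload out of the channel's packet data. */
static int build_cell(struct channel_buf *ch, uint8_t cell[ATM_CELL_SIZE])
{
    if (ch->length - ch->offset < ATM_PAYLOAD_SIZE)
        return 0;                        /* scheduler must DMA more data */
    memset(cell, 0, 5);                  /* simplified 5-byte header     */
    cell[1] = (uint8_t)(ch->vci >> 8);
    cell[2] = (uint8_t)(ch->vci);
    memcpy(cell + 5, ch->data + ch->offset, ATM_PAYLOAD_SIZE);
    return 1;
}

/* Stage 2: retire packet/buffer control data for the cell built last pass. */
static void retire_previous(struct channel_buf *prev)
{
    if (prev)
        prev->offset += ATM_PAYLOAD_SIZE;
}

/* Scheduler loop: round-robin the channels, overlapping the two stages. */
void cellify_round(struct channel_buf *channels, int nchannels,
                   void (*emit)(const uint8_t cell[ATM_CELL_SIZE]))
{
    struct channel_buf *previous = NULL;
    uint8_t cell[ATM_CELL_SIZE];

    for (int i = 0; i < nchannels; i++) {
        struct channel_buf *ch = &channels[i];
        int built = build_cell(ch, cell);  /* current cell              */
        retire_previous(previous);         /* previous cell's bookkeeping */
        previous = built ? ch : NULL;
        if (built)
            emit(cell);
    }
    retire_previous(previous);             /* drain the pipeline */
}
```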
Abstract:
A method and network interface for controlling the flow of data between a host computer and an ATM network are provided. The network interface resides on an ATM interface and includes a state machine for each channel supported by the ATM interface. The state machine moves from state to state based on the contents of a local buffer, indications that data for the channel is ready to be transferred from the host computer to the local buffer, and the status of operations that transfer data for the channel from the host computer to the local buffer. The ATM interface includes a DMA unit and a segmentation unit that operate responsive to the states of the various state machines to avoid inefficient transfer operations. Specifically, the DMA unit does not attempt to move data for a channel from the host computer to the local buffer if the state of the state machine for the channel indicates that (1) there is no more data in the host computer for the channel or (2) there is no more room in the local buffer to receive data for the channel from the host computer. The segmentation unit does not attempt to transmit cells for a channel when the state of the state machine for the channel indicates that the local buffer does not contain enough data for the channel to construct a cell for the channel.
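A hedged C sketch of one such per-channel state machine and the gating tests the DMA and segmentation units might apply; the state names, the 48-byte cell-payload threshold, and the buffer fields are illustrative assumptions rather than the patent's exact encoding:

```c
#include <stdbool.h>
#include <stdint.h>

enum chan_state {
    CH_IDLE,          /* no host data pending for this channel         */
    CH_HOST_READY,    /* host has data; local buffer has room          */
    CH_BUFFER_FULL,   /* local buffer cannot accept another DMA burst  */
    CH_DRAINING       /* host exhausted; transmit what is buffered     */
};

struct channel {
    enum chan_state state;
    uint32_t host_bytes_pending;   /* data still in host memory          */
    uint32_t local_bytes;          /* data currently in the local buffer */
    uint32_t local_capacity;       /* size of the local buffer           */
};

/* DMA unit: only fetch when the state machine says it is worthwhile. */
bool dma_should_fetch(const struct channel *ch, uint32_t burst)
{
    if (ch->state == CH_IDLE || ch->state == CH_DRAINING)
        return false;                            /* no more host data       */
    if (ch->state == CH_BUFFER_FULL)
        return false;                            /* no room in local buffer */
    return ch->local_capacity - ch->local_bytes >= burst;
}

/* Segmentation unit: only build a cell when a full payload is buffered. */
bool seg_should_transmit(const struct channel *ch)
{
    return ch->local_bytes >= 48;
}

/* Advance the state machine after a DMA completion or host notification. */
void channel_update(struct channel *ch)
{
    if (ch->host_bytes_pending == 0)
        ch->state = (ch->local_bytes > 0) ? CH_DRAINING : CH_IDLE;
    else if (ch->local_capacity - ch->local_bytes < 48)
        ch->state = CH_BUFFER_FULL;
    else
        ch->state = CH_HOST_READY;
}
```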
Abstract:
A number of storage units and a number of state machines are provided to reorder interleaved ATM data cells for a number of channels incoming to a networked host computer. The storage units store the incoming ATM data cells, a number of data structures tracking the stored ATM data cells for the channels and the free resources, and an unload schedule queue. The state machines load and unload the incoming ATM data cells, and update the tracking data structures and schedule queue accordingly.
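A rough C sketch of the load side of such a scheme, with a slot pool, a free-resource list, per-channel ordered lists, and an unload schedule queue; all sizes and names are assumptions:

```c
#include <stdint.h>
#include <string.h>

#define NSLOTS    256
#define NCHANNELS 32
#define PAYLOAD   48

struct cell_slot {
    uint8_t payload[PAYLOAD];
    int16_t next;                /* next slot in the same channel, or -1 */
};

static struct cell_slot slots[NSLOTS];
static int16_t free_head;                     /* free-resource list   */
static int16_t chan_head[NCHANNELS], chan_tail[NCHANNELS];
static int16_t sched_queue[NCHANNELS];        /* unload schedule queue */
static int     sched_len;

void reassembly_init(void)
{
    for (int i = 0; i < NSLOTS; i++)
        slots[i].next = (i + 1 < NSLOTS) ? (int16_t)(i + 1) : -1;
    free_head = 0;
    for (int c = 0; c < NCHANNELS; c++)
        chan_head[c] = chan_tail[c] = -1;
    sched_len = 0;
}

/* Load state machine: store an incoming cell and update the structures. */
int load_cell(int channel, const uint8_t payload[PAYLOAD])
{
    if (free_head < 0)
        return -1;                            /* no free slot */
    int16_t slot = free_head;
    free_head = slots[slot].next;

    memcpy(slots[slot].payload, payload, PAYLOAD);
    slots[slot].next = -1;
    if (chan_tail[channel] >= 0)
        slots[chan_tail[channel]].next = slot;       /* keep channel order  */
    else {
        chan_head[channel] = slot;
        sched_queue[sched_len++] = (int16_t)channel; /* schedule for unload */
    }
    chan_tail[channel] = slot;
    return 0;
}
```

An unload state machine would then pop a channel from sched_queue, walk its list from chan_head, deliver the payloads in order, and return each slot to the free list.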
Abstract:
A method and an apparatus for reducing data copying overhead associated with protected-memory operating systems. In an ATM (Asynchronous Transfer Mode) network, the present invention's NIC (network interface circuit) demultiplexes the information in the header of the incoming packet and routes the packet directly to its final destination using the present invention's concept of targeted buffer rings. Thus, instead of having the packet be DMA'd to a buffer in a descriptor ring in the kernel, it may be routed directly to the buffer ring of the destination process.
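A hypothetical C sketch of the targeted-buffer-ring routing: a demultiplexing table keyed by VCI selects the destination process's ring, with the kernel descriptor ring as the fallback. The table, ring layout, and function names are assumptions:

```c
#include <stdint.h>
#include <string.h>

#define RING_ENTRIES 64
#define MAX_VCI      1024

struct buffer_ring {                  /* one ring per destination process */
    void    *buffers[RING_ENTRIES];   /* process-owned receive buffers    */
    uint32_t buf_size;
    uint32_t head, tail;
};

static struct buffer_ring *vci_to_ring[MAX_VCI];  /* demultiplexing table  */
static struct buffer_ring  kernel_ring;           /* fallback: kernel ring */

/* Registration performed when the process opens its connection. */
void register_target_ring(uint32_t vci, struct buffer_ring *ring)
{
    if (vci < MAX_VCI)
        vci_to_ring[vci] = ring;
}

/* Receive path: route the packet straight to its final destination. */
int deliver_packet(uint32_t vci, const void *pkt, uint32_t len)
{
    struct buffer_ring *ring =
        (vci < MAX_VCI && vci_to_ring[vci]) ? vci_to_ring[vci] : &kernel_ring;

    uint32_t next = (ring->head + 1) % RING_ENTRIES;
    if (next == ring->tail || len > ring->buf_size)
        return -1;                    /* ring full or buffer too small */
    memcpy(ring->buffers[ring->head], pkt, len);   /* stands in for DMA */
    ring->head = next;
    return 0;
}
```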
Abstract:
A network interface circuit (NIC) is provided with logic for maintaining various control pointers and at least one control counter for controlling burst transferring of buffered ATM cells to its host computer system in a non-cell-boundary-aligned block manner, distinguishing the ATM packet header from the ATM data most of the time, except for a number of predetermined exceptions. More specifically, ATM packet headers and ATM data are to be burst transferred to separate header and data buffers on the host computer system, except for short and atypical packets, in fixed size blocks, where the block size is complementary to the interface bus, but not necessarily aligned with the ATM cell boundaries. For the short and atypical packets, both the header and data are to be burst transferred into the header buffer instead. The logic employs a two-phase approach to determining the appropriate updates to the relevant control pointers and at least one control counter after each burst transfer of header/data to the header/data buffer. In one embodiment, the logic is provided to the lookahead state machine of an unload block, which is part of the receive block of a System and ATM Layer Core of the NIC.
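A simplified C sketch of the two-phase pointer/counter update. The 64-byte burst size, the field names, and the short-packet handling shown here are assumptions chosen only to illustrate the plan-then-commit structure:

```c
#include <stdbool.h>
#include <stdint.h>

#define BURST_SIZE 64u   /* bus-friendly block size, not ATM-cell-aligned */

struct unload_ptrs {
    uint32_t hdr_ptr;     /* next free offset in the header buffer */
    uint32_t data_ptr;    /* next free offset in the data buffer   */
    uint32_t bytes_left;  /* packet bytes still to be transferred  */
};

/* Phase 1: compute the updates implied by the burst just issued. */
struct unload_ptrs plan_burst(const struct unload_ptrs *cur,
                              bool to_header_buffer)
{
    struct unload_ptrs next = *cur;
    uint32_t burst = (cur->bytes_left < BURST_SIZE) ? cur->bytes_left
                                                    : BURST_SIZE;
    if (to_header_buffer)
        next.hdr_ptr += burst;       /* headers and short/atypical packets */
    else
        next.data_ptr += burst;      /* ordinary packet data */
    next.bytes_left -= burst;
    return next;
}

/* Phase 2: commit once the burst has actually completed on the bus. */
void commit_burst(struct unload_ptrs *cur, const struct unload_ptrs *planned)
{
    *cur = *planned;
}
```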
Abstract:
An on-chip cache memory is used to provide a high speed access mechanism to frequently used channel state information for operation of a DMA device that supports multiple virtual channels in a high speed network interface. When an access to a particular channel state is performed, e.g., by a host processor or the DMA device, the cache is first accessed and if the state information is not located currently in the cache, external memory is read and the state information is written to the cache. As the cache does not store all the states stored in external memory, replacement algorithms are utilized to determine which channel state information to remove from the cache in order to provide room to store a recently accessed channel. A doubly linked list is used to track the most recently used channel. As cached channel information is accessed, the corresponding entry is moved to the top of the list. The doubly linked list provides a rapid apparatus and method for updating pointers to the cache. Top and bottom pointers are maintained, pointing to the most recently used and least recently used channels. When a channel is used, it is moved to the top of the list. When channel data is moved from external memory to the cache, the bottom pointer points to the channel data to be removed from the cache.
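A compact C sketch of the doubly linked LRU list with top and bottom pointers: a hit moves the entry to the top, and the bottom entry is the eviction candidate when a new channel must be brought in from external memory. The chan_entry structure and function names are illustrative:

```c
#include <stddef.h>

struct chan_entry {
    int channel;                      /* channel whose state is cached */
    struct chan_entry *prev, *next;   /* doubly linked LRU list        */
    /* ... cached DMA channel state would live here ... */
};

static struct chan_entry *lru_top;     /* most recently used  */
static struct chan_entry *lru_bottom;  /* least recently used */

/* Unlink an entry from wherever it sits in the list. */
static void lru_unlink(struct chan_entry *e)
{
    if (e->prev) e->prev->next = e->next; else lru_top = e->next;
    if (e->next) e->next->prev = e->prev; else lru_bottom = e->prev;
    e->prev = e->next = NULL;
}

/* On a cache hit (or after a fill), move the entry to the top of the list. */
void lru_touch(struct chan_entry *e)
{
    if (lru_top == e)
        return;
    if (e->prev || e->next || lru_bottom == e)
        lru_unlink(e);
    e->next = lru_top;
    if (lru_top) lru_top->prev = e;
    lru_top = e;
    if (!lru_bottom) lru_bottom = e;
}

/* When room is needed, the bottom pointer names the entry to evict. */
struct chan_entry *lru_victim(void)
{
    struct chan_entry *victim = lru_bottom;
    if (victim)
        lru_unlink(victim);
    return victim;
}
```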
Abstract:
A generic Input/Output (GIO) interface between an IO block and a System and ATM Layer Core on a network interface circuit is provided. The GIO interface includes parallel DMA read and write control handshake signal lines; parallel DMA read and write data handshake signal lines which operate independently from the read and write control handshake signal lines; parallel DMA read and write data signal lines; and a single clock signal line. The GIO interface facilitates maximum utilization of the IO bandwidth and allows several requests to be queued across the GIO interface at the same time, in each read and write direction. In addition, the GIO interface utilizes a fixed clock for driving the transmit and receive data path. By thus referencing all transactions to a clock driving the Core, the Core remains unchanged for different embodiments of the network interface circuit which interface to different host computer systems and busses.
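As a loose illustration only, the signal groups could be modelled in C roughly as below; the struct, field names, and 64-bit data width are assumptions, since the actual GIO interface is a set of wires rather than software state:

```c
#include <stdbool.h>
#include <stdint.h>

struct gio_handshake {       /* request/acknowledge pair for one group */
    bool req;
    bool ack;
};

struct gio_interface {
    /* DMA read and write control handshakes (independent of data paths). */
    struct gio_handshake rd_ctrl, wr_ctrl;

    /* DMA read and write data handshakes, operating in parallel with the
     * control handshakes above.                                          */
    struct gio_handshake rd_data, wr_data;

    /* Parallel DMA read and write data lines (modelled as 64-bit words). */
    uint64_t rd_data_bus, wr_data_bus;

    /* Single fixed clock driving the Core; all transactions reference it. */
    bool clk;
};

/* Queue a read request: assert the control handshake without waiting for
 * earlier data transfers to drain, which is what lets requests pipeline.  */
void gio_issue_read(struct gio_interface *gio)
{
    gio->rd_ctrl.req = true;
}
```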