Abstract:
A method is provided for communicating data in an interconnect system comprising a plurality of nodes. In one aspect, the method includes: issuing a command packet from a first node, the command packet comprising a respective header quadword and at least one respective data quadword for conveying a command to a second node, wherein the command is selected from a group comprising a direct memory access (DMA) command, an administrative write command, a memory copy write command, and a built-in self-test (BIST) command; receiving the command packet at the second node; and issuing an acknowledgement packet from the second node, the acknowledgement packet comprising a respective header quadword for conveying an acknowledgement that the command packet has been received at the second node.
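The following is a minimal sketch of how such packets might be laid out, assuming a 64-bit quadword; the type names, field layout, and command encoding are illustrative assumptions, not details from the abstract.

```c
#include <stdint.h>

/* Commands the abstract enumerates; the encoding is an assumption. */
typedef enum {
    CMD_DMA,            /* direct memory access (DMA) command */
    CMD_ADMIN_WRITE,    /* administrative write command       */
    CMD_MEMCOPY_WRITE,  /* memory copy write command          */
    CMD_BIST            /* built-in self-test (BIST) command  */
} command_type;

typedef uint64_t quadword;  /* a quadword is assumed to be 64 bits */

/* Command packet: one header quadword plus at least one data quadword. */
typedef struct {
    quadword header;    /* e.g., command type and source/target node IDs */
    quadword data[1];   /* first of one or more data quadwords           */
} command_packet;

/* Acknowledgement packet: a header quadword only, confirming receipt. */
typedef struct {
    quadword header;
} ack_packet;
```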
Abstract:
A method is provided for a data storage system to move data from a source logical disk (LD) region to a target LD region while the data storage system remains online to a host. The method includes determining whether the region move would create so much load that the data storage system appears offline to the host. If not, the method includes causing writes to the source LD region to be mirrored to the target LD region, causing data in the source LD region to be copied to the target LD region, blocking reads and writes to the data storage system, and flushing dirty cache in the data storage system. If the dirty cache flushes quickly enough that the data storage system still appears online to the host, the method includes updating mappings of the virtual volume to the LD regions and resuming the reads and writes to the data storage system.
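A sketch of that sequence appears below. Every helper name is a hypothetical placeholder for a step named in the abstract, with trivial stub bodies so the sketch compiles; it is not the patent's implementation.

```c
#include <stdbool.h>
#include <stdio.h>

/* Placeholder stubs for the steps named in the abstract. */
static bool would_create_excessive_load(int s, int t) { (void)s; (void)t; return false; }
static void enable_write_mirroring(int s, int t) { printf("mirror %d -> %d\n", s, t); }
static void copy_region(int s, int t)            { printf("copy   %d -> %d\n", s, t); }
static void block_reads_and_writes(void)         { puts("I/O blocked"); }
static bool flush_dirty_cache(void)              { puts("cache flushed"); return true; }
static void update_virtual_volume_mapping(int s, int t) { printf("remap  %d -> %d\n", s, t); }
static void resume_reads_and_writes(void)        { puts("I/O resumed"); }

/* The online region-move sequence, in the order the abstract gives it. */
static bool move_region(int src, int tgt)
{
    /* Abort if the move itself would make the system appear offline. */
    if (would_create_excessive_load(src, tgt))
        return false;

    enable_write_mirroring(src, tgt); /* new writes land in both regions */
    copy_region(src, tgt);            /* copy the existing data across   */

    block_reads_and_writes();
    bool fast = flush_dirty_cache();  /* fast enough to appear online?   */
    if (fast)
        update_virtual_volume_mapping(src, tgt);
    resume_reads_and_writes();
    return fast;
}

int main(void) { return move_region(0, 1) ? 0 : 1; }
```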
Abstract:
A probabilistic queue lock divides requesters for a lock into at least three sets. In one embodiment, the requesters are divided into the owner of the lock, the first waiting contender, and the other waiting contenders. The first waiting contender is made probabilistically more likely to obtain the lock by having it spin faster than the other waiting contenders. Because the other waiting contenders spin more slowly, the first waiting contender is more likely to be able to observe the free lock and acquire it before the other waiting contenders notice that it is free. The first of the other waiting contenders that determines that the previous first waiting contender has acquired the lock is promoted to be the new first waiting contender and begins spinning fast. Because only the first waiting contender is spinning fast on the lock, it is probable that only the first waiting contender will attempt to acquire the lock when it becomes available.
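One way to realize this, sketched below under stated assumptions: a test-and-set lock word, a single "first waiter" slot claimed by compare-and-swap, and two fixed polling intervals. The delays, the promotion rule via the vacated slot, and the nonzero-ID convention are all illustrative choices, not details from the abstract.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define FAST_DELAY   1    /* the first waiting contender polls often       */
#define SLOW_DELAY 100    /* other waiting contenders poll much less often */

static atomic_flag lock_word = ATOMIC_FLAG_INIT;
static atomic_int  first_waiter = 0;   /* ID of the fast spinner; 0 = vacant */

static void pause_cycles(int n) { for (volatile int i = 0; i < n; i++) ; }

void probabilistic_lock(int my_id)    /* my_id must be nonzero */
{
    for (;;) {
        /* Claim the first-waiter slot if it is vacant; the first slow
           spinner to succeed here is "promoted" to fast spinning. */
        int vacant = 0;
        bool am_first =
            atomic_compare_exchange_strong(&first_waiter, &vacant, my_id)
            || atomic_load(&first_waiter) == my_id;

        if (!atomic_flag_test_and_set(&lock_word)) {
            /* Lock acquired; vacate the slot so another contender can
               become the new first waiting contender. */
            int me = my_id;
            atomic_compare_exchange_strong(&first_waiter, &me, 0);
            return;
        }
        /* Fast polling makes the first waiter probabilistically likely
           to see the lock free before the slow spinners do. */
        pause_cycles(am_first ? FAST_DELAY : SLOW_DELAY);
    }
}

void probabilistic_unlock(void) { atomic_flag_clear(&lock_word); }

int main(void)
{
    probabilistic_lock(1);
    /* ...critical section... */
    probabilistic_unlock();
    return 0;
}
```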
Abstract:
A system and method for storing error correction check words in computer memory modules. Check bits stored in physically adjacent locations within a dynamic random access memory (DRAM) chip are assigned to different check words. By assigning check bits to check words in this manner, multi-bit soft errors resulting from errors in two or more check bits stored in physically adjacent memory locations will appear as single-bit errors to an error correction subsystem. Similarly, the likelihood of multi-bit errors occurring in the same check word may be reduced.
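A simple way to get this property is to stripe check bits across check words by physical position, so any two physically adjacent bits always land in different words. The sketch below shows one such mapping; the word count and width are arbitrary assumptions.

```c
#include <stdio.h>

#define NUM_CHECK_WORDS 4   /* assumed number of check words     */
#define BITS_PER_WORD   8   /* assumed check bits per check word */

/* Map a physical bit position to (check word, bit within word).
   Adjacent physical positions differ in `word`, so a 2-bit adjacent
   soft error appears as two correctable single-bit errors. */
static void map_physical_bit(int phys, int *word, int *bit)
{
    *word = phys % NUM_CHECK_WORDS;
    *bit  = phys / NUM_CHECK_WORDS;
}

int main(void)
{
    for (int phys = 0; phys < NUM_CHECK_WORDS * BITS_PER_WORD; phys++) {
        int w, b;
        map_physical_bit(phys, &w, &b);
        printf("physical bit %2d -> check word %d, bit %d\n", phys, w, b);
    }
    return 0;
}
```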
Abstract:
Low-latency distributed round-robin arbitration is used to grant requests for access to a shared resource such as a computer system bus. A plurality of circuit board cards, each including two devices (such as CPUs, I/O units, and RAM) and an address controller, plugs into an Address Bus in the bus system. Each address controller contains logic implementing the arbitration mechanism with a two-level hierarchy: a single top arbitrator and preferably four leaf arbitrators. Each address controller is coupled to two devices, and the logical "OR" of their arbitration requests is coupled via an Arbitration Bus to the address controllers on other boards. Each leaf arbitrator has four prioritized request-in lines, each such line being coupled to a single address controller serviced by that leaf arbitrator. By default, each leaf arbitrator and the top arbitrator implement a prioritized algorithm. However, a last-winner ("LW") state maintained at every arbitrator overrides the default to provide round-robin selection. Each leaf arbitrator arbitrates among the zero to four requests it sees, selects a winner, and signals the top arbitrator that it has a device wishing access. At the top arbitrator, if a given leaf arbitrator last won a grant, it now has the lowest grant priority, and the grant goes to the next-highest-priority leaf arbitrator having a device seeking access.
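The sketch below models the two-level scheme in software, collapsing the default priority order and the LW override into a single round-robin pick that starts just past the last winner. Array sizes, the request-clearing on grant, and all names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_LEAVES     4
#define LINES_PER_LEAF 4

/* Round-robin pick: search starts just past the last winner, so the
   previous winner has the lowest priority. Returns -1 if nothing requests. */
static int rr_pick(const bool req[], int n, int last_winner)
{
    for (int i = 1; i <= n; i++) {
        int cand = (last_winner + i) % n;
        if (req[cand])
            return cand;
    }
    return -1;
}

typedef struct {
    bool request[NUM_LEAVES][LINES_PER_LEAF]; /* per-leaf request-in lines */
    int  leaf_lw[NUM_LEAVES];                 /* LW state at each leaf     */
    int  top_lw;                              /* LW state at the top       */
} arbiter;

/* Returns leaf * LINES_PER_LEAF + line of the granted requester, or -1. */
static int arbitrate(arbiter *a)
{
    bool leaf_has_req[NUM_LEAVES];
    int  leaf_winner[NUM_LEAVES];

    /* Each leaf picks a winner among the zero to four requests it sees
       and signals upward that it has a device wishing access. */
    for (int l = 0; l < NUM_LEAVES; l++) {
        leaf_winner[l]  = rr_pick(a->request[l], LINES_PER_LEAF, a->leaf_lw[l]);
        leaf_has_req[l] = (leaf_winner[l] >= 0);
    }

    /* The top arbitrator round-robins among requesting leaves. */
    int leaf = rr_pick(leaf_has_req, NUM_LEAVES, a->top_lw);
    if (leaf < 0)
        return -1;

    a->top_lw        = leaf;
    a->leaf_lw[leaf] = leaf_winner[leaf];
    a->request[leaf][leaf_winner[leaf]] = false; /* request line drops */
    return leaf * LINES_PER_LEAF + leaf_winner[leaf];
}

int main(void)
{
    arbiter a = {0};
    a.request[0][2] = a.request[3][1] = true;   /* two devices request */
    printf("grant 1: requester %d\n", arbitrate(&a));
    printf("grant 2: requester %d\n", arbitrate(&a));
    return 0;
}
```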
Abstract:
In a multiprocessor system having a shared memory, each central processor services copyback requests from other central processors. Each central processor has a writeback buffer along with a plurality of tag buffers and an associated snoop architecture for processing writeback and copyback commands. Each central processor includes a cache subsystem having a system interface, a main cache, and an associated tag array. The system interface has an address controller and a data controller, each having separate input and output queues for interfacing between the central processor and the system control and data buses. The address controller includes a set of duplicate tags that mirror the tags associated with the main cache, along with an auxiliary tag input buffer and an auxiliary tag output buffer. For each line in the output queue, the address controller maintains an associated pointer indicating the location in the data controller where the data for that queued command is stored. In operation, the address controller processes multiple inbound copyback requests without requiring the central processor to access data from its associated main cache. The address controller utilizes the output queues in the address and data controllers, as well as the auxiliary tag buffers, to store copyback data and tag information.
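A rough data-structure sketch of that address controller follows; all sizes, names, and the simplified tag matching are assumptions made for illustration only.

```c
#include <stdint.h>

#define OUT_QUEUE_DEPTH 8
#define NUM_DUP_TAGS    512

/* One output-queue line plus its pointer into the data controller's
   storage for the data associated with the queued command. */
typedef struct {
    uint64_t address;     /* line address of the queued command       */
    int      data_index;  /* where the data controller holds the data */
} out_queue_entry;

typedef struct {
    uint64_t dup_tags[NUM_DUP_TAGS]; /* duplicate tags mirroring the
                                        main cache's tag array        */
    uint64_t aux_tag_in;             /* auxiliary tag input buffer    */
    uint64_t aux_tag_out;            /* auxiliary tag output buffer   */
    out_queue_entry out_queue[OUT_QUEUE_DEPTH];
} address_controller;

/* A copyback request can be answered from the duplicate tags, without
   disturbing the processor's main cache (full addresses stored here
   as a simplification). */
int copyback_hit(const address_controller *ac, uint64_t addr)
{
    return ac->dup_tags[addr % NUM_DUP_TAGS] == addr;
}
```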
Abstract:
A modified toe-to-heel waterflooding (TTHW) process is provided for recovering oil from a reservoir in an underground formation. After establishing the conventional TTHW waterflood, the process includes placing a chemical blocking agent at the watered-out producing toe portion of the horizontal leg of the production well to create a blockage in the producing toe portion and to create a new producing toe portion in an open portion of the horizontal leg adjacent the blockage, through which most of the production takes place. Production is then continued through the new producing toe portion and the open portion of the horizontal leg of the production well. These blocking and producing steps can be repeated to progressively block producing toe portions in a direction toward the vertical pilot portion of the production well.
Abstract:
A system may include a plurality of nodes coupled by an inter-node network. Each of the nodes includes several active devices, an interface to the inter-node network, and an address network coupling the active devices to the interface. An active device included in one of the nodes initiates a transaction by sending either a first type of address packet or a second type of address packet on the address network, depending on whether the active device is part of a multi-node system. The first type of address packet, sent if the active device is part of a multi-node system, is not snooped by other active devices in the same node as the sending device. The second type of address packet, sent if the active device is part of a single-node system, is snooped by other active devices in the same node as the sending device.
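The selection rule reduces to a single conditional; a tiny sketch, with assumed enum names and an assumed configuration flag, is shown below.

```c
#include <stdbool.h>

typedef enum {
    PKT_NON_SNOOPED, /* first type: multi-node system, not snooped locally */
    PKT_SNOOPED      /* second type: single-node system, snooped locally   */
} addr_packet_type;

/* In a multi-node system the transaction is resolved through the
   inter-node interface, so local devices need not snoop it. */
addr_packet_type select_packet_type(bool is_multi_node)
{
    return is_multi_node ? PKT_NON_SNOOPED : PKT_SNOOPED;
}
```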
Abstract:
A data storage system includes a plurality of nodes for providing access to a data storage facility. Each node has a computer-memory complex to provide general-purpose computing for the node, a node controller to control data transfers through the respective node, and a cluster memory to buffer data for the data transfers. A plurality of communication paths interconnects the nodes, with a separate communication path provided for each pair of nodes of the data storage system.
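That topology is a full mesh: with a dedicated path for each pair of nodes, n nodes need n*(n-1)/2 paths. A small illustrative sketch:

```c
#include <stdio.h>

/* Dedicated point-to-point paths needed for n fully meshed nodes. */
static int num_paths(int nodes) { return nodes * (nodes - 1) / 2; }

int main(void)
{
    int n = 4;  /* example node count; an assumption for illustration */
    for (int a = 0; a < n; a++)
        for (int b = a + 1; b < n; b++)
            printf("path: node %d <-> node %d\n", a, b);
    printf("total paths for %d nodes: %d\n", n, num_paths(n));
    return 0;
}
```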
Abstract:
A pending tag system and method to maintain data coherence in a processing node during pending transactions in a transaction pipeline. A pending tag storage unit may be coupled to a cache controller and configured to store pending tags, each indicative of a coherence state for a data line corresponding to a pending transaction within the transaction pipeline. The pending tag storage unit includes a total amount of storage substantially less than the amount required to store the tags contained in the full tag array for the cache memory. When a pending tag exists in the pending tag storage unit, the coherence state of the corresponding data line within the cache memory is dictated by the pending tag for snoop operations. Accordingly, data coherence is maintained during the period when transactions are pending, i.e., not yet presented to the processor and cache. When a pending transaction completes, the coherence state of the corresponding data line as indicated by the full tag array may be overwritten by the coherence state indicated by the pending tag, and the pending tag may be deleted from the pending tag storage unit.
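A minimal sketch of the lookup and completion rules follows, assuming a small fixed pending-tag table and a direct-mapped full tag array; the state names, sizes, and indexing are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } coh_state;

#define PENDING_SLOTS   8    /* far smaller than the full tag array */
#define FULL_TAG_LINES  1024

typedef struct {
    bool      valid;
    uint64_t  line_addr;
    coh_state state;
} pending_tag;

typedef struct {
    pending_tag pending[PENDING_SLOTS];     /* pending tag storage unit */
    coh_state   full_tags[FULL_TAG_LINES];  /* full tag array           */
} tag_system;

/* Coherence state seen by a snoop: a pending tag, if present, dictates
   the state; otherwise the full tag array does. */
coh_state snoop_state(const tag_system *t, uint64_t line_addr)
{
    for (int i = 0; i < PENDING_SLOTS; i++)
        if (t->pending[i].valid && t->pending[i].line_addr == line_addr)
            return t->pending[i].state;
    return t->full_tags[line_addr % FULL_TAG_LINES];
}

/* On completion, the full tag array takes the pending state and the
   pending tag is deleted from the pending tag storage unit. */
void complete_transaction(tag_system *t, int slot)
{
    pending_tag *p = &t->pending[slot];
    if (p->valid) {
        t->full_tags[p->line_addr % FULL_TAG_LINES] = p->state;
        p->valid = false;
    }
}
```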