Abstract:
Disclosed are a storage controller, and a method of operating a storage controller, for interfacing between host systems and a storage system. The storage controller includes a first cluster including a first processor and a first cache, and a second cluster including a second processor and a second cache. The method comprises the step of directing data from the host systems through first and second data paths to the storage system. The first processor and cache are associated with the first data path, and the second processor and cache are associated with the second data path. Under one set of conditions, the controller enters a failover mode, wherein data directed to the first data path are routed to the second data path. Under another set of conditions, the controller deconfigures the first cache without entering the failover mode.
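As an illustration of the two responses described above, the following Python sketch models a dual-cluster controller that can either reroute the first data path to the second cluster (failover) or disable the first cache while leaving both paths active. The class and method names are invented for the example and are not taken from the disclosure.

```python
# Illustrative sketch (not the patented implementation): a dual-cluster
# controller that either fails over the first data path to the second,
# or deconfigures the first cache while keeping both paths active.

class Cluster:
    def __init__(self, name):
        self.name = name
        self.cache_enabled = True

    def handle(self, data):
        mode = "cached" if self.cache_enabled else "uncached"
        return f"{self.name} stored {data!r} ({mode})"

class StorageController:
    def __init__(self):
        self.clusters = {1: Cluster("cluster-1"), 2: Cluster("cluster-2")}
        self.failover = False

    def write(self, path, data):
        # In failover mode, traffic for path 1 is rerouted to cluster 2.
        if self.failover and path == 1:
            path = 2
        return self.clusters[path].handle(data)

    def enter_failover(self):                    # e.g. cluster-1 processor failure
        self.failover = True

    def deconfigure_first_cache(self):           # e.g. a cache error that does
        self.clusters[1].cache_enabled = False   # not require failing over

ctl = StorageController()
print(ctl.write(1, "block-A"))   # served by cluster-1, cached
ctl.deconfigure_first_cache()
print(ctl.write(1, "block-B"))   # still cluster-1, now uncached
ctl.enter_failover()
print(ctl.write(1, "block-C"))   # rerouted to cluster-2
```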
Abstract:
A data storage control unit is coupled to one or more host devices and to one or more physical storage units, the storage control unit being configured as a plurality of clusters. Each cluster includes cache memory and non-volatile storage (NVS). The storage control unit receives and processes write requests from the host devices and directs that data updates be temporarily stored in the cache of one cluster and copied to the NVS of the other cluster. The data updates are subsequently destaged to the logical ranks associated with each cluster. During an initial microcode load (IML) of the storage controller, space in the cache and NVS of each cluster is allocated to buffers, with the remaining cache and NVS space being allocated to customer data. After an IML has been completed, the size of the buffers becomes fixed and no further buffer allocation may occur. Method, apparatus, and program product are provided by which a data storage controller dynamically reconfigures NVS and cache memory in multiple clusters, particularly when it is desired to change the size of the NVS and cache of either or both clusters.
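The allocation rule can be pictured with a small sketch: buffer space is carved out of a cluster's cache (or NVS) during IML, and once IML completes the buffer size is frozen and only the remaining space holds customer data. The sizes and names below are illustrative assumptions, not the patented design.

```python
# A minimal sketch, assuming invented names and sizes, of the allocation
# rule described above: buffers are allocated during IML, and after IML
# completes the buffer size is fixed.

class ClusterMemory:
    def __init__(self, total_bytes, buffer_bytes):
        self.total = total_bytes
        self.buffers = 0
        self.iml_complete = False
        self.allocate_buffers(buffer_bytes)

    def allocate_buffers(self, nbytes):
        if self.iml_complete:
            raise RuntimeError("buffer size is fixed after IML")
        self.buffers += nbytes

    def finish_iml(self):
        self.iml_complete = True

    @property
    def customer_space(self):
        # Whatever is not reserved for buffers holds customer data.
        return self.total - self.buffers

cache = ClusterMemory(total_bytes=8 << 30, buffer_bytes=256 << 20)
cache.finish_iml()
print(cache.customer_space)   # 8 GiB minus the 256 MiB buffer region
```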
Abstract:
A memory may contain a large number of bytes of data, perhaps as many as 256 megabytes in a typical large memory structure. An error correcting code algorithm may be used to identify failing memory modules in a memory system. In a particular embodiment, a number of spares may be provided on each memory card, allowing a predetermined number of defective array modules to be replaced in a storage unit. With double-bit correction provided by the error correcting code logic, a number of bits can be corrected on a card, or a larger number of bits can be corrected on a card pair, where the larger number of bits is somewhat less than double the number of bits which can be corrected on a single card. The address test in accordance with the present invention then produces a pattern that will create a difference greater than that larger number of bits between the data stored in a storage location under test and the data at any address that could be accessed through an address line failure. The method according to the present invention predicts the effect of an address line failure external to the array modules and internal to a card pair and then tests to see whether such a failure has occurred. The address test does not declare an address failure until a predetermined number of bit failures on a card is found. The test is valid for single and multiple address line failures. Since only one address bit is changed for each path through the test, other failing address lines will not be detected until the path with those failing address bits is tested. Thus, even with multiple address line failures, the two addresses that are stored to and fetched from are only one address bit apart.
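The core of the address test can be sketched as follows: data patterns are chosen so that the word stored at the location under test differs, in more bit positions than the ECC can correct, from the word at any address reachable through a single failing address line, and a failure is declared only when the miscompare count reaches that threshold. The memory model, pattern generator, and threshold below are invented for illustration.

```python
# Hedged illustration of the address-test idea described above. The
# pattern generator, threshold, and memory model are assumptions made
# for the example, not taken from the disclosure.

FAIL_THRESHOLD = 5          # more miscompared bits than the ECC can correct

def hamming(a, b, width=64):
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def pattern_for(addr):
    # Toy pattern generator: addresses one address bit apart receive data
    # words that differ in many bit positions.
    return (0xA5A5A5A5A5A5A5A5 ^ (0xFFFFFFFFFFFFFFFF * (addr & 1))) & (2**64 - 1)

def address_test(memory, addr, failing_line=None):
    expected = pattern_for(addr)
    fetch_addr = addr ^ (1 << failing_line) if failing_line is not None else addr
    observed = memory[fetch_addr]
    miscompares = hamming(expected, observed)
    return miscompares >= FAIL_THRESHOLD   # declare failure only past threshold

mem = {a: pattern_for(a) for a in range(8)}
print(address_test(mem, 2))                  # False: no address line failure
print(address_test(mem, 2, failing_line=0))  # True: fetch aliased to address 3
```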
Abstract:
A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with the sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache.
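A small sketch of the sector-merge step, using invented data structures (a track modeled as a mapping from sector number to sector data): sectors carried by the demoted track take precedence, and sectors present only in the stale copy are preserved.

```python
# A sketch, with invented names, of merging a track demoted from the
# first cache with a stale copy already in the second cache.

def merge_demoted_track(demoted: dict, stale: dict) -> dict:
    """Both arguments map sector number -> sector data."""
    new_version = dict(stale)       # start from the stale sectors ...
    new_version.update(demoted)     # ... and overwrite with demoted ones
    return new_version

second_cache = {}
stale = {0: "old-0", 1: "old-1", 5: "old-5"}
demoted = {1: "new-1", 2: "new-2"}

second_cache["track-17"] = merge_demoted_track(demoted, stale)
print(second_cache["track-17"])
# {0: 'old-0', 1: 'new-1', 5: 'old-5', 2: 'new-2'}
```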
Abstract:
Provided are a computer program product, system, and method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved into a cache. The data in the source block of data in the cache is synthesized to make the data appear to have been retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted.
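The read path described above might be sketched as follows, with all names invented: when the target block is not yet consistent, the source block is fetched into cache, relabeled ("synthesized") so it appears to belong to the target, marked as read from the source, and demoted once the read completes.

```python
# Illustrative sketch (names invented) of serving a read to a target
# block that is not yet consistent with its instant-virtual-copy source.

def read_target_block(cache, source_storage, target_storage, block_id):
    if block_id in target_storage:                 # already consistent
        return target_storage[block_id]

    data = source_storage[block_id]                # retrieve from source storage
    entry = {"data": data, "home": "target",       # synthesize: present the data
             "read_from_source": True}             # as if it came from target
    cache[block_id] = entry

    result = entry["data"]
    if entry["read_from_source"]:                  # read complete: demote the
        del cache[block_id]                        # block read from the source
    return result

cache, source, target = {}, {7: b"source bytes"}, {}
print(read_target_block(cache, source, target, 7))
```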
Abstract:
Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in a cache backup device. A track discard request is generated to discard, from the cache backup device, tracks removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message, indicating the tracks identified in the queued track discard requests, is sent to the cache backup device, instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing of the track discard requests is switched to a discard single track mode.
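A minimal sketch of the batching behavior, with invented constants and names: in multi-track mode, discard requests accumulate in a queue until a batch threshold is reached and a single discard-multiple-tracks message is sent; after a run of idle polling periods the manager falls back to sending single-track discards.

```python
# A sketch, under invented names and thresholds, of batching track
# discard requests and falling back to single-track mode after a
# predetermined number of idle periods.

from collections import deque

BATCH_SIZE = 4
IDLE_LIMIT = 3

class DiscardManager:
    def __init__(self, backup_device):
        self.queue = deque()
        self.multi_track_mode = True
        self.idle_periods = 0
        self.backup = backup_device     # callable taking a list of tracks

    def request_discard(self, track):
        if not self.multi_track_mode:
            self.backup([track])        # single-track mode: send immediately
            return
        self.queue.append(track)
        if len(self.queue) >= BATCH_SIZE:
            batch = [self.queue.popleft() for _ in range(BATCH_SIZE)]
            self.backup(batch)          # one discard-multiple-tracks message

    def poll(self):
        # Called periodically; counts periods with no queued discard activity.
        if self.multi_track_mode and not self.queue:
            self.idle_periods += 1
            if self.idle_periods >= IDLE_LIMIT:
                self.multi_track_mode = False

mgr = DiscardManager(backup_device=lambda tracks: print("discard", tracks))
for t in ["t1", "t2", "t3", "t4"]:
    mgr.request_discard(t)              # prints one batched message
for _ in range(IDLE_LIMIT):
    mgr.poll()
mgr.request_discard("t5")               # now sent as a single-track discard
```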
Abstract:
Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache maintains unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to the second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and the second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.
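The abstract does not spell out the promotion rule itself, so the sketch below assumes one plausible policy: skip the promotion when the second cache already holds the track (it appears on the inclusive or exclusive list), otherwise copy the demoted track in and record it as exclusive. All names and the policy itself are illustrative assumptions.

```python
# A hedged sketch of using inclusive/exclusive lists to decide whether a
# track demoted from the first cache needs to be promoted to the second.

inclusive = set()   # unmodified tracks in BOTH caches
exclusive = set()   # unmodified tracks in the second cache only

second_cache = {}

def demote_from_first_cache(track_id, data):
    if track_id in inclusive:
        # Second cache already has it; it simply stops being shared.
        inclusive.discard(track_id)
        exclusive.add(track_id)
        return
    if track_id in exclusive:
        return                      # already resident, nothing to promote
    second_cache[track_id] = data   # promote the demoted track
    exclusive.add(track_id)

demote_from_first_cache("track-9", b"unmodified data")
print(sorted(exclusive), list(second_cache))
```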
Abstract:
A technique for limiting an amount of write data stored in a cache memory includes determining a usable region of a non-volatile storage (NVS), determining an amount of write data in a current write request for the cache memory, and determining a failure boundary associated with the current write request. A count of the write data associated with the failure boundary is maintained. The current write request for the cache memory is rejected when a sum of the count of the write data associated with the failure boundary and the write data in the current write request exceeds a determined percentage of the usable region of the NVS.
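A compact sketch of the admission check, with invented limits: cached write data is counted per failure boundary (for example, a RAID rank), and a new write is rejected when that boundary's count plus the new request would exceed a fixed percentage of the usable NVS region.

```python
# A minimal sketch, with invented sizes and names, of the per-failure-
# boundary limit on write data admitted to the cache.

USABLE_NVS_BYTES = 1 << 30          # usable region of the NVS
MAX_FRACTION = 0.25                 # determined percentage per failure boundary

write_bytes_per_boundary = {}       # failure boundary -> cached write bytes

def admit_write(failure_boundary, request_bytes):
    current = write_bytes_per_boundary.get(failure_boundary, 0)
    if current + request_bytes > MAX_FRACTION * USABLE_NVS_BYTES:
        return False                # reject: boundary would exceed its share
    write_bytes_per_boundary[failure_boundary] = current + request_bytes
    return True

print(admit_write("rank-0", 200 << 20))   # True: 200 MiB fits under the 256 MiB cap
print(admit_write("rank-0", 100 << 20))   # False: would exceed the 25% cap
```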
Abstract:
Method, system, and computer program product embodiments for facilitating data transfer from a write cache and NVS via a device adapter to a pool of storage devices by a processor or processors are provided. The processor(s) adaptively vary the destage rate based on the current occupancy of the NVS for a particular storage device and on stage activity related to that storage device. The stage activity includes one or more of the storage device stage activity, device adapter stage activity, device adapter utilized bandwidth, and the read/write speed of the storage device. These factors are generally associated with read response time in the event of a cache miss and are not ordinarily associated with dynamic management of the destage rate. This combination maintains the desired overall occupancy of the NVS while improving response time performance.
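As a hedged illustration only, the sketch below combines NVS occupancy with stage-side activity to pick a destage rate; the linear form, weights, and scaling are assumptions, since the abstract states only that both kinds of factors are taken into account.

```python
# Illustrative formula (assumed, not from the disclosure): NVS occupancy
# pushes the destage rate up, while staging activity and device adapter
# utilization pull it back down to protect read response time.

def destage_rate(nvs_occupancy, stage_activity, adapter_bandwidth_used,
                 max_rate=40):
    """All inputs are fractions in [0, 1]; returns the number of destage tasks."""
    pressure = nvs_occupancy
    drag = 0.5 * stage_activity + 0.5 * adapter_bandwidth_used
    rate = max_rate * max(0.0, pressure - 0.4 * drag)
    return round(rate)

print(destage_rate(nvs_occupancy=0.8, stage_activity=0.2,
                   adapter_bandwidth_used=0.1))   # busy NVS, quiet stages -> 30
print(destage_rate(nvs_occupancy=0.8, stage_activity=0.9,
                   adapter_bandwidth_used=0.9))   # same NVS, heavy staging -> 18
```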