Abstract:
Storing data in a cache memory of a storage device includes providing access to a first segment of the cache memory on behalf of a first group of external host systems coupled to the storage device and providing access to a second segment of the cache memory on behalf of a second group of external host systems coupled to the storage device, where at least a portion of the second segment of the cache memory is not part of the first segment of the cache memory. In some embodiments, no portion of the second segment of the cache memory is part of the first segment. Storing data in a cache memory of a storage device may also include providing a first data structure in the first segment of the cache memory and providing a second data structure in the second segment of the cache memory, where accessing the first segment includes accessing the first data structure and accessing the second segment includes accessing the second data structure. The data structures may be doubly linked ring lists of blocks of data. Each block of data may correspond to a track on a disk drive. Different groups of external host systems may be provided with different access, priority, and levels of service with respect to the different segments of the cache.
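As a purely illustrative sketch (not part of the abstract), the per-group cache segments and their doubly linked ring lists of track-sized blocks could be modeled in Python as follows; all class and variable names here are assumptions:

class CacheBlock:
    """One cache slot, nominally holding the data of a single disk track."""
    def __init__(self, track_id):
        self.track_id = track_id
        self.prev = self
        self.next = self  # a lone block forms a ring of length one

class CacheSegment:
    """A doubly linked ring list of cache blocks serving one group of hosts."""
    def __init__(self, name):
        self.name = name
        self.head = None

    def insert(self, block):
        # Splice the block in just before the head (the logical tail of the ring).
        if self.head is None:
            self.head = block
            return
        tail = self.head.prev
        tail.next = block
        block.prev = tail
        block.next = self.head
        self.head.prev = block

# Two segments, each accessed on behalf of a different group of external hosts.
segment_a = CacheSegment("host-group-A")
segment_b = CacheSegment("host-group-B")
segment_a.insert(CacheBlock(track_id=17))
segment_b.insert(CacheBlock(track_id=42))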
Abstract:
Storing data in a cache memory includes providing a first mechanism for allowing exclusive access to a first portion of the cache memory and providing a second mechanism for allowing exclusive access to a second portion of the cache memory, where exclusive access to the first portion is independent of exclusive access to the second portion. The first and second mechanisms may be software locks. Allowing exclusive access may also include providing a first data structure in the first portion of the cache memory and providing a second data structure in the second portion of the cache memory, where accessing the first portion includes accessing the first data structure and accessing the second portion includes accessing the second data structure. The data structures may be doubly linked ring lists of blocks of data, and each block may correspond to a track on a disk drive. The technique described herein may be generalized to any number of portions.
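A minimal Python sketch of the independent per-portion locks described above; the abstract only states that the mechanisms may be software locks, so the structure below is an assumption:

import threading

class CachePortion:
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()  # grants exclusive access to this portion only
        self.ring = []                # stands in for the portion's ring list of blocks

    def with_exclusive_access(self, fn):
        # Holding this portion's lock does not block access to any other portion.
        with self.lock:
            return fn(self.ring)

portion_1 = CachePortion("portion-1")
portion_2 = CachePortion("portion-2")
portion_1.with_exclusive_access(lambda ring: ring.append("track 17"))
portion_2.with_exclusive_access(lambda ring: ring.append("track 42"))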
Abstract:
A queued lock prioritizes access to a shared resource in a distributed system. Each unsuccessful requestor adaptively delays its next poll for the lock by a period determined as a function of its priority in the lock request queue and the average duration of a significant processor operation involving the resource.
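A minimal Python sketch of the adaptive back-off idea; the abstract does not give the exact delay function, so using queue position multiplied by the average operation duration is an assumption:

import threading
import time

def next_poll_delay(queue_position, avg_operation_seconds):
    # Requestors deeper in the queue wait proportionally longer between polls.
    return queue_position * avg_operation_seconds

def acquire_queued_lock(try_lock, queue_position, avg_operation_seconds):
    # Poll until the lock is granted, sleeping by the adaptive delay after each miss.
    while not try_lock():
        time.sleep(next_poll_delay(queue_position, avg_operation_seconds))

shared = threading.Lock()
acquire_queued_lock(lambda: shared.acquire(blocking=False),
                    queue_position=1, avg_operation_seconds=0.01)
shared.release()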
Abstract:
Accessing stored data includes providing a virtual storage area having a table of pointers that point to sections of at least two other storage areas, where the virtual storage area contains no sections of data, in response to a request for accessing data of the virtual storage area, determining which particular one of the other storage areas contains the data, and accessing the data on the particular one of the other storage areas using the table of pointers. Accessing stored data may also include associating a first one of the other storage areas with the virtual storage area, where the virtual storage area represents a copy of data of the first one of the other storage areas. Accessing stored data may also include causing all of the pointers of the table to initially point to sections of the first one of the other storage areas when the virtual storage area is initially associated with the first one of the other storage areas. The storage areas may be storage devices. The sections may be tracks.
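A minimal Python sketch of a virtual storage area that holds only a table of pointers; the names below (StorageArea, VirtualStorageArea) are illustrative assumptions, with a section playing the role of a track:

class StorageArea:
    def __init__(self, name, sections):
        self.name = name
        self.sections = sections  # maps a section index to its data

class VirtualStorageArea:
    def __init__(self, base_area, num_sections):
        # When first associated with a storage area, every pointer refers to it.
        self.table = [(base_area, i) for i in range(num_sections)]

    def read(self, section_index):
        # Determine which real storage area currently holds this section.
        area, index = self.table[section_index]
        return area.sections[index]

standard_device = StorageArea("STD", {i: "std data %d" % i for i in range(4)})
virtual_device = VirtualStorageArea(standard_device, num_sections=4)
print(virtual_device.read(2))  # resolved through the pointer table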
Abstract:
A scheduler for selecting a logical volume for job generation based on the loading of physical resources in a data storage system. The scheduler determines a job workload for each of the physical resources, selects physical resources based on the job workload and selects a logical volume supported by the selected physical resources in a balanced manner.
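A minimal Python sketch of workload-balanced selection; the abstract does not define how job workload is measured, so the dictionary-based model below is an assumption:

def select_logical_volume(physical_resources):
    # Pick the least-loaded physical resource, then one of its logical volumes.
    resource = min(physical_resources, key=lambda r: r["workload"])
    # Rotate over the chosen resource's volumes to keep the selection balanced.
    volume = resource["volumes"][resource["workload"] % len(resource["volumes"])]
    return volume

disks = [
    {"name": "disk-0", "workload": 3, "volumes": ["LV00", "LV01"]},
    {"name": "disk-1", "workload": 1, "volumes": ["LV10", "LV11"]},
]
print(select_logical_volume(disks))  # selects a volume on the lighter-loaded disk-1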
Abstract:
Described are a storage network and method of migrating data from a source virtual array to a destination virtual array transparently with respect to a storage application executing on a host. The storage application provides particular storage functionality at a source storage array while using metadata during its execution. The metadata used by the storage application are associated with the source virtual array. During a data migration event in which data resident in logical units of storage (LUNs) of the source virtual array are copied to LUNs of the destination virtual array, the metadata are forwarded to a destination storage array, where they become associated with the destination virtual array.
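A minimal Python sketch of forwarding the storage application's metadata during a migration event; the dictionary representation of a virtual array and its metadata is an assumption:

def migrate(source_array, destination_array):
    # Copy the data resident in the source virtual array's LUNs.
    destination_array["luns"] = dict(source_array["luns"])
    # Forward the metadata so it becomes associated with the destination array,
    # leaving the storage application's behavior unchanged across the migration.
    destination_array["metadata"] = source_array.pop("metadata")

source = {"luns": {"lun0": b"data"}, "metadata": {"app_state": "snapshot-map"}}
destination = {"luns": {}, "metadata": {}}
migrate(source, destination)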
Abstract:
Failover is provided from a primary Fibre Channel device to a secondary Fibre Channel device. Primary and secondary Fibre Channel devices are coupled to a Fibre Channel fabric having a database that associates Fibre Channel names and Fibre Channel addresses of Fibre Channel ports coupled to it. All data is copied from the primary Fibre Channel device to the secondary Fibre Channel device. In response to a failure, secondary port names and LUN names are replaced with the primary port names and LUN names, and the fabric updates its database so that the database associates the secondary port and LUN addresses with the primary port and LUN names. The secondary Fibre Channel device thereby assumes the primary Fibre Channel device's identity.
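A minimal Python sketch of the name takeover step; a real fabric name service is far richer, and the dictionary below only illustrates the re-association of names and addresses described above:

fabric_db = {}  # maps a port or LUN world-wide name to its fabric address

def register(name, address):
    fabric_db[name] = address

def failover(primary_names, secondary_ports):
    # The secondary device assumes the primary's names; the fabric then resolves
    # those names to the secondary device's addresses.
    for name, port in zip(primary_names, secondary_ports):
        port["name"] = name
        fabric_db[name] = port["address"]

register("wwn-primary-0", "addr-primary-0")
secondary = [{"name": "wwn-secondary-0", "address": "addr-secondary-0"}]
failover(["wwn-primary-0"], secondary)
print(fabric_db["wwn-primary-0"])  # now resolves to the secondary's address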
Abstract:
Described are systems and methods of migrating data from a source virtual array to a destination virtual array transparently with respect to a management application program executing on a host and using management information to send management messages to the virtual arrays. Data from the source virtual array are copied to the destination virtual array during a data migration event. First and second virtual array management interfaces are associated with the source and destination virtual arrays, respectively. The first and second virtual array management interfaces are exchanged during the data migration event so that the virtual array management interface associated with the destination virtual array becomes associated with the source virtual array and the virtual array management interface associated with the source virtual array becomes associated with the destination virtual array.
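A minimal Python sketch of the interface exchange; representing a virtual array management interface as a simple record keyed by array name is an assumption:

interfaces = {
    "source-array": {"mgmt_name": "mgmt-A", "mgmt_address": "10.0.0.1"},
    "destination-array": {"mgmt_name": "mgmt-B", "mgmt_address": "10.0.0.2"},
}

def exchange_management_interfaces(table, src, dst):
    # After the exchange, the host's management application keeps addressing the
    # same interface, which is now bound to the other array.
    table[src], table[dst] = table[dst], table[src]

exchange_management_interfaces(interfaces, "source-array", "destination-array")
print(interfaces["destination-array"])  # now carries the source array's old interface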