Abstract:
For each high-level protocol, a respective mesh of Transmission Control Protocol (TCP) connections is set up for a cluster of server computers for the forwarding of client requests. Each mesh has a respective pair of TCP connections in opposite directions between each pair of server computers in the cluster. The high-level protocols include, for example, the Network File System (NFS) protocol and the Common Internet File System (CIFS) protocol. Each mesh can be shared among multiple clients because there is no need to maintain separate TCP connection state for each client. The server computers may use Remote Procedure Call (RPC) semantics for the forwarding of the client requests, and prior to forwarding a client request, a new unique transaction ID can be substituted for the original transaction ID in the client request so that forwarded requests have unique transaction IDs.
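As a rough sketch of the transaction ID substitution, the forwarding server might keep a small map from the new, mesh-wide unique ID back to the client and its original ID; the class and method names below are hypothetical and only illustrate the idea.

```python
import itertools

class XidMapper:
    """Hypothetical sketch: remap client transaction IDs (XIDs) before
    forwarding RPC requests over the shared TCP mesh, so requests from
    different clients never collide on the same XID."""

    def __init__(self):
        self._next_xid = itertools.count(1)
        self._pending = {}  # forwarded XID -> (client_id, original XID)

    def on_forward(self, client_id, original_xid):
        # Allocate a mesh-wide unique XID and remember the original.
        new_xid = next(self._next_xid)
        self._pending[new_xid] = (client_id, original_xid)
        return new_xid

    def on_reply(self, forwarded_xid):
        # Restore the original XID so the reply can be routed back to
        # the requesting client unchanged.
        return self._pending.pop(forwarded_xid)
```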
Abstract:
A protocol is provided for allocating file locking tasks between primary and secondary data mover computers in a network file server. When there is frequent read access and infrequent write access to a file, a primary data mover grants read locks to the entire file to secondary data movers, and the secondary data movers grant read locks to clients requesting read access. When write access to the file is needed, the read locks to the entire file are released and the read locks granted to the clients are released or expire or are demoted to non-conflicting byte range locks managed by the primary data mover. Concurrent read and write access to the same file is then managed by the primary data mover.
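The delegation and recall behavior might be sketched as follows; the class and method names are hypothetical, and the recall path simply releases every whole-file read lock before the primary starts managing byte-range locks itself.

```python
class PrimaryLockManager:
    """Hypothetical sketch: while a file is read-mostly, whole-file read
    locks are delegated to secondary data movers; a write request recalls
    those grants, after which the primary manages byte-range locks."""

    def __init__(self):
        self._read_delegations = set()   # secondaries holding whole-file read locks
        self._byte_range_locks = []      # (start, end, owner, mode) after recall

    def grant_read_delegation(self, secondary):
        self._read_delegations.add(secondary)

    def request_write(self, owner, start, end):
        # Recall the whole-file read locks before granting write access.
        for secondary in list(self._read_delegations):
            secondary.release_read_delegation()      # assumed secondary-side API
            self._read_delegations.discard(secondary)
        # Concurrent readers and writers are now handled with byte-range
        # locks managed directly by the primary.
        self._byte_range_locks.append((start, end, owner, "write"))
```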
Abstract:
Embodiments of the present invention provide a method of managing access of multiple client computers to a storage system that supports a limited number of logins. The method comprises, in response to a request to enable a subset of the clients to access resources of the storage system to perform a task, automatically configuring the storage system to provide the subset of the clients access to the resources, and, when the task is completed, automatically re-configuring the storage system so that the subset of the clients is no longer provided with access to the resources of the storage system.
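One way to picture the configure/re-configure cycle is a scoped grant that is revoked when the task finishes; grant_access and revoke_access are assumed, illustrative interfaces rather than an actual storage-system API.

```python
from contextlib import contextmanager

@contextmanager
def temporary_access(storage_system, clients, resources):
    """Hypothetical sketch: give a subset of clients access to storage
    resources for the duration of a task, then revoke that access so the
    limited number of logins is not tied up by idle clients."""
    for client in clients:
        storage_system.grant_access(client, resources)       # assumed API
    try:
        yield
    finally:
        for client in clients:
            storage_system.revoke_access(client, resources)  # assumed API

# Example use: with temporary_access(array, backup_hosts, lun_group): run_backup()
```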
Abstract:
Based on a count of the number of dirty pages in a cache memory, the dirty pages are written from the cache memory to a storage array at a rate having a component proportional to the rate of change in the number of dirty pages in the cache memory. For example, a desired flush rate is computed by adding a first term to a second term. The first term is proportional to the rate of change in the number of dirty pages in the cache memory, and the second term is proportional to the number of dirty pages in the cache memory. The rate component has a smoothing effect on incoming I/O bursts and permits cache flushing to occur at a higher rate closer to the maximum storage array throughput without a significant detrimental impact on client application performance.
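The two-term computation can be written out directly; the gain constants are assumed tunables, and the function below is only a sketch of the described calculation.

```python
def desired_flush_rate(dirty_now, dirty_prev, interval_s, k_rate, k_level):
    """Hypothetical sketch: one term proportional to the rate of change
    in the number of dirty pages, one term proportional to the number of
    dirty pages itself."""
    rate_of_change = (dirty_now - dirty_prev) / interval_s   # pages per second
    return max(0.0, k_rate * rate_of_change + k_level * dirty_now)
```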
Abstract:
Servers in a storage system store a nested multilayer directory structure, and a global index that is an abstract of the directory structure. The global index identifies respective portions of the directory structure that are stored in respective ones of the servers, and it identifies paths through the directory structure linking the respective portions. When a top-down search of the directory structure in response to a client request finds that a portion of the structure is offline, the global index is searched to discover portions of the directory structure located below the offline portion. The global index may also identify the respective server storing each of the respective portions of the directory structure, and may indicate whether or not each of the respective portions of the directory structure is known to be offline.
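A minimal sketch of the offline lookup, assuming the global index is held as a mapping from pathname to the server and online status of each portion (a layout chosen here only for illustration):

```python
def portions_below(global_index, offline_path):
    """Hypothetical sketch: when a top-down search reaches an offline
    portion, consult the global index for portions of the directory
    structure stored below the offline path."""
    # Assumed layout: {path: {"server": name, "online": bool}}
    prefix = offline_path.rstrip("/") + "/"
    return {
        path: entry
        for path, entry in global_index.items()
        if path.startswith(prefix) and entry.get("online", True)
    }
```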
Abstract:
Embodiments of the present invention are directed to techniques for selecting a data path over which to exchange information between a client device and a storage system, choosing between a file system server (NAS) data path type (a first data path type) and a direct (SAN) data path type (a second data path type) based on one or more adjustable path selection factors and/or information regarding components of the computer system. For example, a data path may be selected based on the type of input/output operation to be executed (i.e., whether the operation is a read operation or a write operation) and/or any other suitable path selection factor.
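A path selection policy of this kind might look like the sketch below; the size threshold and the rule of sending large reads over the direct path are illustrative assumptions, not taken from the source.

```python
def select_data_path(operation, size_bytes, san_available,
                     large_io_threshold=1 << 20):
    """Hypothetical sketch: choose between the file-server (NAS) path and
    the direct (SAN) path from adjustable factors such as the operation
    type and request size."""
    if not san_available:
        return "NAS"
    # Example policy: route large reads directly to storage; send writes
    # and small reads through the file system server.
    if operation == "read" and size_bytes >= large_io_threshold:
        return "SAN"
    return "NAS"
```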
Abstract:
A write interface in a file server provides permission management for concurrent access to data blocks of a file, ensures correct use and update of indirect blocks in a tree of the file, preallocates file blocks when the file is extended, solves access conflicts for concurrent reads and writes to the same block, and permits the use of pipelined processors. For example, a write operation includes obtaining a per file allocation mutex (mutually exclusive lock), preallocating a metadata block, releasing the allocation mutex, issuing an asynchronous write request for writing to the file, waiting for the asynchronous write request to complete, obtaining the allocation mutex, committing the preallocated metadata block, and releasing the allocation mutex. Since no locks are held during the writing of data to the on-disk storage and this data write takes the majority of the time, the method enhances concurrency while maintaining data integrity.
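The lock-hold pattern in the example write operation might be sketched as follows; the attribute and method names on the file object are assumptions used only to show where the allocation mutex is and is not held.

```python
def write_with_preallocation(file_obj, data, offset):
    """Hypothetical sketch of the described sequence: the per-file
    allocation mutex is held only while preallocating and later
    committing the metadata block, never during the data write."""
    with file_obj.allocation_mutex:                      # assumed per-file lock
        block = file_obj.preallocate_metadata_block()    # assumed API
    request = file_obj.write_async(data, offset)         # no lock held here
    request.wait()                                       # the data write dominates the elapsed time
    with file_obj.allocation_mutex:
        file_obj.commit_metadata_block(block)            # assumed API
```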
Abstract:
Network servers in a cluster share the same network protocol address for incoming client requests, and in a data link layer protocol a reply of a client to a request from a server is returned to this same server. For example: (1) ports of the servers are clustered into one single network channel used for incoming and outgoing requests to and from the servers; or (2) ports of the servers are clustered into one single network channel used for incoming requests to the servers and a separate port of each of the servers is used for outgoing requests from each of the servers; or (3) logical ports of the servers are clustered into one network channel used for requests to the servers and a separate logical port of each of the servers is used for outgoing requests from each of the servers.
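For illustration only, the three alternatives might be summarized in a configuration record; the field names and addresses below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ClusterChannelConfig:
    """Hypothetical configuration record for the clustering alternatives."""
    shared_address: str            # network protocol address shared by all servers
    incoming_channel: str          # channel aggregating the clustered (or logical) ports
    separate_outgoing_ports: bool  # False for option (1); True for options (2) and (3)

# Option (1): one channel carries both incoming and outgoing requests.
single_channel = ClusterChannelConfig("10.1.2.3", "trunk0", separate_outgoing_ports=False)
# Options (2) and (3): the shared channel handles incoming requests, while
# each server uses its own port (or logical port) for outgoing requests.
split_channels = ClusterChannelConfig("10.1.2.3", "trunk0", separate_outgoing_ports=True)
```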
Abstract:
A shallow file is adapted for intensive read-only access to data of a primary file. The primary file resides in another file system or file server. The shallow file includes the data block mapping metadata of the primary file and a link to the primary file. To open the shallow file, the file system manager of the shallow file obtains a read lock on the primary file from the file system manager of the primary file. Then the file system manager of the shallow file may use the data block mapping in the shallow file to access the file data from the primary file in storage without participation of the file system manager of the primary file. This permits offloading of data protection services for secure and efficient storage of a backup copy of the file data.
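A read through the shallow file might proceed as sketched below; every interface named here (the lock calls, the block mapping, and the storage read) is an assumption made for illustration.

```python
def read_via_shallow_file(shallow, primary_manager, storage, offset, length):
    """Hypothetical sketch: take a read lock from the primary file's
    file system manager, then read the data blocks directly from storage
    using the block mapping copied into the shallow file."""
    lock = primary_manager.acquire_read_lock(shallow.primary_link)   # assumed API
    try:
        block_size = shallow.block_mapping.block_size
        first = offset // block_size
        last = (offset + length - 1) // block_size
        data = bytearray()
        for index in range(first, last + 1):
            addr = shallow.block_mapping.address_of(index)           # assumed API
            data += storage.read_block(addr)                         # assumed API
        start = offset - first * block_size
        return bytes(data[start:start + length])
    finally:
        primary_manager.release_read_lock(lock)
```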
Abstract:
Transform coefficients for blocks of pixels in an original picture are quantized to produce respective sets of quantization indices for the blocks of pixels. The quantization indices for at least some of the blocks are produced by using a quantization step size that is not uniform within each block. Largest magnitude quantization indices are selected from the respective sets of quantization indices for (run, level) encoding to produce the (run, level) encoded picture. For example, MPEG-2 coded video includes a set of non-zero AC discrete cosine transform (DCT) coefficients for 8×8 blocks of pixels. For scaling the MPEG-2 coded video, non-zero AC DCT coefficients are removed from the MPEG-2 coded video to produce reduced-quality MPEG-2 coded video that includes no more than a selected number of largest magnitude quantization indices for the non-zero AC DCT coefficients for each 8×8 block.
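The per-block selection step can be sketched as keeping only the largest-magnitude non-zero AC indices; the dictionary representation of a block's coefficients is an assumption made for illustration.

```python
def keep_largest_ac_coefficients(ac_indices, keep_count):
    """Hypothetical sketch: from the non-zero AC quantization indices of
    one 8x8 block (given as {zigzag_position: index}), keep only the
    keep_count largest-magnitude values, preserving positions for
    (run, level) encoding."""
    kept = sorted(ac_indices, key=lambda pos: abs(ac_indices[pos]),
                  reverse=True)[:keep_count]
    return {pos: ac_indices[pos] for pos in sorted(kept)}
```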