Abstract:
Presented herein are methods, non-transitory computer readable media, and devices for efficiently administering locks for shared resources, such as data blocks, stored on a storage system. Methods for stamping a plurality of computer data objects are disclosed which include: accessing at least one of the plurality of computer data objects by a first data thread; assigning, by the first data thread, a stamp to the at least one of the plurality of computer data objects, to signify that the at least one of the plurality of computer data objects is associated with the first data thread; preventing subsequent access by a second data thread to the stamped at least one of the plurality of computer data objects; and determining, upon an event, that the stamp is no longer active, effectively releasing the at least one of the plurality of computer data objects.
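A minimal C sketch of the stamping scheme described above, assuming a per-object atomic stamp field that holds the owning thread's identifier; the field name, helper names, and the compare-and-swap approach are illustrative assumptions, not details taken from the abstract:

    /* Stamp-based association: a thread claims an object by installing
     * its ID into the object's stamp field with a compare-and-swap. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct data_object {
        atomic_uintptr_t stamp;   /* 0 = unstamped; otherwise the owning thread's ID */
        /* ... object payload ... */
    } data_object_t;

    /* First data thread assigns a stamp, associating the object with itself. */
    static bool try_stamp(data_object_t *obj, uintptr_t thread_id)
    {
        uintptr_t expected = 0;
        return atomic_compare_exchange_strong(&obj->stamp, &expected, thread_id);
    }

    /* A second data thread is denied access while the stamp is active. */
    static bool may_access(data_object_t *obj, uintptr_t thread_id)
    {
        uintptr_t s = atomic_load(&obj->stamp);
        return s == 0 || s == thread_id;
    }

    /* Upon the releasing event the stamp becomes inactive, effectively
     * releasing the object for other threads. */
    static void release_stamp(data_object_t *obj)
    {
        atomic_store(&obj->stamp, 0);
    }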
Abstract:
Presented herein are methods, non-transitory computer readable media, and devices for integrating a hybrid model of fine-grained locking and data-partitioning wherein fine-grained locking is added to existing systems that are based on hierarchical data-partitioning in order to increase parallelism with minimal code rewrite. Methods for integrating a hybrid model of fine-grained locking and data-partitioning are disclosed which include: creating, by a network storage server, a plurality of domains for execution of processes of the network storage server, the plurality of domains including a domain; creating a hierarchy of storage filesystem subdomains within the domain, wherein each of the subdomains corresponds to one or more types of processes, wherein at least one of the storage filesystem subdomains maps to a data object that is locked via fine-grained locking; and assigning processes for simultaneous execution by the storage filesystem subdomains within the domain and the at least one subdomain that maps to the data object locked via fine-grained locking.
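The hybrid model can be pictured as a subdomain hierarchy in which most subdomains rely on partitioning for exclusivity while selected data objects carry their own lock. The following C sketch is a hedged illustration under that assumption; the structures and names (subdomain_t, object_lock, the example subdomains) are hypothetical:

    #include <pthread.h>

    typedef struct subdomain subdomain_t;
    struct subdomain {
        const char      *name;
        subdomain_t     *parent;       /* hierarchy of subdomains within the domain */
        pthread_mutex_t *object_lock;  /* non-NULL: mapped object uses fine-grained locking */
    };

    static pthread_mutex_t inode_map_lock = PTHREAD_MUTEX_INITIALIZER;

    static subdomain_t filesystem = { "filesystem", NULL,        NULL };
    static subdomain_t metadata   = { "metadata",   &filesystem, &inode_map_lock };
    static subdomain_t user_data  = { "user-data",  &filesystem, NULL };

    /* A process assigned to a subdomain takes the object lock only when
     * the mapped object is protected by fine-grained locking; otherwise
     * the partitioning itself guarantees exclusive access, so work in
     * sibling subdomains can execute simultaneously. */
    static void run_in_subdomain(subdomain_t *sd, void (*work)(void *), void *arg)
    {
        if (sd->object_lock)
            pthread_mutex_lock(sd->object_lock);
        work(arg);
        if (sd->object_lock)
            pthread_mutex_unlock(sd->object_lock);
    }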
Abstract:
Methods, non-transitory computer readable media, and devices for dynamically changing a number of partitions at runtime in a hierarchical data partitioning model include determining a number of coarse mapping objects, determining a number of fine mapping objects, and setting a number of coarse partitions and a number of fine partitions based on the determined number of coarse mapping objects and the determined number of fine mapping objects.
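As a hedged illustration, the runtime sizing step might look like the following C sketch, which clamps each partition count to the available CPU count; the clamping policy and all names are assumptions made for the example:

    #include <stddef.h>

    typedef struct partition_plan {
        size_t coarse_partitions;
        size_t fine_partitions;
    } partition_plan_t;

    static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

    /* Set partition counts from the counted mapping objects, never
     * exceeding what can run concurrently on this node. */
    static partition_plan_t plan_partitions(size_t n_coarse_objects,
                                            size_t n_fine_objects,
                                            size_t n_cpus)
    {
        partition_plan_t p;
        p.coarse_partitions = min_sz(n_coarse_objects, n_cpus);
        p.fine_partitions   = min_sz(n_fine_objects, n_cpus);
        return p;
    }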
Abstract:
Presented herein are methods, non-transitory computer readable media, and devices for integrating a workload management scheme for a file system buffer cache with a global recycle queue infrastructure. Methods for allocating a certain portion of the buffer cache without physically partitioning the buffer resources are disclosed which include: identifying a workload from a plurality of workloads; allocating a portion of the buffer cache in the data storage network for usage by the identified workload; tagging each buffer from within the buffer cache with a workload identifier and tracking each buffer; determining if the workload is exceeding its allocated buffer cache and, upon determining the workload is exceeding its allocated percentage of the buffer cache, making the workload's excess buffers available to scavenge; and determining if the workload is exceeding a soft-limit on the allowable usage of the buffer cache and, upon determining the workload is exceeding its soft-limit, degrading the prioritization of subsequent buffers to prevent the workload from thrashing out buffers of other workloads.
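A hedged C sketch of the per-workload accounting described above, assuming a simple counter-based model in which tagging a buffer increments the owning workload's usage; all structures, fields, and the specific priority-degradation rule are illustrative assumptions:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct workload {
        unsigned id;
        size_t   buffers_in_use;   /* tracked as buffers are tagged */
        size_t   allocated_share;  /* hard allocation, in buffers */
        size_t   soft_limit;       /* usage above this degrades priority */
    } workload_t;

    typedef struct buffer {
        unsigned workload_id;      /* tag identifying the owning workload */
        int      priority;
    } buffer_t;

    static void tag_buffer(buffer_t *buf, workload_t *w, int base_priority)
    {
        buf->workload_id = w->id;
        w->buffers_in_use++;
        /* Over the soft limit: degrade subsequent buffers so this workload
         * cannot thrash out the buffers of other workloads. */
        buf->priority = (w->buffers_in_use > w->soft_limit)
                            ? base_priority - 1
                            : base_priority;
    }

    /* Buffers beyond the hard allocation become eligible for scavenging. */
    static bool scavengeable(const workload_t *w)
    {
        return w->buffers_in_use > w->allocated_share;
    }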
Abstract:
A method, non-transitory computer readable medium, and data storage computing device that obtains data to be stored in a buffer in a buffer cache, determines a priority of the buffer based on the data, identifies one of a set of global recycle queues based on the priority, and inserts the buffer and metadata into the global recycle queue. When the global recycle queue is determined to be a lowest priority global recycle queue and the buffer is determined to be a least recently used buffer, the buffer is removed from the global recycle queue and inserted into a per-thread recycle queue. When the buffer is least recently used in the per-thread recycle queue, the buffer is removed from the per-thread recycle queue and placed in a free pool. With this technology, buffer cache can be more efficiently managed, particularly with respect to aging and scavenging operations, among other advantages.
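The buffer lifecycle described above (global recycle queue, then per-thread recycle queue, then free pool) can be illustrated with the following simplified C sketch; the FIFO queue representation, the fixed priority count, and all names are assumptions made for the example:

    #include <stddef.h>

    #define NUM_PRIORITIES 8

    typedef struct buf {
        struct buf *next;
        int priority;
        /* ... data and metadata ... */
    } buf_t;

    typedef struct queue { buf_t *head, *tail; } queue_t;  /* FIFO: head is the LRU end */

    static queue_t global_q[NUM_PRIORITIES];
    static _Thread_local queue_t per_thread_q;
    static queue_t free_pool;

    static void enqueue(queue_t *q, buf_t *b)
    {
        b->next = NULL;
        if (q->tail) q->tail->next = b; else q->head = b;
        q->tail = b;
    }

    static buf_t *dequeue_lru(queue_t *q)
    {
        buf_t *b = q->head;
        if (b) { q->head = b->next; if (!q->head) q->tail = NULL; }
        return b;
    }

    /* Insert a buffer into the global recycle queue matching its priority. */
    static void insert_buffer(buf_t *b) { enqueue(&global_q[b->priority], b); }

    /* One aging step: the LRU buffer of the lowest-priority global queue
     * moves to this thread's recycle queue, and the per-thread LRU buffer
     * falls to the free pool. */
    static void scavenge_step(void)
    {
        buf_t *b = dequeue_lru(&global_q[0]);
        if (b) enqueue(&per_thread_q, b);
        b = dequeue_lru(&per_thread_q);
        if (b) enqueue(&free_pool, b);
    }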
Abstract:
Presented herein are methods, non-transitory computer readable media, and devices for optimizing thread assignment to schedulers, avoiding starvation of individual data partitions, and maximizing parallelism in the presence of hierarchical data partitioning. Methods are disclosed which include: partitioning, by a network storage server, a scheduler servicing a data partitioned system into a plurality of autonomous schedulers; determining what fraction of thread resources in the data partitioned system at least one of the plurality of autonomous schedulers is to receive; and determining, with minimal synchronization, when it is time to allow the at least one of the plurality of autonomous schedulers servicing a coarse hierarchy to run.
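One way to picture this, as a hedged C sketch rather than the patented mechanism itself: the thread pool is divided among autonomous per-partition schedulers, and a lock-free scan of per-scheduler activity counters decides when the scheduler serving the coarse hierarchy may run. All names and the specific share-division policy are assumptions:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct sched {
        size_t        thread_share;  /* fraction of thread resources this scheduler receives */
        atomic_size_t active;        /* threads currently running in this scheduler */
    } sched_t;

    static sched_t fine_scheds[16];
    static sched_t coarse_sched;

    /* Divide the thread pool across the fine-partition schedulers,
     * reserving the remainder for the coarse scheduler. */
    static void assign_shares(size_t total_threads, size_t n_fine)
    {
        size_t per = total_threads / (n_fine + 1);
        for (size_t i = 0; i < n_fine; i++)
            fine_scheds[i].thread_share = per;
        coarse_sched.thread_share = total_threads - per * n_fine;
    }

    /* The coarse-hierarchy scheduler may run once no fine scheduler is
     * active; a lock-free scan keeps synchronization minimal. */
    static bool coarse_may_run(size_t n_fine)
    {
        for (size_t i = 0; i < n_fine; i++)
            if (atomic_load_explicit(&fine_scheds[i].active,
                                     memory_order_acquire) != 0)
                return false;
        return true;
    }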