AUTO-EXPIRING LOCKS BASED ON OBJECT STAMPING

    Publication number: US20190340162A1

    Publication date: 2019-11-07

    Application number: US16513362

    Filing date: 2019-07-16

    Applicant: NETAPP, INC.

    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for efficiently administering locks for shared resources, such as data blocks, stored on a storage system. Methods for stamping a plurality of computer data objects are disclosed which include: accessing at least one of the plurality of computer data objects by a first data thread; assigning, by the first data thread, a stamp to the at least one of the plurality of computer data objects, to signify the at least one of the plurality of computer data objects is associated with the first data thread; preventing subsequent access by a second data thread to the stamped at least one of the plurality of computer data objects; and determining the stamp is no longer active, upon an event, effectively releasing the at least one of the plurality of computer data objects.
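The stamping scheme described in the abstract can be sketched in Python. This is an illustrative sketch only, not the patented implementation: the epoch counter standing in for the expiring "event", and all class and method names, are assumptions.

```python
import threading

class StampedObject:
    """A shared data object carrying an auto-expiring stamp (or None)."""
    def __init__(self):
        self.stamp = None  # (owner_id, epoch) when stamped

class StampLockManager:
    """Grants access by stamping objects; an event expires all stamps at once."""
    def __init__(self):
        self.epoch = 0                  # bumped on the expiring "event"
        self._lock = threading.Lock()   # protects stamp/epoch updates

    def try_access(self, obj, owner_id):
        """Stamp obj for owner_id; refuse if another owner holds a live stamp."""
        with self._lock:
            if obj.stamp is not None:
                holder, stamped_epoch = obj.stamp
                # a stamp from an older epoch is no longer active
                if stamped_epoch == self.epoch and holder != owner_id:
                    return False
            obj.stamp = (owner_id, self.epoch)
            return True

    def expire_all(self):
        """The event: advancing the epoch implicitly releases every stamp."""
        with self._lock:
            self.epoch += 1
```

Releasing by epoch advance, rather than by touching each object, is what makes the locks "auto-expiring": no per-object unlock pass is needed.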

    Hybrid Model Of Fine-Grained Locking And Data Partitioning

    Publication number: US20240411726A1

    Publication date: 2024-12-12

    Application number: US18740944

    Filing date: 2024-06-12

    Applicant: NetApp, Inc.

    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for integrating a hybrid model of fine-grained locking and data-partitioning wherein fine-grained locking is added to existing systems that are based on hierarchical data-partitioning in order to increase parallelism with minimal code re-write. Methods for integrating a hybrid model of fine-grained locking and data-partitioning are disclosed which include: creating, by a network storage server, a plurality of domains for execution of processes of the network storage server, the plurality of domains including a domain; creating a hierarchy of storage filesystem subdomains within the domain, wherein each of the subdomains corresponds to one or more types of processes, wherein at least one of the storage filesystem subdomains maps to a data object that is locked via fine-grained locking; and assigning processes for simultaneous execution by the storage filesystem subdomains within the domain and the at least one subdomain that maps to the data object locked via fine-grained locking.
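The hybrid model combines two protection styles: a subdomain runs its processes exclusively (partitioning), while a specific data object carries its own lock so processes from different subdomains can touch it concurrently. A minimal Python sketch of that split, with illustrative names (the subdomain labels and counter object are assumptions):

```python
import threading

class Subdomain:
    """A filesystem subdomain: hierarchical partitioning runs it exclusively."""
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()   # coarse: one process at a time

    def run(self, work):
        with self.lock:
            return work()

class FineLockedCounter:
    """A data object protected by its own fine-grained lock, so processes in
    different subdomains may update it concurrently and safely."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def add(self, delta):
        with self._lock:
            self.value += delta

counter = FineLockedCounter()
subdomains = [Subdomain("vol0"), Subdomain("vol1")]

# Two subdomains execute simultaneously; they serialize only on the
# fine-grained lock of the shared object, not on each other.
threads = [
    threading.Thread(target=sub.run,
                     args=(lambda: [counter.add(1) for _ in range(1000)],))
    for sub in subdomains
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of the hybrid is visible in the lock placement: only the one hot object pays for a fine-grained lock, while the rest of the code keeps its existing partitioned (per-subdomain) discipline.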

    Adaptive data-partitioning model that responds to observed workload

    Publication number: US10140021B2

    Publication date: 2018-11-27

    Application number: US14757430

    Filing date: 2015-12-23

    Applicant: NetApp, Inc.

    Abstract: Methods, non-transitory computer readable media, and devices for dynamically changing a number of partitions at runtime in a hierarchical data partitioning model include determining a number of coarse mapping objects, determining a number of fine mapping objects, and setting a number of coarse partitions and a number of fine partitions based on the determined number of coarse mapping objects and the determined number of fine mapping objects.
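The core of the adaptive model is a pair of counts driving a pair of partition sizes. A minimal sketch, assuming simple clamping bounds and hash-based placement (the bounds and function names are illustrative, not taken from the patent):

```python
def choose_partition_counts(num_coarse_objects, num_fine_objects,
                            min_parts=1, max_coarse=64, max_fine=256):
    """Set coarse/fine partition counts from the observed numbers of
    mapping objects; the clamping bounds are illustrative assumptions."""
    coarse = min(max(min_parts, num_coarse_objects), max_coarse)
    fine = min(max(min_parts, num_fine_objects), max_fine)
    return coarse, fine

def partition_of(obj_key, num_partitions):
    """Map an object to one of the partitions chosen above."""
    return hash(obj_key) % num_partitions
```

Because the counts are inputs rather than compile-time constants, the partition layout can be recomputed at runtime as the observed workload changes.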

    WORKLOAD MANAGEMENT IN A GLOBAL RECYCLE QUEUE INFRASTRUCTURE
    Type: Invention application; Status: Granted

    Publication number: US20170060753A1

    Publication date: 2017-03-02

    Application number: US14839450

    Filing date: 2015-08-28

    Applicant: NetApp, Inc.

    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for integrating a workload management scheme for a file system buffer cache with a global recycle queue infrastructure. Methods for allocating a certain portion of the buffer cache without physically partitioning the buffer resources are disclosed which include: identifying a workload from a plurality of workloads; allocating the buffer cache in the data storage network for usage by the identified workload; tagging a buffer from within the buffer cache with a workload identifier and tracking each buffer; determining if the workload is exceeding its allocated buffer cache and, upon determining the workload is exceeding its allocated percentage of buffer cache, making the workload's excess buffers available to scavenge; and determining if the workload is exceeding a soft limit on the allowable usage of the buffer cache and, upon determining the workload is exceeding its soft limit, degrading the prioritization of subsequent buffers, preventing the workload from thrashing out buffers of other workloads.


    METHODS FOR MANAGING A BUFFER CACHE AND DEVICES THEREOF
    Type: Invention application; Status: Pending (published)

    Publication number: US20160371225A1

    Publication date: 2016-12-22

    Application number: US14743322

    Filing date: 2015-06-18

    Applicant: NetApp, Inc.

    CPC classification number: G06F15/167 H04L49/90 H04L67/1097

    Abstract: A method, non-transitory computer readable medium, and data storage computing device that obtains data to be stored in a buffer in a buffer cache, determines a priority of the buffer based on the data, identifies one of a set of global recycle queues based on the priority, and inserts the buffer and metadata into the global recycle queue. When the global recycle queue is determined to be a lowest priority global recycle queue and the buffer is determined to be a least recently used buffer, the buffer is removed from the global recycle queue and inserted into a per-thread recycle queue. When the buffer is least recently used in the per-thread recycle queue, the buffer is removed from the per-thread recycle queue and placed in a free pool. With this technology, buffer cache can be more efficiently managed, particularly with respect to aging and scavenging operations, among other advantages.
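The buffer lifecycle in this abstract has three stages: a priority-indexed set of global recycle queues, a per-thread recycle queue, and a free pool. A minimal Python sketch using deques as LRU queues (queue count, method names, and the single per-thread queue are simplifying assumptions):

```python
from collections import deque

class RecycleQueueCache:
    """Global priority queues feeding a per-thread queue and a free pool."""
    def __init__(self, num_priorities):
        # index 0 is the lowest-priority global recycle queue;
        # each deque keeps LRU at the left, MRU at the right
        self.global_queues = [deque() for _ in range(num_priorities)]
        self.per_thread_queue = deque()
        self.free_pool = []

    def insert(self, buf, priority):
        """A new buffer enters the global queue matching its priority."""
        self.global_queues[priority].append(buf)

    def age_one(self):
        """Demote the LRU buffer of the lowest non-empty global queue
        into the per-thread recycle queue."""
        for q in self.global_queues:
            if q:
                self.per_thread_queue.append(q.popleft())
                return

    def scavenge_one(self):
        """The LRU buffer of the per-thread queue moves to the free pool."""
        if self.per_thread_queue:
            self.free_pool.append(self.per_thread_queue.popleft())
```

Separating aging (global queue to per-thread queue) from scavenging (per-thread queue to free pool) is what lets each stage run cheaply and independently.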


    Hybrid model of fine-grained locking and data partitioning

    Publication number: US11301430B2

    Publication date: 2022-04-12

    Application number: US16562852

    Filing date: 2019-09-06

    Applicant: NetApp Inc.

    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for integrating a hybrid model of fine-grained locking and data-partitioning wherein fine-grained locking is added to existing systems that are based on hierarchical data-partitioning in order to increase parallelism with minimal code re-write. Methods for integrating a hybrid model of fine-grained locking and data-partitioning are disclosed which include: creating, by a network storage server, a plurality of domains for execution of processes of the network storage server, the plurality of domains including a domain; creating a hierarchy of storage filesystem subdomains within the domain, wherein each of the subdomains corresponds to one or more types of processes, wherein at least one of the storage filesystem subdomains maps to a data object that is locked via fine-grained locking; and assigning processes for simultaneous execution by the storage filesystem subdomains within the domain and the at least one subdomain that maps to the data object locked via fine-grained locking.

    Methods for managing a buffer cache and devices thereof

    Publication number: US10606795B2

    Publication date: 2020-03-31

    Application number: US14743322

    Filing date: 2015-06-18

    Applicant: NetApp, Inc.

    Abstract: A method, non-transitory computer readable medium, and data storage computing device that obtains data to be stored in a buffer in a buffer cache, determines a priority of the buffer based on the data, identifies one of a set of global recycle queues based on the priority, and inserts the buffer and metadata into the global recycle queue. When the global recycle queue is determined to be a lowest priority global recycle queue and the buffer is determined to be a least recently used buffer, the buffer is removed from the global recycle queue and inserted into a per-thread recycle queue. When the buffer is least recently used in the per-thread recycle queue, the buffer is removed from the per-thread recycle queue and placed in a free pool. With this technology, buffer cache can be more efficiently managed, particularly with respect to aging and scavenging operations, among other advantages.

    Efficient distributed scheduler for a data partitioned system

    Publication number: US10521269B2

    Publication date: 2019-12-31

    Application number: US15583932

    Filing date: 2017-05-01

    Applicant: NETAPP, INC.

    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for optimizing thread assignment to schedulers, avoiding starvation of individual data partitions, and maximizing parallelism in the presence of hierarchical data partitioning, which include: partitioning, by a network storage server, a scheduler servicing a data partitioned system into a plurality of autonomous schedulers; determining what fraction of thread resources in the data partitioned system at least one of the plurality of autonomous schedulers is to receive; and determining, with minimal synchronization, when it is time to allow the at least one of the plurality of autonomous schedulers servicing a coarse hierarchy to run.
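The three steps in this abstract (partition the scheduler, assign thread fractions, gate the coarse-hierarchy scheduler with minimal synchronization) can be sketched in Python. All names and the specific gating rule are assumptions for illustration, not the patented test:

```python
class AutonomousScheduler:
    """One of the schedulers the global scheduler is partitioned into."""
    def __init__(self, name, num_threads):
        self.name = name
        self.num_threads = num_threads

def partition_scheduler(total_threads, fractions):
    """Split the system's thread resources among autonomous schedulers
    by fraction; every scheduler receives at least one thread so no
    partition starves."""
    return [AutonomousScheduler(name, max(1, int(frac * total_threads)))
            for name, frac in fractions.items()]

def coarse_may_run(coarse_pending, fine_busy_flags):
    """A minimal-synchronization gate (illustrative rule): let the scheduler
    servicing the coarse hierarchy run only when it has pending work and no
    fine partition is currently busy."""
    return coarse_pending and not any(fine_busy_flags)
```

Reading a few busy flags, rather than taking a shared lock, is what keeps the coarse-versus-fine decision cheap enough to evaluate on every scheduling pass.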

    Auto-expiring locks based on object stamping

    Publication number: US10452633B2

    Publication date: 2019-10-22

    Application number: US14928481

    Filing date: 2015-10-30

    Applicant: NetApp, Inc.

    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for efficiently administering locks for shared resources, such as data blocks, stored on a storage system. Methods for stamping a plurality of computer data objects are disclosed which include: accessing at least one of the plurality of computer data objects by a first data thread; assigning, by the first data thread, a stamp to the at least one of the plurality of computer data objects, to signify the at least one of the plurality of computer data objects is associated with the first data thread; preventing subsequent access by a second data thread to the stamped at least one of the plurality of computer data objects; and determining the stamp is no longer active, upon an event, effectively releasing the at least one of the plurality of computer data objects.

    Workload management in a global recycle queue infrastructure

    Publication number: US09996470B2

    Publication date: 2018-06-12

    Application number: US14839450

    Filing date: 2015-08-28

    Applicant: NetApp, Inc.

    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for integrating a workload management scheme for a file system buffer cache with a global recycle queue infrastructure. Methods for allocating a certain portion of the buffer cache without physically partitioning the buffer resources are disclosed which include: identifying a workload from a plurality of workloads; allocating the buffer cache in the data storage network for usage by the identified workload; tagging a buffer from within the buffer cache with a workload identifier and tracking each buffer; determining if the workload is exceeding its allocated buffer cache and, upon determining the workload is exceeding its allocated percentage of buffer cache, making the workload's excess buffers available to scavenge; and determining if the workload is exceeding a soft limit on the allowable usage of the buffer cache and, upon determining the workload is exceeding its soft limit, degrading the prioritization of subsequent buffers, preventing the workload from thrashing out buffers of other workloads.
