METHODS AND SYSTEMS FOR DYNAMIC HASHING IN CACHING SUB-SYSTEMS
    1.
    Invention Application (Pending, Published)

    Publication Number: US20160103767A1

    Publication Date: 2016-04-14

    Application Number: US14510829

    Application Date: 2014-10-09

    Applicant: NETAPP, INC.

    CPC classification number: G06F3/067 G06F3/0611 G06F3/0638 G06F12/0868

    Abstract: Methods and systems for dynamic hashing in cache sub-systems are provided. The method includes analyzing a plurality of input/output (I/O) requests for determining a pattern indicating if the I/O requests are random or sequential; and using the pattern for dynamically changing a first input to a second input for computing a hash index value by a hashing function that is used to index into a hashing data structure to look up a cache block to cache an I/O request to read or write data, where for random I/O requests, a segment size is the first input to a hashing function to compute a first hash index value and for sequential I/O requests, a stripe size is used as the second input for computing a second hash index value.
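    The switching mechanism described in this abstract can be sketched in a few lines. The following is an illustrative sketch only: the pattern-detection heuristic, constants, and names (SEGMENT_SIZE, STRIPE_SIZE, classify_pattern) are assumptions, not taken from the patent.

    ```python
    # Hypothetical sketch of the dynamic-hashing idea: the granularity fed
    # to the hash function switches between a segment size (random I/O)
    # and a stripe size (sequential I/O). All constants are illustrative.

    SEGMENT_SIZE = 64 * 1024    # assumed segment size for random I/O
    STRIPE_SIZE = 512 * 1024    # assumed stripe size for sequential I/O
    HASH_BUCKETS = 1024         # slots in the hashing data structure

    def classify_pattern(offsets, gap_threshold=8):
        """Label a window of request offsets 'sequential' when consecutive
        requests stay close together, else 'random' (assumed heuristic)."""
        gaps = [abs(b - a) for a, b in zip(offsets, offsets[1:])]
        return "sequential" if gaps and max(gaps) <= gap_threshold else "random"

    def hash_index(byte_offset, pattern):
        """Compute the hash index with a pattern-dependent input granularity."""
        granularity = STRIPE_SIZE if pattern == "sequential" else SEGMENT_SIZE
        return (byte_offset // granularity) % HASH_BUCKETS
    ```

    A sequential stream thus maps a whole stripe to one index, while random requests are indexed at the finer segment granularity.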

    Methods and systems for dynamic hashing in caching sub-systems
    2.
    Invention Grant

    Publication Number: US10481835B2

    Publication Date: 2019-11-19

    Application Number: US14510829

    Application Date: 2014-10-09

    Applicant: NETAPP, INC.

    Abstract: Methods and systems for dynamic hashing in cache sub-systems are provided. The method includes analyzing a plurality of input/output (I/O) requests for determining a pattern indicating if the I/O requests are random or sequential; and using the pattern for dynamically changing a first input to a second input for computing a hash index value by a hashing function that is used to index into a hashing data structure to look up a cache block to cache an I/O request to read or write data, where for random I/O requests, a segment size is the first input to a hashing function to compute a first hash index value and for sequential I/O requests, a stripe size is used as the second input for computing a second hash index value.

    Data tracking for efficient recovery of a storage array
    3.
    Invention Grant (In Force)

    Publication Number: US09547552B2

    Publication Date: 2017-01-17

    Application Number: US14567743

    Application Date: 2014-12-11

    Applicant: NetApp, Inc.

    Abstract: A system and method for maintaining operation of a storage array with one or more failed storage devices and for quickly recovering when failing devices are replaced are provided. In some embodiments, the method includes receiving a data transaction directed to a volume and determining that a storage device associated with the volume is inoperable. In response to determining that the storage device is inoperable, a data extent is recorded in a change log in a storage controller cache. The data extent is associated with the data transaction and allocated to the storage device that is inoperable. The data transaction is performed using at least one other storage device associated with the volume, and data allocated to the storage device is subsequently reconstructed using the recorded data extent.
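    The change-log idea above lends itself to a short sketch. This is an illustrative model only, assuming extents are identified by integers; the class and method names are hypothetical, not from the patent.

    ```python
    # Illustrative sketch: while a member device is down, writes aimed at
    # it are recorded as dirty extents in a change log, so a replacement
    # device can be rebuilt incrementally rather than in full.

    class DegradedVolume:
        def __init__(self, failed_device):
            self.failed_device = failed_device
            self.change_log = set()   # extents dirtied while degraded

        def write(self, device, extent):
            """Perform a write; log the extent if it targets the failed device."""
            if device == self.failed_device:
                self.change_log.add(extent)
            # ... data/parity is written to the surviving devices here ...

        def extents_to_rebuild(self):
            """Only logged extents need reconstruction after replacement."""
            return sorted(self.change_log)
    ```

    Recovery time then scales with the amount of data changed while degraded, not with the capacity of the replaced device.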

    GENERATING PREDICTIVE CACHE STATISTICS FOR VARIOUS CACHE SIZES
    4.
    Invention Application (Pending, Published)

    Publication Number: US20150081981A1

    Publication Date: 2015-03-19

    Application Number: US14031999

    Application Date: 2013-09-19

    Applicant: NetApp, Inc.

    Abstract: Technology is disclosed for generating predictive cache statistics for various cache sizes. In some embodiments, a storage controller includes a cache tracking mechanism for concurrently generating the predictive cache statistics for various cache sizes for a cache system. The cache tracking mechanism can track simulated cache blocks of a cache system using segmented cache metadata while performing an exemplary workload including various read and write requests (client-initiated I/O operations) received from client systems (or clients). The segmented cache metadata corresponds to one or more of the various cache sizes for the cache system.
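    One way to read "segmented cache metadata" is as one simulated cache per candidate size, all fed by the same workload in a single pass. The sketch below assumes a plain LRU policy; the function name and eviction details are assumptions, not taken from the application.

    ```python
    from collections import OrderedDict

    # Hedged sketch: simulate an LRU cache for each candidate size over
    # the same access trace and report a predicted hit ratio per size.

    def predict_hit_ratios(accesses, cache_sizes):
        caches = {n: OrderedDict() for n in cache_sizes}
        hits = {n: 0 for n in cache_sizes}
        for block in accesses:
            for n, cache in caches.items():
                if block in cache:
                    hits[n] += 1
                    cache.move_to_end(block)       # refresh LRU recency
                else:
                    cache[block] = True
                    if len(cache) > n:
                        cache.popitem(last=False)  # evict least recently used
        return {n: hits[n] / len(accesses) for n in cache_sizes}
    ```

    Because only metadata (block identities) is tracked, many cache sizes can be evaluated concurrently without provisioning the actual cache storage.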

    METHODS AND SYSTEMS FOR USING PREDICTIVE CACHE STATISTICS IN A STORAGE SYSTEM
    5.
    Invention Application (In Force)

    Publication Number: US20160034394A1

    Publication Date: 2016-02-04

    Application Number: US14445354

    Application Date: 2014-07-29

    Applicant: NETAPP, INC.

    Abstract: Methods and systems for a storage system are provided. Simulated cache blocks of a cache system are tracked using cache metadata while performing a workload having a plurality of storage operations. The cache metadata is segmented, each segment corresponding to a cache size. Predictive statistics are determined for each cache size using a corresponding segment of the cache metadata. The predictive statistics are used to determine an amount of data that is written for each cache size within a certain duration. The process then determines if each cache size provides an endurance level after executing a certain number of write operations, where the endurance level indicates a desired life-cycle for each cache size.
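    The endurance check described here reduces to simple arithmetic over the predicted write volume. The figures and function below are purely illustrative, assuming a flash device rated for a fixed program/erase budget.

    ```python
    # Illustrative arithmetic only: given the predicted daily write volume
    # for a cache size, check whether a device with a rated program/erase
    # (P/E) budget reaches a desired lifetime at that write rate.

    def meets_endurance(daily_writes_bytes, capacity_bytes,
                        rated_pe_cycles, target_years):
        """Lifetime in days ~ capacity * rated P/E cycles / daily writes."""
        total_write_budget = capacity_bytes * rated_pe_cycles
        lifetime_days = total_write_budget / daily_writes_bytes
        return lifetime_days >= target_years * 365
    ```

    A larger cache absorbs more writes per day, so the same endurance target can rule out sizes that would wear the device out early.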

    Storage System Multiprocessing and Mutual Exclusion in a Non-Preemptive Tasking Environment
    6.
    Invention Application

    Publication Number: US20170090999A1

    Publication Date: 2017-03-30

    Application Number: US14866293

    Application Date: 2015-09-25

    Applicant: NetApp, Inc.

    CPC classification number: G06F9/528 G06F9/5033 G06F9/5088 G06F2209/5022

    Abstract: Selective multiprocessing in a non-preemptive task scheduling environment is provided. Tasks of an application are grouped based on similar functionality and/or access to common code or data structures. The grouped tasks constitute a task core group, and each task core group may be mapped to a core in a multi-core processing system. A mutual exclusion approach reduces overhead imposed on the storage controller and eliminates the risk of concurrent access. A core guard routine is used when a particular application task in a first task core group requires access to a section of code or data structure associated with a different task core group. The application task is temporarily assigned to the second task core group. The application task executes the portion of code seeking access to the section of code or data structure. Once complete, the application task is reassigned back to its original task core group.
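    The core-guard flow above can be sketched as a temporary reassignment wrapped around a critical section. This is a hypothetical model: in a real controller the reassignment would be a scheduler operation on a non-preemptive task, not a field write, and all names here are assumptions.

    ```python
    # Hypothetical sketch of the "core guard" routine: a task is moved to
    # the core group that owns a shared structure, runs its critical
    # section under that group's non-preemptive scheduler (so no lock is
    # needed), then moves back to its home group.

    class Task:
        def __init__(self, name, home_group):
            self.name = name
            self.group = home_group   # core group the task currently runs on

    def core_guard(task, owning_group, critical_section):
        """Temporarily reassign task to owning_group for the duration of
        critical_section, restoring the home group afterwards."""
        home = task.group
        task.group = owning_group
        try:
            return critical_section()   # runs without concurrent access
        finally:
            task.group = home           # reassign back to the home group
    ```

    Serializing access by core placement rather than by locking is what removes the lock overhead the abstract refers to.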

    Changing storage volume ownership using cache memory
    7.
    Invention Grant

    Publication Number: US09836223B2

    Publication Date: 2017-12-05

    Application Number: US15142691

    Application Date: 2016-04-29

    Applicant: NetApp, Inc.

    Abstract: A method, a computing device, and a non-transitory machine-readable medium for changing ownership of a storage volume from a first controller to a second controller without flushing data, is provided. In the system, the first controller is associated with a first DRAM cache comprising a primary partition that stores data associated with the first controller and a mirror partition that stores data associated with the second controller. The second controller in the system is associated with a second DRAM cache comprising a primary partition that stores data associated with the second controller and the mirror partition associated with the first controller. Further, the mirror partition in the second DRAM cache stores a copy of a data in the primary partition of the first DRAM cache and the mirror partition in the first DRAM cache stores a copy of a data in the primary partition of the second DRAM cache.
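    The mirrored-partition layout in this abstract suggests why no flush is needed: the new owner already holds a copy of the dirty data. The sketch below is illustrative only; the class and function names are hypothetical and the mirroring is modeled as a dictionary copy.

    ```python
    # Sketch under assumptions: each controller keeps a primary cache
    # partition for volumes it owns and a mirror of its peer's primary.
    # Transferring ownership then promotes the mirror copy to primary
    # instead of flushing dirty cache data to disk.

    class Controller:
        def __init__(self, name):
            self.name = name
            self.primary = {}   # volume -> dirty cache contents this node owns
            self.mirror = {}    # copy of the peer's primary partition

    def cached_write(owner, peer, volume, data):
        owner.primary[volume] = data
        peer.mirror[volume] = data   # synchronously mirrored to the peer

    def transfer_ownership(old_owner, new_owner, volume):
        """New owner promotes its mirror copy to primary; no flush needed."""
        new_owner.primary[volume] = new_owner.mirror.pop(volume)
        old_owner.primary.pop(volume, None)
    ```

    Because the mirror is kept in sync on every write, the transfer is a metadata operation rather than a data movement.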

    Changing Storage Volume Ownership Using Cache Memory
    8.
    Invention Application

    Publication Number: US20170315725A1

    Publication Date: 2017-11-02

    Application Number: US15142691

    Application Date: 2016-04-29

    Applicant: NetApp, Inc.

    Abstract: A method, a computing device, and a non-transitory machine-readable medium for changing ownership of a storage volume from a first controller to a second controller without flushing data, is provided. In the system, the first controller is associated with a first DRAM cache comprising a primary partition that stores data associated with the first controller and a mirror partition that stores data associated with the second controller. The second controller in the system is associated with a second DRAM cache comprising a primary partition that stores data associated with the second controller and the mirror partition associated with the first controller. Further, the mirror partition in the second DRAM cache stores a copy of a data in the primary partition of the first DRAM cache and the mirror partition in the first DRAM cache stores a copy of a data in the primary partition of the second DRAM cache.
