Distributed File System with Reduced Write and Read Latencies

    Publication number: US20250068600A1

    Publication date: 2025-02-27

    Application number: US18942186

    Filing date: 2024-11-08

    Applicant: NetApp, Inc.

    Abstract: A method for reducing write latency in a distributed file system. A write request that includes a volume identifier is received at a data management subsystem deployed on a node within a distributed storage system. The data management subsystem maps the volume identifier to a file system volume and maps the file system volume to a set of logical block addresses in a logical block device in a storage management subsystem deployed on the node. The storage management subsystem maps the logical block device to a metadata object for the logical block device on the node that is used to process the write request. The mapping of the file system volume to the set of logical block addresses in the logical block device enables co-locating the metadata object with the file system volume on the node, which reduces the write latency associated with processing the write request.
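The mapping chain in the abstract (volume identifier → file system volume → logical block addresses → co-located metadata object) can be sketched as follows. This is an illustrative model only; all class and field names are invented, and the patent does not publish an API.

```python
# Hypothetical sketch of the mapping chain: volume identifier ->
# file system volume -> LBAs in a logical block device -> metadata
# object hosted on the SAME node, so no cross-node hop is needed.

class Node:
    """A node hosting both the data management and storage management
    subsystems, so every lookup in the chain stays local."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.volume_by_id = {}        # volume identifier -> file system volume
        self.lbas_by_volume = {}      # file system volume -> (device, LBAs)
        self.metadata_by_device = {}  # logical block device -> metadata object

    def handle_write(self, volume_id, data):
        # Data management subsystem: map the identifier to a volume.
        fs_volume = self.volume_by_id[volume_id]
        # Map the volume to a set of LBAs in a logical block device.
        device, lbas = self.lbas_by_volume[fs_volume]
        # Storage management subsystem: the metadata object for the
        # device is co-located on this node, avoiding a remote lookup.
        metadata = self.metadata_by_device[device]
        metadata.setdefault("writes", []).append((lbas[0], len(data)))
        return f"wrote {len(data)} bytes to {device} on node {self.node_id}"

node = Node("n1")
node.volume_by_id["vol-7"] = "fsvol-7"
node.lbas_by_volume["fsvol-7"] = ("lbd-0", [100, 101, 102])
node.metadata_by_device["lbd-0"] = {}
print(node.handle_write("vol-7", b"hello"))
```

The point of the sketch is that every dictionary lookup happens on one node; in a system where the metadata object lived elsewhere, the third lookup would be a network round trip.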

    Distributed File System with Reduced Write and Read Latencies

    Publication number: US20220391361A1

    Publication date: 2022-12-08

    Application number: US17449760

    Filing date: 2021-10-01

    Applicant: NetApp, Inc.

    Abstract: A method for reducing write latency in a distributed file system. A write request that includes a volume identifier is received at a data management subsystem deployed on a node within a distributed storage system. The data management subsystem maps the volume identifier to a file system volume and maps the file system volume to a set of logical block addresses in a logical block device in a storage management subsystem deployed on the node. The storage management subsystem maps the logical block device to a metadata object for the logical block device on the node that is used to process the write request. The mapping of the file system volume to the set of logical block addresses in the logical block device enables co-locating the metadata object with the file system volume on the node, which reduces the write latency associated with processing the write request.

    Balanced, Opportunistic Multicore I/O Scheduling From Non-SMP Applications

    Publication number: US20180113738A1

    Publication date: 2018-04-26

    Application number: US15497744

    Filing date: 2017-04-26

    Applicant: NETAPP, INC.

    CPC classification number: G06F9/5027

    Abstract: A system for dynamically configuring and scheduling input/output (I/O) workloads among processing cores is disclosed. Resources for an application that are related to each other and/or not multicore safe are grouped together into work nodes. When these need to be executed, the work nodes are added to a global queue that is accessible by all of the processing cores. Any processing core that becomes available can pull and process the next available work node through to completion, so that the work associated with that work node software object is all completed by the same core, without requiring additional protections for resources that are not multicore safe. Indexes track the location of both the next work node in the global queue for processing and the next location in the global queue for new work nodes to be added for subsequent processing.
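The scheduling scheme above — a single global queue of work nodes, a head index for the next node to process, a tail index where new nodes are appended, and any free core pulling the next node and running it to completion — can be sketched like this. The structure is an assumed simplification (the abstract does not specify locking), using threads to stand in for cores.

```python
import threading

# Minimal sketch of a global work-node queue shared by all cores.
# A work node groups resources that are not multicore safe, so each
# node is executed start-to-finish by exactly one core.

class GlobalQueue:
    def __init__(self):
        self._nodes = []          # tail index is implicitly len(self._nodes)
        self._head = 0            # index of the next work node to process
        self._lock = threading.Lock()

    def add_work_node(self, work_node):
        with self._lock:
            self._nodes.append(work_node)

    def pull_work_node(self):
        with self._lock:
            if self._head >= len(self._nodes):
                return None
            node = self._nodes[self._head]
            self._head += 1       # claim the node; no other core sees it
            return node

results = []
q = GlobalQueue()
for i in range(4):
    q.add_work_node(lambda i=i: results.append(i * i))

def core_loop():
    # Any core that becomes available pulls the next node and runs it
    # through to completion, so no per-resource locking is needed.
    while (node := q.pull_work_node()) is not None:
        node()

threads = [threading.Thread(target=core_loop) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results))
```

Because a work node is claimed atomically and then runs on one core only, the resources it groups never need their own multicore protection.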

    Migration Between CPU Cores (Granted)

    Publication number: US20170060624A1

    Publication date: 2017-03-02

    Application number: US14836331

    Filing date: 2015-08-26

    Applicant: NetApp, Inc.

    CPC classification number: G06F9/4812 G06F9/4856 G06F9/5088 G06F13/24

    Abstract: A method for migration of operations between CPU cores, the method includes: processing, by a source core, one or more tasks and one or more interrupt service routines; accessing a mapping corresponding to a task of the one or more tasks and an interrupt service routine of the one or more interrupt service routines; identifying, based on the mapping, a target core that corresponds to the task and the interrupt service routine; blocking the task from being processed by the source core in response to identifying the target core; in response to identifying the target core, disabling an interrupt corresponding to the interrupt service routine; in response to identifying the target core, assigning the task and the interrupt to the target core; after assigning the interrupt to the target core, enabling the interrupt; and after assigning the task to the target core, processing the task by the target core.
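The ordering in the claim — identify the target core from the mapping, block the task on the source core, disable the interrupt, reassign both, then re-enable the interrupt and resume — can be walked through in a toy model. All names here are invented for illustration; real migration happens inside an OS or firmware scheduler, not in application code.

```python
# Toy walk-through of the migration sequence from the abstract.

class Core:
    def __init__(self, name):
        self.name = name
        self.tasks = set()
        self.interrupts_enabled = {}

def migrate(task, irq, source, target, mapping):
    # Identify, via the mapping, the target core for this
    # (task, interrupt service routine) pair.
    assert mapping[(task, irq)] is target
    source.tasks.discard(task)               # block the task on the source core
    source.interrupts_enabled[irq] = False   # disable the interrupt
    target.tasks.add(task)                   # assign the task to the target
    target.interrupts_enabled[irq] = True    # re-enable only after assignment
    return f"{task} and {irq} now on {target.name}"

src, dst = Core("core0"), Core("core1")
src.tasks.add("io-task")
src.interrupts_enabled["irq5"] = True
print(migrate("io-task", "irq5", src, dst, {("io-task", "irq5"): dst}))
```

The key ordering constraint the model preserves is that the interrupt stays disabled across the reassignment, so it can never fire on a core that no longer (or does not yet) own the task.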


    METHODS AND SYSTEMS FOR DYNAMIC HASHING IN CACHING SUB-SYSTEMS (Pending, Published)

    Publication number: US20160103767A1

    Publication date: 2016-04-14

    Application number: US14510829

    Filing date: 2014-10-09

    Applicant: NETAPP, INC.

    CPC classification number: G06F3/067 G06F3/0611 G06F3/0638 G06F12/0868

    Abstract: Methods and systems for dynamic hashing in cache sub-systems are provided. The method includes analyzing a plurality of input/output (I/O) requests for determining a pattern indicating if the I/O requests are random or sequential; and using the pattern for dynamically changing a first input to a second input for computing a hash index value by a hashing function that is used to index into a hashing data structure to look up a cache block to cache an I/O request to read or write data, where for random I/O requests, a segment size is the first input to a hashing function to compute a first hash index value and for sequential I/O requests, a stripe size is used as the second input for computing a second hash index value.
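The core idea — switch the hash-function input between a segment size for random I/O and a stripe size for sequential I/O — can be sketched as below. The concrete sizes, the 512-byte block assumption, and the pattern heuristic are all illustrative choices, not values from the patent.

```python
# Sketch of dynamic hashing: the divisor fed to the hash function
# changes with the detected I/O pattern.

SEGMENT_SIZE = 4 * 1024    # assumed hash input for random I/O
STRIPE_SIZE = 64 * 1024    # assumed hash input for sequential I/O
HASH_BUCKETS = 1024

def detect_pattern(lbas):
    """Classify a run of logical block addresses as sequential when each
    address directly follows the previous one, otherwise random."""
    if all(b == a + 1 for a, b in zip(lbas, lbas[1:])):
        return "sequential"
    return "random"

def hash_index(lba, pattern):
    # The dynamically chosen unit size is the hash input: requests that
    # fall in the same unit index the same cache block.
    unit = SEGMENT_SIZE if pattern == "random" else STRIPE_SIZE
    return (lba * 512 // unit) % HASH_BUCKETS   # assume 512-byte blocks

seq = [100, 101, 102, 103]
rnd = [7, 9100, 42, 512]
print(detect_pattern(seq), detect_pattern(rnd))
```

Using the larger stripe size for sequential streams makes neighboring addresses hash to one cache block, while the smaller segment size spreads random requests across more blocks.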


    Distributed File System that Provides Scalability and Resiliency

    Publication number: US20240370410A1

    Publication date: 2024-11-07

    Application number: US18773483

    Filing date: 2024-07-15

    Applicant: NetApp, Inc.

    Abstract: In various examples, data storage is managed using a distributed storage management system that is resilient. Data blocks of a logical block device may be distributed across multiple nodes in a cluster. The logical block device may correspond to a file system volume associated with a file system instance deployed on a selected node within a distributed block layer of a distributed file system. Each data block may have a location in the cluster identified by a block identifier associated with each data block. Each data block may be replicated on at least one other node in the cluster. A metadata object corresponding to a logical block device that maps to the file system volume may be replicated on at least another node in the cluster. Each data block and the metadata object may be hosted on virtualized storage that is protected using redundant array independent disks (RAID).
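The placement scheme described — each data block located by a block identifier and replicated on at least one other node — can be sketched with a content-derived identifier and ring placement. The SHA-256 identifier and modulo placement are assumptions for illustration; the abstract only says a block identifier locates each block.

```python
import hashlib

# Sketch of distributing a logical block device's data blocks across a
# cluster, with each block replicated on at least one other node.

NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2  # each block lives on at least two distinct nodes

def block_id(data):
    # A block identifier derived from content locates the block cluster-wide.
    return hashlib.sha256(data).hexdigest()

def placement(bid):
    # Deterministic placement: primary node from the identifier, replica
    # on the next node in the ring.
    primary = int(bid, 16) % len(NODES)
    return [NODES[(primary + i) % len(NODES)] for i in range(REPLICAS)]

bid = block_id(b"data block 0")
homes = placement(bid)
print(bid[:8], homes)
```

Because placement is a pure function of the identifier, any node can locate (or re-locate after failure) a block without consulting a central directory.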

    Multiprocessing Within a Storage Array System Executing Controller Firmware Designed for a Uniprocessor Environment (Pending, Published)

    Publication number: US20170031699A1

    Publication date: 2017-02-02

    Application number: US14811972

    Filing date: 2015-07-29

    Applicant: NetApp, Inc.

    Abstract: Systems, devices, and methods are provided for sharing host resources in a multiprocessor storage array, the multiprocessor storage array running controller firmware designed for a uniprocessor environment. In some aspects, one or more virtual machines can be initialized by a virtual machine manager or a hypervisor in the storage array system. Each of the one or more virtual machines implement an instance of the controller firmware designed for a uniprocessor environment. The virtual machine manager or hypervisor can assign processing devices within the storage array system to each of the one or more virtual machines. The virtual machine manager or hypervisor can also assign virtual functions to each of the virtual machines. The virtual machines can concurrently access one or more I/O devices, such as physical storage devices, by writing to and reading from the respective virtual functions.


    METHODS AND SYSTEMS FOR CACHE MANAGEMENT IN STORAGE SYSTEMS (Granted)

    Publication number: US20160103764A1

    Publication date: 2016-04-14

    Application number: US14510785

    Filing date: 2014-10-09

    Applicant: NETAPP, INC.

    Abstract: Methods and systems for managing caching mechanisms in storage systems are provided where a global cache management function manages multiple independent cache pools and a global cache pool. As an example, the method includes: splitting a cache storage into a plurality of independently operating cache pools, each cache pool comprising storage space for storing a plurality of cache blocks for storing data related to an input/output (“I/O”) request and metadata associated with each cache pool; receiving the I/O request for writing a data; operating a hash function on the I/O request to assign the I/O request to one of the plurality of cache pools; and writing the data of the I/O request to one or more of the cache blocks associated with the assigned cache pool. In an aspect, this allows efficient I/O processing across multiple processors simultaneously.
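The steps in the abstract — split the cache into independently operating pools, each with its own blocks and metadata, and hash every I/O request to one pool — can be sketched as follows. The structure and pool count are invented for illustration.

```python
# Sketch of splitting cache storage into independent cache pools and
# hashing each I/O request to one of them, so different processors can
# work on different pools at the same time.

NUM_POOLS = 4

class CachePool:
    def __init__(self):
        self.blocks = {}                            # cache blocks for data
        self.metadata = {"hits": 0, "writes": 0}    # per-pool metadata

pools = [CachePool() for _ in range(NUM_POOLS)]

def assign_pool(request_key):
    # The hash function spreads requests across the independent pools.
    return pools[hash(request_key) % NUM_POOLS]

def write(request_key, data):
    pool = assign_pool(request_key)
    pool.blocks[request_key] = data
    pool.metadata["writes"] += 1
    return pool

p = write("lun0:block42", b"payload")
print(p.metadata["writes"])
```

Since a request only ever touches its assigned pool's blocks and metadata, pools need no shared lock, which is what lets multiple processors service I/O concurrently.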


    DYNAMICALLY SCALING APPLICATION AND STORAGE SYSTEM FUNCTIONS BASED ON A HETEROGENEOUS RESOURCE POOL AVAILABLE FOR USE BY A DISTRIBUTED STORAGE SYSTEM

    Publication number: US20240427799A1

    Publication date: 2024-12-26

    Application number: US18820543

    Filing date: 2024-08-30

    Applicant: NetApp, Inc.

    Abstract: Systems and methods for scaling application and/or storage system functions of a distributed storage system based on a heterogeneous resource pool are provided. According to one embodiment, the distributed storage system has a composable, service-based architecture that provides scalability, resiliency, and load balancing. The distributed storage system includes a cluster of nodes each potentially having differing capabilities in terms of processing, memory, and/or storage. The distributed storage system takes advantage of different types of nodes by selectively instating appropriate services (e.g., file and volume services and/or block and storage management services) on the nodes based on their respective capabilities. Furthermore, disaggregation of these services, facilitated by interposing a frictionless layer (e.g., in the form of one or more globally accessible logical disks) therebetween, enables independent and on-demand scaling of either or both of application and storage system functions within the cluster while making use of the heterogeneous resource pool.
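The capability-based service placement described above can be sketched as a simple selection function over a heterogeneous node pool. The capability thresholds and node descriptions here are invented; the abstract only states that services are instantiated according to each node's processing, memory, and storage capabilities.

```python
# Sketch of selecting which services to instantiate on each node of a
# heterogeneous cluster, based on its capabilities.

def select_services(node):
    services = []
    # Illustrative thresholds: file/volume services need compute and
    # memory; block/storage management services need local storage.
    if node.get("cpu_cores", 0) >= 8 and node.get("memory_gb", 0) >= 32:
        services.append("file-and-volume")
    if node.get("storage_tb", 0) >= 1:
        services.append("block-and-storage")
    return services

cluster = [
    {"name": "fat",     "cpu_cores": 16, "memory_gb": 64,  "storage_tb": 10},
    {"name": "compute", "cpu_cores": 32, "memory_gb": 128, "storage_tb": 0},
    {"name": "disk",    "cpu_cores": 4,  "memory_gb": 16,  "storage_tb": 20},
]
plan = {n["name"]: select_services(n) for n in cluster}
print(plan)
```

Disaggregating the two service types this way is what allows application-facing and storage-facing capacity to scale independently: adding a compute-only node grows file/volume capacity without adding storage, and vice versa.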
