Network Storage Failover Systems and Associated Methods

    Publication Number: US20220283915A1

    Publication Date: 2022-09-08

    Application Number: US17751944

    Filing Date: 2022-05-24

    Applicant: NETAPP, INC.

    Abstract: Failover methods and systems for a networked storage environment are provided. In one aspect, a read request associated with a first storage object is received during a replay of entries of a log stored in a non-volatile memory of a second storage node, for a failover operation initiated in response to a failure at a first storage node. The second storage node operates as a partner node of the first storage node. The read request is processed using a filtering data structure that is generated from the log prior to the replay and that identifies each log entry. The read request is processed when the log does not have an entry associated with the read request; when the filtering data structure includes an entry associated with the read request, the requested data is located in the non-volatile memory.
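    The sketch below illustrates the general idea of serving reads during non-volatile log replay with a filtering data structure built from the log beforehand. It is a simplified illustration, not the patented implementation; the names PartnerNode, nv_log, stable_storage, and block_id are assumptions made for the example.

```python
# Minimal sketch (not NetApp's implementation) of serving reads during
# non-volatile log replay using a filter built from the log beforehand.
# All names here are illustrative assumptions.

class PartnerNode:
    def __init__(self, nv_log_entries, stable_storage):
        # nv_log_entries: dict mapping block_id -> data captured in the
        # partner's non-volatile memory before the failover.
        self.nv_log = nv_log_entries
        self.stable_storage = stable_storage
        # Build the filtering data structure from the log prior to replay;
        # here it is simply the set of block ids present in the log.
        self.log_filter = set(nv_log_entries)

    def read_during_replay(self, block_id):
        if block_id not in self.log_filter:
            # The log has no entry for this block, so the latest copy is
            # already on stable storage; the read can be served immediately.
            return self.stable_storage[block_id]
        # The filter indicates the log holds newer data for this block, so
        # the requested data is located in the non-volatile memory instead.
        return self.nv_log[block_id]

# Usage: serve reads while replay of the failed node's log is still running.
node = PartnerNode({"blk7": b"new"}, {"blk7": b"old", "blk9": b"stable"})
assert node.read_during_replay("blk9") == b"stable"
assert node.read_during_replay("blk7") == b"new"
```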

    METHODS FOR COPY-FREE DATA MIGRATION ACROSS FILESYSTEMS AND DEVICES THEREOF

    Publication Number: US20180373457A1

    Publication Date: 2018-12-27

    Application Number: US15631296

    Filing Date: 2017-06-23

    Applicant: NetApp, Inc.

    Abstract: Methods, non-transitory computer readable media, and computing devices are disclosed that facilitate copy-free data migrations across filesystems. With this technology, a first set of filesystem metadata associated with a first filesystem is received. At least a portion of the first set of filesystem metadata is retrieved from a first data structure associated with the first filesystem. The first set of filesystem metadata includes a first identifier and a physical location associated with user data. A second identifier, associated with a second filesystem having a different addressing scheme than the first filesystem, is generated from at least the first identifier. A second set of filesystem metadata including the second identifier and the physical location is stored such that at least the second identifier is stored in a second data structure associated with the second filesystem.
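    As an illustration of the copy-free idea, the following sketch rewrites only the metadata so that a target-filesystem identifier maps to the same physical location as the source identifier; the identifier-translation rule and the names migrate_metadata and make_target_id are hypothetical, not taken from the patent.

```python
# A minimal, hypothetical sketch of the copy-free idea: only metadata is
# rewritten, and both filesystems end up referencing the same physical
# location for the user data. The id-translation rule here is invented.

def migrate_metadata(source_records, make_target_id):
    """Translate source-filesystem metadata into target-filesystem metadata
    without touching the user data blocks themselves."""
    target_records = {}
    for source_id, physical_location in source_records.items():
        # Generate an identifier in the target filesystem's addressing
        # scheme from the source identifier (the scheme is illustrative).
        target_id = make_target_id(source_id)
        # Store the new identifier alongside the *same* physical location;
        # no user data is read or copied during the migration.
        target_records[target_id] = physical_location
    return target_records

# Usage with a made-up translation: the source uses numeric inode ids, the
# target uses string keys in a different addressing scheme.
source = {1001: ("disk0", 4096), 1002: ("disk0", 8192)}
target = migrate_metadata(source, lambda i: f"vol1/inode-{i:08x}")
print(target)  # {'vol1/inode-000003e9': ('disk0', 4096), ...}
```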

    DEFRAGMENTATION FOR LOG STRUCTURED MERGE TREE TO IMPROVE READ AND WRITE AMPLIFICATION

    Publication Number: US20230350850A1

    Publication Date: 2023-11-02

    Application Number: US17732046

    Filing Date: 2022-04-28

    Applicant: NetApp Inc.

    CPC classification number: G06F3/0605 G06F3/0685 G06F3/0649

    Abstract: Techniques are provided for implementing a defragmentation process during a merge operation performed by a re-compaction process upon a log structured merge tree. The log structured merge tree is used to store keys of key-value pairs within a key-value store. As the log structured merge tree fills with keys over time, the re-compaction process is performed to merge keys down to lower levels of the log structured merge tree in order to re-compact the keys. Re-compaction can result in fragmentation because the re-compaction operations lack spatial locality in where they re-write the keys within storage. Fragmentation increases read and write amplification when accessing keys stored in different locations within the storage. Accordingly, the defragmentation process is performed during the last merge operation of the re-compaction process in order to store keys together within the storage, thus reducing read and write amplification when the keys are accessed.
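    The toy sketch below shows how writing the output of the final merge into one contiguous region keeps keys together on storage; the level layout, the recompact helper, and the Python list standing in for storage are assumptions made for illustration only, not the patented design.

```python
# Rough sketch under stated assumptions: a toy "storage" list stands in for
# disk, and levels are plain sorted lists of keys. Defragmentation happens
# during the final merge of the re-compaction, so merged keys land
# contiguously instead of being scattered across free slots.

import heapq

def recompact(levels, storage):
    """Merge keys down to the lowest level; the last merge writes all keys
    into one contiguous extent of storage instead of scattered locations."""
    # Intermediate merges (upper levels into the next level down) may leave
    # keys wherever free space happens to be, causing fragmentation.
    for i in range(len(levels) - 2):
        levels[i + 1] = list(heapq.merge(levels[i], levels[i + 1]))
        levels[i] = []
    # Final merge: defragment by appending the merged, sorted run of keys
    # to one contiguous region, so later scans touch adjacent locations.
    merged = list(heapq.merge(levels[-2], levels[-1]))
    start = len(storage)
    storage.extend(merged)            # keys stored together, in order
    levels[-2], levels[-1] = [], merged
    return (start, len(storage))      # contiguous extent holding the keys

# Usage: three-level tree; after re-compaction all keys sit in one extent.
storage = ["unrelated-extent"]
extent = recompact([[3, 9], [1, 7], [2, 5, 8]], storage)
print(storage[extent[0]:extent[1]])   # [1, 2, 3, 5, 7, 8, 9]
```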

    PREFETCHING KEYS FOR GARBAGE COLLECTION

    Publication Number: US20230350610A1

    Publication Date: 2023-11-02

    Application Number: US17732065

    Filing Date: 2022-04-28

    Applicant: NetApp Inc.

    CPC classification number: G06F3/0652 G06F3/0602 G06F3/068

    Abstract: Techniques are provided for implementing a garbage collection process and a predictive read-ahead mechanism that prefetches keys into memory to improve the efficiency and speed of the garbage collection process. A log structured merge tree is used to store keys of key-value pairs within a key-value store. If a key is no longer referenced by any worker nodes of a distributed storage architecture, then the key can be freed to store other data. Accordingly, garbage collection is performed to identify and free unused keys. The speed and efficiency of garbage collection are improved by dynamically adjusting the amount and rate at which keys are prefetched from disk and cached into faster memory for processing by the garbage collection process.
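    The following sketch shows one possible feedback loop for adjusting the prefetch batch size as garbage collection consumes keys; the tuning rule and the helper names garbage_collect, load_keys_from_disk, and is_referenced are invented for illustration and do not describe the patented mechanism.

```python
# Illustrative sketch only: a prefetcher that reads key batches ahead of a
# garbage-collection pass and adapts the batch size to how much reclaimable
# work each batch yields. Names and the tuning rule are assumptions.

def garbage_collect(all_keys, is_referenced, load_keys_from_disk,
                    initial_batch=4, max_batch=64):
    freed = []
    batch = initial_batch
    i = 0
    while i < len(all_keys):
        # Prefetch the next batch of keys from slow storage into memory
        # before the garbage-collection pass needs them.
        cached = load_keys_from_disk(all_keys[i:i + batch])
        unused = [k for k in cached if not is_referenced(k)]
        freed.extend(unused)           # free keys no worker node references
        # Simple feedback: if most prefetched keys were reclaimable work,
        # read further ahead next time; otherwise back off toward the start.
        if len(unused) > len(cached) // 2:
            batch = min(batch * 2, max_batch)
        else:
            batch = max(batch // 2, initial_batch)
        i += len(cached)
    return freed

# Usage: keys 0..19, where multiples of 3 are still referenced by workers.
keys = list(range(20))
freed = garbage_collect(keys, lambda k: k % 3 == 0, lambda ks: list(ks))
print(freed)  # keys eligible to be reclaimed
```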
