AUTOMATED PRIORITY RESTORES
    1.
    Patent application
    AUTOMATED PRIORITY RESTORES (In force)

    Publication number: US20070294320A1

    Publication date: 2007-12-20

    Application number: US11746399

    Filing date: 2007-05-09

    IPC classification: G06F17/30

    Abstract: A priority restore agent in a data storage system generates a priority restore data set for a client computer system or device by identifying a set of active data sets and/or a set of key data sets within client system data generated by the client computer system. The priority restore agent looks at or processes file system attributes for the client system data and compares these attributes with predefined restore parameters. The restore parameters may indicate that any file that has been accessed, modified, or created within a particular period of time be included in the priority restore data set. The key data sets may be identified in a set of automated restore rules. A data protection application within the data storage system can restore data in the priority restore data set onto the client computer system after a disaster or system crash.

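    A minimal Python sketch of the file-selection step described in the abstract, assuming a simple restore-parameter shape; the function and parameter names are illustrative and not taken from the patent:

    import os
    import time

    def build_priority_restore_set(paths, restore_params, restore_rules=None):
        """Assemble a hypothetical priority restore data set.

        restore_params is assumed to hold one cutoff, e.g.
        {"max_age_seconds": 7 * 24 * 3600}: any file accessed, modified,
        or created within that window is included. restore_rules is an
        optional list of key data set paths that are always included.
        """
        cutoff = time.time() - restore_params["max_age_seconds"]
        priority_set = set(restore_rules or [])  # key data sets from automated rules
        for path in paths:
            st = os.stat(path)
            # Compare file system attributes (atime/mtime/ctime) with the
            # predefined restore parameters.
            if max(st.st_atime, st.st_mtime, st.st_ctime) >= cutoff:
                priority_set.add(path)
        return priority_set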

    Automated priority restores
    2.
    Granted patent
    Automated priority restores (In force)

    Publication number: US08065273B2

    Publication date: 2011-11-22

    Application number: US11746399

    Filing date: 2007-05-09

    IPC classification: G06F17/00 G06F7/00

    Abstract: A priority restore agent in a data storage system generates a priority restore data set for a client computer system or device by identifying a set of active data sets and/or a set of key data sets within client system data generated by the client computer system. The priority restore agent looks at or processes file system attributes for the client system data and compares these attributes with predefined restore parameters. The restore parameters may indicate that any file that has been accessed, modified, or created within a particular period of time be included in the priority restore data set. The key data sets may be identified in a set of automated restore rules. A data protection application within the data storage system can restore data in the priority restore data set onto the client computer system after a disaster or system crash.


    System and method for backup by inode number
    3.
    Granted patent
    System and method for backup by inode number (In force)

    Publication number: US08606751B1

    Publication date: 2013-12-10

    Application number: US12643109

    Filing date: 2009-12-21

    IPC classification: G06F7/00 G06F17/00

    Abstract: This disclosure describes a system and method for organizing and storing backup data by inode number. Data objects on a file system may be streamed to a backup client that identifies the inode number of each streamed data object before streaming the objects to storage. The inode numbers are parsed to create one or more inode directories that can be browsed during a recovery process. In this fashion, the file system can be quickly backed up without requiring the backup client to determine the file system's directory hierarchy.

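    A short Python sketch of the storage organization the abstract describes; the on-disk layout of <backup_root>/<inode mod 256>/<inode> and the function name are assumptions made for illustration:

    import os
    import shutil

    def backup_by_inode(paths, backup_root):
        """Copy each data object into a directory derived from its inode
        number, so the backup can be browsed by inode during recovery
        without first walking the source directory hierarchy."""
        for path in paths:
            inode = os.stat(path).st_ino
            target_dir = os.path.join(backup_root, f"{inode % 256:02x}")
            os.makedirs(target_dir, exist_ok=True)
            shutil.copyfile(path, os.path.join(target_dir, str(inode)))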

    View generator for managing data storage
    4.
    Granted patent

    Publication number: US09684739B1

    Publication date: 2017-06-20

    Application number: US11747567

    Filing date: 2007-05-11

    IPC classification: G06F7/00 G06F17/30

    CPC classification: G06F17/30979 G06F11/1448

    Abstract: Views of files in an archival data storage system are generated by a backup view generator. A storage application generates and stores archival data in an archive system, the archival data corresponding to client data stored on a server or in memory associated with one or more client nodes. The storage application also generates backup files of the archival data which may be stored in a local memory. A set of metadata attributes is associated with each of the backup files. The backup views are generated by comparing metadata values in a view definition file to the sets of attributes associated with the backup files. Generated backup views can be exported for processing, including searching the backup views or displaying the backup views in a user interface.
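
    A minimal Python sketch of the view-matching step, assuming each backup file is described by a flat attribute dictionary and a view definition is a set of required attribute/value pairs; both shapes are assumptions, not the patent's data model:

    def generate_backup_view(backup_files, view_definition):
        """Return the backup files whose metadata attributes match every
        attribute/value pair in the view definition."""
        return [
            bf for bf in backup_files
            if all(bf.get(attr) == value for attr, value in view_definition.items())
        ]

    # Example: a view of all full backups for a hypothetical client "node1".
    # full_node1 = generate_backup_view(catalog, {"client": "node1", "level": "full"})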

    Journaled data backup during server quiescence or unavailability
    5.
    Granted patent
    Journaled data backup during server quiescence or unavailability (In force)

    Publication number: US07680998B1

    Publication date: 2010-03-16

    Application number: US11757280

    Filing date: 2007-06-01

    IPC classification: G06F12/00

    Abstract: A backup is performed by a client at a time when a backup server is unable to process the backup. The client maintains a cache including a root tag vector and hash entries. The client begins a backup by writing the root tag vector to a journal file and breaking files into pieces. For each piece, the client computes a hash and compares the resulting hash to entries in the cache. If the hash does not match any entries, the client records a request in the journal file to add the corresponding piece of data to an archive. After completing the backup, the journal file can be sent to the server. Before processing the journal file, the server validates the root tag vector. If the root tag vector is valid, the server processes each of the requests to add data. Otherwise, the server discards the journal file.

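    A compact Python sketch of the client-side journaling flow, assuming a JSON-lines journal format, SHA-256 piece hashes, and a fixed chunk size; the format, chunk size, and field names are all illustrative:

    import hashlib
    import json

    CHUNK_SIZE = 64 * 1024  # chunking granularity is an assumption

    def journaled_backup(paths, root_tag_vector, hash_cache, journal_path):
        """Write a journal file the backup server can replay once it is
        available again."""
        with open(journal_path, "w") as journal:
            # The journal begins with the root tag vector so the server can
            # validate it before processing any add-data requests.
            journal.write(json.dumps({"root_tag_vector": root_tag_vector}) + "\n")
            for path in paths:
                with open(path, "rb") as f:
                    while piece := f.read(CHUNK_SIZE):
                        digest = hashlib.sha256(piece).hexdigest()
                        if digest not in hash_cache:
                            # Unknown piece: record a request to add it to the archive.
                            hash_cache.add(digest)
                            journal.write(json.dumps({"op": "add", "hash": digest, "path": path}) + "\n")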

    Merging of incremental data streams with prior backed-up data
    6.
    Granted patent
    Merging of incremental data streams with prior backed-up data (In force)

    Publication number: US07797279B1

    Publication date: 2010-09-14

    Application number: US11968061

    Filing date: 2007-12-31

    IPC classification: G06F17/30

    CPC classification: G06F11/1451

    Abstract: New full backups are generated by combining an incremental backup with a previous full backup. A previous full backup is stored on a backup server in a hash file system format. A file server generates an incremental backup of a data set on the file server by identifying and dumping files/directories of the data set that are new/modified into a tar file that is sent to an accelerator. The accelerator parses the incremental backup tar file and converts it to a hash file system format that includes metadata, hash values, and new/modified data atomics. The accelerator merges the incremental backup into the previous full backup to generate a new full backup by altering metadata and hash values of a copy of the previous full backup such that the resulting metadata and hash values describe and point to new/modified directories and files as well as unmodified directories and files.

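    A toy Python sketch of the merge step, with the hash file system reduced to a mapping from file path to content hash; this simplification is an assumption made purely for illustration:

    def merge_incremental(previous_full, incremental):
        """Build a new full backup by copying the previous full backup's
        metadata/hash entries and overriding the entries for new or
        modified files, so the result points at both unmodified and
        new/modified data."""
        new_full = dict(previous_full)  # unmodified directories and files carry over
        new_full.update(incremental)    # new/modified entries replace or extend them
        return new_full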

    User-specific hash authentication
    7.
    Granted patent
    User-specific hash authentication (In force)

    Publication number: US08621240B1

    Publication date: 2013-12-31

    Application number: US11968045

    Filing date: 2007-12-31

    IPC classification: E21B15/04

    CPC classification: G06F21/6272 G06F11/1469

    Abstract: Backup data in a single-instance storage device is accessed through a backup server using hashes representative of and pointing to the backup data. To prevent unauthorized access, the server provides each client with encrypted versions of the hashes corresponding to data backed up by that client. The hashes can be encrypted using client-specific symmetric encryption keys known to the server. To request data, a client provides the backup server with the corresponding encrypted hash. The backup server decrypts the encrypted hash using the client's encryption key. The original hash is only obtained if the key used for decryption is identical to the key used for encryption. Consequently, if an encrypted hash is stolen or otherwise acquired by a client different from the client that backed up the corresponding data, it cannot be used by the different client to request the corresponding data from the backup server.

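    A brief Python sketch of per-client encrypted hash handles, using Fernet symmetric encryption from the third-party cryptography package as a stand-in cipher; the patent names no specific algorithm, so this choice and the function names are assumptions:

    from cryptography.fernet import Fernet, InvalidToken

    # Per-client symmetric keys known to the backup server (illustrative).
    client_keys = {"client_a": Fernet.generate_key(), "client_b": Fernet.generate_key()}

    def issue_hash(client_id, data_hash):
        """Server side: give the client an encrypted version of its hash."""
        return Fernet(client_keys[client_id]).encrypt(data_hash.encode())

    def resolve_hash(client_id, encrypted_hash):
        """Server side: decrypt with the requesting client's own key. A
        token issued to a different client fails authentication, so the
        request cannot be resolved to backup data."""
        try:
            return Fernet(client_keys[client_id]).decrypt(encrypted_hash).decode()
        except InvalidToken:
            return None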

    Age-out selection in hash caches
    8.
    Granted patent
    Age-out selection in hash caches (In force)

    Publication number: US08825971B1

    Publication date: 2014-09-02

    Application number: US11967871

    Filing date: 2007-12-31

    IPC classification: G06F12/00 G06F13/00 G06F13/28

    Abstract: A backup client de-duplicates backup data sets using a locally stored, memory-resident root tag vector and hash cache. To create a new backup data set, the client queries a backup server to determine which of the root hashes in the root tag vector are available on the backup server. If one or more are no longer available, the backup server re-uses a root tag vector entry corresponding to one of the no-longer-available root hashes. If all are available, the client ages out a root hash for re-use based on a combination of age and represented size. Data is de-duplicated by chunking and hashing it and comparing the resulting hashes to hashes in the hash cache. To prevent the hash cache from growing too large, entries in the hash cache are aged out based on a combination of the age and size of the data represented by the entries.

    摘要翻译: 备份客户端使用本地存储的内存共享,根标记向量和散列缓存来复制备份数据集。 要创建新的备份数据集,客户机将查询备份服务器,以确定根标记向量中的哪个根散列在备份服务器上可用。 如果一个或多个不再可用,则备份服务器将重新使用与不可用的根哈希值之一相对应的根标签向量条目。 如果所有的都可用,客户端将根据年龄和代表的大小的组合,老化根重新散列以重新使用。 数据通过分块并将其哈希解析,并将生成的哈希与哈希缓存中的哈希进行比较。 为了防止哈希缓存增长太大,哈希缓存中的条目基于由条目表示的数据的年龄和大小的组合而老化。