Abstract:
A method and system for communicating with IoT devices to gather information related to a device failure or error is disclosed. The system receives log files from an IoT device (e.g., a smart refrigerator) that recently failed and determines which log files the device created before and/or after the failure. After gathering this information, the system stores it in a database, sends it to the IoT device manufacturer, or sends it to a cloud provider. An entity associated with the IoT device (e.g., its manufacturer) can then use the failure-related information to troubleshoot the failure and send a fix or software update to the device.
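To make the log-gathering step concrete, here is a minimal Python sketch, assuming a fixed time window around the failure and using file modification time as a stand-in for creation time; every name (collect_failure_logs, dispatch, the endpoint URL) is illustrative rather than taken from the disclosure:

```python
# Hedged sketch of the log-gathering step; names and the window are assumptions.
from datetime import datetime, timedelta
from pathlib import Path

WINDOW = timedelta(minutes=30)  # hypothetical before/after window around the failure

def collect_failure_logs(log_dir: str, failure_time: datetime) -> list[Path]:
    """Return log files written within WINDOW before or after the failure."""
    selected = []
    for log_file in Path(log_dir).glob("*.log"):
        # Modification time serves as a proxy for when the log was created.
        written = datetime.fromtimestamp(log_file.stat().st_mtime)
        if abs(written - failure_time) <= WINDOW:
            selected.append(log_file)
    return selected

def dispatch(logs: list[Path], destination: str) -> None:
    """Forward the gathered logs to a database, manufacturer, or cloud endpoint."""
    for log_file in logs:
        payload = log_file.read_bytes()
        # A real system would POST this or insert it into a database;
        # here we only report what would be sent.
        print(f"sending {log_file.name} ({len(payload)} bytes) to {destination}")

if __name__ == "__main__":
    logs = collect_failure_logs("/var/log/fridge", datetime.now())
    dispatch(logs, "https://vendor.example/failure-reports")  # hypothetical endpoint
```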
Abstract:
Recovery points can be used for replicating a virtual machine and reverting the virtual machine to a different state. A filter driver can monitor and capture input/output commands between a virtual machine and a virtual machine disk. The captured input/output commands can be used to create a recovery point. The recovery point can be associated with a bitmap that may be used to identify data blocks that have been modified between two versions of the virtual machine. Using this bitmap, a virtual machine may be reverted or restored to a different state by replacing only the modified data blocks, without replacing the entire virtual machine disk.
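A small sketch of the bitmap-driven revert, with a set of block indices standing in for the bitmap; the names and the fixed block size are assumptions, not the disclosed implementation:

```python
# Sketch of bitmap-driven block revert; a set of indices stands in for the bitmap.
BLOCK_SIZE = 4096  # hypothetical fixed block size

class RecoveryPoint:
    """Point-in-time disk image plus a record of blocks modified since capture."""
    def __init__(self, disk: bytearray):
        self.image = bytes(disk)   # frozen copy of the disk at this point
        self.dirty = set()         # indices of blocks written after capture

    def record_write(self, offset: int, data: bytes) -> None:
        """Called by the (simulated) filter driver for each captured write I/O."""
        first = offset // BLOCK_SIZE
        last = (offset + len(data) - 1) // BLOCK_SIZE
        self.dirty.update(range(first, last + 1))

def revert(disk: bytearray, point: RecoveryPoint) -> int:
    """Restore only the blocks marked as modified; return how many were copied."""
    for block in point.dirty:
        start = block * BLOCK_SIZE
        disk[start:start + BLOCK_SIZE] = point.image[start:start + BLOCK_SIZE]
    count = len(point.dirty)
    point.dirty.clear()
    return count

if __name__ == "__main__":
    disk = bytearray(8 * BLOCK_SIZE)
    point = RecoveryPoint(disk)
    disk[0:5] = b"dirty"                 # a write lands on block 0...
    point.record_write(0, b"dirty")      # ...and the filter driver records it
    assert revert(disk, point) == 1      # only block 0 is copied back
    assert disk[0:5] == b"\x00" * 5
```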
Abstract:
Systems and methods can implement one or more intelligent caching algorithms that reduce wear on the SSD and/or improve caching performance. Such algorithms can improve storage utilization and I/O efficiency by taking into account the write-wearing limitations of the SSD. Accordingly, the systems and methods can cache to the SSD while avoiding writing too frequently to the SSD, to increase or attempt to increase the lifespan of the SSD. The systems and methods may, for instance, write data to the SSD once that data has been read from the hard disk or memory multiple times, so as to avoid or attempt to avoid writing data that has been read only once. The systems and methods may also write large chunks of data to the SSD at once instead of a single unit of data at a time. Further, the systems and methods can write to the SSD in a circular fashion.
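A minimal sketch of such a wear-aware policy, assuming invented thresholds (promote after two reads, flush in 64-block batches) and a simulated circular SSD region:

```python
# Illustrative sketch of the wear-aware caching policy; thresholds are assumptions.
from collections import defaultdict

PROMOTE_AFTER = 2   # write to SSD only after this many reads from disk/memory
BATCH_BLOCKS = 64   # accumulate this many blocks, then write one large chunk
SSD_SLOTS = 1024    # size of the circular SSD cache region, in blocks

class WearAwareCache:
    def __init__(self):
        self.read_counts = defaultdict(int)   # block id -> reads seen so far
        self.pending = []                     # blocks awaiting a batched SSD write
        self.ssd = [None] * SSD_SLOTS         # simulated SSD cache region
        self.cursor = 0                       # circular write position

    def on_read(self, block_id: int, data: bytes) -> None:
        """Count reads; promote to SSD only once a block proves it is re-read."""
        self.read_counts[block_id] += 1
        if self.read_counts[block_id] == PROMOTE_AFTER:
            self.pending.append((block_id, data))
            if len(self.pending) >= BATCH_BLOCKS:
                self._flush()

    def _flush(self) -> None:
        """Write the whole batch at once, advancing circularly through the SSD."""
        for block_id, data in self.pending:
            self.ssd[self.cursor] = (block_id, data)
            self.cursor = (self.cursor + 1) % SSD_SLOTS
        self.pending.clear()
```

Batching and the circular cursor together spread writes evenly across the cache region, which is one plausible reading of writing to the SSD "in a circular fashion."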
Abstract:
Software, firmware, and systems are described herein that migrate functionality of a source physical computing device to a destination physical computing device. A non-production copy of data associated with a source physical computing device is created. A configuration of the source physical computing device is determined. A configuration for a destination physical computing device is determined based at least in part on the configuration of the source physical computing device. The destination physical computing device is provided access to data and metadata associated with the source physical computing device using the non-production copy of data associated with the source physical computing device.
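One way to picture the configuration-derivation step is the following hedged sketch; the field names and sizing rules are invented for illustration:

```python
# Hedged sketch of deriving a destination configuration; field names are invented.
def plan_destination(source_config: dict) -> dict:
    """Size the destination at least as large as the source it will replace."""
    return {
        "cpu_cores": max(source_config["cpu_cores"], 2),
        "memory_gb": source_config["memory_gb"],
        "disk_gb": source_config["disk_gb"],
        "os": source_config["os"],            # keep the OS so the copied data boots
        "network": source_config["network"],  # preserve addressing/identity
    }

def migrate(source_config: dict, copy_path: str) -> dict:
    """Provision the destination and point it at the non-production copy."""
    destination = plan_destination(source_config)
    destination["data_source"] = copy_path  # destination reads source data/metadata here
    return destination

if __name__ == "__main__":
    src = {"cpu_cores": 4, "memory_gb": 16, "disk_gb": 500,
           "os": "linux", "network": {"ip": "10.0.0.5"}}
    print(migrate(src, "/backups/source-host/full.img"))  # hypothetical path
```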
Abstract:
A data storage system includes a generic snapshot interface, allowing for integration with a wide variety of snapshot-capable storage devices. The generic interface can be a programming interface (e.g., an application programming interface [API]). Using the snapshot interface, storage device vendors can integrate their particular snapshot technology with the data storage system. For instance, the data storage system can access a shared library of functions (e.g., a dynamically linked library [DLL]) that is provided by the vendor (or by another appropriate entity) and that complies with the specifications of the common programming interface. By invoking the appropriate functions in the library, the data storage system implements the snapshot operation on the storage device.
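As an illustration of the vendor-library pattern, the sketch below loads a hypothetical shared library with Python's ctypes and calls functions (snap_create, snap_delete) whose names and signatures are assumptions standing in for whatever the published interface specification defines:

```python
# Sketch of invoking a vendor snapshot library through a common interface.
# Library path and function names (snap_create, snap_delete) are assumptions.
import ctypes

class VendorSnapshotDriver:
    """Loads a vendor-supplied shared library implementing the common interface."""
    def __init__(self, library_path: str):
        self.lib = ctypes.CDLL(library_path)
        # Declare the signatures the interface specification would mandate.
        self.lib.snap_create.argtypes = [ctypes.c_char_p]
        self.lib.snap_create.restype = ctypes.c_int
        self.lib.snap_delete.argtypes = [ctypes.c_int]
        self.lib.snap_delete.restype = ctypes.c_int

    def create_snapshot(self, volume: str) -> int:
        """Ask the vendor library to snapshot a volume; returns a snapshot handle."""
        handle = self.lib.snap_create(volume.encode())
        if handle < 0:
            raise RuntimeError(f"vendor snapshot of {volume} failed")
        return handle

# Usage: the data storage system stays vendor-neutral; only the library changes.
# driver = VendorSnapshotDriver("/opt/vendor/libsnap.so")  # hypothetical path
# handle = driver.create_snapshot("/dev/sdb1")
```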
Abstract:
A system according to certain aspects improves the process of performing snapshot replication operations (e.g., maintaining a mirror copy of primary data at a secondary location by generating snapshots of the primary data). The system can collect and maintain cumulative block-level changes to the primary data after each sub-interval of a plurality of sub-intervals between the snapshots. When a snapshot is generated, any changes to the primary data not reflected in the cumulative block-level changes are identified based on the snapshot and transmitted to the secondary location along with the cumulative block-level changes. By the time the snapshot is generated, some or all of the changes to the primary data associated with the given snapshot have already been included in the cumulative block-level changes, thereby reducing the time and computing resources spent to identify and collect the changes for transmission to the secondary location.
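A toy Python sketch of the accumulation logic, assuming changes arrive as block-id-to-data mappings; the residual test is simplified to "blocks the cumulative set has not yet seen":

```python
# Sketch of sub-interval change accumulation; structure and names are illustrative.
class ChangeAccumulator:
    def __init__(self):
        self.cumulative = {}   # block id -> latest captured data across sub-intervals

    def end_sub_interval(self, changed_blocks: dict) -> None:
        """Fold one sub-interval's block-level changes into the running set."""
        self.cumulative.update(changed_blocks)

    def on_snapshot(self, snapshot_changed_blocks: dict) -> dict:
        """At snapshot time, only blocks not already accumulated remain to collect.
        (Simplified: a real system would also refresh blocks changed again after
        the last sub-interval.)"""
        residual = {blk: data for blk, data in snapshot_changed_blocks.items()
                    if blk not in self.cumulative}
        transmit = {**self.cumulative, **residual}
        self.cumulative = {}   # start accumulating toward the next snapshot
        return transmit        # shipped to the secondary location

if __name__ == "__main__":
    acc = ChangeAccumulator()
    acc.end_sub_interval({1: b"v1", 2: b"v2"})       # changes seen during sub-intervals
    payload = acc.on_snapshot({2: b"v2", 3: b"v3"})  # snapshot reveals block 3 as well
    assert sorted(payload) == [1, 2, 3]
```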
Abstract:
According to certain aspects, a method of creating customized bootable images for client computing devices in an information management system can include: creating a backup copy of each of a plurality of client computing devices, including a first client computing device; subsequent to receiving a request to restore the first client computing device to its state at a first time, creating a customized bootable image that is configured to directly restore the first client computing device to that state, wherein the customized bootable image includes system state specific to the first client computing device at the first time and one or more drivers associated with hardware existing, at the time of restore, on a computing device to be rebooted; and rebooting the computing device to the state of the first client computing device at the first time from the customized bootable image.
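The image-assembly step might be sketched as follows, with every name (the driver repository, the hardware identifiers) invented for illustration:

```python
# Hedged sketch of assembling the customized bootable image; all names are invented.
def build_bootable_image(backup_system_state: dict, target_hardware: list[str],
                         driver_repository: dict) -> dict:
    """Combine the client's backed-up system state with drivers for the
    hardware actually present on the machine being rebooted."""
    drivers = {}
    for device in target_hardware:
        if device not in driver_repository:
            raise LookupError(f"no driver available for {device}")
        drivers[device] = driver_repository[device]
    return {
        "system_state": backup_system_state,   # state specific to the client at backup time
        "drivers": drivers,                    # matched to the restore-time hardware
        "bootable": True,
    }

if __name__ == "__main__":
    image = build_bootable_image(
        {"hostname": "client-01"},                             # from the backup copy
        ["nic-x550", "raid-9361"],                             # hardware found at restore
        {"nic-x550": b"\x7fELF", "raid-9361": b"\x7fELF"})     # driver repository
    print(sorted(image["drivers"]))
```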
Abstract:
A data storage system protects virtual machines using block-level backup operations and restores the data at a file level. The system accesses the virtual machine file information from the file allocation table of the host system underlying the virtualization layer. A file index associates this virtual machine file information with the related protected blocks in a secondary storage device during the block-level backup. Using the file index, the system can identify the specific blocks in the secondary storage device associated with a selected restore file. As a result, file-level granularity for restore operations is possible for virtual machine data protected by block-level backup operations, without restoring more than the selected file's blocks from the block-level backup data.
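A compact sketch of the file-index idea, assuming the index simply maps file paths to lists of backed-up block numbers:

```python
# Sketch of file-level restore from a block-level backup; the index layout is assumed.
BLOCK_SIZE = 4096

def build_file_index(allocation_table: dict) -> dict:
    """At backup time, record which backed-up blocks belong to each VM file.
    `allocation_table` maps file path -> list of block numbers (from the host FS)."""
    return {path: list(blocks) for path, blocks in allocation_table.items()}

def restore_file(path: str, file_index: dict, secondary_storage: dict) -> bytes:
    """Fetch only the selected file's blocks, not the whole disk image."""
    data = bytearray()
    for block_no in file_index[path]:
        data += secondary_storage[block_no]   # read one protected block
    return bytes(data)

if __name__ == "__main__":
    storage = {7: b"a" * BLOCK_SIZE, 9: b"b" * BLOCK_SIZE}   # backed-up blocks
    index = build_file_index({"/vm1/notes.txt": [7, 9]})
    assert len(restore_file("/vm1/notes.txt", index, storage)) == 2 * BLOCK_SIZE
```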
Abstract:
Because Kubernetes clusters can be ephemeral, backing up in-cluster data to storage outside the cluster is important. Prior-art solutions transferred backup data through the cluster's API server, which facilitates communications with the cluster control plane. However, the API server as a data transfer node has resiliency weaknesses and can slow down backup job performance. The present solution provides a more streamlined and scalable approach, which circumvents the API server and additionally includes more robust error checking, log capture, and real-time job monitoring to provide improved data protection resilience. The disclosed approach employs a “sponsor” data agent outside the cluster and temporarily deploys a specialized backup resource within the cluster during a backup job, such as a lightweight Kubernetes File Client and/or an enhanced File System Data Agent, both of which present substantial performance and resiliency advantages over the API server.
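To illustrate the data path only, here is a heavily simplified sketch of an in-cluster file client streaming volume files straight to the out-of-cluster sponsor agent over TCP, bypassing the API server; the endpoint, port, and framing are all assumptions for illustration:

```python
# Heavily simplified sketch of the bypass data path; not the disclosed protocol.
import socket
import struct
from pathlib import Path

SPONSOR_ADDR = ("backup-gateway.example", 7000)  # hypothetical sponsor endpoint

def stream_volume(mount_path: str) -> None:
    """Run inside the temporary backup pod; send each file in a small
    length-prefixed frame directly to the sponsor agent (no API server hop)."""
    with socket.create_connection(SPONSOR_ADDR) as conn:
        for file_path in Path(mount_path).rglob("*"):
            if not file_path.is_file():
                continue
            payload = file_path.read_bytes()
            name = str(file_path).encode()
            # frame: name length, payload length, name bytes, payload bytes
            conn.sendall(struct.pack("!II", len(name), len(payload)) + name + payload)
```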
Abstract:
Because Kubernetes clusters can be ephemeral, backing up in-cluster data to storage outside the cluster is important. Prior-art solutions transferred backup data through the cluster's API server, which facilitates communications with the cluster control plane. However, the API server as a data transfer node has resiliency weaknesses and can slow down backup job performance. The present solution provides a more streamlined and scalable approach, which circumvents the API server and additionally includes more robust error checking, log capture, and real-time job monitoring to provide improved data protection resilience. The disclosed approach employs a “sponsor” data agent outside the cluster and temporarily deploys a specialized backup resource within the cluster during a backup job, such as an enhanced File System Data Agent and/or a lightweight Kubernetes File Client, both of which present substantial performance and resiliency advantages over the API server.