Abstract:
Relevance-optimized representative content associated with a data storage system is disclosed. One example is a system including a data summarization module, a clustering module, and a representative content selection module. The data summarization module associates, via a processor, each data object in a storage system with a derived data object. The clustering module determines clusters of similar data objects based on a similarity between associated derived data objects, and selects a representative data object for each determined cluster. The representative content selection module selects representative content associated with the storage system, where the representative content is based on the data objects, the derived data objects, and the representative data objects, and relevance-optimizes the selected representative content for an analytics application.
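A minimal sketch of the clustering-and-selection idea above: each data object is reduced to a derived summary, objects with similar summaries are grouped, and one representative is chosen per group. The word-set summarization, Jaccard similarity, and the 0.5 threshold are illustrative assumptions, not the disclosed method.

```python
# Cluster data objects by similarity of their derived summaries and pick one
# representative per cluster. All helper names here are hypothetical.

def derive(obj: str) -> frozenset:
    """Derived data object: here, simply the set of words in the object."""
    return frozenset(obj.lower().split())

def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_and_select(objects, threshold=0.5):
    clusters = []  # each cluster: list of (object, derived summary)
    for obj in objects:
        d = derive(obj)
        for cluster in clusters:
            if jaccard(d, cluster[0][1]) >= threshold:
                cluster.append((obj, d))
                break
        else:
            clusters.append([(obj, d)])
    # Representative = the member most similar, on average, to its cluster.
    reps = []
    for cluster in clusters:
        best = max(cluster, key=lambda m: sum(jaccard(m[1], o[1]) for o in cluster))
        reps.append(best[0])
    return reps

if __name__ == "__main__":
    docs = ["quarterly sales report", "annual sales report", "server error log"]
    print(cluster_and_select(docs))  # ['quarterly sales report', 'server error log']
```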
Abstract:
Example implementations may relate to a version controller allocating a copy page in persistent memory upon receiving, from an application executing on a processor, a copy command to version an image page for an atomic transaction. The version controller may receive application data addressed to a cache line of the image page, and may write the application data to a cache line of the copy page corresponding to the addressed cache line of the image page. If the version controller receives a replace-type transaction commit command, the version controller may generate a final page by either forward merging the image page into the copy page or backward merging the copy page into the image page, depending on a merge direction policy.
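The copy/merge flow can be sketched roughly as below, modeling a page as a dict of cache-line index to bytes. The class and method names (VersionController, commit_replace, the merge-direction values) are hypothetical stand-ins, not the claimed hardware behavior.

```python
# Writes aimed at the image page are redirected to the copy page; on a
# replace-type commit, the two pages are merged in a chosen direction.

class VersionController:
    def __init__(self, image_page: dict):
        self.image = image_page          # page being versioned
        self.copy = {}                   # allocated copy page (written lines only)

    def write(self, line: int, data: bytes):
        # Redirect application data addressed to the image page into the copy page.
        self.copy[line] = data

    def commit_replace(self, merge_direction: str = "forward") -> dict:
        if merge_direction == "forward":
            # Forward merge: untouched image lines are folded into the copy page.
            final = {**self.image, **self.copy}
            self.copy = final
        else:
            # Backward merge: written copy lines are folded back into the image page.
            self.image.update(self.copy)
            final = self.image
        return final

vc = VersionController({0: b"old0", 1: b"old1"})
vc.write(1, b"new1")
print(vc.commit_replace("forward"))  # {0: b'old0', 1: b'new1'}
```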
Abstract:
Examples disclosed herein relate to re-execution of an analytical process based on lineage metadata. In an example, a determination may be made on a hub device that an analytical process previously executed on a remote edge device is to be re-executed on the hub device, wherein the analytical process is part of an analytical workflow that is implemented at least in part on the hub device and the remote edge device. In response to the determination, a storage location of input data for re-executing the analytical process may be identified based on lineage metadata stored on the hub device, and input data may be acquired from the storage location.
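Under assumed data structures, the hub-side re-execution might look like the following: the lineage metadata stored on the hub records where the input data lives, the hub acquires the input from that location, and the analytical process is re-run locally. The metadata schema and the fetch() helper are hypothetical.

```python
# Re-execute, on the hub, an analytical step that originally ran on an edge device.
import json

LINEAGE = {
    # process name -> metadata recorded when the edge device executed it
    "filter_readings": {"input_location": "hub:/staging/sensor_batch_42.json"},
}

def fetch(location: str):
    # Hypothetical acquisition of input data from the recorded storage location;
    # here we simply simulate reading a staged batch.
    return [{"sensor": "t1", "value": 21.5}, {"sensor": "t1", "value": 99.0}]

def reexecute(process_name: str):
    meta = LINEAGE[process_name]                    # lineage metadata on the hub
    data = fetch(meta["input_location"])            # acquire input from that location
    return [r for r in data if r["value"] < 50.0]   # re-run the analytical step

print(json.dumps(reexecute("filter_readings")))
```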
Abstract:
Various examples described herein provide for caching a page on persistent memory for memory-mapped access of a file from a non-persistent memory file system or a remote file system having a non-persistent memory page cache. In particular, some examples detect memory-mapped access of a file from a non-persistent memory file system or a remote file system having a non-persistent memory page cache and, based on availability of persistent memory, cache a page associated with the memory-mapped access on the persistent memory.
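A rough sketch of that caching decision, assuming a fixed persistent-memory capacity: on a memory-mapped access, the page is placed in persistent memory only if space is available. PM_CAPACITY_PAGES, the pm_cache dict, and read_page() stand in for real persistent-memory management.

```python
PM_CAPACITY_PAGES = 2
pm_cache = {}            # (file, page_no) -> page bytes held in "persistent memory"

def read_page(file, page_no):
    return b"\0" * 4096  # placeholder for a page read from the backing store

def mmap_access(file: str, page_no: int) -> bytes:
    key = (file, page_no)
    if key in pm_cache:
        return pm_cache[key]                      # already cached on persistent memory
    page = read_page(file, page_no)
    if len(pm_cache) < PM_CAPACITY_PAGES:         # cache only if PM space is available
        pm_cache[key] = page
    return page

mmap_access("/nfs/data.bin", 0)
mmap_access("/nfs/data.bin", 1)
mmap_access("/nfs/data.bin", 2)                   # not cached: PM capacity exhausted
print(sorted(k[1] for k in pm_cache))             # [0, 1]
```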
Abstract:
Example implementations relate to replicating data using remote direct memory access (RDMA). In example implementations, addresses may be registered in response to a map command. Data may be replicated using RDMA.
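A pure-Python sketch of that flow: a map command registers memory addresses, and writes to registered regions are mirrored to a remote copy, standing in for a one-sided RDMA write. No real RDMA verbs are used; registry, remote_memory, and replicate() are illustrative stand-ins.

```python
registry = {}        # address -> length registered by the "map" command
remote_memory = {}   # address -> replicated bytes on the remote node

def map_region(addr: int, length: int):
    registry[addr] = length          # register addresses in response to a map command

def replicate(addr: int, data: bytes):
    if addr not in registry or len(data) > registry[addr]:
        raise ValueError("address not registered for RDMA replication")
    remote_memory[addr] = data       # models a one-sided RDMA write to the replica

map_region(0x1000, 64)
replicate(0x1000, b"journal entry")
print(remote_memory[0x1000])
```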
Abstract:
Examples include bypassing a portion of an analytics workflow. In some examples, execution of an analytics workflow may be monitored upon receipt of raw data and the execution may be interrupted at an optimal bypass stage to obtain insights data from the raw data. A similarity analysis may be performed to compare the insights data to stored insights data in an insights data repository. Based, at least in part, on a determination of similarity, a bypass operation may be performed to bypass a remainder of the analytics workflow.
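A minimal sketch of the bypass decision: run the workflow up to an intermediate bypass stage, compare the partial insights against a repository of stored insights, and skip the remaining stages when a close match exists. The stages, the similarity measure, and the 0.9 threshold are assumptions for illustration only.

```python
insights_repository = [
    {"insights": {"mean": 10.0, "max": 20.0}, "result": "cached final report A"},
]

def early_stages(raw_data):
    return {"mean": sum(raw_data) / len(raw_data), "max": max(raw_data)}

def late_stages(insights):
    return f"freshly computed report for {insights}"

def similarity(a: dict, b: dict) -> float:
    keys = a.keys() & b.keys()
    return 1.0 - sum(abs(a[k] - b[k]) for k in keys) / (sum(abs(a[k]) + abs(b[k]) for k in keys) or 1.0)

def run_workflow(raw_data):
    insights = early_stages(raw_data)              # interrupt at the bypass stage
    for entry in insights_repository:
        if similarity(insights, entry["insights"]) >= 0.9:
            return entry["result"]                 # bypass the remainder of the workflow
    return late_stages(insights)

print(run_workflow([5, 10, 15, 20]))               # close enough: returns the cached result
```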
Abstract:
Examples disclosed herein relate to a storage appliance using an optimistic allocation of storage space. In an example system, a number of storage drives are coupled to a storage controller and an RNIC (remote direct memory access (RDMA) network interface card (NIC)) through a storage network. The RNIC includes a layout template selector and a number of templates, wherein the layout template selector selects a template based, at least in part, on a logical block address (LBA) received from a host. The template identifies each of the storage drives associated with portions of the data represented by the LBA.
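One way to picture the layout template selector: low bits of the LBA choose one of a small set of templates, and the template maps the portions of the data to specific drives. The template contents and the modulo selection rule below are illustrative assumptions, not the appliance's actual layout scheme.

```python
TEMPLATES = [
    {"name": "T0", "drives": ["drive0", "drive1", "drive2"]},
    {"name": "T1", "drives": ["drive3", "drive4", "drive5"]},
]

def select_template(lba: int) -> dict:
    return TEMPLATES[lba % len(TEMPLATES)]         # selection based on the LBA

def place(lba: int, chunks: list) -> list:
    template = select_template(lba)
    drives = template["drives"]
    # The template identifies which drive holds each portion of the data.
    return [(drives[i % len(drives)], chunk) for i, chunk in enumerate(chunks)]

print(place(7, [b"A", b"B", b"C"]))   # template T1 -> drive3, drive4, drive5
```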
Abstract:
Example implementations relate to defining a first placement plan to place virtual storage appliance (VSA) virtual machines on servers and defining a second placement plan to place an application virtual machine on the servers. The first placement plan can place each VSA virtual machine on a server that is connected to a storage asset used by the respective VSA virtual machine.
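A sketch of the two placement passes under assumed server and asset names: first place each VSA VM on a server with connectivity to the storage asset it uses, then place application VMs on the remaining capacity. The slot-based capacity model is an illustrative simplification.

```python
servers = {
    "srv1": {"assets": {"array-A"}, "slots": 2},
    "srv2": {"assets": {"array-B"}, "slots": 2},
}

def place_vsas(vsas: dict) -> dict:
    plan = {}
    for vsa, asset in vsas.items():
        # First plan: a VSA VM goes on a server connected to its storage asset.
        server = next(s for s, info in servers.items() if asset in info["assets"])
        plan[vsa] = server
        servers[server]["slots"] -= 1
    return plan

def place_apps(app_vms: list) -> dict:
    # Second plan: application VMs fill whatever capacity remains.
    plan = {}
    for vm in app_vms:
        server = next(s for s, info in servers.items() if info["slots"] > 0)
        plan[vm] = server
        servers[server]["slots"] -= 1
    return plan

print(place_vsas({"vsa1": "array-A", "vsa2": "array-B"}))  # {'vsa1': 'srv1', 'vsa2': 'srv2'}
print(place_apps(["app1", "app2"]))                        # {'app1': 'srv1', 'app2': 'srv2'}
```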
Abstract:
A security framework for a multi-tenant, multi-tier computer system with embedded processing is described. A multi-tenant security framework is created for a combined processing and storage hierarchy of multiple tiers. The multi-tenant security framework is applied to multiple execution levels of the memory device. The multi-tenant security framework is applied to multiple layers of application server software of the memory device. The multi-tenant security framework is also applied to multiple layers of storage server software of the memory device.
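A very rough sketch of the general idea of one tenant-aware security check enforced at several tiers (application server layer, storage server layer, device execution level). The policy table, layer names, and helper functions are purely illustrative, not the described framework itself.

```python
POLICY = {("tenant-A", "bucket-A"): {"read", "write"},
          ("tenant-B", "bucket-A"): {"read"}}

def authorize(tenant: str, resource: str, action: str, layer: str) -> bool:
    allowed = action in POLICY.get((tenant, resource), set())
    print(f"[{layer}] {tenant} {action} {resource}: {'allow' if allowed else 'deny'}")
    return allowed

def write_object(tenant: str, resource: str, data: bytes):
    # The same framework is applied at each tier the request passes through.
    for layer in ("app-server", "storage-server", "memory-device"):
        if not authorize(tenant, resource, "write", layer):
            raise PermissionError(f"{tenant} denied at {layer}")
    return len(data)

write_object("tenant-A", "bucket-A", b"ok")
```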