Abstract:
Multiple devices are provided access to a common, single instance of data and may use it without consuming resources beyond what a single device would require in a traditional configuration. To retain device-specific differences, those differences are kept separate while their relationship to the common data is maintained. All of this is done in a fashion that allows a given device to perceive and use its data as though it were its own separately accessible data.
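To make the mechanism concrete, below is a minimal sketch in Go of one plausible reading of this scheme: a single shared base dataset plus a per-device overlay of differences. The type and field names (deviceView, base, overlay) are illustrative assumptions, not taken from the abstract.

```go
package main

import "fmt"

// deviceView overlays device-specific differences on top of a single
// shared base dataset, so each device perceives a private, fully
// accessible copy while storing only its deltas.
type deviceView struct {
	base    map[string]string // shared, read-only common data
	overlay map[string]string // device-specific differences
}

func newDeviceView(base map[string]string) *deviceView {
	return &deviceView{base: base, overlay: make(map[string]string)}
}

// Read returns the device-specific value when one exists, otherwise the
// common value, so the view behaves like a complete private dataset.
func (v *deviceView) Read(key string) (string, bool) {
	if val, ok := v.overlay[key]; ok {
		return val, true
	}
	val, ok := v.base[key]
	return val, ok
}

// Write records the change only in the device's overlay; the common data
// is never modified, so other devices are unaffected.
func (v *deviceView) Write(key, val string) { v.overlay[key] = val }

func main() {
	base := map[string]string{"os": "linux", "shell": "bash"}
	a, b := newDeviceView(base), newDeviceView(base)
	a.Write("shell", "zsh") // device A diverges; only the delta is stored
	va, _ := a.Read("shell")
	vb, _ := b.Read("shell")
	fmt.Println(va, vb) // zsh bash
}
```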
Abstract:
Methods and apparatus for data request scheduling in performing parallel IO operations are disclosed. In one example, IO requests directed to an operating system having an IO scheduling component are processed. There, an IO request directed from an application to the operating system is intercepted, and a determination is made whether the IO request is subject to immediate processing using available parallel processing resources. When it is, the IO scheduling component of the operating system is bypassed, and the IO request is directly and immediately processed and passed back to the application using those resources.
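A rough sketch of the bypass decision follows, assuming the available parallel processing resources can be modeled as a pool of slots; submit, processImmediately, and enqueueWithOSScheduler are hypothetical names standing in for the interception point, the fast path, and the conventional scheduled path.

```go
package main

import "fmt"

// ioRequest is an illustrative stand-in for an intercepted IO request.
type ioRequest struct{ block int }

// tokens models the available parallel processing resources as a
// buffered channel used as a pool of four slots.
var tokens = make(chan struct{}, 4)

func init() {
	for i := 0; i < 4; i++ {
		tokens <- struct{}{}
	}
}

// submit intercepts a request before it reaches the OS scheduler. If a
// parallel slot is free, the request is handled immediately and the OS
// IO scheduling component is bypassed; otherwise it falls back to the
// conventional queued path.
func submit(req ioRequest) string {
	select {
	case <-tokens:
		defer func() { tokens <- struct{}{} }() // release the slot
		return processImmediately(req)
	default:
		return enqueueWithOSScheduler(req)
	}
}

func processImmediately(req ioRequest) string {
	return fmt.Sprintf("block %d: fast path", req.block)
}

func enqueueWithOSScheduler(req ioRequest) string {
	return fmt.Sprintf("block %d: scheduled path", req.block)
}

func main() {
	for i := 0; i < 6; i++ {
		fmt.Println(submit(ioRequest{block: i}))
	}
}
```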
Abstract:
A node based architecture is disclosed that supports arbitrary access to any node in a system for data representation and access, while still providing virtual volume coherency that is global to all of the nodes in the system, and while providing underlying data management services that are likewise accessible from any node in the system.
Abstract:
Command list processing in performing parallel IO operations is disclosed. In one example, handling IO requests directed to an operating system having an IO scheduling component entails allocating a command to a thread in association with an IO request. The command is allocated from one of a plurality of command lists accessible in parallel, and the command is also linked to one of a plurality of active command lists that are accessible in parallel. The command lists can be arranged as per-CPU command lists, with each per-CPU command list corresponding to one of a plurality of CPUs on a multi-core processing platform on which the IO requests are processed. Similarly, each of the active command lists can respectively correspond to one of the plurality of CPUs on the multi-core processing platform. Per-volume queues can also be implemented for respective volumes presented to applications.
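The per-CPU arrangement might look like the following sketch, in which each CPU owns a free command list and an active command list guarded by its own lock, so allocations on different CPUs proceed in parallel; all names here are illustrative assumptions.

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// command represents a command bound to a thread for an IO request.
type command struct{ id, cpu int }

// perCPU pairs a free command list with an active command list, each
// guarded by a per-CPU lock so different CPUs never contend.
type perCPU struct {
	mu     sync.Mutex
	free   *list.List // preallocated commands for this CPU
	active *list.List // commands currently bound to threads
}

type commandLists struct{ cpus []*perCPU }

func newCommandLists(nCPU, perList int) *commandLists {
	cl := &commandLists{cpus: make([]*perCPU, nCPU)}
	id := 0
	for c := 0; c < nCPU; c++ {
		p := &perCPU{free: list.New(), active: list.New()}
		for i := 0; i < perList; i++ {
			p.free.PushBack(&command{id: id, cpu: c})
			id++
		}
		cl.cpus[c] = p
	}
	return cl
}

// allocate takes a command from the calling CPU's free list and links it
// to the same CPU's active list, touching no shared global state.
func (cl *commandLists) allocate(cpu int) *command {
	p := cl.cpus[cpu]
	p.mu.Lock()
	defer p.mu.Unlock()
	e := p.free.Front()
	if e == nil {
		return nil // per-CPU list exhausted; a real system might steal
	}
	p.free.Remove(e)
	cmd := e.Value.(*command)
	p.active.PushBack(cmd)
	return cmd
}

func main() {
	cl := newCommandLists(2, 4)
	fmt.Println(cl.allocate(0), cl.allocate(1)) // parallel-safe per CPU
}
```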
Abstract:
An LRU buffer configuration for performing parallel IO operations is disclosed. In one example, the LRU buffer configuration is a doubly linked list of segments. Each segment is also a doubly linked list of buffers. The LRU buffer configuration includes a head portion and a tail portion, each including several slots (pointers to segments) respectively accessible in parallel by a number of CPUs in a multicore platform. Thus, for example, a free buffer may be obtained for a calling application on a given CPU by selecting a head slot corresponding to the given CPU, identifying the segment pointed to by the selected head slot, locking that segment, and removing the buffer from the list of buffers in that segment. Buffers may similarly be returned according to slots and corresponding segments and buffers at the tail portion.
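One way to picture this layout is the sketch below: a doubly linked list of segments, each holding its own locked list of buffers, with per-CPU head and tail slots pointing at segments. The sizes and names are assumptions for illustration, not details from the abstract.

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type buffer struct{ id int }

// segment holds its own doubly linked list of buffers behind its own
// lock, so CPUs working on different segments do not contend.
type segment struct {
	mu   sync.Mutex
	bufs *list.List
}

type lru struct {
	segs *list.List      // doubly linked list of segments
	head []*list.Element // per-CPU head slots (pointers to segments)
	tail []*list.Element // per-CPU tail slots (pointers to segments)
}

func newLRU(nCPU, nSeg, bufsPerSeg int) *lru {
	l := &lru{segs: list.New()}
	id := 0
	for s := 0; s < nSeg; s++ {
		seg := &segment{bufs: list.New()}
		for i := 0; i < bufsPerSeg; i++ {
			seg.bufs.PushBack(&buffer{id: id})
			id++
		}
		l.segs.PushBack(seg)
	}
	// Spread the slots over distinct segments so CPUs rarely contend.
	e := l.segs.Front()
	for c := 0; c < nCPU; c++ {
		l.head = append(l.head, e)
		l.tail = append(l.tail, e)
		if e = e.Next(); e == nil {
			e = l.segs.Front()
		}
	}
	return l
}

// get obtains a free buffer for the calling CPU: select that CPU's head
// slot, lock only the segment it points to, and pop a buffer from it.
func (l *lru) get(cpu int) *buffer {
	seg := l.head[cpu].Value.(*segment)
	seg.mu.Lock()
	defer seg.mu.Unlock()
	e := seg.bufs.Front()
	if e == nil {
		return nil // segment empty; a fuller sketch would advance the slot
	}
	seg.bufs.Remove(e)
	return e.Value.(*buffer)
}

// put returns a buffer via the CPU's tail slot, mirroring get.
func (l *lru) put(cpu int, b *buffer) {
	seg := l.tail[cpu].Value.(*segment)
	seg.mu.Lock()
	seg.bufs.PushBack(b)
	seg.mu.Unlock()
}

func main() {
	l := newLRU(2, 4, 8)
	b := l.get(0)
	fmt.Println("got buffer", b.id)
	l.put(1, b)
}
```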
Abstract:
IO performance acceleration for responding to IO requests made by an application to an operating system within a computing device interfaces with the logical and physical disk management components of the operating system and, within that pathway, provides a system memory based disk block cache. The logical disk management component of the operating system identifies logical disk addresses for IO requests sent from the application to the operating system. These addresses are translated to physical disk addresses that correspond to disk blocks available on a physical storage resource. The disk block cache stores cached disk blocks that correspond to the disk blocks available on the physical storage resource, such that IO requests may be fulfilled from the disk block cache. Placing the disk block cache between the logical and physical disk management components allows efficiency to be tailored to the applications making IO requests and permits flexible interaction with a variety of physical disks.
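A minimal sketch of the described pathway follows, with translate standing in for the logical-to-physical address mapping and readPhysical for the physical storage resource; both are placeholders, not components named in the abstract.

```go
package main

import "fmt"

// blockCache is a system-memory disk block cache sitting between the
// logical and physical disk management layers.
type blockCache struct {
	cache map[int][]byte // physical block address -> cached block
}

// translate maps a logical disk address to a physical one; a fixed
// offset stands in for the real logical-to-physical mapping.
func translate(logical int) int { return logical + 1000 }

// read serves the request from system memory when the physical block is
// cached, and falls through to the physical disk only on a miss.
func (c *blockCache) read(logical int) []byte {
	phys := translate(logical)
	if blk, ok := c.cache[phys]; ok {
		return blk // hit: fulfilled from the disk block cache
	}
	blk := readPhysical(phys)
	c.cache[phys] = blk // populate for later requests
	return blk
}

func readPhysical(phys int) []byte {
	return []byte(fmt.Sprintf("block@%d", phys))
}

func main() {
	c := &blockCache{cache: make(map[int][]byte)}
	fmt.Println(string(c.read(42))) // miss: goes to physical disk
	fmt.Println(string(c.read(42))) // hit: served from memory
}
```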
Abstract:
A containerized stream microservice is described. The containerized stream microservice is configured to provide the functionality of volume presentation along with all related interactions, including the receipt and processing of IO requests and related services. The containerized stream microservice preferably implements stream metadata in the management of storage operations and interacts with a store to provide underlying data storage. The store, which may also be referred to as a data store, is where underlying data is stored in a persistent manner. In one example, the store is an object store.
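As a loose illustration only, the sketch below models a stream service that presents a volume, records stream metadata for each write, and persists data in a stand-in object store; every type, name, and key scheme here is an assumption rather than a detail from the abstract.

```go
package main

import "fmt"

// objectStore stands in for a real object store backing the service.
type objectStore map[string][]byte

// streamEntry is one stream metadata record: a volume offset and the
// object key holding the data written there.
type streamEntry struct {
	offset int
	key    string
}

type streamService struct {
	store objectStore
	meta  []streamEntry // stream metadata: ordered record of writes
	seq   int
}

// write persists the data as an object and appends a metadata entry
// mapping the volume offset to the object key.
func (s *streamService) write(offset int, data []byte) {
	key := fmt.Sprintf("obj-%d", s.seq)
	s.seq++
	s.store[key] = data
	s.meta = append(s.meta, streamEntry{offset: offset, key: key})
}

// read scans the stream metadata from newest to oldest for the offset,
// then fetches the backing object from the store.
func (s *streamService) read(offset int) ([]byte, bool) {
	for i := len(s.meta) - 1; i >= 0; i-- {
		if s.meta[i].offset == offset {
			data, ok := s.store[s.meta[i].key]
			return data, ok
		}
	}
	return nil, false
}

func main() {
	svc := &streamService{store: make(objectStore)}
	svc.write(0, []byte("hello"))
	data, _ := svc.read(0)
	fmt.Println(string(data))
}
```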