DATA CONSOLIDATION USING A COMMON PORTION ACCESSIBLE BY MULTIPLE DEVICES
    1.
    Invention application, in force

    Publication No.: US20140304468A1

    Publication date: 2014-10-09

    Application No.: US14199492

    Filing date: 2014-03-06

    IPC class: G06F12/08

    Abstract: Multiple devices are provided access to a common, single instance of data and may use it without consuming resources beyond what would be required if only one device were using that data in a traditional configuration. To retain device-specific differences, those differences are kept separate from the common data, but their relationship to it is maintained. All of this is done in a fashion that allows a given device to perceive and use its data as though it were its own separately accessible data.

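    The abstract above describes one shared instance of data, with each device's differences held apart from it but linked back to it. The minimal sketch below illustrates that idea as a copy-on-write overlay per device; every name in it (CommonStore, DeviceView, delta) is an illustrative assumption, not the patented implementation.

    # Minimal sketch: single-instance data shared by many devices, with
    # per-device differences kept apart from the common portion.
    # Names are illustrative, not from the patent.

    class CommonStore:
        """One shared copy of the data, addressed by block number."""
        def __init__(self, blocks):
            self.blocks = dict(blocks)           # block number -> bytes

    class DeviceView:
        """A device's private view: its own changes overlay the common data."""
        def __init__(self, common):
            self.common = common
            self.delta = {}                      # device-specific differences

        def read(self, block_no):
            # Prefer the device's own change; otherwise fall back to the
            # common, single instance.
            return self.delta.get(block_no, self.common.blocks.get(block_no))

        def write(self, block_no, data):
            # Writes never touch the shared copy; they stay device-specific.
            self.delta[block_no] = data

    common = CommonStore({0: b"boot", 1: b"config"})
    dev_a, dev_b = DeviceView(common), DeviceView(common)
    dev_a.write(1, b"config-for-A")
    assert dev_a.read(1) == b"config-for-A"      # sees its own difference
    assert dev_b.read(1) == b"config"            # still sees the common data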

    Methods and apparatus for command list processing in performing parallel IO operations

    Publication No.: US10599477B1

    Publication date: 2020-03-24

    Application No.: US16395638

    Filing date: 2019-04-26

    IPC class: G06F9/48 G06F3/06 G06F9/50

    Abstract: Command list processing in performing parallel IO operations is disclosed. In one example, handling IO requests directed to an operating system having an IO scheduling component entails allocating a command to a thread in association with an IO request. The command is allocated from one of a plurality of command lists accessible in parallel, and the command is also linked to one of a plurality of active command lists that are accessible in parallel. The command lists can be arranged as per-CPU command lists, with each per-CPU command list corresponding to one of a plurality of CPUs on a multi-core processing platform on which the IO requests are processed. Similarly, each of the active command lists can respectively correspond to one of the plurality of CPUs on the multi-core processing platform. Per-volume queues can also be implemented for respective volumes presented to applications.
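
    As a rough illustration of the per-CPU command lists the abstract describes, the sketch below allocates a command from the free list of the CPU handling a request and links it onto that CPU's active list, so CPUs do not contend on a single shared structure. All names are assumptions for illustration, not the patented code.

    # Rough sketch of per-CPU command lists: a command is taken from the free
    # list of the CPU handling the request and linked onto that CPU's active
    # list. Only that CPU's lists are locked, so other CPUs proceed in parallel.
    # All names here are illustrative assumptions.

    import threading

    class PerCpuCommandLists:
        def __init__(self, num_cpus, commands_per_cpu):
            self.free = [[{"id": (c, i)} for i in range(commands_per_cpu)]
                         for c in range(num_cpus)]
            self.active = [[] for _ in range(num_cpus)]
            self.locks = [threading.Lock() for _ in range(num_cpus)]

        def allocate(self, cpu, io_request):
            # Lock only this CPU's lists.
            with self.locks[cpu]:
                cmd = self.free[cpu].pop()
                cmd["request"] = io_request
                self.active[cpu].append(cmd)
                return cmd

        def complete(self, cpu, cmd):
            with self.locks[cpu]:
                self.active[cpu].remove(cmd)
                cmd.pop("request", None)
                self.free[cpu].append(cmd)

    lists = PerCpuCommandLists(num_cpus=4, commands_per_cpu=2)
    cmd = lists.allocate(cpu=1, io_request={"op": "read", "lba": 128})
    lists.complete(cpu=1, cmd=cmd)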

    Methods and apparatus for data request scheduling in performing parallel IO operations

    Publication No.: US10409640B1

    Publication date: 2019-09-10

    Application No.: US16003277

    Filing date: 2018-06-08

    Abstract: Methods and apparatus for data request scheduling in performing parallel IO operations are disclosed. In one example, IO requests directed to an operating system having an IO scheduling component are processed. There, an IO request directed from an application to the operating system is intercepted. A determination is made whether the IO request is subject to immediate processing using available parallel processing resources. When it is determined that the IO request is subject to immediate processing using the available parallel processing resources, the IO scheduling component of the operating system is bypassed. The IO request is directly and immediately processed and passed back to the application using the available parallel processing resources.
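
    A minimal sketch of the dispatch decision this abstract describes: intercept an IO request and, when a parallel processing resource is free, handle it immediately and bypass the operating system's IO scheduling component; otherwise fall back to the scheduled path. The resource pool and scheduler queue here are hypothetical stand-ins, not the patented implementation.

    # Sketch: bypass the OS IO scheduler when a parallel resource is free,
    # otherwise queue the request for the normal scheduled path.

    import queue

    class ParallelResources:
        """Pool of workers that can process IO immediately (illustrative)."""
        def __init__(self, size):
            self.idle = queue.SimpleQueue()
            for i in range(size):
                self.idle.put(f"worker-{i}")

        def try_acquire(self):
            try:
                return self.idle.get_nowait()
            except queue.Empty:
                return None

        def release(self, worker):
            self.idle.put(worker)

    def handle_io(request, resources, os_scheduler_queue):
        worker = resources.try_acquire()
        if worker is not None:
            try:
                # Immediate path: the OS IO scheduling component is bypassed
                # and the result goes straight back to the application.
                return {"request": request, "served_by": worker}
            finally:
                resources.release(worker)
        # No parallel resource free: hand the request to the normal scheduler.
        os_scheduler_queue.append(request)
        return None

    resources = ParallelResources(size=2)
    scheduled = []
    print(handle_io({"op": "read", "lba": 7}, resources, scheduled))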

    Methods and apparatus for LRU buffer management in performing parallel IO operations

    Publication No.: US10740028B1

    Publication date: 2020-08-11

    Application No.: US15690807

    Filing date: 2017-08-30

    IPC class: G06F3/06

    Abstract: An LRU buffer configuration for performing parallel IO operations is disclosed. In one example, the LRU buffer configuration is a doubly linked list of segments. Each segment is also a doubly linked list of buffers. The LRU buffer configuration includes a head portion and a tail portion, each including several slots (pointers to segments) respectively accessible in parallel by a number of CPUs in a multicore platform. Thus, for example, a free buffer may be obtained for a calling application on a given CPU by selecting a head slot corresponding to the given CPU, identifying the segment pointed to by the selected head slot, locking that segment, and removing the buffer from the list of buffers in that segment. Buffers may similarly be returned according to slots and corresponding segments and buffers at the tail portion.
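
    The buffer arrangement described above can be sketched roughly as follows: the pool is a list of segments, each holding buffers, with per-CPU head and tail slots pointing at segments so different CPUs take and return buffers through different segments in parallel. Python deques stand in for the doubly linked lists, and every name here is an illustrative assumption.

    # Sketch of an LRU-style buffer pool with per-CPU head/tail slots.
    # Each slot points at a segment; only that segment is locked, so CPUs
    # working on different segments do not contend.

    import threading
    from collections import deque

    class LruBufferPool:
        def __init__(self, num_cpus, num_segments, buffers_per_segment):
            self.segments = [deque(f"buf-{s}-{i}" for i in range(buffers_per_segment))
                             for s in range(num_segments)]
            self.seg_locks = [threading.Lock() for _ in range(num_segments)]
            # Head slots hand out free buffers; tail slots take returned buffers.
            self.head_slots = [c % num_segments for c in range(num_cpus)]
            self.tail_slots = [(c + num_segments // 2) % num_segments
                               for c in range(num_cpus)]

        def get_buffer(self, cpu):
            seg = self.head_slots[cpu]            # this CPU's head segment
            with self.seg_locks[seg]:             # lock only that segment
                return self.segments[seg].popleft()

        def put_buffer(self, cpu, buf):
            seg = self.tail_slots[cpu]            # return via this CPU's tail segment
            with self.seg_locks[seg]:
                self.segments[seg].append(buf)

    pool = LruBufferPool(num_cpus=4, num_segments=4, buffers_per_segment=8)
    b = pool.get_buffer(cpu=2)
    pool.put_buffer(cpu=2, buf=b)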

    Methods and apparatus for command list processing in performing parallel IO operations

    Publication No.: US10318354B1

    Publication date: 2019-06-11

    Application No.: US15601319

    Filing date: 2017-05-22

    IPC class: G06F9/48 G06F3/06 G06F9/50

    Abstract: Command list processing in performing parallel IO operations is disclosed. In one example, handling IO requests directed to an operating system having an IO scheduling component entails allocating a command to a thread in association with an IO request. The command is allocated from one of a plurality of command lists accessible in parallel, and the command is also linked to one of a plurality of active command lists that are accessible in parallel. The command lists can be arranged as per-CPU command lists, with each per-CPU command list corresponding to one of a plurality of CPUs on a multi-core processing platform on which the IO requests are processed. Similarly, each of the active command lists can respectively correspond to one of the plurality of CPUs on the multi-core processing platform. Per-volume queues can also be implemented for respective volumes presented to applications.
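
    Complementing the per-CPU command-list sketch earlier in this listing, the fragment below illustrates the per-volume queues this abstract also mentions: each volume presented to applications gets its own request queue, so requests for different volumes are queued and drained independently. Names are illustrative assumptions, not the patented code.

    # Sketch of per-volume queues: one independent queue per presented volume.

    from collections import defaultdict, deque

    class PerVolumeQueues:
        def __init__(self):
            self.queues = defaultdict(deque)      # volume id -> its own queue

        def submit(self, volume_id, io_request):
            self.queues[volume_id].append(io_request)

        def next_request(self, volume_id):
            q = self.queues[volume_id]
            return q.popleft() if q else None

    queues = PerVolumeQueues()
    queues.submit("vol-A", {"op": "write", "lba": 10})
    queues.submit("vol-B", {"op": "read", "lba": 3})
    print(queues.next_request("vol-A"))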

    Method, computer program product and apparatus for accelerating responses to requests for transactions involving data operations
    8.
    Granted invention patent, in force

    Publication No.: US09411518B2

    Publication date: 2016-08-09

    Application No.: US14513840

    Filing date: 2014-10-14

    Abstract: IO requests made by an application to an operating system within a computing device are handled with IO performance acceleration that interfaces with the logical and physical disk management components of the operating system and, within that pathway, provides a system memory based disk block cache. The logical disk management component of the operating system identifies logical disk addresses for IO requests sent from the application to the operating system. These addresses are translated to physical disk addresses that correspond to disk blocks available on a physical storage resource. The disk block cache stores cached disk blocks that correspond to the disk blocks available on the physical storage resource, such that IO requests may be fulfilled from the disk block cache. Provision of the disk block cache between the logical and physical disk management components allows efficiency to be tailored to the applications making IO requests, and allows flexible interaction with various different physical disks.

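    A minimal sketch of the read pathway this abstract describes, assuming a trivial address translation: the logical disk address of a request is translated to a physical address, a memory-resident block cache answers the read when it already holds that block, and otherwise the physical storage is read and the block cached. All names and the translation rule are illustrative assumptions.

    # Sketch of a memory-based disk block cache keyed by physical address.

    class DiskBlockCache:
        def __init__(self, read_physical_block):
            self.read_physical_block = read_physical_block
            self.cache = {}                        # physical address -> block data

        def read(self, logical_addr, logical_to_physical):
            phys = logical_to_physical(logical_addr)
            if phys in self.cache:                 # fulfilled from the cache
                return self.cache[phys]
            data = self.read_physical_block(phys)  # fall through to the disk
            self.cache[phys] = data
            return data

    # Toy backing store and a trivial address translation, for illustration.
    backing = {addr: f"block-{addr}" for addr in range(16)}
    cache = DiskBlockCache(read_physical_block=backing.__getitem__)
    print(cache.read(logical_addr=5, logical_to_physical=lambda a: a + 8))
    print(cache.read(5, lambda a: a + 8))          # second read hits the cache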

    Containerized storage stream microservice

    Publication No.: US11029855B1

    Publication date: 2021-06-08

    Application No.: US16589500

    Filing date: 2019-10-01

    IPC class: G06F3/06 G06F9/50

    Abstract: A containerized stream microservice is described. The containerized stream microservice is configured to provide the functionality of volume presentation along with all related interactions, including the receipt and processing of IO requests and related services. The containerized stream microservice preferably implements stream metadata in the management of storage operations, and interacts with a store to provide underlying data storage. The store, which may also be referred to as a data store, is where underlying data is stored in a persistent manner. In one example, the store is an object store.
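
    A rough sketch of the idea, under the assumption of a simple block-to-object mapping: the microservice presents a volume, accepts reads and writes, keeps stream metadata recording which object holds which block, and persists data in an object store. Class names and the key format are illustrative, not the patented design.

    # Sketch of a stream microservice backed by an object store.

    class ObjectStore:
        """Stand-in for the persistent data store (e.g. an object store)."""
        def __init__(self):
            self.objects = {}

        def put(self, key, data):
            self.objects[key] = data

        def get(self, key):
            return self.objects.get(key)

    class StreamMicroservice:
        def __init__(self, volume_id, store):
            self.volume_id = volume_id
            self.store = store
            self.metadata = {}                     # block number -> object key

        def write(self, block_no, data):
            key = f"{self.volume_id}/{block_no}"
            self.store.put(key, data)
            self.metadata[block_no] = key          # stream metadata tracks placement

        def read(self, block_no):
            key = self.metadata.get(block_no)
            return self.store.get(key) if key else None

    svc = StreamMicroservice("vol-1", ObjectStore())
    svc.write(0, b"hello")
    print(svc.read(0))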

    METHOD, COMPUTER PROGRAM PRODUCT AND APPARATUS FOR ACCELERATING RESPONSES TO REQUESTS FOR TRANSACTIONS INVOLVING DATA OPERATIONS
    10.
    Invention application, in force

    Publication No.: US20150186050A1

    Publication date: 2015-07-02

    Application No.: US14513840

    Filing date: 2014-10-14

    Abstract: IO requests made by an application to an operating system within a computing device are handled with IO performance acceleration that interfaces with the logical and physical disk management components of the operating system and, within that pathway, provides a system memory based disk block cache. The logical disk management component of the operating system identifies logical disk addresses for IO requests sent from the application to the operating system. These addresses are translated to physical disk addresses that correspond to disk blocks available on a physical storage resource. The disk block cache stores cached disk blocks that correspond to the disk blocks available on the physical storage resource, such that IO requests may be fulfilled from the disk block cache. Provision of the disk block cache between the logical and physical disk management components allows efficiency to be tailored to the applications making IO requests, and allows flexible interaction with various different physical disks.

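    This published application shares its abstract with the granted patent above. As a complementary sketch, the fragment below emphasizes where the cache sits in the pathway: the logical disk management layer resolves the address, the in-memory block cache is consulted next, and only a miss reaches the physical disk layer. All layer names and the address mapping are illustrative assumptions.

    # Sketch of the layered pathway with the block cache between the logical
    # and physical disk management components.

    class PhysicalDiskLayer:
        def __init__(self, blocks):
            self.blocks = blocks

        def read(self, physical_addr):
            return self.blocks[physical_addr]

    class LogicalDiskLayer:
        """Maps logical addresses used by applications to physical addresses."""
        def __init__(self, offset):
            self.offset = offset

        def to_physical(self, logical_addr):
            return logical_addr + self.offset

    class CachedPath:
        """The memory-based block cache inserted between the two layers."""
        def __init__(self, logical, physical):
            self.logical, self.physical = logical, physical
            self.cache = {}

        def read(self, logical_addr):
            phys = self.logical.to_physical(logical_addr)
            if phys not in self.cache:             # miss: go to the physical layer
                self.cache[phys] = self.physical.read(phys)
            return self.cache[phys]

    path = CachedPath(LogicalDiskLayer(offset=100),
                      PhysicalDiskLayer({105: b"data-at-105"}))
    print(path.read(5))                            # miss, then cached
    print(path.read(5))                            # served from memory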