Secondary cache for write accumulation and coalescing
    1.
    Granted Patent
    Secondary cache for write accumulation and coalescing (In force)

    Publication No.: US08255627B2

    Publication Date: 2012-08-28

    Application No.: US12577164

    Filing Date: 2009-10-10

    IPC Class: G06F12/00

    Abstract: A method for efficiently using a large secondary cache is disclosed herein. In certain embodiments, such a method may include accumulating, in a secondary cache, a plurality of data tracks. These data tracks may include modified and/or unmodified data. The method may determine whether a subset of the plurality of data tracks makes up a full stride. In the event the subset makes up a full stride, the method may destage the subset from the secondary cache. By destaging full strides, the method reduces the number of disk operations required to destage data from the secondary cache. A corresponding computer program product and apparatus are also disclosed and claimed herein.
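    The full-stride accumulation idea can be sketched in a few lines: tracks are buffered per stride, and a stride is destaged to disk only once every track in it is present, so one disk operation covers many tracks. This is an illustrative sketch only; the class, the stride width of 8 tracks, and all names are assumptions, not details from the patent.

```python
TRACKS_PER_STRIDE = 8  # assumed stride width, for illustration

class SecondaryCache:
    def __init__(self):
        self.strides = {}   # stride id -> {track offset within stride: data}
        self.disk_ops = 0   # destage operations actually issued to disk

    def accumulate(self, track_id, data):
        """Buffer a (modified or unmodified) track; destage if its stride fills."""
        stride_id, offset = divmod(track_id, TRACKS_PER_STRIDE)
        tracks = self.strides.setdefault(stride_id, {})
        tracks[offset] = data
        if len(tracks) == TRACKS_PER_STRIDE:   # subset makes up a full stride
            self.destage(stride_id)

    def destage(self, stride_id):
        """Write the whole stride out in a single disk operation."""
        self.strides.pop(stride_id)
        self.disk_ops += 1   # one op per stride instead of one per track

cache = SecondaryCache()
for t in range(16):            # 16 tracks fill exactly 2 strides
    cache.accumulate(t, b"x")
print(cache.disk_ops)          # 2 destages instead of 16 track writes
```

    The benefit the abstract claims falls out directly: without coalescing, each of the 16 tracks could cost its own disk operation; with full-stride destaging only 2 operations are issued.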

    Secondary cache for write accumulation and coalescing
    2.
    Granted Patent
    Secondary cache for write accumulation and coalescing (In force)

    Publication No.: US08549225B2

    Publication Date: 2013-10-01

    Application No.: US13430613

    Filing Date: 2012-03-26

    IPC Class: G06F12/00

    Abstract: A method for efficiently using a large secondary cache is disclosed herein. In certain embodiments, such a method may include accumulating, in a secondary cache, a plurality of data tracks. These data tracks may include modified and/or unmodified data. The method may determine whether a subset of the plurality of data tracks makes up a full stride. In the event the subset makes up a full stride, the method may destage the subset from the secondary cache. By destaging full strides, the method reduces the number of disk operations required to destage data from the secondary cache. A corresponding computer program product and apparatus are also disclosed herein.

    Coordination of multiprocessor operations with shared resources
    3.
    Granted Patent
    Coordination of multiprocessor operations with shared resources (Expired)

    Publication No.: US07650467B2

    Publication Date: 2010-01-19

    Application No.: US12052569

    Filing Date: 2008-03-20

    IPC Class: G06F13/00

    CPC Class: G06F12/0831

    Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line, where the cache line is cached from a line of shared memory belonging to resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
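    The coordination scheme above can be modeled loosely with threads standing in for processors and a plain variable standing in for the shared memory line. On real hardware the cache coherency protocol invalidates the spinning reader's cached copy when the writer stores to the line; in this sketch the write is simply observed through shared memory, and all names are illustrative assumptions, not the patent's.

```python
import threading

shared_line = {"value": 0}   # stands in for the shared memory line
result = []

def first_processor():
    # Repetitively read the "cache line"; the spin occupies this processor
    # and keeps it away from the shared resources until it is signaled.
    while shared_line["value"] == 0:
        pass
    result.append(shared_line["value"])   # read the data the writer stored

def second_processor():
    # Finish work on the shared resources, then write to the line to signal.
    shared_line["value"] = 42

t1 = threading.Thread(target=first_processor)
t1.start()
second_processor()
t1.join()
print(result)   # [42]
```

    The point of the hardware version is that the spin generates no bus traffic while the line sits unmodified in the first processor's cache; only the second processor's write triggers an invalidation and a re-read.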

    Managing multiprocessor operations
    4.
    Granted Patent
    Managing multiprocessor operations (Expired)

    Publication No.: US07418557B2

    Publication Date: 2008-08-26

    Application No.: US11001476

    Filing Date: 2004-11-30

    IPC Class: G06F13/00

    CPC Class: G06F12/0831

    Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line, where the cache line is cached from a line of shared memory belonging to resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.

    COORDINATION OF MULTIPROCESSOR OPERATIONS WITH SHARED RESOURCES
    5.
    Patent Application
    COORDINATION OF MULTIPROCESSOR OPERATIONS WITH SHARED RESOURCES (Expired)

    Publication No.: US20080168238A1

    Publication Date: 2008-07-10

    Application No.: US12052569

    Filing Date: 2008-03-20

    IPC Class: G06F12/00

    CPC Class: G06F12/0831

    Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line, where the cache line is cached from a line of shared memory belonging to resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.

    DATA ARCHIVING USING DATA COMPRESSION OF A FLASH COPY
    6.
    Patent Application
    DATA ARCHIVING USING DATA COMPRESSION OF A FLASH COPY (In force)

    Publication No.: US20120131293A1

    Publication Date: 2012-05-24

    Application No.: US12950992

    Filing Date: 2010-11-19

    IPC Class: G06F12/16 G06F12/00

    Abstract: Embodiments of the disclosure relate to archiving data in a storage system. An exemplary embodiment comprises making a flash copy of data in a source volume, compressing data in the flash copy, wherein each track of data is compressed into a set of data pages, and storing the compressed data pages in a target volume. Data extents for the target volume may be allocated from a pool of compressed data extents. After each stride's worth of data is compressed and stored in the target volume, data may be destaged to avoid destage penalties. When the archived data is needed, data may be decompressed from a flash copy of the target volume in a reverse process to restore each data track. Data may be compressed and decompressed using a Lempel-Ziv-Welch process.
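    The abstract names Lempel-Ziv-Welch as the compression process. The following is a minimal textbook LZW encoder/decoder over bytes, shown only to illustrate the algorithm; it is not code from the patent, and a real implementation would pack the codes into fixed- or variable-width bit fields rather than a Python list.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Encode bytes as a list of dictionary codes."""
    table = {bytes([i]): i for i in range(256)}   # seed with single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # extend the current phrase
        else:
            out.append(table[w])      # emit code for the longest known phrase
            table[wc] = len(table)    # grow the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Invert lzw_compress, rebuilding the dictionary on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:                          # the cScSc special case
            entry = w + w[:1]
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

track = b"abababababab" * 4
codes = lzw_compress(track)
assert lzw_decompress(codes) == track   # lossless round trip
print(len(track), len(codes))           # fewer codes than input bytes
```

    Repetitive track data, like the `abab...` pattern above, is exactly where LZW shines, since repeated phrases collapse into single dictionary codes.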

    Differential caching mechanism based on media I/O speed
    7.
    Granted Patent
    Differential caching mechanism based on media I/O speed (In force)

    Publication No.: US08095738B2

    Publication Date: 2012-01-10

    Application No.: US12484963

    Filing Date: 2009-06-15

    IPC Class: G06F12/00

    Abstract: A method for allocating space in a cache based on media I/O speed is disclosed herein. In certain embodiments, such a method may include storing, in a read cache, cache entries associated with faster-responding storage devices and cache entries associated with slower-responding storage devices. The method may further include implementing an eviction policy in the read cache. This eviction policy may include demoting, from the read cache, the cache entries of faster-responding storage devices faster than the cache entries of slower-responding storage devices, all other variables being equal. In certain embodiments, the eviction policy may further include demoting, from the read cache, cache entries having a lower read-hit ratio faster than cache entries having a higher read-hit ratio, all other variables being equal. A corresponding computer program product and apparatus are also disclosed and claimed herein.
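    The eviction policy can be sketched as a scoring function over cache entries: an entry backed by a fast device is cheap to re-fetch on a miss, so it is demoted first, and a low read-hit ratio also argues for demotion. The scoring formula and all field names below are illustrative assumptions, not taken from the patent.

```python
def eviction_score(entry: dict) -> float:
    """Lower score = evicted sooner.

    Entries on slow media make misses expensive, so device latency raises
    the score; a higher read-hit ratio also argues for keeping the entry.
    """
    return entry["device_latency_ms"] * (1 + entry["hit_ratio"])

read_cache = [
    {"track": "A", "device_latency_ms": 0.1, "hit_ratio": 0.9},  # fast device (e.g. SSD)
    {"track": "B", "device_latency_ms": 8.0, "hit_ratio": 0.9},  # slow device, hot
    {"track": "C", "device_latency_ms": 8.0, "hit_ratio": 0.1},  # slow device, cold
]

victim = min(read_cache, key=eviction_score)
print(victim["track"])   # the fast-device entry is demoted first
```

    Note how the two rules in the abstract compose: between B and C, which share the same device latency ("all other variables being equal"), the lower-hit-ratio entry C scores lower and would be the next victim.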