    1. METHOD FOR PROTECTING A GPT CACHED DISKS DATA INTEGRITY IN AN EXTERNAL OPERATING SYSTEM ENVIRONMENT
    Type: Invention Application
    Status: Under Examination (Published)

    Publication No.: US20140059293A1

    Publication Date: 2014-02-27

    Application No.: US13967219

    Filing Date: 2013-08-14

    Inventor: Pradeep Bisht

    Abstract: An invention is provided for protecting the data integrity of a cached storage device in an alternate operating system (OS) environment. The invention includes replacing the globally unique identifier (GUID) partition table (GPT) of a cached disk with a modified GUID partition table (MGPT). The MGPT renders cached partitions on the cached disk inaccessible when an OS uses the MGPT to access them, while un-cached partitions on the cached disk remain accessible. In normal operation, the data on the cached disk is accessed using information based on the GPT, which can be stored on a caching disk, generally via caching software. In response to a request to disable caching, the MGPT on the cached disk is replaced with the GPT, rendering all data on the formerly cached disk accessible in an alternate OS environment where the appropriate caching software is not present.
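    The abstract describes swapping the on-disk GPT for a modified copy so that an OS without the caching driver cannot mount the cached partitions. A minimal in-memory sketch of that swap is below; the PartitionEntry/PartitionTable structures, the unrecognized-type-GUID trick, and the enable/disable helpers are illustrative assumptions, not the patented on-disk layout.

```python
# Illustrative sketch only: a simplified, in-memory model of the GPT/MGPT swap.
import copy
import uuid
from dataclasses import dataclass, field


@dataclass
class PartitionEntry:
    name: str
    type_guid: uuid.UUID
    cached: bool = False            # True if this partition is accelerated by the cache


@dataclass
class PartitionTable:               # stand-in for an on-disk GPT
    entries: list = field(default_factory=list)


def make_mgpt(gpt: PartitionTable) -> PartitionTable:
    """Build the MGPT: cached partitions get a type GUID no stock OS driver
    recognizes, so an alternate OS cannot mount (and corrupt) them."""
    mgpt = copy.deepcopy(gpt)
    for entry in mgpt.entries:
        if entry.cached:
            entry.type_guid = uuid.uuid4()      # unrecognized type -> partition hidden
    return mgpt


def enable_caching(disk_gpt: PartitionTable, caching_store: dict) -> PartitionTable:
    """Keep the real GPT with the caching software; return the MGPT to put on disk."""
    caching_store["original_gpt"] = copy.deepcopy(disk_gpt)
    return make_mgpt(disk_gpt)


def disable_caching(caching_store: dict) -> PartitionTable:
    """Restore the original GPT so all data is reachable without the caching driver."""
    return caching_store["original_gpt"]
```

    In normal operation the caching software would resolve accesses through the saved original GPT; the MGPT only matters to an OS that lacks that software.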


    2. ELECTRONIC SYSTEM WITH STORAGE CONTROL MECHANISM AND METHOD OF OPERATION THEREOF
    Type: Invention Application
    Status: Under Examination (Published)

    Publication No.: US20160188528A1

    Publication Date: 2016-06-30

    Application No.: US14882056

    Filing Date: 2015-10-13

    Abstract: An electronic system includes: a management server providing a management mechanism with an address structure having a unified address space; a communication block, coupled to the management server, configured to implement a communication transaction based on the management mechanism with the address structure having the unified address space; and a server, coupled to the communication block, providing the communication transaction with a storage device based on the management mechanism with the address structure having the unified address space.
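    As a rough illustration of the unified address space the abstract refers to, the sketch below maps one flat address range onto several storage devices; the class name, device list, and resolve() helper are assumptions made for the example, not the claimed management mechanism.

```python
# Illustrative sketch only: one way to present a unified address space over
# multiple storage devices.
class UnifiedAddressSpace:
    def __init__(self, devices):
        # devices: list of (device_name, capacity_in_bytes)
        self._ranges = []
        base = 0
        for name, capacity in devices:
            self._ranges.append((base, base + capacity, name))
            base += capacity
        self.size = base

    def resolve(self, unified_address):
        """Translate a unified address into (device, local offset)."""
        for start, end, name in self._ranges:
            if start <= unified_address < end:
                return name, unified_address - start
        raise ValueError("address outside the unified address space")


space = UnifiedAddressSpace([("ssd0", 1 << 30), ("hdd0", 4 << 30)])
print(space.resolve(3 << 30))   # -> ('hdd0', 2147483648)
```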


    3. Dynamic cache allocation in a solid state drive environment
    Type: Granted Patent
    Status: In Force

    Publication No.: US09268699B2

    Publication Date: 2016-02-23

    Application No.: US13909045

    Filing Date: 2013-06-03

    Inventor: Pradeep Bisht

    Abstract: An invention is provided for dynamic cache allocation in a solid state drive environment. The invention includes partitioning a cache memory into a reserved partition and a caching partition, wherein the reserved partition begins at the beginning of the cache memory and the caching partition begins after the end of the reserved partition. Data is cached starting at the beginning of the caching partition. Then, when the caching partition is fully utilized, data is cached in the reserved partition. After receiving an indication of a power state change, such as when entering a sleep power state, marking data is written to the reserved partition. The marking data is examined after resuming the normal power state to determine whether a deep sleep power state was entered. When returning from a deep sleep power state, the beginning address of valid cache data within the reserved partition is determined after resuming the normal power state.
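    A minimal sketch of the layout and sleep marker described above; the partition sizes, marker bytes, and method names are assumptions, and the marker check is one reading of how the marking data might be examined on resume.

```python
# Illustrative sketch only: reserved/caching partition layout with a sleep marker.
MARKER = b"SLEEP-MARK"


class CacheLayout:
    def __init__(self, cache_size: int, reserved_size: int):
        self.media = bytearray(cache_size)       # stand-in for the SSD cache media
        self.reserved_start = 0                  # reserved partition at the start
        self.caching_start = reserved_size       # caching partition follows it
        self.write_ptr = self.caching_start      # next cached data lands here

    def cache(self, data: bytes) -> int:
        """Cache data in the caching partition; spill into the reserved partition
        only once the caching partition is fully utilized."""
        if self.write_ptr + len(data) > len(self.media):
            self.write_ptr = self.reserved_start
        start = self.write_ptr
        self.media[start:start + len(data)] = data
        self.write_ptr += len(data)
        return start

    def on_power_state_change(self) -> None:
        """Write marking data into the reserved partition before sleeping."""
        self.media[self.reserved_start:self.reserved_start + len(MARKER)] = MARKER

    def marking_data_intact(self) -> bool:
        """Examine the marking data after resume; if it did not survive, a deep
        sleep was entered and the valid-data start address must be recomputed."""
        start = self.reserved_start
        return bytes(self.media[start:start + len(MARKER)]) == MARKER
```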


    4. Method for filtering cached input/output data based on data generation/consumption
    Type: Granted Patent
    Status: In Force

    Publication No.: US09026693B2

    Publication Date: 2015-05-05

    Application No.: US13959713

    Filing Date: 2013-08-05

    CPC classification number: G06F13/124 G06F13/28 G06F13/385 H04L47/10 H04L47/30

    Abstract: An invention is provided for filtering cached input/output (I/O) data. The invention includes receiving a current I/O transfer. Embodiments of the present invention evaluate whether to filter ongoing data streams once a data stream reaches a particular size threshold. When the current I/O transfer is part of an ongoing sequential data stream and the total data transferred as part of that stream exceeds the predetermined threshold, the transfer rate for the ongoing sequential data stream is calculated and a determination is made as to whether the transfer rate is greater than a throughput associated with a target storage device. The current I/O transfer is cached when the transfer rate is greater than the throughput associated with the target storage device, and is not cached when the transfer rate is not greater than that throughput.
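    A small sketch of the stream tracking implied by the abstract: extend an ongoing sequential stream with each I/O and report when it crosses the size threshold at which the transfer-rate filter would be applied. The threshold value, timing source, and field names are assumptions.

```python
# Illustrative sketch only: tracking an ongoing sequential stream and its size threshold.
import time
from dataclasses import dataclass, field

STREAM_SIZE_THRESHOLD = 8 * 1024 * 1024       # assumed: 8 MiB before filtering applies


@dataclass
class SequentialStream:
    next_lba: int                             # expected start of the next transfer
    total_bytes: int = 0
    start_time: float = field(default_factory=time.monotonic)

    def transfer_rate(self) -> float:
        """Bytes per second moved by this stream so far."""
        elapsed = time.monotonic() - self.start_time
        return self.total_bytes / elapsed if elapsed > 0 else float("inf")


def should_evaluate_filter(stream: SequentialStream, lba: int, length: int) -> bool:
    """Extend the stream with the current I/O and report whether it is now large
    enough that the transfer-rate filter should be applied."""
    if lba != stream.next_lba:                # not sequential: a new stream begins
        stream.total_bytes = length
        stream.start_time = time.monotonic()
    else:
        stream.total_bytes += length
    stream.next_lba = lba + length
    return stream.total_bytes > STREAM_SIZE_THRESHOLD
```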


    5. METHOD FOR FILTERING CACHED INPUT/OUTPUT DATA BASED ON DATA GENERATION/CONSUMPTION
    Type: Invention Application
    Status: In Force

    Publication No.: US20150039789A1

    Publication Date: 2015-02-05

    Application No.: US13959713

    Filing Date: 2013-08-05

    CPC classification number: G06F13/124 G06F13/28 G06F13/385 H04L47/10 H04L47/30

    Abstract: An invention is provided for filtering cached input/output (I/O) data. The invention includes receiving a current I/O transfer. Embodiments of the present invention evaluate whether to filter ongoing data streams once a data stream reaches a particular size threshold. When the current I/O transfer is part of an ongoing sequential data stream and the total data transferred as part of that stream exceeds the predetermined threshold, the transfer rate for the ongoing sequential data stream is calculated and a determination is made as to whether the transfer rate is greater than a throughput associated with a target storage device. The current I/O transfer is cached when the transfer rate is greater than the throughput associated with the target storage device, and is not cached when the transfer rate is not greater than that throughput.


    7. Method for filtering cached input/output data based on data generation/consumption
    Type: Granted Patent
    Status: In Force

    Publication No.: US09274996B2

    Publication Date: 2016-03-01

    Application No.: US14690365

    Filing Date: 2015-04-17

    Abstract: According to one embodiment, filtering cached input/output (I/O) data includes receiving a current I/O transfer that is part of an ongoing data stream, and evaluating whether to filter ongoing data streams once the data stream reaches a particular size threshold. The transfer rate for the ongoing data stream may be calculated and a determination made as to whether the transfer rate is greater than a throughput associated with a target storage device. The current I/O transfer is cached if the transfer rate is greater than the throughput associated with the target storage device, and is not cached if it is not. The current I/O transfer may also be cached if the transfer rate is less than or equal to the throughput associated with the target storage device and the I/O transfer is a write I/O transfer.
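    The caching decision stated in this abstract, including the extra clause for write I/O, can be summarized in a few lines; the throughput figures in the example calls are assumptions.

```python
# Illustrative sketch only: the rate-vs-throughput caching decision with the write clause.
def should_cache(transfer_rate_bps: float,
                 target_throughput_bps: float,
                 is_write: bool) -> bool:
    """Cache the current I/O transfer when the stream is faster than the target
    device can sustain; otherwise cache it only if it is a write transfer."""
    if transfer_rate_bps > target_throughput_bps:
        return True
    return is_write


# A 600 MB/s stream against a 500 MB/s target is cached; a slower read is not,
# but a slower write still is.
print(should_cache(600e6, 500e6, is_write=False))   # True
print(should_cache(300e6, 500e6, is_write=False))   # False
print(should_cache(300e6, 500e6, is_write=True))    # True
```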


    10. Method for disk defrag handling in solid state drive caching environment
    Type: Granted Patent
    Status: In Force

    Publication No.: US09201799B2

    Publication Date: 2015-12-01

    Application No.: US13909027

    Filing Date: 2013-06-03

    Abstract: An invention is provided for handling target disk access requests during disk defragmentation in a solid state drive caching environment. The invention includes detecting a request to access a target storage device. In response, data associated with the request is written to the target storage device without writing the data to the caching device, with the proviso that the request is a write request. In addition, the invention includes reading data associated with the request and marking the data associated with the request stored in the caching device for discard, with the proviso that the request is a read request and the data associated with the request is stored on the caching device. Data marked for discard is discarded from the caching device when time permits, for example, upon completion of disk defragmentation.
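    A minimal sketch of the defrag-time handling described above: writes bypass the cache, reads mark any stale cached copy for discard, and marked entries are dropped once defragmentation completes. The dict-based devices and method names are assumptions, and reading from the target (rather than the cache) is one interpretation of the abstract.

```python
# Illustrative sketch only: handling target disk access requests during defragmentation.
class DefragAwareCache:
    def __init__(self, target: dict, cache: dict):
        self.target = target                  # backing (cached) disk: lba -> data
        self.cache = cache                    # SSD caching device: lba -> data
        self.discard_list = set()             # cache entries to drop when time permits

    def handle_during_defrag(self, request: dict):
        if request["op"] == "write":
            # Writes go straight to the target disk, bypassing the caching device.
            self.target[request["lba"]] = request["data"]
            return None
        if request["op"] == "read":
            data = self.target.get(request["lba"])
            if request["lba"] in self.cache:
                # The cached copy may be stale after defrag; mark it for discard.
                self.discard_list.add(request["lba"])
            return data
        return None

    def defrag_complete(self) -> None:
        """Discard marked entries once defragmentation has finished."""
        for lba in self.discard_list:
            self.cache.pop(lba, None)
        self.discard_list.clear()
```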

