Abstract:
A method and system for managing maintenance operations in a multi-bank non-volatile storage device is disclosed. The method includes receiving a data write command and associated data from a host system for storage in the non-volatile storage device, and directing a head of the data write command to a first bank and a tail of the data write command to a second bank, where the head of the data write command only includes data having logical block addresses preceding the logical block addresses of data in the tail of the data write command. When a status of the first bank delays execution of the data write command, the controller executes a maintenance procedure in the second bank while the data write command directed to the first and second banks is pending. The system includes a plurality of banks, where each bank may be associated with the same or a different controller, and the one or more controllers are adapted to execute the method noted above.
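As a rough illustration of the head/tail split described above, the following Python sketch routes the lower-addressed portion of a write to one bank and the remainder to another, and uses a delay in the first bank as an opportunity to run maintenance on the second bank. All names (Bank, split_write, needs_maintenance) and the simple policy are assumptions for illustration, not the patented method itself.

```python
from dataclasses import dataclass, field

@dataclass
class Bank:
    """Minimal model of one flash bank."""
    name: str
    busy: bool = False                         # e.g. mid-erase or mid-program
    data: dict = field(default_factory=dict)   # LBA -> sector data

    def needs_maintenance(self) -> bool:
        # Placeholder policy: run maintenance whenever we get the chance.
        return True

    def run_maintenance(self):
        print(f"{self.name}: running garbage collection / block reclaim")

    def write(self, lbas, sectors):
        for lba, sector in zip(lbas, sectors):
            self.data[lba] = sector

def split_write(lbas, sectors, bank0, bank1, split_at):
    """Direct the head (lower LBAs) to bank0 and the tail to bank1.

    If bank0 is busy and delays the command, use the idle time to run
    maintenance on bank1 while the write stays pending.
    """
    head = [(l, s) for l, s in zip(lbas, sectors) if l < split_at]
    tail = [(l, s) for l, s in zip(lbas, sectors) if l >= split_at]

    if bank0.busy and bank1.needs_maintenance():
        bank1.run_maintenance()          # overlap maintenance with the delay
        bank0.busy = False               # assume bank0 finishes meanwhile

    bank0.write([l for l, _ in head], [s for _, s in head])
    bank1.write([l for l, _ in tail], [s for _, s in tail])

# Example: LBAs 0..7, split so 0..3 go to bank 0 and 4..7 go to bank 1.
b0, b1 = Bank("bank0", busy=True), Bank("bank1")
split_write(range(8), [f"sector{i}" for i in range(8)], b0, b1, split_at=4)
```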
Abstract:
A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at a lower capacity per memory cell and a finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is also an integral part of a sequential update block system. Decisions to archive data from the cache memory to the main memory depend on the attributes of the data to be archived, the state of the blocks in the main memory portion, and the state of the blocks in the cache portion.
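A minimal sketch of such an archiving decision might look like the following, where attributes of the data (here, whether the cached fragment is sequential and complete) are weighed against the state of the cache and main-memory block pools before copying from the cache to the multi-level main array. The attribute names and thresholds are illustrative assumptions, not the actual criteria.

```python
def should_archive(fragment, cache_free_blocks, main_free_blocks,
                   min_cache_free=4):
    """Decide whether to copy a cached fragment into the main MLC array.

    The decision weighs attributes of the data (sequentiality, completeness)
    against the state of the cache and main-memory block pools.  All
    thresholds here are made-up placeholders.
    """
    cache_under_pressure = cache_free_blocks < min_cache_free
    main_has_room = main_free_blocks > 0
    fragment_is_archivable = fragment["sequential"] and fragment["complete"]
    return main_has_room and (cache_under_pressure or fragment_is_archivable)

# Example: a complete sequential fragment is archived even without pressure.
frag = {"sequential": True, "complete": True}
print(should_archive(frag, cache_free_blocks=10, main_free_blocks=50))  # True
```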
Abstract:
A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at a lower capacity per memory cell and a finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is also an integral part of a sequential update block system. The capacity of the cache memory is dynamically increased by allocating blocks from the main memory in response to a demand for additional capacity. Preferably, a block with an endurance count higher than average is allocated. The logical addresses of data are partitioned into zones to limit the size of the indices for the cache.
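The following sketch illustrates, under assumed data structures, how a block with an above-average endurance (erase) count could be moved from the main pool into the cache pool when the cache needs to grow, and how logical addresses might be partitioned into fixed-size zones so each zone keeps its own small cache index. Names such as grow_cache and zone_of are hypothetical.

```python
def grow_cache(main_free_blocks, cache_blocks):
    """Move one block from the main pool into the cache pool.

    Prefer a block whose erase count is above the average of the free pool,
    since the cache (written in binary mode) tolerates more cycling.
    """
    if not main_free_blocks:
        raise RuntimeError("no free blocks available to grow the cache")
    avg = sum(b["erase_count"] for b in main_free_blocks) / len(main_free_blocks)
    worn = [b for b in main_free_blocks if b["erase_count"] >= avg]
    chosen = max(worn, key=lambda b: b["erase_count"])
    main_free_blocks.remove(chosen)
    cache_blocks.append(chosen)
    return chosen

def zone_of(lba, zone_size=0x10000):
    """Partition the logical address space into fixed-size zones so that each
    zone keeps its own (smaller) cache index."""
    return lba // zone_size

free = [{"id": i, "erase_count": c} for i, c in enumerate([10, 50, 30])]
cache = []
print(grow_cache(free, cache))   # picks the block with erase_count == 50
print(zone_of(0x25000))          # LBA 0x25000 falls in zone 2
```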
Abstract:
A method and system maintain an address table for mapping logical groups to physical addresses in a memory device. The method includes receiving a request to set an entry in the address table, then selecting and flushing entries in an address table cache depending on whether the entry already exists in the cache and whether the cache meets a flushing threshold criterion. The flushed entries amount to less than the maximum capacity of the address table cache. The flushing threshold criterion includes whether the address table cache is full or whether a page exceeds a threshold of changed entries. The address table and/or the address table cache may be stored in a non-volatile memory and/or a random access memory. Improved performance may result from this method and system due to the reduced number of write operations and the reduced time needed to partially flush the address table cache to the address table.
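A simplified sketch of such a partial flush policy, under assumed structures: the cache holds changed address-table entries grouped by the page they belong to, and when the cache is full or one page accumulates more than a threshold of changed entries, only that page's entries are written back rather than the whole cache. All names and thresholds are illustrative.

```python
from collections import defaultdict

class AddressTableCache:
    """Cache of changed logical-group -> physical-address entries, grouped by
    the address-table page each entry lives in.  Thresholds are placeholders."""

    def __init__(self, table, entries_per_page=4, max_entries=8, page_threshold=3):
        self.table = table                      # the backing address table
        self.entries_per_page = entries_per_page
        self.max_entries = max_entries
        self.page_threshold = page_threshold
        self.dirty = defaultdict(dict)          # page -> {logical_group: phys}

    def set_entry(self, logical_group, physical_addr):
        page = logical_group // self.entries_per_page
        self.dirty[page][logical_group] = physical_addr
        total = sum(len(v) for v in self.dirty.values())
        # Flush criteria: cache full, or one page has too many changed entries.
        if total >= self.max_entries or len(self.dirty[page]) >= self.page_threshold:
            self.flush_page(page)

    def flush_page(self, page):
        """Write back only the entries of one page (a partial flush)."""
        self.table.update(self.dirty.pop(page, {}))

table = {}
cache = AddressTableCache(table)
for lg in (0, 1, 2):                 # three entries in page 0 trigger a flush
    cache.set_entry(lg, 1000 + lg)
print(table)                         # {0: 1000, 1: 1001, 2: 1002}
```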
Abstract:
Techniques for the management of spare blocks in a re-programmable non-volatile memory system, such as a flash EEPROM system, are presented. In one set of techniques, for a memory partitioned into two sections (for example a binary section and a multi-state section), where blocks of one section are more prone to error, spare blocks can be transferred from the more error-prone partition to the less error-prone partition. In another set of techniques for a memory partitioned into two sections, blocks which fail in the more error-prone partition are transferred to serve as spare blocks in the other partition. In a complementary set of techniques, a 1-bit time stamp is maintained for free blocks to determine whether each block has been written recently. Other techniques allow spare blocks to be managed by way of a logical-to-physical conversion table by assigning them logical addresses that exceed the logical address space of which the host is aware.
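One of those ideas, managing spare blocks through the normal logical-to-physical table by giving them logical addresses above the host-visible range, could be sketched as follows. The range split and helper names are assumptions for illustration only.

```python
HOST_VISIBLE_GROUPS = 1000   # logical groups the host is allowed to address
TOTAL_GROUPS = 1010          # extra groups reserved for spare blocks

# The logical-to-physical table covers both ranges; entries above
# HOST_VISIBLE_GROUPS point at spare blocks the host never sees.
l2p = {g: None for g in range(TOTAL_GROUPS)}

def assign_spare(physical_block):
    """Record a spare block under the first free out-of-range logical address."""
    for g in range(HOST_VISIBLE_GROUPS, TOTAL_GROUPS):
        if l2p[g] is None:
            l2p[g] = physical_block
            return g
    raise RuntimeError("no spare slots left")

def claim_spare():
    """Take a spare block out of the reserved range for use elsewhere."""
    for g in range(HOST_VISIBLE_GROUPS, TOTAL_GROUPS):
        if l2p[g] is not None:
            block, l2p[g] = l2p[g], None
            return block
    return None

slot = assign_spare(physical_block=4242)
print(slot, claim_spare())   # e.g. 1000 4242
```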
Abstract:
A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at a lower capacity per memory cell and a finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is also an integral part of a sequential update block system. Decisions to write data to the cache memory or directly to the main memory depend on the attributes and characteristics of the data to be written, the state of the blocks in the main memory portion, and the state of the blocks in the cache portion.
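A minimal sketch of the write-routing decision, with assumed attribute names and thresholds: long sequential writes bypass the cache and go straight to the multi-level main array, while short or random writes are buffered in the binary cache, subject to the state of both block pools.

```python
def route_write(length_sectors, sequential, cache_free_blocks, main_open_block,
                long_write_threshold=64):
    """Return 'cache' or 'main' for an incoming host write.

    Long sequential writes go directly to the main MLC array; short or random
    writes are buffered in the binary cache, provided it still has free
    blocks.  Thresholds are illustrative placeholders.
    """
    if sequential and length_sectors >= long_write_threshold and main_open_block:
        return "main"
    if cache_free_blocks > 0:
        return "cache"
    return "main"   # fall back when the cache has no room

print(route_write(128, sequential=True,  cache_free_blocks=5, main_open_block=True))  # main
print(route_write(8,   sequential=False, cache_free_blocks=5, main_open_block=True))  # cache
```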