Abstract:
A memory device includes a plurality of NAND flash chips, a dynamic random access memory (DRAM) portion in data communication with the NAND flash chips, and a controller. Each NAND flash chip has a first storage capacity and includes a plurality of memory sections, each memory section including a plurality of pages. The DRAM portion has a second storage capacity that is at least as large as the first storage capacity. The controller is configured to select one of the NAND flash chips as a currently selected NAND flash chip for writing data, copy all valid pages in the currently selected NAND flash chip into the DRAM portion, and, in response to a write request to a logical memory location mapped to a particular physical location in one of the NAND flash chips, allocate the currently selected NAND flash chip for writing to a particular page that includes the particular physical location.
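The control flow described in this abstract can be pictured with a small simulation. The sketch below is not the patented controller; it uses hypothetical Python names (FlashChip, MemoryController, select_chip, handle_write) and only models selecting a chip, staging its valid pages in a DRAM copy, and redirecting incoming writes to the currently selected chip.

```python
# Hypothetical simulation of the controller behavior described in the abstract.
# All class and method names are illustrative, not taken from the patent.

class FlashChip:
    def __init__(self, num_pages):
        self.pages = [None] * num_pages          # page contents
        self.valid = [False] * num_pages         # valid-page bitmap


class MemoryController:
    def __init__(self, chips, dram_capacity_pages):
        # DRAM capacity is at least one chip's capacity, per the abstract.
        assert dram_capacity_pages >= len(chips[0].pages)
        self.chips = chips
        self.dram = {}                           # page index -> staged copy
        self.current = None                      # currently selected chip index

    def select_chip(self, chip_index):
        """Select a chip for writing and stage all of its valid pages in DRAM."""
        self.current = chip_index
        chip = self.chips[chip_index]
        self.dram = {i: chip.pages[i] for i, v in enumerate(chip.valid) if v}

    def handle_write(self, chip_index, page_index, data):
        """Redirect a write aimed at any chip to the currently selected chip."""
        if self.current is None:
            self.select_chip(chip_index)
        target = self.chips[self.current]
        target.pages[page_index] = data
        target.valid[page_index] = True
        self.dram[page_index] = data             # keep the DRAM copy coherent


chips = [FlashChip(num_pages=4) for _ in range(2)]
ctrl = MemoryController(chips, dram_capacity_pages=4)
ctrl.select_chip(0)
ctrl.handle_write(chip_index=1, page_index=2, data=b"payload")
```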
Abstract:
IOMMU map-in may be overlapped with second tier memory access, such that the two operations are at least partially performed at the same time. For example, when a second tier memory read into a storage device controller internal buffer is initiated, an IOMMU mapping may be built simultaneously. To achieve this overlap, a two-stage command buffer is used. In a first stage, content is read from a second tier memory address into the storage device controller internal buffer. In a second stage, the internal buffer is written into the DRAM physical address.
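A rough way to picture the two-stage command buffer and the overlap is sketched below. The helper functions (read_second_tier, build_iommu_mapping, write_dram) are hypothetical stand-ins for hardware operations; the sketch only illustrates that the IOMMU map-in proceeds concurrently with the stage-one read, and that stage two runs once both have completed.

```python
# Illustrative only: the stage-1 read and the IOMMU map-in run concurrently,
# then stage 2 writes the internal buffer out to the mapped DRAM physical
# address. All helpers are hypothetical stand-ins for hardware operations.
from concurrent.futures import ThreadPoolExecutor
import time


def read_second_tier(src_addr, length):
    """Stage 1: read second tier memory into the controller's internal buffer."""
    time.sleep(0.01)                       # stand-in for device latency
    return bytes(length)                   # pretend buffer contents


def build_iommu_mapping(dram_phys_addr, length):
    """Build the IOMMU map-in for the destination while stage 1 is in flight."""
    time.sleep(0.01)                       # stand-in for mapping cost
    return ("iova", dram_phys_addr, length)


def write_dram(mapping, buffer):
    """Stage 2: write the internal buffer to the mapped DRAM physical address."""
    return len(buffer)


def two_stage_transfer(src_addr, dram_phys_addr, length):
    with ThreadPoolExecutor(max_workers=2) as pool:
        read_future = pool.submit(read_second_tier, src_addr, length)
        map_future = pool.submit(build_iommu_mapping, dram_phys_addr, length)
        buffer = read_future.result()      # stage 1 complete
        mapping = map_future.result()      # map-in complete
    return write_dram(mapping, buffer)     # stage 2


two_stage_transfer(src_addr=0x1000, dram_phys_addr=0x8000, length=4096)
```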
Abstract:
A data storage device includes a plurality of flash memory devices. A memory controller is configured to receive a request from a host computing device to write a first logical block of application data to the data storage device, write the first logical block to a data buffer, wherein a size of the data buffer is larger than a logical block such that the data buffer may store multiple logical blocks, write one or more logical blocks of garbage-collected data to the data buffer, and write the logical blocks in the data buffer to the data storage device when the data buffer becomes full. The data buffer written to the data storage device includes at least one logical block of application data and at least one logical block of garbage-collected data. In an alternative implementation, garbage-collected data may be written to the data buffer upon expiration of a timer.
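The shared write buffer can be modeled as below. The buffer size, timer value, and WriteBuffer class are illustrative assumptions, not details from the abstract; the sketch shows host blocks and garbage-collected blocks accumulating in one buffer that is flushed as a unit when it fills, with the timer-based flush of the alternative implementation included as a separate method.

```python
# Hypothetical model of the shared write buffer: application and
# garbage-collected logical blocks accumulate together and are flushed as
# one unit when the buffer fills, or when a timer expires.
import time

BUFFER_CAPACITY_BLOCKS = 4                 # illustrative size
FLUSH_TIMEOUT_SECONDS = 5.0                # illustrative timer


class WriteBuffer:
    def __init__(self):
        self.blocks = []                   # list of (source, logical_block)
        self.last_flush = time.monotonic()

    def add(self, source, logical_block):
        self.blocks.append((source, logical_block))
        if len(self.blocks) >= BUFFER_CAPACITY_BLOCKS:
            self.flush()

    def maybe_flush_on_timer(self):
        if self.blocks and time.monotonic() - self.last_flush >= FLUSH_TIMEOUT_SECONDS:
            self.flush()

    def flush(self):
        # One flash write containing both application and GC blocks.
        print("flushing:", [src for src, _ in self.blocks])
        self.blocks.clear()
        self.last_flush = time.monotonic()


buf = WriteBuffer()
buf.add("host", b"app block 0")
buf.add("gc", b"relocated block 7")
buf.add("host", b"app block 1")
buf.add("gc", b"relocated block 9")        # fourth block triggers the flush
```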
Abstract:
An apparatus includes a host device and a data storage device. The host device is configured to store a first translation map for converting a logical sector to a logical erase unit. The data storage device includes a plurality of flash memory devices and a memory controller operationally coupled with the flash memory devices, each of the flash memory devices being arranged into a plurality of erase units, each of the erase units having a plurality of pages for storing data. The memory controller is configured to receive a second translation map from the host device, the second translation map for converting a logical erase unit to a physical erase unit within the flash memory devices, and store the second translation map in a memory module on the data storage device.
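The two-level translation path can be summarized in a few lines of Python. The maps, geometry constant, and resolve function below are hypothetical; they only illustrate the host-side step (logical sector to logical erase unit) followed by the device-side step (logical erase unit to physical erase unit).

```python
# Illustrative split of the translation path: the host keeps the
# logical-sector -> logical-erase-unit map, while the storage device keeps
# the logical-erase-unit -> physical-erase-unit map it received from the host.
SECTORS_PER_ERASE_UNIT = 256               # illustrative geometry

# First translation map (held on the host).
host_map = {0: 10, 1: 11, 2: 12}           # sector group -> logical erase unit

# Second translation map (sent to, and stored on, the data storage device).
device_map = {10: (0, 3), 11: (1, 7), 12: (2, 0)}   # -> (flash device, physical erase unit)


def resolve(logical_sector):
    logical_eu = host_map[logical_sector // SECTORS_PER_ERASE_UNIT]   # host-side step
    flash_device, physical_eu = device_map[logical_eu]                # device-side step
    offset = logical_sector % SECTORS_PER_ERASE_UNIT
    return flash_device, physical_eu, offset


print(resolve(300))   # -> (1, 7, 44)
```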
Abstract:
A method includes deploying non-volatile random access memory (NVRAM) coupled to a processor or central processing unit (CPU) core of a computing device as a peripheral device via an input/output (I/O) bus, and providing an NVRAM application programming interface (API) for the CPU core to conduct NVRAM read and write operations. Providing the NVRAM API includes allocating a single memory buffer per command to hold data transferred to or from the NVRAM. The method includes configuring the processor in conjunction with the NVRAM API to set up command queues inside the host Memory Mapped Input Output (MMIO) space.
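One possible shape of such an API is sketched below, using Python stand-ins rather than a real driver: each command carries exactly one data buffer, and commands are posted to a queue object standing in for a submission queue that a real implementation would place in MMIO space.

```python
# Hypothetical sketch of the NVRAM API described above: one memory buffer per
# command, and a command queue standing in for a queue located in MMIO space.
from collections import deque
from dataclasses import dataclass


@dataclass
class NvramCommand:
    opcode: str                 # "read" or "write"
    nvram_addr: int             # target address in NVRAM
    buffer: bytearray           # the single per-command data buffer


class CommandQueue:
    """Stand-in for a submission queue that would live in MMIO space."""
    def __init__(self):
        self.queue = deque()

    def submit(self, cmd: NvramCommand):
        self.queue.append(cmd)

    def process(self, nvram: bytearray):
        while self.queue:
            cmd = self.queue.popleft()
            n = len(cmd.buffer)
            if cmd.opcode == "write":
                nvram[cmd.nvram_addr:cmd.nvram_addr + n] = cmd.buffer
            else:  # read
                cmd.buffer[:] = nvram[cmd.nvram_addr:cmd.nvram_addr + n]


nvram = bytearray(1024)
q = CommandQueue()
q.submit(NvramCommand("write", 0, bytearray(b"hello")))
readback = bytearray(5)
q.submit(NvramCommand("read", 0, readback))
q.process(nvram)
print(readback)   # bytearray(b'hello')
```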