Abstract:
A memory includes non-volatile memory devices, each of which has multiple non-volatile memory cells. A write controller streams bits to the memory devices in groups of N bits using a write data channel having write bus drivers, receivers, and a write bus topology that take advantage of high-speed signaling to optimize the speed of writing to the memory devices. Consecutive groups of bits are written to consecutive memory cells within respective memory devices. A self-referenced read controller reads bits from the memory devices using a read channel having read drivers, receivers, and a read bus topology that impose no design requirements for high-speed or low-latency data transmission.
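A minimal Python sketch of the write path described above, under simplifying assumptions: the MemoryDevice and WriteController classes, the round-robin distribution across devices, and the example group size are illustrative choices, not details from the abstract. It only shows how consecutive groups of N bits land in consecutive cells within each device.

```python
# Hypothetical software model of the write controller; not the patented hardware.

class MemoryDevice:
    def __init__(self, num_cells):
        self.cells = [None] * num_cells   # non-volatile cells, modeled as a list
        self.next_cell = 0                # next consecutive cell to write

    def write_group(self, bits):
        self.cells[self.next_cell] = bits # one group of N bits per cell position
        self.next_cell += 1

class WriteController:
    def __init__(self, devices, group_size):
        self.devices = devices
        self.n = group_size

    def stream(self, bitstream):
        # Split the bitstream into groups of N bits and distribute them across
        # devices; within each device, consecutive groups fill consecutive cells.
        groups = [bitstream[i:i + self.n] for i in range(0, len(bitstream), self.n)]
        for i, group in enumerate(groups):
            self.devices[i % len(self.devices)].write_group(group)

devices = [MemoryDevice(num_cells=8) for _ in range(4)]
WriteController(devices, group_size=4).stream("1011001110001111")
```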
Abstract:
The present invention provides a storage subsystem that realizes a backend network in which SSDs arranged in a small number of rows can be shared among controllers, and to which a large number of HDDs can be attached, thereby enhancing storage performance. In the topology of the backend network, the respective controllers 102 and 152 and the enclosure expanders are connected via top expanders 110 and 160; the top expanders 110 and 160 are connected via a central expander 140; expanders 119, 120 and 121 for connecting SSDs to the top expander 110 are connected in parallel with one another, as are expanders 169, 170 and 171 for connecting SSDs to the top expander 160; and expanders 122, 123 and 124 for connecting HDDs to the top expander 110 are connected in series, as are expanders 172, 173 and 174 for connecting HDDs to the top expander 160.
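The topology can be sketched as a plain adjacency structure. The snippet below is illustrative only; the node names mirror the reference numerals in the abstract, and the connect/chain helpers are assumptions about how one might model the parallel SSD-expander and serial HDD-expander wiring in software.

```python
from collections import defaultdict

topology = defaultdict(set)

def connect(a, b):
    topology[a].add(b)
    topology[b].add(a)

def chain(top, expanders):
    # HDD expanders hang off the top expander as a series chain.
    prev = top
    for exp in expanders:
        connect(prev, exp)
        prev = exp

# Controllers reach their top expanders; top expanders meet at the central expander.
connect("controller_102", "top_expander_110")
connect("controller_152", "top_expander_160")
connect("top_expander_110", "central_expander_140")
connect("top_expander_160", "central_expander_140")

# SSD expanders: each attaches directly (in parallel) to its top expander.
for exp in ("expander_119", "expander_120", "expander_121"):
    connect(exp, "top_expander_110")
for exp in ("expander_169", "expander_170", "expander_171"):
    connect(exp, "top_expander_160")

# HDD expanders: connected in series behind each top expander.
chain("top_expander_110", ["expander_122", "expander_123", "expander_124"])
chain("top_expander_160", ["expander_172", "expander_173", "expander_174"])
```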
Abstract:
The interface between a memory device and a device requesting data from the memory device ensures that the requested data are read from the memory device and forwarded to the requesting device. The interface is distinguished in that, if there are no further requests from the requesting device after data have been read from the memory device, it modifies the address previously used for the read and arranges for the data stored at that address in the memory device to be read, and/or in that, at a predefined time after a read operation is initiated, it accepts the data output by the memory device and/or starts the next memory access.
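A hedged Python sketch of that behavior, assuming an increment-by-one address modification and a fixed access delay; neither policy is specified in the abstract, and the MemoryInterface class and its method names are hypothetical.

```python
import time

class MemoryInterface:
    def __init__(self, memory, access_delay_s=0.001):
        self.memory = memory              # dict-like: address -> data
        self.access_delay_s = access_delay_s
        self.last_address = None
        self.prefetched = {}              # address -> speculatively read data

    def read(self, address):
        if address in self.prefetched:    # a speculative read already satisfied it
            data = self.prefetched.pop(address)
        else:
            data = self._access(address)
        self.last_address = address
        return data

    def idle(self):
        # No further requests pending: modify the previously used address
        # (here, +1) and read the data stored there ahead of any request.
        if self.last_address is not None:
            next_address = self.last_address + 1
            self.prefetched[next_address] = self._access(next_address)

    def _access(self, address):
        # Accept the memory device's output a predefined time after
        # initiating the read operation.
        time.sleep(self.access_delay_s)
        return self.memory.get(address)
```

The point of the speculative read is that it fills otherwise idle time on the memory side, so a later sequential request can be served from the prefetched data without a second full access.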
Abstract:
A memory sub-system includes a main memory, a storage device, a control unit, and a common interface unit. The control unit is configured to control the main memory and the storage device. The common interface unit is operatively coupled to the control unit, and is configured to access the main memory and the storage device through the control unit in response to a request received from a host.
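A small sketch of the routing the abstract describes, under assumed request fields and class names (ControlUnit, CommonInterfaceUnit); the real units are hardware, so this only models the access path from a host request through the control unit to either the main memory or the storage device.

```python
class ControlUnit:
    def __init__(self, main_memory, storage_device):
        self.targets = {"memory": main_memory, "storage": storage_device}

    def access(self, target, op, address, data=None):
        device = self.targets[target]
        if op == "write":
            device[address] = data
            return None
        return device.get(address)

class CommonInterfaceUnit:
    def __init__(self, control_unit):
        self.control_unit = control_unit

    def handle(self, request):
        # A single host-facing entry point for both main-memory and
        # storage accesses, forwarded through the control unit.
        return self.control_unit.access(
            request["target"], request["op"],
            request["address"], request.get("data"))

cif = CommonInterfaceUnit(ControlUnit(main_memory={}, storage_device={}))
cif.handle({"target": "memory", "op": "write", "address": 0x10, "data": 0xAB})
assert cif.handle({"target": "memory", "op": "read", "address": 0x10}) == 0xAB
```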
Abstract:
A media distribution system is provided in which the primary means of transporting digital media is a device whose housing is shaped like an optical disc and which is insertable into various current and future optical disc drive devices. Media travels from different digital sources, such as a personal media library and other networked resources, to embedded memory on the optical-disc-shaped device via a capable personal computer or electronic device. The media can then be presented in the most appropriate format on a number of different types of current and legacy devices with optical drives, such as CD audio devices and DVD players.
Abstract:
One or both of read and write accesses to a fabric-attached memory module via a fabric interconnect are monitored. In one or more implementations, offloading of one or more tasks accessing the fabric-attached memory module to a processor of a routing system associated with the fabric-attached memory module is initiated based on the read and write accesses to the fabric-attached memory module. Additionally or alternatively, replicating memory of the fabric-attached memory module to a cache memory of a computing node in the disaggregated memory system executing one or more tasks of a host application is initiated based on the write accesses to the fabric-attached memory module.
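One way the monitoring and the two policies could be modeled is sketched below; the counters, thresholds, and the FabricMemoryMonitor class name are assumptions for illustration, not the implementation.

```python
from collections import Counter

class FabricMemoryMonitor:
    def __init__(self, offload_threshold=10_000, replicate_write_limit=100):
        self.reads = Counter()    # (task, module) -> read count over the fabric
        self.writes = Counter()   # (task, module) -> write count over the fabric
        self.offload_threshold = offload_threshold
        self.replicate_write_limit = replicate_write_limit

    def record(self, task, module, is_write):
        (self.writes if is_write else self.reads)[(task, module)] += 1

    def should_offload(self, task, module):
        # Decision based on read and write accesses: heavy fabric traffic
        # suggests running the task on the routing system's processor
        # next to the fabric-attached memory module.
        total = self.reads[(task, module)] + self.writes[(task, module)]
        return total >= self.offload_threshold

    def should_replicate(self, task, module):
        # Decision based on write accesses: few writes suggest replicating
        # the module's memory to the computing node's cache memory.
        return self.writes[(task, module)] <= self.replicate_write_limit
```

Which direction each threshold cuts is an assumption; the abstract only states that the decisions are based on the monitored read and write accesses.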
Abstract:
A memory system includes a memory device including a plurality of memory blocks, each memory block including memory cells capable of storing multi-bit data, and a controller configured to allocate the plurality of memory blocks to plural zoned namespaces input from an external device and to access a memory block allocated to the zoned namespace specified along with a data input/output request. In response to a first request input from the external device, the controller adjusts the number of bits of data stored in a memory cell included in a memory block allocated to at least one of the plural zoned namespaces while keeping the storage capacity of the at least one zoned namespace fixed.
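A hedged sketch of the capacity-fixing behavior, assuming one concrete bookkeeping scheme: the ZonedNamespaceController class, the CELLS_PER_BLOCK constant, and the resize-the-block-allocation approach are illustrative choices, not taken from the abstract.

```python
CELLS_PER_BLOCK = 1024

class ZonedNamespaceController:
    def __init__(self, free_blocks):
        self.free_blocks = free_blocks        # pool of unallocated block IDs
        self.namespaces = {}                  # name -> (blocks, bits_per_cell)

    def allocate(self, name, capacity_bits, bits_per_cell):
        blocks_needed = capacity_bits // (CELLS_PER_BLOCK * bits_per_cell)
        blocks = [self.free_blocks.pop() for _ in range(blocks_needed)]
        self.namespaces[name] = (blocks, bits_per_cell)

    def set_bits_per_cell(self, name, new_bits_per_cell):
        # Adjust the number of bits stored per cell (e.g. TLC -> SLC) for the
        # namespace, but keep its storage capacity fixed by resizing the
        # block allocation to compensate.
        blocks, old_bits = self.namespaces[name]
        capacity_bits = len(blocks) * CELLS_PER_BLOCK * old_bits
        blocks_needed = capacity_bits // (CELLS_PER_BLOCK * new_bits_per_cell)
        while len(blocks) < blocks_needed:
            blocks.append(self.free_blocks.pop())
        while len(blocks) > blocks_needed:
            self.free_blocks.append(blocks.pop())
        self.namespaces[name] = (blocks, new_bits_per_cell)
```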
Abstract:
Techniques are provided for implementing a garbage collection process and a predictive read-ahead mechanism that prefetches keys into memory to improve the efficiency and speed of the garbage collection process. A log structured merge tree is used to store keys of key-value pairs within a key-value store. If a key is no longer referenced by any worker nodes of a distributed storage architecture, then the key can be freed to store other data. Accordingly, garbage collection is performed to identify and free unused keys. The speed and efficiency of garbage collection are improved by dynamically adjusting the amount and rate at which keys are prefetched from disk and cached into faster memory for processing by the garbage collection process.
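A minimal sketch of the adaptive read-ahead idea, with a Python iterator standing in for the on-disk log structured merge tree and an invented grow-on-stall heuristic in place of whatever adjustment rule the technique actually uses.

```python
from collections import deque

def garbage_collect(lsm_keys, is_referenced, initial_batch=64,
                    min_batch=16, max_batch=1024):
    """Free unreferenced keys, reading ahead from the LSM tree in batches
    whose size adapts to how quickly garbage collection drains the cache."""
    keys = iter(lsm_keys)
    cache = deque()
    batch = initial_batch
    freed = []
    exhausted = False
    while not exhausted or cache:
        if len(cache) < min_batch and not exhausted:
            # Cache is nearly empty: prefetch the next batch of keys ahead of
            # the collector, and grow the batch so it stalls less often.
            for _ in range(batch):
                try:
                    cache.append(next(keys))
                except StopIteration:
                    exhausted = True
                    break
            else:
                batch = min(max_batch, batch * 2)
        if cache:
            key = cache.popleft()
            if not is_referenced(key):     # no worker node references it
                freed.append(key)          # key can be freed to store other data
    return freed
```

For example, `garbage_collect(["k1", "k2", "k3"], is_referenced=lambda k: k == "k2")` would return `["k1", "k3"]`.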
Abstract:
The present disclosure includes apparatuses, methods, and systems for using a local ledger block chain for secure updates. An embodiment includes a memory and circuitry configured to receive a global block to be added to a local ledger block chain for validating an update for data stored in the memory. The global block to be added to the local ledger block chain includes a cryptographic hash of a current local block in the local ledger block chain and a cryptographic hash of the data stored in the memory to be updated, and the current local block in the local ledger block chain has a digital signature associated therewith that indicates the global block is from an authorized entity.
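A hedged sketch of the validation flow only. The field names, the use of SHA-256, and the HMAC tag standing in for the digital signature are illustrative assumptions, and verifying a signature on the received global block is one possible reading of the abstract's wording rather than the patented scheme.

```python
import hashlib
import hmac

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def validate_global_block(global_block, current_local_block, data_in_memory,
                          authorized_key: bytes) -> bool:
    # The global block must carry the cryptographic hash of the current
    # local block in the local ledger block chain...
    if global_block["prev_local_hash"] != sha256(current_local_block["raw"]):
        return False
    # ...and the cryptographic hash of the data stored in the memory
    # that the update targets.
    if global_block["data_hash"] != sha256(data_in_memory):
        return False
    # A signature check (modeled here as an HMAC tag) indicates the
    # global block is from an authorized entity.
    expected = hmac.new(authorized_key, global_block["raw"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, global_block["signature"])
```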