Abstract:
A memory system includes a nonvolatile memory electrically connected to a data bus, a DRAM electrically connected to the data bus, and a memory controller configured to drive the DRAM as a cache memory and the nonvolatile memory as a main memory and to synchronize data of a cache line with data of the nonvolatile memory in units of cache units based on a dirty flag. The DRAM is configured to load data of the cache line that caches data stored in the nonvolatile memory and to store the dirty flag, which indicates whether a cache unit is dirty, in units of cache units, where a size of each cache unit is smaller than a size of the cache line.
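As a minimal sketch of the per-cache-unit dirty tracking described above, assuming illustrative line and unit sizes and hypothetical structure and function names (cache_line, cache_write, cache_writeback, nvm_write), a write marks only the touched cache units dirty and a write-back synchronizes only those units with the nonvolatile memory:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sizes: a 2 KB cache line split into 64-byte cache units. */
#define CACHE_LINE_SIZE 2048
#define CACHE_UNIT_SIZE 64
#define UNITS_PER_LINE  (CACHE_LINE_SIZE / CACHE_UNIT_SIZE)

/* One DRAM-resident cache line caching a region of the nonvolatile memory,
 * with a dirty flag per cache unit rather than per line. */
struct cache_line {
    uint64_t nvm_base;               /* start of the cached region in the NVM */
    uint8_t  data[CACHE_LINE_SIZE];  /* line contents held in the DRAM        */
    bool     dirty[UNITS_PER_LINE];  /* one dirty flag per cache unit         */
};

/* Mark only the cache units actually touched by a write as dirty. */
static void cache_write(struct cache_line *line, size_t offset,
                        const void *src, size_t len)
{
    memcpy(&line->data[offset], src, len);
    for (size_t u = offset / CACHE_UNIT_SIZE;
         u <= (offset + len - 1) / CACHE_UNIT_SIZE; u++)
        line->dirty[u] = true;
}

/* Synchronize the line with the NVM in units of cache units: only dirty
 * units are written back, and their flags are cleared afterwards. */
static void cache_writeback(struct cache_line *line,
                            void (*nvm_write)(uint64_t, const void *, size_t))
{
    for (size_t u = 0; u < UNITS_PER_LINE; u++) {
        if (!line->dirty[u])
            continue;
        nvm_write(line->nvm_base + u * CACHE_UNIT_SIZE,
                  &line->data[u * CACHE_UNIT_SIZE], CACHE_UNIT_SIZE);
        line->dirty[u] = false;
    }
}

/* Demo stub standing in for the actual nonvolatile-memory write path. */
static void nvm_write_stub(uint64_t addr, const void *p, size_t n)
{
    (void)p;
    printf("write back %zu bytes to NVM address 0x%llx\n",
           n, (unsigned long long)addr);
}

int main(void)
{
    static struct cache_line line = { .nvm_base = 0x10000 };
    cache_write(&line, 100, "hello", 5);     /* dirties a single cache unit */
    cache_writeback(&line, nvm_write_stub);  /* writes back only that unit  */
    return 0;
}
```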
Abstract:
An electronic device includes a graphic processor and a memory device. The graphic processor includes an artificial neural network engine that trains an object recognition model using learning data and weights to provide a learned object recognition model. The memory device divides a feature vector into a first sub feature vector and a second sub feature vector, and performs a first calculation that applies the second sub feature vector and the weights to the learned object recognition model to provide a second object recognition result. The artificial neural network engine performs a second calculation that applies the first sub feature vector and the weights to the learned object recognition model to provide a first object recognition result, and provides the first object recognition result to the memory device. The second calculation is performed in parallel with the first calculation.
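A rough sketch of the parallel split, assuming the recognition step reduces to applying a weight vector to the feature vector; the two threads stand in for the neural network engine and the memory device computing their sub feature vectors concurrently, and every name below (feature, weights, partial_score) is illustrative rather than taken from the abstract:

```c
/* build: cc demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define DIM   8   /* illustrative feature-vector length           */
#define SPLIT 4   /* first SPLIT elements form the first sub vector */

static const float feature[DIM] = {0.1f, 0.4f, 0.2f, 0.9f, 0.3f, 0.7f, 0.5f, 0.6f};
static const float weights[DIM] = {1.0f, 0.5f, 0.25f, 2.0f, 1.5f, 0.75f, 0.1f, 0.9f};

struct slice { int begin, end; float result; };

/* Apply the learned weights to one sub feature vector. */
static void *partial_score(void *arg)
{
    struct slice *s = arg;
    s->result = 0.0f;
    for (int i = s->begin; i < s->end; i++)
        s->result += feature[i] * weights[i];
    return NULL;
}

int main(void)
{
    /* The "engine" handles the first sub feature vector (second calculation),
     * the "memory device" the second one (first calculation); here both are
     * plain threads running in parallel. */
    struct slice engine = {0, SPLIT, 0.0f};
    struct slice memdev = {SPLIT, DIM, 0.0f};
    pthread_t t;

    pthread_create(&t, NULL, partial_score, &engine);  /* second calculation */
    partial_score(&memdev);                            /* first calculation  */
    pthread_join(t, NULL);

    /* The engine's partial result is handed to the memory device, which
     * combines it with its own to form the final recognition score. */
    printf("combined score = %f\n", engine.result + memdev.result);
    return 0;
}
```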
Abstract:
An operation method of a memory controller, which is configured to control a memory module including a plurality of memory devices and at least one error correction code (ECC) device, is provided. The method includes reading a data set including user data stored in the plurality of memory devices and ECC data stored in the at least one ECC device, based on a read command and a first address, and writing uncorrectable data in a memory area, which is included in each of the plurality of memory devices and the at least one ECC device and corresponds to the first address, when an error of the user data is not corrected based on the ECC data.
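A schematic sketch of the read-then-poison flow, with a toy parity check standing in for the real ECC and with all sizes, names, and the marker pattern chosen purely for illustration; the point is that once the user data cannot be corrected, a recognizable uncorrectable-data pattern is written to the corresponding area of every data device and of the ECC device:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DATA_DEVICES 8            /* illustrative: 8 data devices + 1 ECC device */
#define BEAT_BYTES   8

/* One read: user data from the data devices plus ECC data from the ECC device. */
struct data_set {
    uint8_t user[DATA_DEVICES][BEAT_BYTES];
    uint8_t ecc[BEAT_BYTES];
};

/* Stand-in memory: one beat per device at a single address, for the demo. */
static uint8_t devices[DATA_DEVICES + 1][BEAT_BYTES];

static void read_data_set(struct data_set *ds)
{
    memcpy(ds->user, devices, sizeof ds->user);
    memcpy(ds->ecc, devices[DATA_DEVICES], sizeof ds->ecc);
}

/* Toy parity "ECC": the ECC beat must equal the XOR of the data beats.
 * Any mismatch is treated as an uncorrectable error in this sketch. */
static bool ecc_correct(const struct data_set *ds)
{
    for (int b = 0; b < BEAT_BYTES; b++) {
        uint8_t parity = 0;
        for (int d = 0; d < DATA_DEVICES; d++)
            parity ^= ds->user[d][b];
        if (parity != ds->ecc[b])
            return false;
    }
    return true;
}

/* Marker written back when correction fails, so later reads see the
 * location as poisoned rather than silently wrong. */
static const uint8_t UNCORRECTABLE_MARK[BEAT_BYTES] = {
    0xDE, 0xAD, 0xDE, 0xAD, 0xDE, 0xAD, 0xDE, 0xAD
};

static void handle_read(void)
{
    struct data_set ds;
    read_data_set(&ds);
    if (ecc_correct(&ds))
        return;                      /* no error, or the error was corrected */

    /* Uncorrectable: write the marker into the corresponding area of every
     * data device and of the ECC device. */
    for (int dev = 0; dev < DATA_DEVICES + 1; dev++)
        memcpy(devices[dev], UNCORRECTABLE_MARK, BEAT_BYTES);
}

int main(void)
{
    devices[0][0] = 0x01;            /* inject a mismatch against the ECC beat */
    handle_read();
    printf("device 0 after read: %02x %02x ...\n", devices[0][0], devices[0][1]);
    return 0;
}
```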
Abstract:
A memory system includes a processor that includes cores and a memory controller, and a first semiconductor memory module that communicates with the memory controller. The cores receive a call to perform a first exception handling in response to detection of a first error when the memory controller reads first data from the first semiconductor memory module. A first monarchy core among the cores performs the first exception handling, and the remaining cores return to the operations they were previously performing.
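A loose software analogue of the monarch/wait split, assuming the error call reaches every core and that a single atomic flag decides which core acts as the monarchy core; everything below (names, the election mechanism, the simulated loop) is an illustrative assumption, not the mechanism claimed above:

```c
#include <stdatomic.h>
#include <stdio.h>

/* First core to set this flag becomes the monarchy core for the current error. */
static atomic_flag monarch_claimed = ATOMIC_FLAG_INIT;

/* Called on every core when the memory controller reports the first error. */
static void exception_entry(int core_id)
{
    if (atomic_flag_test_and_set(&monarch_claimed)) {
        /* Not the monarch: return to the operation that was interrupted. */
        printf("core %d: returning to previous operation\n", core_id);
        return;
    }
    /* Monarch: perform the first exception handling on behalf of all cores. */
    printf("core %d: handling the memory error as monarchy core\n", core_id);
    /* ... collect error information, retry or isolate the failing address ...
     * The flag would be cleared once handling completes, before the next error. */
}

int main(void)
{
    for (int core = 0; core < 4; core++)   /* simulate the call arriving at each core */
        exception_entry(core);
    return 0;
}
```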
Abstract:
In a memory module including a memory device and a filter, the memory device operates with a clock of a reference frequency. The filter receives a multiplexed signal from a host and filters a signal of a frequency band from the multiplexed signal. The frequency band includes the reference frequency, and the signal of the frequency band is provided to the memory device.
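A purely illustrative digital stand-in for the band-extraction idea: a biquad band-pass centered on an assumed reference frequency is applied to a synthetic multiplexed signal (one in-band tone plus one out-of-band tone), so that mostly the in-band component survives. The actual filter in the module is a hardware component, and all frequencies and parameters below are assumptions.

```c
/* build: cc demo.c -lm */
#include <math.h>
#include <stdio.h>

#define PI          3.14159265358979323846
#define SAMPLE_RATE 48000.0     /* illustrative sample rate (Hz)           */
#define REF_FREQ     3000.0     /* assumed reference frequency             */
#define OTHER_FREQ  12000.0     /* second signal multiplexed onto the bus  */
#define N            4096

/* Biquad band-pass (RBJ cookbook) centered on the reference frequency;
 * returns the RMS of the filtered output. */
static double bandpass(const double *in, double *out, int n)
{
    double w0 = 2.0 * PI * REF_FREQ / SAMPLE_RATE, q = 2.0;
    double alpha = sin(w0) / (2.0 * q), a0 = 1.0 + alpha;
    double b0 = alpha / a0, b2 = -alpha / a0;
    double a1 = -2.0 * cos(w0) / a0, a2 = (1.0 - alpha) / a0;
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0, rms = 0;

    for (int i = 0; i < n; i++) {
        double y = b0 * in[i] + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = in[i];
        y2 = y1; y1 = y;
        out[i] = y;
        rms += y * y;
    }
    return sqrt(rms / n);
}

int main(void)
{
    static double mux[N], filtered[N];

    /* Multiplexed input: the reference-band tone plus an out-of-band tone. */
    for (int i = 0; i < N; i++)
        mux[i] = sin(2.0 * PI * REF_FREQ * i / SAMPLE_RATE)
               + sin(2.0 * PI * OTHER_FREQ * i / SAMPLE_RATE);

    /* Only the band around the reference frequency reaches the "memory device". */
    double out_rms = bandpass(mux, filtered, N);
    printf("output RMS %.3f (input RMS ~1.0; a single unit tone is ~0.707)\n",
           out_rms);
    return 0;
}
```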
Abstract:
The present disclosure relates to operation methods of a memory device including multiple rows, each including multiple memory cells. One example method includes receiving an active command for a first row from a memory controller, reading a first count from a per-row hammer tracking (PRHT) region of the first row, updating the first count to generate a first updated count, comparing the first updated count with one of a first threshold and a second threshold to generate a comparison result, where the first updated count is compared with the first threshold when the first row is adjacent to a given row and with the second threshold when it is not, outputting a target row address based on the comparison result, and performing a row hammer mitigation operation on a row corresponding to the target row address.
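A compressed sketch of the counting flow, assuming a software model in which the PRHT region is just an array of counters, the two thresholds are small fixed constants, and adjacency to the given row is a simple index check; mapping the comparison result to a target row address (and the actual mitigation refresh) is simplified to a single call:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_ROWS       16   /* illustrative row count                        */
#define THRESHOLD_NEAR  4   /* first threshold: rows adjacent to the given row */
#define THRESHOLD_FAR   8   /* second threshold: all other rows              */

/* Per-row activation counts, standing in for the PRHT region of each row. */
static uint32_t prht_count[NUM_ROWS];

/* Mitigation hook; a real device would refresh the physical victim rows. */
static void row_hammer_mitigate(int target_row)
{
    printf("mitigating around row %d\n", target_row);
    prht_count[target_row] = 0;            /* reset the count after mitigation */
}

/* Handle one active command for first_row; given_row is the row whose
 * neighbors are held to the stricter (lower) threshold. */
static void on_active(int first_row, int given_row)
{
    uint32_t count = prht_count[first_row];        /* read the first count    */
    uint32_t updated = count + 1;                  /* update the count        */
    prht_count[first_row] = updated;

    bool adjacent = (first_row == given_row - 1) || (first_row == given_row + 1);
    uint32_t threshold = adjacent ? THRESHOLD_NEAR : THRESHOLD_FAR;

    if (updated >= threshold)                      /* comparison result       */
        row_hammer_mitigate(first_row);            /* output target row, act  */
}

int main(void)
{
    for (int i = 0; i < 10; i++)
        on_active(6, 7);        /* row 6 is adjacent to the given row 7 */
    return 0;
}
```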
Abstract:
A non-volatile memory device having respective parallel queues is disclosed. The non-volatile memory device includes a plurality of concurrently addressable units and has a respective queue for each of the concurrently addressable units. While a first command is executed in a part of the concurrently addressable units, the device transfers a second command to the respective queues of the remaining concurrently addressable units and executes the second command in the remaining concurrently addressable units. Accordingly, the non-volatile memory device may access the concurrently addressable units in parallel and may operate at high speed.
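A minimal sketch of per-unit command queues, assuming a simple ring buffer per concurrently addressable unit and a dispatcher that skips units still busy with a first command while handing a second command to the queues of idle units; all structures and names are hypothetical:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_UNITS 4   /* illustrative number of concurrently addressable units */
#define QUEUE_LEN 8

struct command { int opcode; unsigned addr; };

/* One command queue per concurrently addressable unit. */
struct unit_queue {
    struct command slots[QUEUE_LEN];
    int head, tail;
    bool busy;        /* true while the unit is executing a command */
};

static struct unit_queue units[NUM_UNITS];

static bool enqueue(int unit, struct command cmd)
{
    struct unit_queue *q = &units[unit];
    int next = (q->tail + 1) % QUEUE_LEN;
    if (next == q->head)
        return false;                    /* queue full */
    q->slots[q->tail] = cmd;
    q->tail = next;
    return true;
}

/* While some units are busy with a first command, later commands can still
 * be transferred to the queues of the remaining units and executed there. */
static void dispatch(void)
{
    for (int u = 0; u < NUM_UNITS; u++) {
        struct unit_queue *q = &units[u];
        if (q->busy || q->head == q->tail)
            continue;                    /* skip busy or empty units */
        struct command cmd = q->slots[q->head];
        q->head = (q->head + 1) % QUEUE_LEN;
        q->busy = true;                  /* unit starts executing in parallel */
        printf("unit %d executing opcode %d at 0x%x\n", u, cmd.opcode, cmd.addr);
    }
}

int main(void)
{
    units[0].busy = true;                       /* unit 0 still runs a first command */
    enqueue(1, (struct command){1, 0x100});     /* second command goes to unit 1 ... */
    enqueue(2, (struct command){1, 0x200});     /* ... and to unit 2                 */
    dispatch();                                 /* idle units execute concurrently   */
    return 0;
}
```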
Abstract:
A semiconductor memory system includes a memory device including plural banks, and a memory controller that generates, based on a first request, an offset address for a first bank among the plural banks and a command indicating the offset address. According to the command, the memory device generates a first address by adding the offset address to a base address for the first bank and performs a memory operation on the first address of the first bank.
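A small sketch of the base-plus-offset addressing, assuming illustrative per-bank base addresses held on the device side; the controller's command carries only the bank and the offset, and the device forms the full address:

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_BANKS 4

/* Per-bank base addresses held in the memory device (illustrative values). */
static const uint32_t bank_base[NUM_BANKS] = {0x0000, 0x4000, 0x8000, 0xC000};

/* The controller sends only the bank, an offset, and the operation; the
 * device forms the full address by adding the offset to the bank's base. */
static uint32_t decode_address(int bank, uint32_t offset)
{
    return bank_base[bank] + offset;
}

int main(void)
{
    int bank = 1;               /* first bank selected by the request        */
    uint32_t offset = 0x0120;   /* offset address carried by the command     */
    printf("operation at 0x%04x in bank %d\n", decode_address(bank, offset), bank);
    return 0;
}
```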