Abstract:
A memory bus with a first bus segment coupled to a memory controller, which includes control logic, and to a first memory device; a second bus segment coupled to a second memory device; and a switch between the first bus segment and the second bus segment. The control logic outputs control information to the switch to selectively decouple the first bus segment from the second bus segment, shortening the memory bus to enable data transfer with respect to the first memory device at a first data rate. Additionally, the control logic may output control information to the switch to selectively couple the first bus segment to the second bus segment, increasing the length of the memory bus to enable data transfer with respect to the second memory device at a second data rate that is slower than the first data rate.
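A minimal sketch in Python of the described switch behavior, with illustrative data rates and device placement (none of these values appear in the abstract):

```python
class SegmentedBus:
    """Two bus segments joined by a switch; a shorter bus runs faster."""

    FAST_RATE = 800   # assumed data rate (MT/s) with only the first segment live
    SLOW_RATE = 400   # assumed data rate (MT/s) with both segments coupled

    def __init__(self):
        self.switch_closed = False  # decoupled: only the first segment is live

    def set_switch(self, closed: bool) -> None:
        self.switch_closed = closed

    def transfer(self, device: int) -> int:
        """Return the data rate used for a transfer to the given device."""
        if device == 1:
            # Device 1 sits on the first segment; decouple for a short, fast bus.
            self.set_switch(False)
            return self.FAST_RATE
        # Device 2 sits on the second segment; couple the segments and accept
        # the longer electrical path and the slower rate.
        self.set_switch(True)
        return self.SLOW_RATE

bus = SegmentedBus()
print(bus.transfer(1))  # 800: first memory device, short bus
print(bus.transfer(2))  # 400: second memory device, extended bus
```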
Abstract:
The memory module includes front and back faces. Multiple devices are disposed on each of the faces. A first control line serially connects a first group of devices on both the front and back faces so that the first group of devices commonly contribute multiple bits to a data bus. A second control line serially connects a second group of devices on both the front and back faces so that the second group of devices commonly contribute multiple bits to a data bus.
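A minimal sketch, with a hypothetical four-device-per-group layout, of control lines chaining devices across both faces so each group contributes bits to the data bus:

```python
FRONT, BACK = "front", "back"

# Each control line serially connects a group spanning both faces.
group_a = [(FRONT, 0), (BACK, 0), (FRONT, 1), (BACK, 1)]
group_b = [(FRONT, 2), (BACK, 2), (FRONT, 3), (BACK, 3)]

def read_group(group, bits_per_device=4):
    """All devices on one control line contribute their bits to the data bus."""
    word = []
    for face, index in group:
        # Placeholder: each device drives its own slice of the bus.
        word.append(f"{face}[{index}]:{bits_per_device}b")
    return word

print(read_group(group_a))  # one group's combined contribution to the bus
```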
Abstract:
A memory system configured to provide thermal regulation of a plurality of memory devices is disclosed. The memory system comprises a memory module having a plurality of memory devices coupled to a bus, and a controller coupled to the bus. The controller determines an operating temperature (actual or estimated) of a memory device. Based on the determined operating temperature, the controller is further operable to manipulate the operation of the memory system.
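A minimal sketch of a temperature-driven policy; the thresholds and the specific actions are assumptions, not taken from the abstract:

```python
THROTTLE_C = 85.0   # hypothetical throttle threshold
SHUTDOWN_C = 95.0   # hypothetical emergency threshold

def regulate(temp_c: float) -> str:
    """Map a determined (actual or estimated) device temperature to an action."""
    if temp_c >= SHUTDOWN_C:
        return "suspend accesses to the hot device"
    if temp_c >= THROTTLE_C:
        return "reduce command issue rate"
    return "operate normally"

for t in (70.0, 88.0, 97.0):
    print(t, "->", regulate(t))
```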
Abstract:
The neural engine (20) is a hardware implementation of a neural network for use in real-time systems. The neural engine (20) includes a control circuit (26) and one or more multiply/accumulate circuits (28). Each multiply/accumulate circuit (28) includes a parallel/serial arrangement of multiple multiplier/accumulators (84) interconnected with weight storage elements (80) to yield multiple neural weightings and sums in a single clock cycle. A neural processing language is used to program the neural engine (20) through a conventional host personal computer (22). The parallel processing permits very high processing speeds, enabling real-time pattern classification.
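A minimal sketch of the parallel multiply/accumulate idea, where one function call stands in for one clock cycle; the array width and values are illustrative:

```python
def mac_cycle(inputs, weights, accumulators):
    """One 'clock cycle': every multiplier/accumulator multiplies its input by
    its stored weight and adds the product to its accumulator, in parallel."""
    return [acc + x * w
            for acc, x, w in zip(accumulators, inputs, weights)]

weights = [0.5, -1.0, 2.0]   # one weight storage element per MAC
accs = [0.0, 0.0, 0.0]
accs = mac_cycle([1.0, 2.0, 3.0], weights, accs)
print(accs)  # [0.5, -2.0, 6.0] -- three weightings and sums in one cycle
```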
Abstract:
A memory controller monitors requests from one or more computer subsystems and issues one or more prefetch commands if the memory controller detects that the memory system is idle after a period of activity, or if a prefetch buffer read hit occurs. In some embodiments, results of prefetching operations are stored in a prefetch buffer configured to provide an automatic aging mechanism, which evicts prefetched data from time to time. The prefetched data in the prefetch buffer is released and sent back to the requester in order with respect to previous memory access requests.
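A minimal sketch of a prefetch buffer with an automatic aging mechanism; the tick-based policy and the MAX_AGE limit are invented for illustration:

```python
MAX_AGE = 4  # hypothetical aging limit, in controller ticks

class PrefetchBuffer:
    def __init__(self):
        self.entries = {}  # address -> (data, age)

    def insert(self, addr, data):
        self.entries[addr] = (data, 0)

    def tick(self):
        """Advance the aging mechanism and evict stale prefetched data."""
        self.entries = {a: (d, age + 1)
                        for a, (d, age) in self.entries.items()
                        if age + 1 <= MAX_AGE}

    def read(self, addr):
        """A buffer read hit returns the data; a hit could also trigger
        further prefetch commands, omitted here."""
        hit = self.entries.pop(addr, None)
        return None if hit is None else hit[0]

buf = PrefetchBuffer()
buf.insert(0x100, b"line")
for _ in range(5):
    buf.tick()
print(buf.read(0x100))  # None: the entry aged out and was evicted
```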
Abstract:
A memory controller controls access to, and the power state of, a plurality of dynamic memory devices. A cache in the memory controller stores entries that indicate a current power state for a subset of the dynamic memory devices. Device state lookup logic responds to a memory access request by retrieving first information from an entry, if any, in the cache corresponding to a device address in the memory access request. The device state lookup logic generates a miss signal when the cache has no entry corresponding to the device address. It also retrieves second information indicating whether the cache is currently storing a maximum allowed number of entries for devices in a predefined mid-power state. Additional logic converts the first and second information and miss signal into at least one command selection signal and at least one update control signal. Cache update logic updates information stored in the cache in accordance with the at least one update control signal. Command issue circuitry issues power state commands and access commands to the dynamic memory devices in accordance with the at least one command selection signal and the address in the memory access request.
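A minimal sketch of the power-state cache lookup; the state names, the mid-power cap, and the command mapping are assumptions:

```python
MAX_MID_POWER = 2  # hypothetical cap on devices held in the mid-power state

class PowerStateCache:
    def __init__(self):
        self.states = {}  # device address -> "active" | "standby" | ...

    def lookup(self, device_addr):
        """Return (state, mid_power_full); state is None on a miss."""
        mid_full = sum(s == "standby"
                       for s in self.states.values()) >= MAX_MID_POWER
        return self.states.get(device_addr), mid_full

    def update(self, device_addr, state):
        self.states[device_addr] = state

cache = PowerStateCache()
cache.update(0, "standby")
cache.update(1, "standby")
state, mid_full = cache.lookup(2)
if state is None:  # miss signal
    # Convert the miss + mid-power-full info into a command selection:
    cmd = "power up directly to active" if mid_full else "move to standby"
    print(cmd)     # "power up directly to active"
```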
Abstract:
Systems and methods for reducing heat flux in memory systems are described. In various embodiments, heat flux reductions are achieved by manipulating the device IDs of the individual memory devices that make up a memory module. Through the various described techniques, the per-face heat flux can be reduced. Further, in some embodiments, reductions in heat flux are achieved by providing control lines that operably connect memory devices on different faces of a memory module.
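A minimal sketch of one way device-ID manipulation could spread heat across faces, alternating consecutive IDs between front and back; the numbering scheme is illustrative:

```python
def assign_ids(devices_per_face):
    """Interleave IDs across faces (even IDs front, odd IDs back) so that
    consecutively addressed, frequently co-active devices sit on opposite
    faces, spreading dissipation and lowering per-face heat flux."""
    placement = {}
    for i in range(2 * devices_per_face):
        placement[i] = ("front" if i % 2 == 0 else "back", i // 2)
    return placement

for dev_id, (face, slot) in assign_ids(4).items():
    print(f"device {dev_id}: {face} slot {slot}")
```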
Abstract:
A cache-coherence protocol distributes atomic operations among multiple processors (or processor cores) that share a memory space. When an atomic operation that includes an instruction to modify data stored in the shared memory space is directed to a first processor that does not have control over the address(es) associated with the data, the first processor sends a request, including the instruction to modify the data, to a second processor. Then, the second processor, which already has control of the address(es), modifies the data. Moreover, the first processor can immediately proceed to another instruction rather than waiting for the address(es) to become available.
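A minimal sketch, with invented message flow, of forwarding an atomic operation to the processor that already controls the address so the requester can proceed immediately:

```python
class Processor:
    def __init__(self, name, owned):
        self.name = name
        self.owned = owned   # addresses this processor controls
        self.memory = {}

    def atomic_add(self, addr, value, peers):
        if addr in self.owned:
            # We own the line: perform the modification locally.
            self.memory[addr] = self.memory.get(addr, 0) + value
            return
        # Forward the operation (not the data) to the owner; the requester
        # can move on to its next instruction without acquiring the line.
        owner = next(p for p in peers if addr in p.owned)
        owner.atomic_add(addr, value, peers)

p0 = Processor("p0", owned={0x10})
p1 = Processor("p1", owned={0x20})
p0.atomic_add(0x20, 5, peers=[p0, p1])  # executed by p1, the owner
print(p1.memory[0x20])                   # 5
```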
Abstract:
A method of operating a memory device includes receiving first and second sets of bits to be stored in multi-level cells in the device. A multi-level encoding is selected from among a plurality of multi-level encodings for storing the first and second sets of bits in the multi-level cells. Each multi-level encoding includes at least four encoding levels for a respective multi-level cell. Respective multi-level encodings have respective costs associated with programming the first and second sets of bits into the multi-level cells in accordance with the respective multi-level encodings. The multi-level encoding is selected based on the respective costs of the respective encodings. The first and second sets of bits are encoded in accordance with the selected multi-level encoding to produce encoded data for storage in the device such that a respective multi-level cell stores respective bits from both the first and second sets of bits.
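A minimal sketch of cost-based encoding selection over four-level cells; the two candidate encodings and the level-sum cost model are assumptions:

```python
# Each encoding maps a (bit-from-set-1, bit-from-set-2) pair to one of four
# cell levels, so every cell stores one bit from each set.
ENCODINGS = {
    "gray":   {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3},
    "binary": {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3},
}

def program_cost(encoding, bits1, bits2):
    """Assumed cost model: sum of target levels (a higher level is taken to
    require more programming effort)."""
    return sum(encoding[(b1, b2)] for b1, b2 in zip(bits1, bits2))

def select_encoding(bits1, bits2):
    """Pick the candidate encoding with the lowest programming cost."""
    return min(ENCODINGS, key=lambda n: program_cost(ENCODINGS[n], bits1, bits2))

bits1, bits2 = [1, 1, 1], [0, 0, 0]
print(select_encoding(bits1, bits2))  # "binary": lower total cost here
```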