Abstract:
This disclosure relates to allocating memory resources of a computing device comprising non-volatile random access memory (NVRAM) and dynamic random access memory (DRAM). An exemplary method is performed for every independently executable component of an application and includes determining attributes of the component. The method also includes associating the component with a memory profile of a plurality of memory profiles based on the attributes, wherein each memory profile of the plurality of memory profiles specifies a number of banks of the NVRAM and a number of banks of the DRAM. The method also includes causing the computing device to generate an assignment of the component to banks of the NVRAM and DRAM based on the memory profile associated with the component, so that the computing device can execute the component using the banks of the NVRAM and DRAM according to the assignment.
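A minimal sketch of the kind of profile-based assignment described above, in Python; the attribute names, the profile table, and the helpers (MemoryProfile, pick_profile, assign_banks) are illustrative assumptions rather than the disclosed implementation:

# Sketch: map each independently executable component to a memory profile
# and derive a bank assignment. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class MemoryProfile:
    nvram_banks: int
    dram_banks: int

# Hypothetical profile table keyed by a coarse attribute class.
PROFILES = {
    "read_mostly": MemoryProfile(nvram_banks=2, dram_banks=1),
    "write_heavy": MemoryProfile(nvram_banks=0, dram_banks=3),
    "balanced":    MemoryProfile(nvram_banks=1, dram_banks=2),
}

def pick_profile(attrs: dict) -> MemoryProfile:
    # Assumed attribute-based classification; real criteria would come
    # from the component's attributes (e.g., write frequency, latency needs).
    if attrs.get("write_ratio", 0.0) > 0.5:
        return PROFILES["write_heavy"]
    if attrs.get("latency_sensitive"):
        return PROFILES["balanced"]
    return PROFILES["read_mostly"]

def assign_banks(component, attrs, free_nvram, free_dram):
    profile = pick_profile(attrs)
    return {
        "component": component,
        "nvram": [free_nvram.pop() for _ in range(profile.nvram_banks)],
        "dram": [free_dram.pop() for _ in range(profile.dram_banks)],
    }

print(assign_banks("decoder", {"write_ratio": 0.7}, [0, 1, 2], [0, 1, 2, 3]))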
Abstract:
Systems, methods, and computer programs are disclosed for reducing memory subsystem power. In an exemplary method, a system resource manager provides memory performance requirements for a plurality of memory clients to a double data rate (DDR) subsystem. The DDR subsystem and the system resource manager reside on a system on chip (SoC) electrically coupled to a dynamic random access memory (DRAM). A cache hit rate is determined for each of the plurality of memory clients associated with a system cache residing on the DDR subsystem. The DDR subsystem adjusts access to the DRAM based on the memory performance requirements received from the system resource manager and the cache hit rates of the plurality of memory clients.
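A minimal sketch of the idea, assuming a simple model in which each client's DRAM demand is its requested bandwidth derated by its system-cache hit rate; the bandwidth levels and function names are illustrative, not the disclosed mechanism:

# Sketch: derate each client's requested DRAM bandwidth by its system-cache
# hit rate, then size DRAM access from the aggregate. Names are illustrative.
def effective_dram_demand(clients):
    # clients: list of (requested_bw_mbps, cache_hit_rate) tuples
    return sum(bw * (1.0 - hit) for bw, hit in clients)

def select_dram_access_level(clients, levels=(800, 1600, 3200, 6400)):
    demand = effective_dram_demand(clients)
    # Pick the lowest access level (MB/s, illustrative) that covers demand.
    for level in levels:
        if level >= demand:
            return level
    return levels[-1]

# Example: two clients, one mostly served from the system cache.
print(select_dram_access_level([(2000, 0.9), (1000, 0.2)]))  # -> 1600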
Abstract:
Systems, methods, and computer programs are disclosed for reducing memory subsystem power. In an exemplary method, a system resource manager provides memory performance requirements for a plurality of memory clients to a double data rate (DDR) subsystem. The DDR subsystem and the system resource manager reside on a system on chip (SoC) electrically coupled to a dynamic random access memory (DRAM). A cache hit rate is determined for each of the plurality of memory clients associated with a system cache residing on the DDR subsystem. The DDR subsystem controls a DDR clock frequency based on the memory performance requirements received from the system resource manager and the cache hit rates of the plurality of memory clients.
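A similar sketch for the clock-frequency variant, assuming an illustrative frequency ladder and effective bus width; the numbers and names are assumptions, not the disclosed values:

# Sketch: choose a DDR clock frequency from cache-derated bandwidth demand.
# Frequencies and bus width are illustrative assumptions.
DDR_FREQS_MHZ = (547, 1017, 1555, 2092)
BYTES_PER_CLOCK = 8  # assumed effective bus transfer per clock

def required_ddr_freq(clients):
    # clients: list of (requested_bw_mbps, cache_hit_rate) tuples
    demand_mbps = sum(bw * (1.0 - hit) for bw, hit in clients)
    for freq in DDR_FREQS_MHZ:
        if freq * BYTES_PER_CLOCK >= demand_mbps:
            return freq
    return DDR_FREQS_MHZ[-1]

# A client with a high cache hit rate barely contributes to DDR demand.
print(required_ddr_freq([(8000, 0.95), (3000, 0.5)]))  # -> 547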
Abstract:
Providing memory training of dynamic random access memory (DRAM) systems using port-to-port loopbacks, and related methods, systems, and apparatuses are disclosed. In one aspect, a first port within a DRAM system is coupled to a second port via a loopback connection. A signal is sent to the first port from a System-on-Chip (SoC), and passed to the second port through the loopback connection. The signal is then returned to the SoC, where it may be examined by a closed-loop engine of the SoC. A result corresponding to a hardware parameter may be recorded, and the process may be repeated until an optimal result for the hardware parameter is achieved at the closed-loop engine. By using a port-to-port loopback configuration, DRAM system parameters relating to timing, power, and other characteristics of the DRAM system may be trained more quickly and with lower boot memory usage.
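A minimal sketch of the send/loopback/compare step, with the SoC-to-port path stubbed out as a lossy channel; all names and the error model are illustrative assumptions:

# Sketch: send a pattern to the first port, receive it back from the second
# port over the loopback, and count mismatches. Port I/O is stubbed.
import random

def loopback_path(pattern, error_rate=0.0):
    # Stand-in for SoC -> first port -> loopback -> second port -> SoC.
    return [bit if random.random() >= error_rate else bit ^ 1 for bit in pattern]

def loopback_check(pattern, error_rate=0.0):
    received = loopback_path(pattern, error_rate)
    return sum(tx != rx for tx, rx in zip(pattern, received))

pattern = [random.randint(0, 1) for _ in range(1024)]
print("bit errors:", loopback_check(pattern, error_rate=0.01))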
Abstract:
Providing memory training of dynamic random access memory (DRAM) systems using port-to-port loopbacks, and related methods, systems, and apparatuses are disclosed. In one aspect, a first port within a DRAM system is coupled to a second port via a loopback connection. A training signal is sent to the first port from a System-on-Chip (SoC), and passed to the second port through the loopback connection. The training signal is then returned to the SoC, where it may be examined by a closed-loop training engine of the SoC. A training result corresponding to a hardware parameter may be recorded, and the process may be repeated until an optimal result for the hardware parameter is achieved at the closed-loop training engine. By using a port-to-port loopback configuration, DRAM system parameters relating to timing, power, and other characteristics of the DRAM system may be trained more quickly and with lower boot memory usage.
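A minimal sketch of the closed-loop sweep itself, assuming a hypothetical eval_setting callback that returns the bit-error count observed for a candidate parameter value (for example, from the loopback check sketched above):

# Sketch: closed-loop sweep of one hardware parameter (e.g., a delay-line
# setting), keeping the value with the fewest loopback bit errors.
def train_parameter(settings, eval_setting):
    best_setting, best_errors = None, None
    for setting in settings:
        errors = eval_setting(setting)   # bit errors observed at this setting
        if best_errors is None or errors < best_errors:
            best_setting, best_errors = setting, errors
    return best_setting, best_errors

# Illustrative error profile with a minimum near the middle of the range.
profile = lambda s: abs(s - 17) * 3
print(train_parameter(range(0, 32), profile))  # -> (17, 0)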
Abstract:
Power saving techniques for memory systems are disclosed. In particular, exemplary aspects of the present disclosure contemplate taking advantage of patterns that may exist within memory elements and eliminating duplicative data transfers. Specifically, if data is repetitive, instead of sending the same data repeatedly, the data may be sent only a single time with instructions that cause the data to be replicated at a receiving end to restore the data to its original repeated state. By reducing the amount of data that is transferred between a host and a memory element, power consumption is reduced.
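A minimal sketch using a simple run-length scheme as a stand-in for the link protocol: repeated words are sent once with a replication count, and the receiver expands them back to the original data. The encoding format is an assumption, not the disclosed one:

# Sketch: send repeated data once plus a replication count instead of
# resending it, and replicate at the receiving end to restore the data.
def encode(words):
    runs = []
    for w in words:
        if runs and runs[-1][0] == w:
            runs[-1][1] += 1
        else:
            runs.append([w, 1])
    return runs  # (word, repeat_count) pairs sent over the link

def decode(runs):
    out = []
    for w, count in runs:
        out.extend([w] * count)  # receiver replicates to restore the data
    return out

data = [0xFF] * 64 + [0x00] * 64 + [0x5A]
runs = encode(data)
assert decode(runs) == data
print(f"{len(data)} words sent as {len(runs)} (word, count) pairs")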
Abstract:
Systems, methods, and computer programs are disclosed for providing compressed data storage using non-power-of-two flash cell mapping. One embodiment of a method comprises receiving one or more compressed logical pages to be stored in a NAND flash memory. Binary data in the one or more logical pages is transformed to a quinary representation. The quinary representation comprises a plurality of quinary bits. A binary representation of each of the plurality of quinary bits is transmitted to the NAND flash memory. The binary representation of each of the plurality of quinary bits is converted to a quinary voltage for a corresponding cell in a physical page in the NAND flash memory.
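A minimal sketch of the binary-to-quinary transformation, assuming a straightforward base-5 repacking and an illustrative five-level voltage table; the disclosed mapping and cell programming may differ:

# Sketch: repack binary page data into base-5 digits, one digit per cell,
# and map each digit to one of five illustrative cell voltage levels.
VOLTAGE_LEVELS = (0.0, 0.8, 1.6, 2.4, 3.2)  # assumed, one per quinary value

def binary_to_quinary(data: bytes):
    value = int.from_bytes(data, "big")
    digits = []
    while value:
        value, digit = divmod(value, 5)
        digits.append(digit)
    return digits or [0]

def quinary_to_binary(digits, length: int) -> bytes:
    value = 0
    for digit in reversed(digits):
        value = value * 5 + digit
    return value.to_bytes(length, "big")

page = b"\x12\x34\x56"
digits = binary_to_quinary(page)
print(digits, [VOLTAGE_LEVELS[d] for d in digits])
assert quinary_to_binary(digits, len(page)) == page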
Abstract:
Systems, methods, and computer programs are disclosed for allocating memory in a portable computing device having a non-uniform memory architecture. One embodiment of a method comprises: receiving from a process executing on a first system on chip (SoC) a request for a virtual memory page, the first SoC electrically coupled to a second SoC via an interchip interface, the first SoC electrically coupled to a first local volatile memory device via a first high-performance bus and the second SoC electrically coupled to a second local volatile memory device via a second high-performance bus; determining whether a number of available physical pages on the first and second local volatile memory devices exceeds a minimum threshold for initiating replication of memory data between the first and second local volatile memory devices; and if the minimum threshold is exceeded, allocating a first physical address on the first local volatile memory device and a second physical address on the second local volatile memory device to a single virtual page address.
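A minimal sketch of the threshold check and replicated allocation, with page tables and the interchip interface omitted; the data structures and threshold semantics are illustrative assumptions:

# Sketch: replicate a virtual page on both SoCs' local DRAM only when enough
# physical pages remain free; otherwise fall back to a single allocation.
def allocate_virtual_page(free_pages_soc0, free_pages_soc1, min_free_threshold):
    total_free = len(free_pages_soc0) + len(free_pages_soc1)
    if total_free > min_free_threshold and free_pages_soc0 and free_pages_soc1:
        # Replicated mapping: one physical page on each local memory,
        # both backing the same virtual page address.
        return {"replicated": True,
                "soc0_phys": free_pages_soc0.pop(),
                "soc1_phys": free_pages_soc1.pop()}
    # Memory is tight: allocate a single local physical page instead.
    pool = free_pages_soc0 or free_pages_soc1
    return {"replicated": False, "phys": pool.pop()}

print(allocate_virtual_page(list(range(100)), list(range(100)), min_free_threshold=32))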
Abstract:
Systems, methods, and computer programs are disclosed for recovering from dynamic random access memory (DRAM) defects. One method comprises determining that an uncorrected bit error has occurred for a physical codeword address associated with a dynamic random access memory (DRAM) device coupled to a system on chip (SoC). A kernel page associated with a DRAM page comprising the physical codeword address is identified as a bad page. Recovery from the uncorrected bit error is provided by rebooting a system comprising the SoC and the DRAM device. In response to the rebooting, the identified kernel page is excluded from being allocated for DRAM operation.
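A minimal sketch of marking and excluding the bad kernel page, with the record that survives the reboot stubbed as an in-memory set; the page size and names are illustrative assumptions:

# Sketch: record the kernel page backing a failed codeword address, then
# exclude it from the allocator's free list when the system comes back up.
PAGE_SIZE = 4096
bad_pages = set()   # stand-in for a record that survives the reboot

def mark_bad(codeword_phys_addr):
    bad_pages.add(codeword_phys_addr // PAGE_SIZE)   # kernel page frame number

def build_free_list(total_pages):
    # Called during boot: skip pages previously marked bad.
    return [pfn for pfn in range(total_pages) if pfn not in bad_pages]

mark_bad(0x123458)                    # uncorrected bit error at this address
free_list = build_free_list(total_pages=1 << 20)
assert 0x123458 // PAGE_SIZE not in free_list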
Abstract:
Methods and systems for an in-system repair process that repairs or attempts to repair random bit failures in a memory device are provided. In some examples, an in-system repair process may select alternative steps depending on whether the failure is correctable or uncorrectable. In these examples, the process uses communications between a system on chip and the memory to fix the failures during normal operation.
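A minimal sketch of selecting between alternative repair steps based on whether the failure was correctable; the repair and retirement callbacks stand in for the SoC-to-memory communications and are purely illustrative:

# Sketch: pick a repair path based on whether ECC reported the failure as
# correctable. The DRAM-side repair command is stubbed; names are illustrative.
def handle_bit_failure(row, correctable, issue_repair, retire_page):
    if correctable:
        # Data is still intact: repair the row in place during normal operation.
        issue_repair(row)
        return "repaired"
    # Uncorrectable: the data is lost, so take the page out of service
    # before the row is repaired and reused.
    retire_page(row)
    issue_repair(row)
    return "retired_and_repaired"

log = []
print(handle_bit_failure(0x1A2B,
                         correctable=True,
                         issue_repair=lambda r: log.append(("repair", r)),
                         retire_page=lambda r: log.append(("retire", r))))
print(log)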