Abstract:
Aspects include computing devices, systems, and methods for adjusting the assignment of tasks to processor cores in a multi-core processing system to increase operating life and maximize device performance by wear-leveling the processor cores. A reliability engine may be configured to collect operational or built-in self-test data on thermal output and current leakage, along with historical operation time, for a group of equivalent processor cores configured for the same purpose. The collected data may be applied to a weighted function to determine a priority for each equivalent processor core in the group. The reliability engine may rearrange a virtual processor identification translation table according to the priorities of the equivalent processor cores. A high-level operating system may issue a process request specifying a processor core, and the specified processor core may be translated to a different processor core according to the order of processor cores dictated by the priorities.
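A minimal Python sketch of the priority and remapping idea follows; the weights, sensor values, and data structures are illustrative assumptions rather than the patented implementation.

from dataclasses import dataclass

@dataclass
class CoreStats:
    core_id: int
    thermal_output: float    # e.g. degrees C from sensors or built-in self test
    current_leakage: float   # e.g. mA from built-in self test
    operation_time: float    # historical active hours

def weighted_priority(stats, w_thermal=0.4, w_leakage=0.3, w_time=0.3):
    # Lower score -> less wear -> higher priority (illustrative weights only).
    return (w_thermal * stats.thermal_output
            + w_leakage * stats.current_leakage
            + w_time * stats.operation_time)

def rebuild_translation_table(core_stats):
    # Order equivalent cores from least to most worn and map virtual IDs
    # (the IDs the high-level OS uses) onto them in that order.
    ranked = sorted(core_stats, key=weighted_priority)
    return {virtual_id: core.core_id for virtual_id, core in enumerate(ranked)}

# Usage: the OS requests virtual core 0; the table routes it to the least-worn core.
cores = [CoreStats(0, 72.0, 5.1, 1200.0),
         CoreStats(1, 65.5, 4.8, 900.0),
         CoreStats(2, 80.2, 6.0, 1500.0)]
table = rebuild_translation_table(cores)
physical_core = table[0]   # least-worn physical core handles the request
print(table, physical_core)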
Abstract:
Systems, methods, and computer programs are disclosed for providing error detection or correction with flash cell mapping. One embodiment is a method comprising generating raw page data for a physical page in a main array of a flash memory device. The raw page data, generated using a non-power-of-two flash cell mapping, comprises less than the full capacity of the physical page. One or more parity bits are generated for the raw page data using an error detection or correction scheme. The method stores the raw page data and the one or more parity bits in the physical page in the main array.
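The following Python sketch illustrates the idea of storing under-capacity raw page data together with parity bits in the same physical page; the page size, data size, and simple even-parity scheme are assumptions standing in for a real error detection or correction code.

PAGE_CAPACITY_BITS = 1024   # capacity of one physical page (illustrative)
RAW_DATA_BITS = 1000        # non-power-of-two mapping yields less than capacity

def generate_parity(raw_bits):
    # One even-parity bit per 64-bit group (a stand-in for a real ECC scheme).
    parity = []
    for i in range(0, len(raw_bits), 64):
        group = raw_bits[i:i + 64]
        parity.append(sum(group) % 2)
    return parity

def program_page(raw_bits):
    # Raw data and its parity bits share the same physical page in the main array.
    parity_bits = generate_parity(raw_bits)
    assert len(raw_bits) + len(parity_bits) <= PAGE_CAPACITY_BITS
    return raw_bits + parity_bits

page_image = program_page([0, 1] * (RAW_DATA_BITS // 2))
print(len(page_image), "bits stored in a", PAGE_CAPACITY_BITS, "bit page")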
Abstract:
Aspects include computing devices, systems, and methods for managing a first computing device component of a computing device in order to extend an operating life of the computing device component. In an aspect, a processing device may determine a condition estimator of the first computing device component and determine whether the condition estimator indicates that the condition of the first computing device component is worse than the condition of a second computing device component. In response to determining that the condition of the first computing device component is worse than the condition of the second computing device component, the processing device may assign workloads to the first and second computing device components to balance the deterioration of their conditions.
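A rough Python sketch of the comparison-and-balancing idea follows; the meaning of the condition estimator and the alternating assignment policy are illustrative assumptions, not the claimed method.

def assign_workloads(cond_a, cond_b, workloads):
    # Higher estimator value means worse condition (assumption for illustration).
    assignments = {"a": [], "b": []}
    if cond_a > cond_b:
        favored, other = "b", "a"   # send heavier work to the healthier component
    else:
        favored, other = "a", "b"
    for i, work in enumerate(sorted(workloads, reverse=True)):
        # Alternate assignments, giving the healthier component the heavier tasks,
        # so the two components deteriorate at more similar rates.
        target = favored if i % 2 == 0 else other
        assignments[target].append(work)
    return assignments

print(assign_workloads(cond_a=0.8, cond_b=0.3, workloads=[5, 3, 8, 1]))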
Abstract:
Pipelined logic latency in a memory system operating at a reduced frequency may be compensated for. Pipelined logic may be controlled using at least first and second clock signals. All registers of the pipelined logic may be controlled using the first clock signal when the memory system is operating at a higher frequency. However, when the memory system is operating at a reduced frequency, one or more registers may be controlled using the first clock signal, and one or more other registers may be controlled using the second clock signal.
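The following Python sketch models the clock-selection idea in software; the register names, the even/odd split, and the two modes are illustrative assumptions, since the actual mechanism operates on hardware clock signals.

HIGH_FREQ = "high"
REDUCED_FREQ = "reduced"

def select_register_clocks(registers, mode):
    # At the higher frequency every pipeline register uses clk1; at the reduced
    # frequency a subset is moved to clk2 to compensate for pipelined logic latency.
    clocks = {}
    for i, reg in enumerate(registers):
        if mode == HIGH_FREQ:
            clocks[reg] = "clk1"
        else:
            clocks[reg] = "clk1" if i % 2 == 0 else "clk2"
    return clocks

print(select_register_clocks(["r0", "r1", "r2", "r3"], REDUCED_FREQ))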
Abstract:
This disclosure relates to allocating memory resources of a computing device comprising non-volatile random access memory (NVRAM) and dynamic random access memory (DRAM). An exemplary method is performed for every independently executable component of an application and includes determining attributes of the component. The method also includes associating the component with one of a plurality of memory profiles based on the attributes, where each memory profile specifies a number of banks of the NVRAM and a number of banks of the DRAM. The method also includes causing the computing device to generate an assignment of the component to banks of the NVRAM and DRAM based on the memory profile associated with the component, so that the computing device can execute the component using those banks.
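A minimal Python sketch of the profile-selection and bank-assignment idea follows; the profile names, attribute thresholds, and bank counts are invented for illustration.

MEMORY_PROFILES = {
    # profile name: (NVRAM banks, DRAM banks) -- illustrative values
    "read_mostly": (4, 1),
    "write_heavy": (1, 4),
    "balanced":    (2, 2),
}

def choose_profile(attributes):
    # attributes: dict with e.g. write_ratio in [0, 1] (an assumed attribute)
    if attributes["write_ratio"] < 0.2:
        return "read_mostly"
    if attributes["write_ratio"] > 0.6:
        return "write_heavy"
    return "balanced"

def assign_banks(component, attributes, free_nvram_banks, free_dram_banks):
    # Map the component to concrete NVRAM and DRAM banks per its memory profile.
    profile = choose_profile(attributes)
    n_nvram, n_dram = MEMORY_PROFILES[profile]
    return {
        "component": component,
        "profile": profile,
        "nvram_banks": free_nvram_banks[:n_nvram],
        "dram_banks": free_dram_banks[:n_dram],
    }

print(assign_banks("decoder", {"write_ratio": 0.1},
                   free_nvram_banks=[0, 1, 2, 3, 4],
                   free_dram_banks=[0, 1, 2, 3]))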
Abstract:
Aspects include computing devices, systems, and methods for implementing decompression of a compressed page. A computing device may determine the decompression block of a compressed page that contains a code instruction requested in a memory access request. Decompression blocks other than the one containing the requested code instruction may be selected for decompression based on their locality with respect to that block. Decompression blocks not identified for decompression may be replaced with a fault or exception code. The computing device may decompress the decompression blocks identified for decompression, terminating the decompression of the compressed page once all blocks are filled with decompressed blocks, faults, or exception code. The remaining decompression blocks belonging to the compressed page may be decompressed after or concurrently with execution of the requested code instruction.
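The following Python sketch illustrates the locality-based selective decompression idea; the decompress_block helper, the block granularity, and the FAULT placeholder are assumptions for illustration.

FAULT = "FAULT"   # stand-in for a fault/exception code stub

def decompress_block(compressed_page, index):
    # Placeholder for the real per-block decompressor (assumption).
    return f"decompressed<{compressed_page[index]}>"

def demand_decompress(compressed_page, requested_block, locality=1):
    # Decompress the requested block and its neighbors; fill the rest with a
    # fault placeholder so page decompression can terminate early.
    page = [None] * len(compressed_page)
    lo = max(0, requested_block - locality)
    hi = min(len(compressed_page), requested_block + locality + 1)
    for i in range(len(compressed_page)):
        if lo <= i < hi:
            page[i] = decompress_block(compressed_page, i)  # near the request
        else:
            page[i] = FAULT  # touching this block later traps and decompresses it
    return page  # every slot filled: decompressed data, or a fault placeholder

print(demand_decompress(["b0", "b1", "b2", "b3", "b4"], requested_block=2))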