Abstract:
Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry is placed on the logic layer to perform various computation operations. This functionality is desirable where performing the operations locally, near the memory devices, increases performance and/or power efficiency by avoiding transmission of data across the interface to the host processor.
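As a rough illustration of why local execution saves interface traffic, here is a minimal C sketch of a hypothetical in-memory reduction command: the host sends one request, the logic layer walks the memory itself, and only a single result word crosses the host interface. The command format and function names are invented for illustration, not taken from the abstract.

```c
/* Hypothetical sketch: offloading a reduction to a logic layer beneath a
 * DRAM stack, instead of streaming every element to the host. The command
 * format and pim_sum() are illustrative assumptions, not a real interface. */
#include <stdint.h>
#include <stdio.h>

/* Command a host might send across the memory interface: one request
 * instead of `count` individual reads. */
struct pim_cmd {
    uint64_t base;   /* internal address of first element */
    uint64_t count;  /* number of 64-bit elements to reduce */
};

/* Executed by the logic layer: the data never crosses the host interface. */
static uint64_t pim_sum(const uint64_t *dram, struct pim_cmd c) {
    uint64_t acc = 0;
    for (uint64_t i = 0; i < c.count; i++)
        acc += dram[c.base + i];
    return acc;  /* only this single word is returned to the host */
}

int main(void) {
    uint64_t dram[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    struct pim_cmd c = { .base = 0, .count = 8 };
    printf("sum = %llu\n", (unsigned long long)pim_sum(dram, c));
    return 0;
}
```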
Abstract:
Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry is placed on the logic layer to perform various data movement and address calculation operations. This functionality enables compound memory operations: a single request communicated to the memory that characterizes the accesses and movement of many data items. This eliminates the performance and power overheads associated with communicating address and control information on a fine-grain, per-data-item basis from a host processor (or other device) to the memory. This approach also gives the memory system better visibility of macro-level memory access patterns and may enable additional optimizations in scheduling memory accesses.
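The compound-operation idea can be made concrete with a small sketch. Assuming a hypothetical strided-gather command format (all field names invented), a single descriptor replaces per-element address traffic from the host:

```c
/* Hypothetical sketch of a compound memory operation: a single strided-gather
 * request describes many accesses, so the host never issues per-element
 * addresses. Field names are assumptions for illustration. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct gather_cmd {
    size_t src;     /* starting element index in memory */
    size_t stride;  /* distance between consecutive elements */
    size_t count;   /* number of elements to move */
    size_t dst;     /* destination index for the packed result */
};

/* Memory-side logic: seeing the whole pattern up front is also what gives
 * the memory system the macro-level visibility for scheduling. */
static void execute_gather(uint32_t *mem, struct gather_cmd c) {
    for (size_t i = 0; i < c.count; i++)
        mem[c.dst + i] = mem[c.src + i * c.stride];
}

int main(void) {
    uint32_t mem[16] = {0, 10, 20, 30, 40, 50, 60, 70};
    struct gather_cmd c = { .src = 1, .stride = 2, .count = 3, .dst = 8 };
    execute_gather(mem, c);                 /* one request, three data items */
    printf("%u %u %u\n", mem[8], mem[9], mem[10]);  /* prints: 10 30 50 */
    return 0;
}
```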
Abstract:
A system includes an atomic processing engine (APE) coupled to an interconnect. The interconnect couples to one or more processor cores. The APE receives a plurality of commands from the one or more processor cores through the interconnect. In response to a first command, the APE performs a first plurality of operations associated with the first command. The first plurality of operations references multiple memory locations, at least one of which is shared between two or more threads executed by the one or more processor cores.
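A minimal sketch of how such an engine might behave, using C11 atomics to stand in for the APE's serialized read-modify-write; the command encoding below is an assumption, not the actual interface:

```c
/* Hypothetical sketch of an atomic processing engine (APE): cores submit
 * commands, and the APE applies each operation atomically to locations that
 * may be shared between threads. The command encoding is an assumption. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

enum ape_op { APE_ADD, APE_EXCHANGE };

struct ape_cmd {
    enum ape_op op;
    atomic_uint_least64_t *addr;  /* memory location, possibly shared */
    uint64_t value;
};

/* Because the APE serializes the read-modify-write itself, cores need not
 * hold locks on these shared locations. */
static uint64_t ape_execute(struct ape_cmd c) {
    switch (c.op) {
    case APE_ADD:      return atomic_fetch_add(c.addr, c.value);
    case APE_EXCHANGE: return atomic_exchange(c.addr, c.value);
    }
    return 0;
}

int main(void) {
    atomic_uint_least64_t shared = 5;
    struct ape_cmd c = { APE_ADD, &shared, 3 };
    uint64_t old = ape_execute(c);
    printf("old=%llu new=%llu\n", (unsigned long long)old,
           (unsigned long long)atomic_load(&shared));
    return 0;
}
```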
Abstract:
A multi-stack compute chip and memory architecture is described. In accordance with the described techniques, a package includes a plurality of computing stacks, each of which includes at least one compute chip and a memory. The package also includes one or more interconnects that couple each computing stack to at least one other computing stack for sharing the memory in a coherent fashion across the plurality of computing stacks.
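One way to picture the shared address space is a routing function that maps a package-level address to the stack that owns it; the stack count and interleaving scheme below are invented toy values, and the coherence machinery itself is omitted.

```c
/* Hypothetical sketch: routing an access in a package of computing stacks
 * whose memories form one shared address space. Stack count, granularity,
 * and names are assumptions; the coherence protocol is omitted. */
#include <stdint.h>
#include <stdio.h>

#define NUM_STACKS     4
#define STACK_MEM_SIZE (1u << 20)  /* 1 MiB of memory per stack (toy value) */

struct route { unsigned stack; uint32_t offset; };

/* A flat package address decomposes into (owning stack, local offset);
 * accesses to another stack's range travel over the inter-stack links. */
static struct route route_addr(uint32_t package_addr) {
    struct route r = { package_addr / STACK_MEM_SIZE,
                       package_addr % STACK_MEM_SIZE };
    return r;
}

int main(void) {
    struct route r = route_addr(3 * STACK_MEM_SIZE + 0x40);
    printf("stack %u, offset 0x%x\n", r.stack, r.offset);  /* stack 3 */
    return 0;
}
```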
Abstract:
A multi-die semiconductor package includes a first integrated circuit (IC) die having a first intrinsic performance level and a second IC die having a second intrinsic performance level different from the first intrinsic performance level. A power management controller distributes, based on a determined die performance differential between the first IC die and the second IC die, the level of power allocated to the semiconductor package between the first IC die and the second IC die. In this manner, the first IC die receives and operates at a first level of power, resulting in performance exceeding its intrinsic performance level. The second IC die receives and operates at a second level of power, resulting in performance below its intrinsic performance level, thereby reducing the performance differential between the IC dies.
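The balancing act can be illustrated with back-of-the-envelope arithmetic. Under the simplifying assumption that each die's performance scales roughly linearly with its power, splitting a fixed budget in inverse proportion to the dies' intrinsic factors equalizes performance, as in this sketch (all numbers invented):

```c
/* Hypothetical sketch of the power split: assume perf_i ~ k_i * p_i
 * (a simplifying linear model). Splitting a fixed package budget in inverse
 * proportion to the intrinsic factors equalizes performance: the weaker die
 * runs above its intrinsic level and the stronger die below it. */
#include <stdio.h>

/* Returns the share of `total_w` given to die 1; die 2 gets the rest. */
static double balance_power(double k1, double k2, double total_w) {
    /* Solve k1*p1 == k2*p2 with p1 + p2 == total_w. */
    return total_w * k2 / (k1 + k2);
}

int main(void) {
    double k1 = 1.2, k2 = 0.8;           /* die 1 is intrinsically faster */
    double p1 = balance_power(k1, k2, 100.0);
    double p2 = 100.0 - p1;
    printf("die1: %.1f W -> perf %.1f\n", p1, k1 * p1);
    printf("die2: %.1f W -> perf %.1f\n", p2, k2 * p2);
    return 0;
}
```

With k1 = 1.2 and k2 = 0.8, the split comes out 40 W / 60 W rather than an even 50/50, and both dies land at the same performance (48 units each).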
Abstract:
The described embodiments include a computing device with an entity (a processor, a processor core, etc.) and a controller. In these embodiments, the controller, using an idle duration history, predicts a duration of a next idle period for the entity. Based on the predicted duration of the next idle period, the controller configures the entity to operate in a corresponding idle state.
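A minimal sketch of one plausible realization: a short history buffer, an average as the prediction, and per-state breakeven times used to pick the idle state. All values and names below are invented for illustration.

```c
/* Hypothetical sketch of history-based idle prediction: keep the last few
 * idle durations, predict the next one as their average, and pick the
 * deepest idle state whose entry+exit cost is still amortized. */
#include <stdio.h>

#define HISTORY_LEN 4

struct idle_predictor {
    unsigned history[HISTORY_LEN];  /* recent idle durations (microseconds) */
    unsigned next;                  /* circular-buffer cursor */
};

static void record_idle(struct idle_predictor *p, unsigned us) {
    p->history[p->next] = us;
    p->next = (p->next + 1) % HISTORY_LEN;
}

static unsigned predict_idle(const struct idle_predictor *p) {
    unsigned sum = 0;
    for (int i = 0; i < HISTORY_LEN; i++) sum += p->history[i];
    return sum / HISTORY_LEN;
}

/* Deeper states save more power but cost more to enter and exit (toy values). */
static const unsigned breakeven_us[3] = { 0, 50, 500 };

static int choose_idle_state(unsigned predicted_us) {
    int best = 0;
    for (int s = 1; s < 3; s++)
        if (predicted_us >= breakeven_us[s]) best = s;
    return best;
}

int main(void) {
    struct idle_predictor p = { {120, 90, 150, 110}, 0 };
    unsigned us = predict_idle(&p);
    printf("predicted %u us -> state %d\n", us, choose_idle_state(us));
    record_idle(&p, 130);  /* after the period ends, record what was observed */
    return 0;
}
```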
Abstract:
A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses to internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the memory layers. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller, coupled to the I/O port, the memory map, and the memory layers, configured to store data received from external devices to internal memory locations.
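A toy model of the memory map may help: a table in the logic layer translates each external address to an internal location, and remapping becomes a table update that is invisible to external devices. The structure and field names below are assumptions.

```c
/* Hypothetical sketch of the memory map: translates an external address to
 * an internal (layer, bank, row) location so the stack can relocate data
 * without the host noticing. Field widths and names are assumptions. */
#include <stdint.h>
#include <stdio.h>

struct internal_loc {
    uint8_t  layer;  /* which memory die in the stack */
    uint8_t  bank;
    uint32_t row;
};

#define MAP_ENTRIES 1024

struct memory_map {
    struct internal_loc entry[MAP_ENTRIES];
};

/* The controller consults the map on every read/write from the I/O port. */
static struct internal_loc translate(const struct memory_map *m,
                                     uint64_t external_addr) {
    return m->entry[external_addr % MAP_ENTRIES];
}

/* Remapping is just an update to the table entry. */
static void remap(struct memory_map *m, uint64_t external_addr,
                  struct internal_loc to) {
    m->entry[external_addr % MAP_ENTRIES] = to;
}

int main(void) {
    static struct memory_map m;
    remap(&m, 0x42, (struct internal_loc){ .layer = 2, .bank = 1, .row = 7 });
    struct internal_loc l = translate(&m, 0x42);
    printf("layer %u bank %u row %u\n", l.layer, l.bank, l.row);
    return 0;
}
```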
Abstract:
A die-stacked memory device incorporates a reconfigurable logic device to provide implementation flexibility in performing various data manipulation operations and other memory operations that use data stored in the die-stacked memory device or that result in data that is to be stored in the die-stacked memory device. One or more configuration files representing corresponding logic configurations for the reconfigurable logic device can be stored in a configuration store at the die-stacked memory device, and a configuration controller can program a reconfigurable logic fabric of the reconfigurable logic device using a selected one of the configuration files. Due to the integration of the logic dies and the memory dies, the reconfigurable logic device can perform various data manipulation operations with higher bandwidth, lower latency, and lower power consumption compared to devices external to the die-stacked memory device.
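A sketch of the configuration flow under invented interfaces (real bitstream programming is device-specific): the configuration store holds named files, and the controller programs the fabric with the selected one.

```c
/* Hypothetical sketch of the configuration flow: configuration files live in
 * a store at the memory device, and a controller loads the selected one into
 * the reconfigurable fabric before the corresponding operation runs. All
 * interfaces and contents here are invented stand-ins. */
#include <stddef.h>
#include <stdio.h>

struct config_file {
    const char *name;        /* operation the fabric will implement */
    const unsigned char *bitstream;
    size_t len;
};

/* Stand-in for writing a bitstream into the logic fabric. */
static void program_fabric(const struct config_file *cf) {
    printf("programming fabric with '%s' (%zu bytes)\n", cf->name, cf->len);
}

static const unsigned char bs_crc[] = { 0xAA, 0xBB };
static const unsigned char bs_enc[] = { 0xCC, 0xDD, 0xEE };

static const struct config_file store[] = {
    { "crc32",   bs_crc, sizeof bs_crc },
    { "encrypt", bs_enc, sizeof bs_enc },
};

int main(void) {
    /* Configuration controller selects a file, then reprograms the fabric. */
    program_fabric(&store[1]);
    return 0;
}
```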
Abstract:
The present application describes embodiments of methods for tournament prediction of power gating in processing devices. Some embodiments of the method include selecting one of a plurality of predictions of the duration of time until a power state transition of a component in a processing device. The plurality of predictions are generated using a corresponding plurality of prediction algorithms. Some embodiments of the method also include deciding whether to transition the component from a first power state to a second power state based on the selected prediction.
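A small sketch of a tournament scheme, assuming two toy predictors and a saturating selector counter (all thresholds and predictor choices invented); the gating decision compares the winning prediction against a breakeven time.

```c
/* Hypothetical sketch of tournament prediction: two simple predictors of the
 * time until the next power state transition, plus a saturating selector
 * counter that tracks which one has been more accurate recently. */
#include <stdlib.h>
#include <stdio.h>

static unsigned last_value;        /* predictor A: repeat the last duration */
static unsigned running_avg = 100; /* predictor B: exponential moving average */
static int selector;               /* >0 favors A, <=0 favors B */

static unsigned predict(void) {
    return selector > 0 ? last_value : running_avg;
}

/* Once the actual duration is known, train both predictors and the selector. */
static void update(unsigned actual) {
    unsigned err_a = (unsigned)abs((int)last_value - (int)actual);
    unsigned err_b = (unsigned)abs((int)running_avg - (int)actual);
    if (err_a < err_b && selector < 3) selector++;
    if (err_b < err_a && selector > -3) selector--;
    last_value = actual;
    running_avg = (3 * running_avg + actual) / 4;
}

int main(void) {
    const unsigned breakeven = 80;  /* minimum idle time worth power gating */
    unsigned observed[] = { 120, 125, 30, 118 };
    for (int i = 0; i < 4; i++) {
        unsigned p = predict();
        printf("predict %u -> %s\n", p, p >= breakeven ? "gate" : "stay on");
        update(observed[i]);
    }
    return 0;
}
```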