Abstract:
A method and processing apparatus for accelerating program processing is provided that includes a plurality of processors configured to process a plurality of tasks of a program and a controller. The controller is configured to determine, from the plurality of tasks being processed by the plurality of processors, a task being processed on a first processor to be a lagging task causing a delay in execution of one or more other tasks of the plurality of tasks. The controller is further configured to provide the determined lagging task to a second processor to be executed by the second processor to accelerate execution of the lagging task.
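The abstract leaves the detection policy open. A minimal software sketch in C++, assuming per-task progress counters and a mean-gap threshold as one possible way to flag a lagging task (all identifiers and the migration target are hypothetical):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical task record: which processor runs it and how far it has progressed.
struct Task {
    int id;
    int processor;
    double progress;  // fraction of work completed, 0.0 .. 1.0
};

// One possible heuristic: a task "lags" if its progress trails the mean of its
// sibling tasks by more than a threshold. The abstract does not fix a policy;
// this is an illustrative assumption.
int find_lagging_task(const std::vector<Task>& tasks, double threshold) {
    double mean = 0.0;
    for (const Task& t : tasks) mean += t.progress;
    mean /= tasks.size();

    int worst = -1;
    double worst_gap = threshold;
    for (std::size_t i = 0; i < tasks.size(); ++i) {
        double gap = mean - tasks[i].progress;
        if (gap > worst_gap) {
            worst_gap = gap;
            worst = static_cast<int>(i);
        }
    }
    return worst;  // index of lagging task, or -1 if none
}

int main() {
    std::vector<Task> tasks = {{0, 0, 0.9}, {1, 1, 0.4}, {2, 2, 0.85}};
    int lag = find_lagging_task(tasks, 0.25);
    if (lag >= 0) {
        // "Provide the lagging task to a second processor": here we simply
        // reassign it to a hypothetical idle/faster processor id 3.
        tasks[lag].processor = 3;
        std::printf("task %d migrated to processor %d\n",
                    tasks[lag].id, tasks[lag].processor);
    }
}
```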
Abstract:
A modified 1C1T cell detects when the charge in the memory cell drops below a predetermined voltage due to leakage and asserts a refresh signal indicating that refresh needs to be performed on those memory cells associated with the modified 1C1T memory cell. The associated memory cells may be a row, a bank, or other groupings of memory cells. Because temperature affects leakage current, the modified memory cell automatically adjusts for temperature.
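The cell itself is a circuit, but its behavior can be illustrated with a small simulation. A sketch assuming exponential capacitor leakage whose rate roughly doubles per 10 °C (constants and the doubling rule are illustrative, not from the patent); hotter cells drain faster, so the refresh signal is asserted sooner, which is the self-adjusting behavior the abstract describes:

```cpp
#include <cmath>
#include <cstdio>

// Behavioral model of the monitor cell: a capacitor that leaks charge and a
// comparator that asserts "refresh" once the cell voltage falls below a
// predetermined threshold.
struct MonitorCell {
    double v = 1.0;            // normalized cell voltage after a refresh
    double v_threshold = 0.6;  // predetermined comparator threshold

    // Toy leakage law: rate doubles every ~10 C above 25 C.
    void step(double dt_ms, double temp_c) {
        double leak_rate = 0.01 * std::pow(2.0, (temp_c - 25.0) / 10.0);
        v *= std::exp(-leak_rate * dt_ms);
    }
    bool refresh_asserted() const { return v < v_threshold; }
    void refresh() { v = 1.0; }
};

int main() {
    const double temps[] = {25.0, 45.0, 85.0};
    for (double temp : temps) {
        MonitorCell cell;
        double t = 0.0;
        while (!cell.refresh_asserted()) {
            cell.step(1.0, temp);
            t += 1.0;
        }
        // Hotter cell trips the refresh signal earlier.
        std::printf("%5.1f C -> refresh asserted after %.0f ms\n", temp, t);
    }
}
```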
Abstract:
A technique for accessing memory in an accelerated processing device coupled to stacked memory dies is provided herein. The technique includes receiving a memory access request from an execution unit and identifying whether the memory access request corresponds to memory cells of the stacked dies that are considered local to the execution unit or non-local. For local accesses, the access is made “directly”, that is, without using a bus. A control die coordinates operations for such local accesses, activating particular through-silicon-vias associated with the memory cells that include the data for the access. Non-local accesses are made via a distributed cache fabric and an interconnect bus in the control die. Various other features and details are provided below.
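The local/non-local classification can be sketched as a routing decision over an address map. A minimal sketch assuming each execution unit owns one contiguous local slice of the stacked memory (the address map, range sizes, and path functions are hypothetical stand-ins for the TSV and fabric paths):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical address map: each execution unit "owns" a contiguous slice of
// the stacked memory that it can reach directly through its TSVs.
struct LocalRange {
    std::uint64_t base;
    std::uint64_t size;
    bool contains(std::uint64_t addr) const {
        return addr >= base && addr < base + size;
    }
};

// Stand-ins for the two access paths in the abstract.
void access_via_tsv(std::uint64_t addr) {
    // Control die activates the TSVs for the target cells; no bus traversal.
    std::printf("0x%llx: direct TSV access\n", (unsigned long long)addr);
}
void access_via_fabric(std::uint64_t addr) {
    // Routed through the distributed cache fabric and interconnect bus.
    std::printf("0x%llx: cache-fabric/interconnect access\n",
                (unsigned long long)addr);
}

// The routing decision the technique describes: classify, then dispatch.
void handle_request(const LocalRange& local, std::uint64_t addr) {
    if (local.contains(addr))
        access_via_tsv(addr);
    else
        access_via_fabric(addr);
}

int main() {
    LocalRange local{0x10000000, 0x01000000};  // this unit's local slice
    handle_request(local, 0x10000042);         // local -> direct
    handle_request(local, 0x20000000);         // non-local -> fabric
}
```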
Abstract:
A method and apparatus of performing a memory operation includes receiving a memory operation request at a first memory controller that is in communication with a second memory controller. The first memory controller forwards the memory operation request to the second memory controller. Upon receipt of the memory operation request, the second memory controller provides first information or second information depending on a condition of a pseudo-bank of the second memory controller and a type of the memory operation request.
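The abstract does not detail the pseudo-bank condition or the two kinds of information. A sketch assuming the condition is whether the requested line is present in the pseudo-bank, with read/write as the request types (the presence test and the returned status strings are illustrative assumptions):

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

enum class OpType { Read, Write };

struct Request {
    OpType type;
    std::uint64_t addr;
    std::uint32_t data;  // payload for writes
};

// Second-level controller with a "pseudo-bank", modeled here as a small
// buffer that may or may not currently hold the requested line.
struct SecondController {
    std::unordered_map<std::uint64_t, std::uint32_t> pseudo_bank;

    // Returns "first information" or "second information" depending on the
    // pseudo-bank condition and the request type, as in the abstract.
    const char* handle(const Request& r) {
        bool present = pseudo_bank.count(r.addr) != 0;
        if (r.type == OpType::Read)
            return present ? "first info: data from pseudo-bank"
                           : "second info: miss, fetch from media";
        pseudo_bank[r.addr] = r.data;  // write absorbs into pseudo-bank
        return present ? "first info: write merged"
                       : "second info: write allocated";
    }
};

// First-level controller simply forwards the request, as in the abstract.
struct FirstController {
    SecondController* next;
    const char* submit(const Request& r) { return next->handle(r); }
};

int main() {
    SecondController second;
    FirstController first{&second};
    std::printf("%s\n", first.submit({OpType::Write, 0x40, 7}));
    std::printf("%s\n", first.submit({OpType::Read, 0x40, 0}));
    std::printf("%s\n", first.submit({OpType::Read, 0x80, 0}));
}
```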

Abstract:
A method and apparatus for performing inter-lane power management includes de-energizing one or more execution lanes upon a determination that the one or more execution lanes are to be predicated. Energy from the predicated execution lanes is redistributed to one or more active execution lanes.
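Redistribution can be sketched as budget reallocation: gate off the predicated lanes and spread their reclaimed power across the lanes that stay active. The even-split boost policy below is an illustrative assumption, not the patent's policy:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Toy model of inter-lane power management: each lane has a nominal power
// allocation; lanes predicated off for the next wave are de-energized, and
// their budget is spread across the lanes that remain active.
struct Lane {
    bool predicated = false;  // true -> lane will be de-energized
    double power = 1.0;       // nominal per-lane power budget (arbitrary units)
};

void redistribute(std::vector<Lane>& lanes) {
    double reclaimed = 0.0;
    int active = 0;
    for (Lane& l : lanes) {
        if (l.predicated) {
            reclaimed += l.power;  // de-energize: reclaim this lane's budget
            l.power = 0.0;
        } else {
            ++active;
        }
    }
    if (active == 0) return;
    for (Lane& l : lanes)
        if (!l.predicated) l.power += reclaimed / active;  // boost active lanes
}

int main() {
    std::vector<Lane> lanes(8);
    lanes[2].predicated = lanes[5].predicated = lanes[6].predicated = true;
    redistribute(lanes);
    for (std::size_t i = 0; i < lanes.size(); ++i)
        std::printf("lane %zu: %.2f\n", i, lanes[i].power);
}
```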
Abstract:
A plurality of memory blocks are connected to a computation-enabled switch that provides data paths between the plurality of memory blocks. The computation-enabled switch performs one or more computations on data stored in one or more of the plurality of memory blocks during transfer of the data along one or more of the data paths between the plurality of memory blocks.
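A sketch of the idea in software: the switch owns the data path between blocks and applies an operation to each element while it is in flight, so no separate compute pass over the destination is needed. The class, its API, and the element-wise operation are hypothetical:

```cpp
#include <cstdio>
#include <functional>
#include <vector>

using Block = std::vector<int>;

// Toy compute-enabled switch: it provides the data path between memory
// blocks and can apply an operation element-by-element during the transfer.
struct ComputeSwitch {
    // Transfer src -> dst, applying `op` to each element on the way.
    void transfer(const Block& src, Block& dst, std::function<int(int)> op) {
        dst.resize(src.size());
        for (std::size_t i = 0; i < src.size(); ++i)
            dst[i] = op(src[i]);  // computation happens during the move
    }
};

int main() {
    Block a = {1, 2, 3, 4};
    Block b;
    ComputeSwitch sw;
    sw.transfer(a, b, [](int x) { return x * x; });  // square in transit
    for (int v : b) std::printf("%d ", v);
    std::printf("\n");
}
```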
Abstract:
A compute unit configured to execute multiple threads in parallel is presented. The compute unit includes one or more single instruction multiple data (SIMD) units and a fetch and decode logic. The SIMD units have differing numbers of arithmetic logic units (ALUs), such that each SIMD unit can execute a different number of threads. The fetch and decode logic is in communication with each of the SIMD units, and is configured to assign the threads to the SIMD units for execution based on such differing numbers of ALUs.
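The abstract only says assignment is "based on" the differing ALU counts; the exact policy is left open. A sketch assuming a greedy policy that packs pending threads onto the widest units first (unit sizes and the policy are illustrative):

```cpp
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Each SIMD unit can execute as many threads per cycle as it has ALUs.
struct SimdUnit {
    int id;
    int alus;  // differing widths, per the abstract
};

// Hypothetical fetch-and-decode assignment: widest units first.
void assign_threads(std::vector<SimdUnit> units, int threads) {
    // Sort by ALU count, descending (simple selection sort for brevity).
    for (std::size_t i = 0; i < units.size(); ++i)
        for (std::size_t j = i + 1; j < units.size(); ++j)
            if (units[j].alus > units[i].alus) std::swap(units[i], units[j]);

    for (const SimdUnit& u : units) {
        if (threads <= 0) break;
        int batch = threads < u.alus ? threads : u.alus;
        std::printf("SIMD %d (%d ALUs) <- %d threads\n", u.id, u.alus, batch);
        threads -= batch;
    }
    if (threads > 0)
        std::printf("%d threads wait for the next cycle\n", threads);
}

int main() {
    // e.g. one 32-wide, one 16-wide, and one 8-wide unit, 50 pending threads
    assign_threads({{0, 32}, {1, 16}, {2, 8}}, 50);
}
```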