Abstract:
A frequency-based prediction of indirect jumps executing in a computing environment is provided. Illustratively, a computing environment comprises a prediction engine that processes data representative of indirect jumps performed by the exemplary computing environment according to a selected frequency-based prediction paradigm. Operatively, the exemplary prediction engine can keep track, in a table, of the targets taken for each indirect jump and program context (e.g., branch history and/or path information) of an exemplary computing program. Further, the prediction engine can also store a frequency counter associated with each target in the exemplary table. Illustratively, the frequency counter can record the number of times a target was taken during recent executions of one or more observed indirect jumps. The prediction engine can supply the predicted target address of an indirect jump based on the values of the frequency counters of the stored target addresses.
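To make the mechanism concrete, the following is a minimal software sketch of such a frequency-based predictor, assuming a table keyed by (jump address, program context), a per-target saturating counter, and a highest-counter-wins prediction rule; the key layout, counter width, and method names are illustrative assumptions, not the patented hardware design.

```python
from collections import defaultdict

class FrequencyBasedPredictor:
    """Sketch of a frequency-based indirect jump target predictor."""

    def __init__(self, counter_max=15):
        # (jump address, program context) -> {target address: frequency counter}
        self.table = defaultdict(dict)
        self.counter_max = counter_max  # saturating counter, e.g. 4 bits

    def predict(self, jump_pc, context):
        """Return the target with the highest frequency counter, or None."""
        targets = self.table[(jump_pc, context)]
        if not targets:
            return None  # no history yet; fall back to another predictor
        return max(targets, key=targets.get)

    def update(self, jump_pc, context, taken_target):
        """Record the target actually taken by the indirect jump."""
        targets = self.table[(jump_pc, context)]
        count = targets.get(taken_target, 0)
        if count < self.counter_max:
            targets[taken_target] = count + 1

# Example: an indirect jump at PC 0x400a10 under branch-history context 0b1011.
p = FrequencyBasedPredictor()
for target in (0x400b00, 0x400b00, 0x400c20):
    p.update(0x400a10, 0b1011, target)
assert p.predict(0x400a10, 0b1011) == 0x400b00  # most frequent target wins
```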
Abstract:
Systems and methods are provided to detect instances where dynamic predication of indirect jumps (DIP) is considered to be ineffective, utilizing data collected on the recent effectiveness of dynamic predication for recently executed indirect jump instructions. Illustratively, a computing environment comprises a DIP monitoring engine cooperating with a DIP monitoring table that aggregates and processes data representative of the effectiveness of DIP on recently executed jump instructions. The exemplary DIP monitoring engine collects and processes historical data on DIP instances, where a monitored instance can be categorized according to one or more selected classifications. A comparison can be performed for currently monitored indirect jump instructions using the collected historical data (and classifications) to determine whether DIP should be invoked by the computing environment or whether another indirect jump prediction paradigm should be invoked instead.
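As a rough illustration of how such monitoring could gate dynamic predication, the sketch below keeps a per-jump effectiveness score and classifies each completed DIP instance; the classification names ('useful', 'useless', 'harmful'), score weights, and threshold policy are assumptions made for this example, not the classifications used by the described system.

```python
from collections import defaultdict

class DIPMonitor:
    """Sketch of a monitor that decides whether DIP should be invoked."""

    def __init__(self, threshold=0):
        # jump address -> signed score of recent DIP effectiveness
        self.scores = defaultdict(int)
        self.threshold = threshold

    def record(self, jump_pc, outcome):
        """Classify a completed DIP instance and update the jump's score."""
        # Assumed classifications: 'useful' (predication paid off),
        # 'useless' (extra work, no benefit), 'harmful' (hurt performance).
        delta = {'useful': +2, 'useless': -1, 'harmful': -2}[outcome]
        self.scores[jump_pc] += delta

    def should_use_dip(self, jump_pc):
        """Invoke DIP only if its recent history looks effective;
        otherwise fall back to another prediction paradigm."""
        return self.scores[jump_pc] >= self.threshold

m = DIPMonitor()
m.record(0x401000, 'useful')
m.record(0x401000, 'harmful')
print(m.should_use_dip(0x401000))  # True: the score still meets the threshold
```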
Abstract:
Techniques for reliable communication in an on-chip network of a multi-core processor are provided. Packets are tagged with tags that define reliability requirements for the packets. The packets are routed in accordance with those reliability requirements. Together, the reliability requirements and the routing that honors them can ensure reliable communication in the on-chip network.
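The abstract leaves the tag encoding and routing policy unspecified; the following is a hedged sketch of one plausible interpretation, in which a per-packet tag selects between a reliability-preferring and a latency-preferring link choice. The tag names and link attributes are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    tag: str  # assumed tags: 'guaranteed' or 'best_effort'

def route(packet, links):
    """Pick an output link according to the packet's reliability requirement."""
    if packet.tag == 'guaranteed':
        # Reliability-critical traffic takes the most reliable link.
        link = max(links, key=lambda l: l['reliability'])
    else:
        # Best-effort traffic takes the lowest-latency link.
        link = min(links, key=lambda l: l['latency'])
    return link['name']

links = [{'name': 'north', 'reliability': 0.999, 'latency': 3},
         {'name': 'east',  'reliability': 0.990, 'latency': 1}]
print(route(Packet(b'data', 'guaranteed'), links))   # north
print(route(Packet(b'data', 'best_effort'), links))  # east
```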
Abstract:
As microprocessors incorporate more and more devices on a single chip, dedicated buses have given way to on-chip interconnection networks (“OCIN”). Routers in a bufferless OCIN as described herein rank and prioritize flits. Flits traverse a productive path towards their destination or undergo temporary deflection to other, non-productive paths, without buffering. Eliminating the buffers of on-chip routers reduces power consumption and heat dissipation while freeing up chip surface area for other uses. Furthermore, a bufferless design enables purely local flow control of data between devices in the on-chip network, reducing router complexity and enabling reductions in router latency. Such latency reductions are possible in bufferless on-chip routing by using lookahead links to send data between on-chip routers contemporaneously with flit traversals.
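The core deflection step can be sketched as follows: each cycle, flits are ranked, higher-ranked flits claim their productive output ports, and the rest are deflected to whatever ports remain free, so nothing is ever buffered. The oldest-first ranking rule and the port names are assumptions for this example, not necessarily the prioritization used by the described routers.

```python
def route_flits(flits, ports):
    """Assign every incoming flit to some output port, never buffering.

    flits: list of dicts with 'id', 'age', and 'productive' (preferred port).
    ports: output port names; a router has at least as many outputs as
           incoming flits, so every flit can be assigned somewhere.
    """
    assignments = {}
    free = set(ports)
    # Rank flits (here: oldest first) so contention resolves deterministically.
    for flit in sorted(flits, key=lambda f: -f['age']):
        if flit['productive'] in free:
            port = flit['productive']  # productive path towards the destination
        else:
            port = min(free)           # temporary deflection to a free port
        free.remove(port)
        assignments[flit['id']] = port
    return assignments

flits = [{'id': 'A', 'age': 9, 'productive': 'east'},
         {'id': 'B', 'age': 4, 'productive': 'east'}]
print(route_flits(flits, ['north', 'east', 'south', 'west']))
# Older flit A wins 'east'; B is deflected to another free port.
```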
Abstract:
A “request scheduler” provides techniques for batching and scheduling buffered thread requests for access to shared memory in a general-purpose computer system. Thread-fairness is provided while preventing short- and long-term thread starvation by using “request batching.” Batching periodically groups outstanding requests from a memory request buffer into larger units termed “batches” that have higher priority than all other buffered requests. Each “batch” may include some maximum number of requests for each bank of the shared memory and for some or all concurrent threads. Further, average thread stall times are reduced by using computed thread rankings in scheduling request servicing from the shared memory. In various embodiments, requests from higher ranked threads are prioritized over requests from lower ranked threads. In various embodiments, a parallelism-aware memory access scheduling policy improves intra-thread bank-level parallelism. Further, rank-based request scheduling may be performed with or without batching.
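A minimal sketch of the batching and ranking ideas follows, assuming a software model of the memory request buffer: batch formation marks up to a per-(bank, thread) cap of outstanding requests, and scheduling services batched requests before all others, ordered by a thread ranking supplied by the caller. The field names, the cap, and the source of the ranking are illustrative assumptions.

```python
from collections import defaultdict

def form_batch(buffer, batch_cap=4):
    """Mark at most batch_cap requests per (bank, thread) as batched."""
    counts = defaultdict(int)
    for req in buffer:
        key = (req['bank'], req['thread'])
        if counts[key] < batch_cap:
            req['batched'] = True
            counts[key] += 1

def next_request(buffer, thread_rank):
    """Service batched requests before all other requests; within each class,
    prefer requests from higher-ranked (lower rank number) threads."""
    return min(buffer,
               key=lambda r: (not r.get('batched', False),
                              thread_rank[r['thread']]))

buffer = [{'bank': 0, 'thread': 'T1'}, {'bank': 0, 'thread': 'T0'}]
form_batch(buffer)
print(next_request(buffer, {'T0': 0, 'T1': 1}))  # T0's request is served first
```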