Abstract:
A pointer points to a next-to-read location within a stack of information. For pushing information onto the stack: a value of the pointer, which points to a first location within the stack as the next-to-read location, is saved; the pointer is updated so that it points to a second location within the stack as the next-to-read location; and the information is written for storage at the second location. For popping the information from the stack: in response to the pointer, the information is read from the second location as the next-to-read location; and the pointer is restored to equal the saved value so that it points to the first location as the next-to-read location.
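A minimal C sketch of one push/pop cycle under this scheme follows. The names (`next_to_read`, `saved_pointer`, `push`, `pop`) and the single saved-value slot are illustrative assumptions; the abstract leaves open where the saved pointer value is kept, so this sketch only supports one outstanding push at a time.

```c
#include <stdio.h>

#define STACK_SIZE 16

static int stack_area[STACK_SIZE];
static int *next_to_read = &stack_area[0]; /* points to the next-to-read location */
static int *saved_pointer;                 /* value of the pointer saved on push   */

/* Push: save the pointer (first location), update it to a second location,
 * and write the information there. */
static void push(int value)
{
    saved_pointer = next_to_read;     /* first location recorded                   */
    next_to_read  = next_to_read + 1; /* second location becomes next-to-read      */
    *next_to_read = value;            /* information stored at the second location */
}

/* Pop: read from the next-to-read (second) location, then restore the pointer
 * to the saved value so the first location is next-to-read again. */
static int pop(void)
{
    int value = *next_to_read;
    next_to_read = saved_pointer;
    return value;
}

int main(void)
{
    push(42);
    printf("%d\n", pop());   /* prints 42 */
    return 0;
}
```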
Abstract:
During runtime of a binary program file, streams of instructions are executed and memory references, generated by instrumentation applied to given ones of the instructions that refer to memory locations, are collected. A transformation is performed, based on the executed streams of instructions and the collected memory references, to obtain a table. The table lists memory events of interest for active data structures for each function in the program file. The transformation is performed to translate memory addresses for given ones of the instructions and given ones of the data structures into locations and variable names in a source file corresponding to the binary file. At least the memory events of interest are displayed, and the display is organized so as to correlate the memory events of interest with corresponding ones of the data structures.
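The abstract does not specify the table format or the debug-information lookup, so the C sketch below assumes hypothetical `mem_ref_t` and `symbol_t` records and simply maps each collected memory reference to a function name and variable name by address range. It illustrates only the translation step of the transformation, not the instrumentation itself.

```c
#include <stdio.h>

/* Hypothetical record for one collected memory reference. */
typedef struct {
    unsigned long instr_addr;   /* address of the instrumented instruction   */
    unsigned long data_addr;    /* memory address the instruction referenced */
    char          access;       /* 'R' for read, 'W' for write               */
} mem_ref_t;

/* Hypothetical debug-information entry mapping an address range to names
 * in the source file corresponding to the binary. */
typedef struct {
    unsigned long start, end;   /* address range occupied by the data structure */
    const char   *var_name;     /* variable name in the source file             */
    const char   *function;     /* enclosing function in the source file        */
} symbol_t;

/* Translate one collected memory reference into one row of the table that
 * lists memory events per function and data structure. */
void emit_table_row(const mem_ref_t *ref, const symbol_t *syms, int nsyms)
{
    for (int i = 0; i < nsyms; i++) {
        if (ref->data_addr >= syms[i].start && ref->data_addr < syms[i].end) {
            printf("%-16s %-16s %c at 0x%lx\n",
                   syms[i].function, syms[i].var_name,
                   ref->access, ref->instr_addr);
            return;
        }
    }
    printf("%-16s %-16s %c at 0x%lx\n",
           "<unknown>", "<unknown>", ref->access, ref->instr_addr);
}
```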
Abstract:
A method of providing coherent shared memory access among a plurality of shared memory multiprocessor nodes. For each line of data in each of the nodes, a list of those processors of the node that have copies of the line in their caches is maintained. If a memory command is issued from a processor of one node, and if the command is directed to a line of memory of another node, then the memory command is sent directly to an adapter of the one node. When the adapter receives the command, it forwards the command to an adapter of the other node. When the other adapter receives the command, the command is forwarded to the local memory of the other node. The list of processors is then updated in the local memory of the other node to include or exclude the other adapter depending on the command. If the memory command is issued from one of the processors of one of the nodes, and if the command is directed to a line of memory of the one node, then the command is sent directly to local memory. When the local memory receives the command, if the adapter of the node is in the list of processors for the line associated with the command and the command is a write command, then the command is forwarded to the adapter of the one node. When the adapter receives the command, the command is forwarded to remote adapters in each of the remote nodes that have processors with cache copies of the line. Finally, when those remote adapters receive the command, the command is forwarded to the processors having the cache copies of the line.
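Below is a hedged C sketch of the command-routing decision only, assuming a per-line directory entry that lists local processor sharers as a bitmask and marks whether the node's adapter is in the list. The address split in `HOME_IS_LOCAL`, the hook functions, and all names are illustrative assumptions, not the patent's interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical message hooks; real hardware would route these on the fabric. */
static void send_to_local_memory(uint64_t addr, bool is_write)
{
    printf("to local memory:  %#llx %s\n",
           (unsigned long long)addr, is_write ? "write" : "read");
}
static void send_to_local_adapter(uint64_t addr, bool is_write)
{
    printf("to local adapter: %#llx %s\n",
           (unsigned long long)addr, is_write ? "write" : "read");
}

/* Directory entry kept in the node's local memory for one line of data:
 * a list (here a bitmask) of local processors with cached copies, plus the
 * node's adapter, which stands in for all remote sharers. */
typedef struct {
    uint32_t processor_sharers;
    bool     adapter_in_list;
} dir_entry_t;

/* Assumed address split: high bits select the home node. */
#define HOME_IS_LOCAL(addr)  (((addr) >> 28) == 0)

/* A processor of this node issues a memory command (sketch). */
void issue_command(uint64_t addr, bool is_write, dir_entry_t *dir)
{
    if (!HOME_IS_LOCAL(addr)) {
        /* Line belongs to another node: send the command directly to this
         * node's adapter, which forwards it to the other node's adapter and
         * on to that node's local memory. */
        send_to_local_adapter(addr, is_write);
        return;
    }
    /* Line is local: send the command directly to local memory. */
    send_to_local_memory(addr, is_write);
    if (is_write && dir->adapter_in_list) {
        /* The adapter is in the sharer list, so remote nodes hold cached
         * copies: forward the write to the adapter, which notifies the
         * remote adapters and their caching processors. */
        send_to_local_adapter(addr, is_write);
    }
}
```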
Abstract:
An incremental method is described for distributing the instructions of an execution sequence among a plurality of processing elements for execution in parallel. The distribution is based upon anticipated availability times of the needed input values for each instruction as well as the anticipated availability times of each processing element for handling each instruction. A self-parallelizing computer system and method are also described for asynchronously processing the distributed instructions in two modes of execution on a set of processing elements which communicate with each other.
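The sketch below illustrates one plausible form of the availability-based distribution in C: each instruction goes to the processing element that can start it earliest, taking the maximum of the instruction's anticipated input-availability time and the element's anticipated free time. The fixed element count, the `instr_t` fields, and the greedy rule are assumptions for illustration, not the patent's exact algorithm.

```c
#include <limits.h>

#define NUM_PES 4   /* assumed number of processing elements */

/* Hypothetical availability model for one instruction. */
typedef struct {
    int input_ready_time;  /* when the instruction's needed inputs are available */
    int latency;           /* time the instruction occupies a processing element */
} instr_t;

/* Assign the next instruction in the execution sequence to the processing
 * element that can start it earliest, then update that element's
 * anticipated availability time. Returns the chosen element index. */
int assign_pe(const instr_t *ins, int pe_free_time[NUM_PES])
{
    int best_pe = 0, best_start = INT_MAX;

    for (int pe = 0; pe < NUM_PES; pe++) {
        int start = ins->input_ready_time > pe_free_time[pe]
                  ? ins->input_ready_time : pe_free_time[pe];
        if (start < best_start) {
            best_start = start;
            best_pe = pe;
        }
    }
    pe_free_time[best_pe] = best_start + ins->latency; /* PE busy until then */
    return best_pe;
}
```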
Abstract:
A method, system and computer program product are disclosed for interfacing between a multi-threaded processing core and a hardware accelerator. In one embodiment, the method comprises copying, from the processing core to the hardware accelerator, memory address translations for each of multiple threads operating on the processing core, and simultaneously storing on the hardware accelerator one or more of the memory address translations for each of the threads. Whenever any one of the multiple threads operating on the processing core instructs the hardware accelerator to perform a specified operation, the hardware accelerator already has stored thereon one or more of the memory address translations for that thread. This facilitates starting the specified operation without memory translation faults. In an embodiment, the copying includes, each time one of the memory address translations is updated on the processing core, copying the updated translation to the hardware accelerator.
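A small C sketch of the shadow-translation idea follows, assuming a per-thread table of hypothetical `xlate_t` entries held on the accelerator and a copy hook invoked each time the core updates a translation. Entry counts, field names, and the lookup are illustrative, not the patent's mechanism.

```c
#include <stdint.h>

#define MAX_THREADS        8
#define ENTRIES_PER_THREAD 16

/* Hypothetical memory address translation entry. */
typedef struct {
    uint64_t virt_page;
    uint64_t phys_page;
    int      valid;
} xlate_t;

/* Shadow tables held on the accelerator: one per core thread, all stored
 * simultaneously so any thread can start an operation without faulting. */
static xlate_t accel_tlb[MAX_THREADS][ENTRIES_PER_THREAD];

/* Called each time the core updates a translation for a thread: the updated
 * entry is copied down to the accelerator's shadow table. */
void copy_translation_to_accel(int thread, int slot, xlate_t updated)
{
    accel_tlb[thread][slot] = updated;
}

/* When a thread dispatches an operation, the accelerator already holds that
 * thread's translations and can translate addresses locally. */
uint64_t accel_translate(int thread, uint64_t vaddr, uint64_t page_size)
{
    uint64_t vpage = vaddr / page_size;
    for (int i = 0; i < ENTRIES_PER_THREAD; i++)
        if (accel_tlb[thread][i].valid &&
            accel_tlb[thread][i].virt_page == vpage)
            return accel_tlb[thread][i].phys_page * page_size
                 + vaddr % page_size;
    return (uint64_t)-1;   /* miss: would need fault handling on the core */
}
```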
Abstract:
A method, system and computer program product are disclosed for maintaining data coherence, for use in a multi-node processing system where each of the nodes includes one or more components. In one embodiment, the method comprises establishing a data domain, assigning a group of the components to the data domain, sending a coherence message from a first component of the processing system to a second component of the processing system, and determining if that second component is assigned to the data domain. In this embodiment, if that second component is assigned to the data domain, the coherence message is transferred to all of the components assigned to the data domain to maintain data coherency among those components. In an embodiment, if that second component is assigned to the data domain, the first component is assigned to the data domain.
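The following C sketch models a data domain as a bitmask of member components and shows the message-handling rule described above: if the destination component is assigned to the domain, the message fans out to every member, and (per the described embodiment) the sending component is added to the domain. The delivery hook and bitmask layout are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_COMPONENTS 32

/* Hypothetical domain descriptor: bit i set means component i is a member. */
typedef struct {
    unsigned int members;
} data_domain_t;

/* Hypothetical delivery hook; real hardware would route this on the fabric. */
static void deliver(int component, int msg)
{
    printf("component %d receives coherence message %d\n", component, msg);
}

static bool in_domain(const data_domain_t *d, int component)
{
    return (d->members >> component) & 1u;
}

/* Send a coherence message from `src` to `dst`: if `dst` is assigned to the
 * domain, fan the message out to every member to maintain coherency, and
 * assign `src` to the domain as well. */
void send_coherence_message(data_domain_t *d, int src, int dst, int msg)
{
    if (!in_domain(d, dst))
        return;                         /* destination outside the domain */

    for (int c = 0; c < MAX_COMPONENTS; c++)
        if (in_domain(d, c))
            deliver(c, msg);

    d->members |= 1u << src;            /* first component joins the domain */
}
```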
Abstract:
A system, method, and computer program product for enhancing timeliness of cache memory prefetching in a processing system are provided. The system includes a stride pattern detector to detect a stride pattern, where the stride size is the difference in bytes between successive cache accesses. The system also includes a confidence counter. The system further includes eager prefetching control logic for performing a method when the stride size is less than a cache line size. The method includes adjusting the confidence counter in response to the stride pattern detector detecting the stride pattern, comparing the confidence counter to a confidence threshold, and requesting a cache prefetch in response to the confidence counter reaching the confidence threshold. The system may also include selection logic to select between the eager prefetching control logic and standard stride prefetching control logic.
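A C sketch of the eager-prefetch control path is given below, assuming a 64-byte line, a confidence threshold of 4, and a stub prefetch hook. Real hardware would track this per access stream and combine it with the standard stride prefetcher via the selection logic, which is omitted here.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_SIZE      64   /* assumed line size in bytes    */
#define CONFIDENCE_THRESHOLD 4    /* assumed confidence threshold  */

static uint64_t last_addr;
static int64_t  last_stride;
static int      confidence;       /* the confidence counter */

/* Hypothetical hook that issues the actual cache prefetch request. */
static void request_prefetch(uint64_t addr)
{
    printf("prefetch line at %#llx\n", (unsigned long long)addr);
}

/* Called on each cache access: when the same sub-line stride repeats,
 * adjust the confidence counter; once it reaches the threshold, eagerly
 * request a prefetch of the next cache line. */
void on_cache_access(uint64_t addr)
{
    int64_t stride = (int64_t)(addr - last_addr);

    if (stride > 0 && stride < CACHE_LINE_SIZE && stride == last_stride) {
        if (++confidence >= CONFIDENCE_THRESHOLD) {
            request_prefetch((addr / CACHE_LINE_SIZE + 1) * CACHE_LINE_SIZE);
            confidence = 0;
        }
    } else {
        confidence = 0;           /* pattern broken: reset confidence */
    }
    last_stride = stride;
    last_addr   = addr;
}
```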
Abstract:
A method and apparatus for creating a compressed trace for a program, wherein events are compressed separately to provide improved compression and tracing. A sequence of events for a program is selected, and a sequence of values is then determined for each of the selected events occurring during an execution of the program. Each sequence of values is then compressed to generate a compressed sequence of values for each event. These values are then ordered in accordance with information stored in selected events (such as, for example, branch events), where the ordered values correspond to the trace.
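The abstract does not name the compressor, so the C sketch below uses run-length encoding as a stand-in: each selected event keeps its own value sequence recorded during execution, and each sequence is compressed independently. The structure names and the bound on recorded values are assumptions.

```c
#define MAX_VALUES 1024

/* Hypothetical per-event value stream: each selected event (e.g. a branch)
 * gets its own sequence of observed values, compressed independently. */
typedef struct {
    long values[MAX_VALUES];
    int  count;
} event_stream_t;

/* Record one observed value for an event during execution of the program. */
void record_value(event_stream_t *s, long value)
{
    if (s->count < MAX_VALUES)
        s->values[s->count++] = value;
}

/* Compress one event's value sequence with simple run-length encoding;
 * returns the number of (value, run-length) pairs written. */
int compress_stream(const event_stream_t *s, long out_vals[], int out_runs[])
{
    int nruns = 0;
    for (int i = 0; i < s->count; ) {
        int j = i;
        while (j < s->count && s->values[j] == s->values[i])
            j++;
        out_vals[nruns] = s->values[i];  /* repeated value        */
        out_runs[nruns] = j - i;         /* length of the run     */
        nruns++;
        i = j;
    }
    return nruns;
}
```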
Abstract:
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
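A C sketch of the interaction pattern follows, with hypothetical stub hooks standing in for the parallel system's side: the job checks for a reconfiguration request only at the end of each iteration, the kind of well-defined point where repartitioning processors and redistributing data can be performed efficiently. All function names and the initial partition size are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical system-side hooks; a real runtime would implement these. */
static bool system_requests_reconfiguration(void) { return rand() % 8 == 0; }
static int  system_new_processor_count(void)      { return 2 + rand() % 7; }
static void redistribute_data(int n)   { printf("redistribute to %d procs\n", n); }

/* The executing job interacts with the parallel system only at specific
 * points in its execution (here, the end of each iteration). */
void parallel_job_main_loop(int iterations)
{
    int nprocs = 4;                              /* assumed initial partition */

    for (int it = 0; it < iterations; it++) {
        /* ... one iteration of the parallel computation on nprocs ... */

        if (system_requests_reconfiguration()) { /* well-defined interaction point */
            nprocs = system_new_processor_count();
            redistribute_data(nprocs);           /* move data to the new partition */
        }
    }
}
```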