Abstract:
In response to receiving pre-processed code, a compiler identifies a code section that is not a candidate for acceleration and a code block that is a candidate for acceleration. The code block specifies an iterated operation having a first operand and a second operand, where each of multiple first operands and each of multiple second operands for the iterated operation has a defined addressing relationship. In response to the identifying, the compiler generates post-processed code containing lower level instruction(s) corresponding to the identified code section and creates and outputs an operand data structure separate from the post-processed code. The operand data structure specifies the defined addressing relationship for the multiple first operands and for the multiple second operands. The compiler places a block computation command in the post-processed code that invokes processing of the operand data structure to compute operand addresses.
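A minimal C sketch of how such an operand data structure might encode a defined addressing relationship (the base/stride/count fields and the block_compute_addresses routine are illustrative assumptions, not the claimed format):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout for one entry of the operand data structure:
 * each operand stream is described by a base address, a fixed stride
 * between successive accesses, and an iteration count. */
typedef struct {
    uintptr_t base;    /* address of the first operand */
    ptrdiff_t stride;  /* defined addressing relationship between operands */
    size_t    count;   /* number of iterations of the operation */
} operand_desc;

/* Sketch of what a block computation command's address generation could
 * do: walk the descriptors and derive every operand address, with no
 * per-iteration address arithmetic in the post-processed code itself. */
static void block_compute_addresses(const operand_desc *a,
                                    const operand_desc *b) {
    for (size_t i = 0; i < a->count; i++) {
        uintptr_t addr_a = a->base + (uintptr_t)(i * a->stride);
        uintptr_t addr_b = b->base + (uintptr_t)(i * b->stride);
        printf("iter %zu: op1 @ %#lx, op2 @ %#lx\n",
               i, (unsigned long)addr_a, (unsigned long)addr_b);
    }
}

int main(void) {
    double x[8], y[8];
    operand_desc op1 = { (uintptr_t)x, sizeof(double), 8 };
    operand_desc op2 = { (uintptr_t)y, sizeof(double), 8 };
    block_compute_addresses(&op1, &op2);
    return 0;
}
```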
Abstract:
A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, and an instruction sequencing unit that fetches instructions for execution by the execution unit. The processor further includes an operand data structure and an address generation accelerator. The operand data structure specifies a first relationship between addresses of sequential accesses within a first address region and a second relationship between addresses of sequential accesses within a second address region. The address generation accelerator computes a first address of a first memory access in the first address region by reference to the first relationship and a second address of a second memory access in the second address region by reference to the second relationship.
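One plausible reading of the per-region relationships, sketched in C (the region_entry layout and the next_access helper are hypothetical; the actual accelerator is hardware):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-region entry: sequential accesses within the region
 * are related by a fixed stride recorded for that region. */
typedef struct {
    uintptr_t start, end;   /* bounds of the address region */
    ptrdiff_t stride;       /* relationship between sequential accesses */
    uintptr_t last;         /* most recent access in the region */
} region_entry;

/* Sketch of the accelerator's job: given the region a memory access
 * falls in, compute the next address from the recorded relationship
 * instead of executing address-arithmetic instructions. */
static uintptr_t next_access(region_entry *r) {
    r->last += (uintptr_t)r->stride;
    return r->last;
}

int main(void) {
    region_entry r1 = { 0x1000, 0x2000, 8,  0x1000 };  /* e.g. doubles     */
    region_entry r2 = { 0x8000, 0x9000, 64, 0x8000 };  /* e.g. cache lines */
    printf("region 1 next: %#lx\n", (unsigned long)next_access(&r1));
    printf("region 2 next: %#lx\n", (unsigned long)next_access(&r2));
    return 0;
}
```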
Abstract:
An accounting method and logic for determining per-thread processor resource utilization in a simultaneous multi-threaded (SMT) processor provides a mechanism for accounting for processor resource usage by programs and by threads within programs. Relative resource use is determined by detecting instruction dispatches for multiple threads active within the processor, which may include idle threads that still occupy processor resources. If instructions are dispatched for all threads or for no threads, the processor cycle is accounted equally to all threads. Alternatively, if no threads are in a dispatch state, the accounting may be made using a prior state, or in conformity with the ratios of the threads' priority levels. If only one thread is dispatching, that thread is accounted the entire processor cycle. If multiple threads are dispatching, but fewer than all threads (in processors supporting more than two threads), the processor cycle is billed evenly across the dispatching threads. Multiple dispatches may also be detected for the threads, a fractional resource usage determined for each thread, and the counters updated in accordance with that fractional usage.
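The per-cycle billing rules lend themselves to a small C sketch (the thread count, the account_cycle helper, and the floating-point counters are assumptions for illustration; the prior-state and priority-ratio alternatives are omitted):

```c
#include <stdio.h>

#define NTHREADS 4

/* Per-cycle accounting rule from the abstract, sketched in C:
 * dispatch[i] != 0 means thread i dispatched at least one
 * instruction this cycle.  Counters hold fractional cycles. */
static void account_cycle(const int dispatch[NTHREADS],
                          double counter[NTHREADS]) {
    int dispatching = 0;
    for (int i = 0; i < NTHREADS; i++)
        dispatching += (dispatch[i] != 0);

    if (dispatching == 0 || dispatching == NTHREADS) {
        /* all threads or no threads dispatched: charge the cycle equally */
        for (int i = 0; i < NTHREADS; i++)
            counter[i] += 1.0 / NTHREADS;
    } else {
        /* bill the cycle evenly across only the dispatching threads;
         * the single-dispatcher case falls out as 1.0 / 1 */
        for (int i = 0; i < NTHREADS; i++)
            if (dispatch[i])
                counter[i] += 1.0 / dispatching;
    }
}

int main(void) {
    double counter[NTHREADS] = {0};
    int cycle1[NTHREADS] = {1, 0, 0, 0};  /* one thread dispatching  */
    int cycle2[NTHREADS] = {1, 1, 0, 0};  /* two of four dispatching */
    int cycle3[NTHREADS] = {0, 0, 0, 0};  /* none dispatching        */
    account_cycle(cycle1, counter);
    account_cycle(cycle2, counter);
    account_cycle(cycle3, counter);
    for (int i = 0; i < NTHREADS; i++)
        printf("thread %d: %.2f cycles\n", i, counter[i]);
    return 0;
}
```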
Abstract:
Mechanisms are provided for offloading a workload from a main thread to an assist thread. The mechanisms receive, in a fetch unit of a processor of a data processing system, a branch-to-assist-thread instruction of a main thread. The branch-to-assist-thread instruction informs the processor hardware to look for an already spawned idle thread to be used as an assist thread. Hardware-implemented pervasive thread control logic determines whether one or more already spawned idle threads are available for use as an assist thread. If so, the logic selects an idle thread from among them, thereby providing the assist thread, and offloads a portion of the main thread's workload to it.
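A hedged C sketch of the selection step only (select_assist_thread and the thread-state table are invented names; in the abstract this logic is implemented in hardware):

```c
#include <stdio.h>

#define NTHREADS 8

typedef enum { THREAD_IDLE, THREAD_BUSY } thread_state;

/* Sketch of the selection step the abstract attributes to the
 * pervasive thread control logic: on a branch-to-assist-thread
 * event, find an already spawned idle thread. */
static int select_assist_thread(const thread_state state[NTHREADS]) {
    for (int tid = 0; tid < NTHREADS; tid++)
        if (state[tid] == THREAD_IDLE)
            return tid;   /* idle thread becomes the assist thread */
    return -1;            /* no idle thread: main thread keeps the work */
}

int main(void) {
    thread_state state[NTHREADS] = {
        THREAD_BUSY, THREAD_BUSY, THREAD_IDLE, THREAD_BUSY,
        THREAD_IDLE, THREAD_BUSY, THREAD_BUSY, THREAD_BUSY
    };
    int assist = select_assist_thread(state);
    if (assist >= 0)
        printf("offloading workload portion to assist thread %d\n", assist);
    else
        printf("no idle thread available; running inline\n");
    return 0;
}
```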
Abstract:
A processor has an associated memory hierarchy including a cache memory. The processor includes an instruction sequencing unit that fetches instructions for processing, an operand data structure including a plurality of entries corresponding to operands of operations to be performed by the processor, and a computation engine. A first entry among the plurality of entries in the operand data structure specifies a first caching policy for a first operand, and a second entry specifies a second caching policy for a second operand. The computation engine computes and stores operands in the memory hierarchy in accordance with the caching policies indicated within the operand data structure.
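A small C sketch of how per-operand caching policies might be represented (the operand_entry fields and policy names are illustrative assumptions; the print statements stand in for hardware cache fills):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-operand entry: each operand carries its own
 * caching policy, which the computation engine honors on writeback. */
typedef enum { CACHE_L1, CACHE_L2_ONLY, CACHE_BYPASS } cache_policy;

typedef struct {
    uintptr_t    addr;    /* where the computed operand is stored */
    cache_policy policy;  /* caching policy for this operand */
} operand_entry;

/* Sketch of the computation engine's store path: the policy in the
 * operand data structure, not the instruction stream, decides which
 * level of the hierarchy receives the result. */
static void store_operand(const operand_entry *e, double value) {
    switch (e->policy) {
    case CACHE_L1:
        printf("store %g to %#lx, install in L1\n",
               value, (unsigned long)e->addr);
        break;
    case CACHE_L2_ONLY:
        printf("store %g to %#lx, install in L2, bypass L1\n",
               value, (unsigned long)e->addr);
        break;
    case CACHE_BYPASS:
        printf("store %g to %#lx, write to memory, no cache fill\n",
               value, (unsigned long)e->addr);
        break;
    }
}

int main(void) {
    operand_entry first  = { 0x1000, CACHE_L1 };
    operand_entry second = { 0x2000, CACHE_BYPASS };
    store_operand(&first, 1.5);
    store_operand(&second, 2.5);
    return 0;
}
```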
Abstract:
A compiler is designed, in response to receiving pre-processed code, to identify a code section that is not a candidate for acceleration and a code block specifying an iterated operation that is a candidate for acceleration. In response to identifying the code section, the compiler generates post-processed code containing one or more lower level instructions corresponding to the identified code section, and in response to identifying the code block, the compiler creates and outputs an operation data structure, separate from the post-processed code, that identifies the iterated operation. The compiler places a block computation command in the post-processed code that invokes processing of the operation data structure to perform the iterated operation, and outputs the post-processed code.
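A minimal C sketch of the division of labor this describes (op_entry and block_compute are hypothetical names; the abstract's mechanism is a compiler/hardware contract, modeled here in software):

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical operation data structure entry: identifies the iterated
 * operation the compiler factored out of the post-processed code. */
typedef enum { OP_ADD, OP_MUL } op_kind;

typedef struct {
    op_kind kind;     /* the iterated operation itself */
    double *dst, *src1, *src2;
    size_t  count;    /* number of iterations */
} op_entry;

/* Sketch of what executing the block computation command could mean:
 * the command refers to an entry in the operation data structure, and
 * the engine performs the whole iterated operation from that entry. */
static void block_compute(const op_entry *op) {
    for (size_t i = 0; i < op->count; i++)
        op->dst[i] = (op->kind == OP_ADD) ? op->src1[i] + op->src2[i]
                                          : op->src1[i] * op->src2[i];
}

int main(void) {
    double a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    op_entry entry = { OP_ADD, c, a, b, 4 };
    block_compute(&entry);   /* what a block computation command invokes */
    for (int i = 0; i < 4; i++)
        printf("%g ", c[i]);
    printf("\n");
    return 0;
}
```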
Abstract:
A system, method, and computer readable medium for inserting data into a cache memory based on information in a semi-synchronous memory copy instruction are disclosed. The method comprises determining a start of a semi-synchronous memory copy operation. The semi-synchronous memory copy operation is checked for a given value in at least one cache injection bit. In response to the given value in the cache injection bit, a predefined number of lines of destination data is copied into at least one level of cache memory.
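A C sketch of the described behavior under stated assumptions (the memcpy_desc encoding, LINE_SIZE, and the printed "injection" are illustrative; real cache injection happens in hardware):

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define LINE_SIZE 128  /* illustrative cache line size */

/* Hypothetical encoding of the copy instruction's extra fields:
 * an injection bit plus the number of destination lines to inject. */
typedef struct {
    void       *dst;
    const void *src;
    size_t      bytes;
    int         inject;        /* cache injection bit */
    size_t      inject_lines;  /* predefined number of lines to inject */
} memcpy_desc;

/* Sketch of the described behavior: perform the copy, and if the
 * injection bit holds the given value, install the first few
 * destination lines in cache (modeled here as a print). */
static void semi_sync_copy(const memcpy_desc *d) {
    memcpy(d->dst, d->src, d->bytes);
    if (d->inject) {
        size_t lines = (d->bytes + LINE_SIZE - 1) / LINE_SIZE;
        if (lines > d->inject_lines) lines = d->inject_lines;
        for (size_t i = 0; i < lines; i++)
            printf("inject dst line %zu (@%p) into cache\n",
                   i, (void *)((char *)d->dst + i * LINE_SIZE));
    }
}

int main(void) {
    static char src[1024], dst[1024];
    memcpy_desc d = { dst, src, sizeof dst, 1, 4 };
    semi_sync_copy(&d);
    return 0;
}
```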