Abstract:
A computer implemented method, data processing system, and computer program product for nominating rules or policies for promotion through a policy hierarchy. An administrator at any level in a policy hierarchy may create a rule or policy. The administrator may then nominate the rule or policy for inclusion in a next higher level in the policy hierarchy. The rule or policy is evaluated at the next higher level. Responsive to an approval by the next higher level to include the rule or policy in its jurisdiction, the rule or policy is provided to all users under that jurisdiction. The nominating, evaluating, and providing steps may then be repeated for each higher level in the policy hierarchy.
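For illustration, a minimal C sketch of the nominate/evaluate/provide loop described above follows. The types and function names (Policy, Level, evaluate, nominate) are assumptions made for the example; the abstract does not define an interface.

```c
/* Minimal sketch of nominating a policy up a hierarchy.
 * All type and function names are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { const char *name; } Policy;
typedef struct Level {
    const char *jurisdiction;
    struct Level *parent;              /* next higher level, NULL at the top */
    bool (*evaluate)(const Policy *);  /* approval decision at this level */
} Level;

static bool approve_all(const Policy *p) { (void)p; return true; }

/* Repeat nominate -> evaluate -> provide for each higher level. */
static void nominate(Policy *p, Level *origin) {
    for (Level *lvl = origin->parent; lvl != NULL; lvl = lvl->parent) {
        if (!lvl->evaluate(p))
            break;                                  /* rejected: stop promoting */
        printf("policy '%s' now applies to all users under '%s'\n",
               p->name, lvl->jurisdiction);
    }
}

int main(void) {
    Level corporate = { "corporate", NULL, approve_all };
    Level division  = { "division", &corporate, approve_all };
    Level team      = { "team", &division, approve_all };
    Policy p = { "require-vpn" };
    nominate(&p, &team);                /* policy created at the lowest level */
    return 0;
}
```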
Abstract:
In a data processing system capable of concurrently executing multiple hardware threads of execution, an intermediate address translation unit in a processing unit translates an effective address for a memory access into an intermediate address. A cache memory is accessed utilizing the intermediate address. In response to a miss in cache memory, the intermediate address is translated into a real address by a real address translation unit that performs address translation for multiple hardware threads of execution. The system memory is accessed with the real address.
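A minimal C sketch of the two-stage translation path follows: the per-thread effective-to-intermediate step, a cache lookup keyed by the intermediate address, and the shared intermediate-to-real step taken only on a miss. The table layouts and toy mappings are assumptions for illustration.

```c
/* Sketch of effective -> intermediate -> real translation with a cache
 * indexed by the intermediate address. Mappings are placeholders. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINES 256
#define LINE_SHIFT  6

typedef struct { bool valid; uint64_t tag; uint64_t data; } CacheLine;
static CacheLine cache[CACHE_LINES];

/* Per-thread intermediate translation (no shared, slow page walk). */
static uint64_t to_intermediate(int thread_id, uint64_t effective) {
    return ((uint64_t)thread_id << 48) | effective;     /* toy mapping */
}

/* Shared real-address translation unit serving all hardware threads. */
static uint64_t to_real(uint64_t intermediate) {
    return intermediate & 0x0000FFFFFFFFFFFFull;        /* toy mapping */
}

static uint64_t system_memory_read(uint64_t real) {
    return real ^ 0xDEADBEEF;                           /* stand-in for DRAM */
}

static uint64_t load(int thread_id, uint64_t effective) {
    uint64_t ia  = to_intermediate(thread_id, effective);
    uint64_t idx = (ia >> LINE_SHIFT) % CACHE_LINES;
    if (cache[idx].valid && cache[idx].tag == ia)       /* hit: no real      */
        return cache[idx].data;                         /* translation needed */
    uint64_t real = to_real(ia);                        /* miss: translate    */
    uint64_t data = system_memory_read(real);
    cache[idx] = (CacheLine){ true, ia, data };
    return data;
}

int main(void) {
    printf("%llx\n", (unsigned long long)load(0, 0x1000));   /* miss path */
    printf("%llx\n", (unsigned long long)load(0, 0x1000));   /* hit path  */
    return 0;
}
```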
Abstract:
A method and computer system for efficiently handling high contention locking in a multiprocessor computer system. At least some of the processors in the system are organized into a hierarchy, and an interruptible lock is processed in accordance with the hierarchy. The method utilizes two alternative primitives for acquiring the lock, a conditional lock acquisition primitive and an unconditional lock acquisition primitive, together with an unconditional lock release primitive for releasing the lock from a particular processor. To prevent races between processors requesting a lock acquisition and a processor releasing the lock, a release flag is utilized. Furthermore, in order to ensure that a processor utilizing the unconditional lock acquisition primitive is granted the lock, a handoff flag is utilized.
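A minimal C sketch of the two acquisition primitives and the release flag follows, built on C11 atomics. The field names and the exact flag protocol are assumptions; the real design also tracks hierarchy position and allows interrupts while waiting.

```c
/* Sketch: conditional and unconditional acquisition plus a release flag
 * that keeps waiting requesters from racing with a release in progress. */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int  owner;         /* -1 when free, else the holder's CPU id  */
    atomic_bool release_flag;  /* raised by the releaser so waiters do    */
                               /* not race with the release               */
} hlock_t;

static bool hlock_try_acquire(hlock_t *l, int cpu) {    /* conditional */
    int expected = -1;
    return atomic_compare_exchange_strong(&l->owner, &expected, cpu);
}

static void hlock_acquire(hlock_t *l, int cpu) {        /* unconditional */
    while (!hlock_try_acquire(l, cpu)) {
        /* spin until the holder announces a release; the flag closes the
         * window where owner is still set but a release is in progress */
        while (atomic_load(&l->owner) != -1 && !atomic_load(&l->release_flag))
            ;   /* the real design would also poll for interrupts here */
    }
}

static void hlock_release(hlock_t *l) {
    atomic_store(&l->release_flag, true);   /* announce the release first */
    atomic_store(&l->owner, -1);            /* then actually free the lock */
    atomic_store(&l->release_flag, false);
}

int main(void) {
    hlock_t lock;
    atomic_init(&lock.owner, -1);
    atomic_init(&lock.release_flag, false);
    hlock_acquire(&lock, 0);                       /* unconditional path */
    hlock_release(&lock);
    return hlock_try_acquire(&lock, 1) ? 0 : 1;    /* conditional path   */
}
```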
Abstract:
A method and computer system for efficiently handling high contention locking in a multiprocessor computer system. The method organizes at least some of the processors in the system into a hierarchy, and processes an interruptible lock in accordance with the hierarchy. The method utilizes two alternative primitives for acquiring the lock, a conditional lock acquisition primitive and an unconditional lock acquisition primitive, together with an unconditional lock release primitive for releasing the lock from a particular processor. In order to prevent races between processors requesting a lock acquisition and a processor releasing the lock, a release flag is utilized. Furthermore, in order to ensure that a processor utilizing the unconditional lock acquisition primitive is granted the lock, a handoff flag is utilized. Accordingly, the efficiency of a computer system may be enhanced by a locking primitive for an interruptible lock that determines lock selection among processors based upon both the hierarchical position of a processor and the primitive it utilized to request the lock.
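The handoff flag can be sketched in the release path as below: an unconditional requester registers itself as a waiter, and the releasing processor passes ownership directly to that waiter instead of freeing the lock, so a conditional (try) requester cannot overtake it. The single-waiter slot and field names are simplifying assumptions; the real design would pick among waiters by hierarchy position.

```c
/* Sketch of the handoff flag: the lock is handed directly to a
 * registered unconditional requester rather than being freed. */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int  owner;     /* -1 when free, else holder's CPU id           */
    atomic_int  waiter;    /* registered unconditional requester, or -1    */
    atomic_bool handoff;   /* set when ownership was handed to the waiter  */
} hlock_t;

static bool hlock_try_acquire(hlock_t *l, int cpu) {     /* conditional */
    int expected = -1;
    return atomic_compare_exchange_strong(&l->owner, &expected, cpu);
}

static void hlock_acquire(hlock_t *l, int cpu) {          /* unconditional */
    int none = -1;
    /* register as the waiter entitled to a handoff (one slot for brevity) */
    while (!atomic_compare_exchange_strong(&l->waiter, &none, cpu))
        none = -1;                                         /* slot busy: retry */
    for (;;) {
        if (atomic_load(&l->handoff) &&
            atomic_load(&l->owner) == cpu) {               /* lock handed to us */
            atomic_store(&l->handoff, false);
            atomic_store(&l->waiter, -1);
            return;
        }
        if (hlock_try_acquire(l, cpu)) {                   /* lock was simply free */
            atomic_store(&l->waiter, -1);
            return;
        }
    }
}

static void hlock_release(hlock_t *l) {
    int w = atomic_load(&l->waiter);
    if (w >= 0) {                        /* hand off: owner never appears   */
        atomic_store(&l->owner, w);      /* free, so a try_acquire cannot   */
        atomic_store(&l->handoff, true); /* steal the lock from the waiter  */
    } else {
        atomic_store(&l->owner, -1);     /* no waiter: free the lock */
    }
}

int main(void) {
    hlock_t lock;
    atomic_init(&lock.owner, -1);
    atomic_init(&lock.waiter, -1);
    atomic_init(&lock.handoff, false);
    hlock_acquire(&lock, 0);
    hlock_release(&lock);
    return 0;
}
```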
Abstract:
A method and an apparatus for performing just-in-time data prefetching within a data processing system comprising a processor, a cache or prefetch buffer, and at least one memory storage device. The apparatus comprises a prefetch engine having means for issuing a data prefetch request for prefetching a data cache line from the memory storage device for utilization by the processor. The apparatus further comprises logic for dynamically adjusting a prefetch distance between issuance by the prefetch engine of the data prefetch request and issuance by the processor of a demand (load request) targeting the data cache line being returned by the data prefetch request, so that the next data prefetch request for a subsequent cache line completes the return of that cache line at effectively the same time that a demand for that subsequent cache line is issued by the processor.
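A minimal sketch of the distance adjustment follows: the distance grows when the demand load beats the prefetch, and shrinks when prefetched lines sit idle for a long time before use. The thresholds and the names (prefetch_distance, adjust_distance) are illustrative assumptions.

```c
/* Sketch of dynamic prefetch-distance adjustment toward just-in-time
 * delivery of prefetched cache lines. Thresholds are placeholders. */
#include <stdio.h>

static int prefetch_distance = 4;    /* lines ahead of the demand stream */

/* late        = prefetched data returned after the demand load was issued
 * idle_cycles = cycles the prefetched data waited before being used     */
static void adjust_distance(int late, int idle_cycles) {
    if (late)
        prefetch_distance++;                 /* not just-in-time: run further ahead */
    else if (idle_cycles > 100 && prefetch_distance > 1)
        prefetch_distance--;                 /* far too early: pull back */
}

int main(void) {
    adjust_distance(1, 0);      /* demand beat the prefetch */
    adjust_distance(0, 250);    /* prefetch finished long before the demand */
    printf("distance = %d lines\n", prefetch_distance);
    return 0;
}
```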
Abstract:
A method and an apparatus for enabling a prefetch engine to detect and support hardware prefetching across different access streams within received accesses. Multiple (simple) history tables are provided within (or associated with) the prefetch engine. Each of the multiple tables is utilized to detect a different access pattern. The tables are indexed by different parts of the address and are accessed in a preset order to reduce the interference between different patterns. When an address does not fit the patterns of a first table, the address is passed to the next table to be checked for a match of different patterns. In this manner, different patterns may be detected at different tables within a single prefetch engine.
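A minimal sketch of consulting several history tables in a preset order follows; each table is indexed by a different slice of the address, and an address that does not match the first table's pattern falls through to the next. The table sizes, index bits, and the simple stride heuristic are assumptions for illustration.

```c
/* Sketch of multiple stride-history tables checked in a preset order. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 64

typedef struct {
    int      index_shift;                 /* which address bits index this table */
    uint64_t last_addr[TABLE_SIZE];
    int64_t  last_stride[TABLE_SIZE];
} HistoryTable;

static bool table_check(HistoryTable *t, uint64_t addr, uint64_t *prefetch) {
    unsigned idx    = (addr >> t->index_shift) % TABLE_SIZE;
    int64_t  stride = (int64_t)(addr - t->last_addr[idx]);
    bool match = (t->last_addr[idx] != 0 && stride == t->last_stride[idx]);
    t->last_stride[idx] = stride;
    t->last_addr[idx]   = addr;
    if (match)
        *prefetch = addr + (uint64_t)stride;   /* pattern detected: next line */
    return match;
}

int main(void) {
    HistoryTable tables[2] = { { .index_shift = 12 }, { .index_shift = 16 } };
    uint64_t addrs[] = { 0x1000, 0x1040, 0x1080, 0x10C0 }, pf;
    for (int i = 0; i < 4; i++)
        for (int t = 0; t < 2; t++)            /* preset order: table 0 first */
            if (table_check(&tables[t], addrs[i], &pf)) {
                printf("prefetch 0x%llx (table %d)\n", (unsigned long long)pf, t);
                break;                          /* matched: skip later tables */
            }
    return 0;
}
```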
Abstract:
A method of assigning virtual memory to physical memory in a data processing system allocates a set of contiguous physical memory pages for a new page mapping, instructs the memory controller to move the virtual memory pages according to the new page mapping, and then allows access to the virtual memory pages using the new page mapping while the memory controller is still copying the virtual memory pages to the set of physical memory pages. The memory controller can use a mapping table which temporarily stores entries of the old and new page addresses, and releases the entries as copying for each entry is completed. The translation lookaside buffer (TLB) entries in the processor cores are updated for the new page addresses prior to completion of copying of the memory pages by the memory controller. The invention can be extended to non-uniform memory access (NUMA) systems. For systems with cache memory, any cache entry which is affected by the page move can be updated by modifying its address tag according to the new page mapping. This tag modification may be limited to cache entries in a dirty coherency state. The cache can further relocate a cache entry based on a changed congruence class for any modified address tag.
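One way the mapping table could be consulted during an in-flight move is sketched below: accesses that arrive with the new page address are redirected to the old physical page until the copy completes, after which the entry is released. The structures and the redirection policy are assumptions for illustration; the abstract does not specify how a pending entry is resolved.

```c
/* Sketch of a memory-controller mapping table for pages whose copy to
 * the new physical location has not yet completed. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096
#define MAP_SLOTS 8

typedef struct {
    bool     active;          /* copy for this entry still in progress */
    uint64_t old_phys;
    uint64_t new_phys;
} MoveEntry;

static MoveEntry move_table[MAP_SLOTS];

/* TLBs already point at new_phys; the controller redirects any access
 * that lands on a page whose copy has not finished yet. */
static uint64_t resolve(uint64_t phys_addr) {
    for (int i = 0; i < MAP_SLOTS; i++)
        if (move_table[i].active &&
            (phys_addr & ~(uint64_t)(PAGE_SIZE - 1)) == move_table[i].new_phys)
            return move_table[i].old_phys + (phys_addr & (PAGE_SIZE - 1));
    return phys_addr;                 /* copy done (or never pending) */
}

static void start_move(int slot, uint64_t old_phys, uint64_t new_phys) {
    move_table[slot] = (MoveEntry){ true, old_phys, new_phys };
}

static void copy_complete(int slot) {
    move_table[slot].active = false;  /* release the entry */
}

int main(void) {
    start_move(0, 0x200000, 0x800000);
    printf("during copy: 0x%llx\n", (unsigned long long)resolve(0x800010));
    copy_complete(0);
    printf("after copy:  0x%llx\n", (unsigned long long)resolve(0x800010));
    return 0;
}
```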
Abstract:
A method and memory controller for adaptive row management within a memory subsystem provide metrics for evaluating row access behavior and dynamically adjust the row management policy of the memory subsystem in conformity with the measured metrics to reduce the average latency of the memory subsystem. Counters provided within the memory controller track the number of consecutive row accesses and, optionally, the number of total accesses over a measurement interval. The number of counted consecutive row accesses can be used to control the closing of rows for subsequent accesses, reducing memory latency. The count may be validated using a second counter or storage for improved accuracy; alternatively, the row close count may be set via program or logic control in conformity with a count of consecutive row hits as a ratio of a total access count. The control of row closure may be performed by a mode selection between always closing a row (non-page mode) and always holding a row open (page mode), or by intelligently closing rows after a count interval (row hold count) determined from the consecutive row access measurements. The logic and counters may be incorporated within the memory controller or within the memory devices, and the controller/memory devices may provide I/O ports or memory locations for reading the count values and/or setting a row management mode or row hold count.
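A minimal sketch of deriving the policy from the counters follows: the ratio of row hits to total accesses over an interval selects page mode, non-page mode, or a row hold count taken from the longest run of consecutive row hits. The thresholds are illustrative assumptions.

```c
/* Sketch of adaptive row management driven by row-hit counters. */
#include <stdint.h>
#include <stdio.h>

enum row_policy { CLOSE_ALWAYS, HOLD_OPEN, HOLD_FOR_COUNT };

static uint64_t total_accesses, row_hits, consecutive_hits, max_consecutive;

static void record_access(int same_row_as_last) {
    total_accesses++;
    if (same_row_as_last) {
        row_hits++;
        if (++consecutive_hits > max_consecutive)
            max_consecutive = consecutive_hits;
    } else {
        consecutive_hits = 0;
    }
}

/* Evaluated once per measurement interval. */
static enum row_policy choose_policy(uint64_t *row_hold_count) {
    uint64_t hit_pct = total_accesses ? 100 * row_hits / total_accesses : 0;
    total_accesses = row_hits = consecutive_hits = 0;     /* start new interval */
    if (hit_pct > 75) return HOLD_OPEN;                   /* page mode */
    if (hit_pct < 10) return CLOSE_ALWAYS;                /* non-page mode */
    *row_hold_count = max_consecutive;                    /* close the row after */
    return HOLD_FOR_COUNT;                                /* this many hits      */
}

int main(void) {
    for (int i = 0; i < 100; i++)
        record_access(i % 4 != 0);                        /* 3 of 4 hit the row */
    uint64_t hold = 0;
    enum row_policy p = choose_policy(&hold);
    printf("policy %d, hold count %llu\n", (int)p, (unsigned long long)hold);
    return 0;
}
```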
Abstract:
A multiprocessor system includes a plurality of data processing nodes. Each node has a processor coupled to a system memory, a cache memory, and a cache directory. The cache directory contains cache coherency information for a predetermined range of system memory addresses. An interconnection enables the nodes to exchange messages. A node initiating a function shipping request identifies an intermediate destination directory based on a list of the function's operands and sends a message indicating the function and its corresponding operands to the identified destination directory. The destination cache directory determines a target node based, at least in part, on its cache coherency status information to reduce memory access latency by selecting a target node where all or some of the operands are valid in the local cache memory. The destination directory then ships the function to the target node over the interconnection.
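A minimal sketch of the function-shipping decision follows: the initiating node picks the directory that is home to the operands' addresses, and that directory chooses the node where the most operands are already valid in the local cache. The node count, address interleave, and the "most operands valid" heuristic are assumptions for illustration.

```c
/* Sketch of choosing a destination directory and a target node for a
 * shipped function, based on where operands are cached. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NODES        4
#define ADDR_PER_DIR 0x100000ull      /* address range covered per directory */

/* coherency[node][operand] == true if that node's cache holds the
 * operand in a valid state (stand-in for real directory state). */
static bool coherency[NODES][2] = {
    { false, false }, { true, true }, { true, false }, { false, false } };

static int directory_for(uint64_t addr) {           /* home directory of addr */
    return (int)((addr / ADDR_PER_DIR) % NODES);
}

/* Run at the destination directory: pick the node where the most
 * operands are already valid to minimise memory access latency. */
static int choose_target(const uint64_t *ops, int nops) {
    int best = 0, best_valid = -1;
    for (int n = 0; n < NODES; n++) {
        int valid = 0;
        for (int i = 0; i < nops; i++)
            if (coherency[n][i]) valid++;
        if (valid > best_valid) { best_valid = valid; best = n; }
    }
    return best;
}

int main(void) {
    uint64_t operands[2] = { 0x140000, 0x150000 };
    int dir    = directory_for(operands[0]);         /* request message sent here */
    int target = choose_target(operands, 2);         /* function shipped here     */
    printf("directory node %d ships the function to node %d\n", dir, target);
    return 0;
}
```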