Abstract:
A snoop coherency method, system and program are provided for intervening a requested cache line from a plurality of candidate memory sources in a multiprocessor system on the basis of the sensed temperature or power dissipation value at each memory source. By providing temperature or power dissipation sensors in each of the candidate memory sources (e.g., at cores, cache memories, memory controller, etc.) that share a requested cache line, control logic may be used to determine which memory source should source the cache line by using the power sensor signals to signal only the memory source with acceptable power dissipation to provide the cache line to the requester.
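The arbitration this abstract describes can be pictured as a small selection step: among every core, cache, or memory controller that holds the requested line, only a holder whose sensed power dissipation is acceptable is signalled to intervene. Below is a minimal Python sketch of that decision, assuming hypothetical sensor readings and an invented acceptable-power threshold; it is an illustration, not the patented control logic.

```python
def choose_intervening_source(candidates, power_threshold):
    """Pick which holder of the shared line should source it to the requester.

    candidates: list of (source_name, sensed_power_watts) pairs, one per
    core/cache/memory controller that holds the requested cache line.
    Returns the holder dissipating the least power among those at or below
    the threshold, or None if no holder is acceptable.
    """
    acceptable = [(name, watts) for name, watts in candidates
                  if watts <= power_threshold]
    if not acceptable:
        return None
    # Signal only this source to provide the cache line.
    return min(acceptable, key=lambda item: item[1])[0]

# Hypothetical sensor snapshot at each candidate source.
holders = [("core0_L2", 9.5), ("core1_L2", 4.2), ("L3_slice3", 6.8)]
print(choose_intervening_source(holders, power_threshold=8.0))  # -> core1_L2
```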
Abstract:
A method and apparatus for enabling protection of a particular member of a cache during LRU victim selection. The LRU state array includes additional “protection” bits in addition to the state bits. The protection bits serve as a pointer to identify the location of the member of the congruence class that is to be protected. A protected member is not removed from the cache during standard LRU victim selection, unless that member is invalid. The protection bits are pipelined to the MRU update logic, where they are used to generate an MRU vector. The particular member identified by the MRU vector (and pointer) is protected from selection as the next LRU victim, unless the member is invalid. The make MRU operation affects only the lower-level LRU state bits, which are arranged in a tree-based structure, and thus only negates the selection of the protected member, without affecting LRU victim selection of the other members.
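To make the lower-level-bits-only idea concrete, here is a small Python sketch of a 4-way tree pseudo-LRU in which protecting a member updates only the leaf bit of that member's pair; the class name, bit layout, and backstop check are illustrative assumptions, not the LRU state array of the disclosure.

```python
class TreePLRU4:
    """Illustrative 4-way tree pseudo-LRU: bits = [root, left-pair leaf,
    right-pair leaf]. Protection is applied through a make-MRU style update
    that touches only the leaf (lower-level) bit of the protected member."""

    def __init__(self):
        self.bits = [0, 0, 0]

    def make_mru_protect(self, member):
        # Lower-level bit only: point the member's pair leaf away from it,
        # leaving the root bit (and hence the other pair) untouched.
        pair, way = divmod(member, 2)
        self.bits[1 + pair] = way ^ 1

    def victim(self, valid, protected=None):
        pair = self.bits[0]              # 0 -> members {0,1}, 1 -> {2,3}
        member = pair * 2 + self.bits[1 + pair]
        if member == protected and valid[member]:
            member ^= 1                  # backstop: never evict a valid protected member
        return member

plru = TreePLRU4()
plru.make_mru_protect(2)                 # protect member 2
print(plru.victim(valid=[True] * 4, protected=2))   # some member other than 2
```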
Abstract:
A method and apparatus for preventing selection of Deleted (D) members as an LRU victim during LRU victim selection. During each cache access targeting the particular congruence class, the deleted cache line is identified from information in the cache directory. The location of the deleted cache line is pipelined through the cache architecture during LRU victim selection. The information is latched and then passed to the MRU vector generation logic. An MRU vector is generated and passed to the MRU update logic, which selects/tags the deleted member as an MRU member. The make MRU operation affects only the lower-level LRU state bits, which are arranged in a tree-based structure, so that the make MRU operation only negates selection of the specific member in the D state, without affecting LRU victim selection of the other members.
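A minimal sketch of the corresponding MRU-vector path, assuming an invented directory-state encoding and a 4-way class with two leaf bits: the vector flags members in the D state, and the make MRU update touches only the leaf bit of each flagged member's pair.

```python
def mru_vector_from_directory(states):
    # One bit per congruence-class member, set for members in the D state.
    return [1 if state == "D" else 0 for state in states]

def apply_make_mru(leaf_bits, mru_vector):
    """Apply the make MRU update for a 4-way class with two leaf bits.

    Only the lower-level (leaf) bits are changed, so steering selection away
    from a deleted member leaves LRU ordering among the others intact."""
    bits = list(leaf_bits)
    for member, flagged in enumerate(mru_vector):
        if flagged:
            pair, way = divmod(member, 2)
            bits[pair] = way ^ 1         # point the pair's leaf bit away
    return bits

states = ["S", "D", "M", "S"]            # member 1 is deleted
print(apply_make_mru([1, 0], mru_vector_from_directory(states)))  # -> [0, 0]
```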
Abstract:
According to one aspect of the present disclosure, a method and technique for managing data transfer are disclosed. The method includes comparing, by a processor unit of a data processing system, data to be written to a memory subsystem to a stored data pattern and, responsive to determining that the data matches the stored data pattern, replacing the matching data with a pattern tag corresponding to the matching data pattern. The method also includes transmitting the pattern tag to the memory subsystem.
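As an illustration of the write-path behaviour, the sketch below replaces blocks that match a known pattern with a short tag before "transmission"; the pattern table, tag values, and message format are invented for the example and are not taken from the disclosure.

```python
# Invented pattern table: a handful of full-line patterns and their tags.
PATTERN_TABLE = {
    b"\x00" * 64: 0x1,                   # all-zero cache line
    b"\xff" * 64: 0x2,                   # all-ones cache line
}

def encode_write(block):
    tag = PATTERN_TABLE.get(block)
    if tag is not None:
        return ("tag", tag)              # matching data: send only the tag
    return ("data", block)               # no match: send the full block

def decode_write(message):
    kind, payload = message
    if kind == "tag":
        # The memory subsystem regenerates the data from the same table.
        inverse = {tag: pattern for pattern, tag in PATTERN_TABLE.items()}
        return inverse[payload]
    return payload

line = b"\x00" * 64
assert decode_write(encode_write(line)) == line
print(encode_write(line))                # -> ('tag', 1)
```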
Abstract:
A multiprocessor system having plural heterogeneous processing units schedules instruction sets for execution on a selected one of the processing units by matching workload processing characteristics of the processing units and the instruction sets. To establish an instruction set's processing characteristics, the same instruction set is executed on each of the plural processing units with one or more performance metrics tracked at each of the processing units to determine which processing unit most efficiently executes the instruction set. Instruction set workload processing characteristics are stored for reference in scheduling subsequent execution of the instruction set.
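A toy Python sketch of the profile-then-schedule flow follows: the same instruction set is timed once on every unit, the best unit is remembered, and later requests reuse the stored characteristic. The unit names, the timing metric, and the history table are assumptions made for the example.

```python
import time

def profile_and_schedule(instruction_set, units, history):
    """Return the processing unit on which this instruction set runs best.

    On first sight the set is executed on every unit and the elapsed time
    (the tracked metric here) decides the preferred unit; later calls simply
    reuse the stored workload characteristic."""
    key = instruction_set.__name__
    if key not in history:
        timings = {}
        for name, run in units.items():
            start = time.perf_counter()
            run(instruction_set)
            timings[name] = time.perf_counter() - start
        history[key] = min(timings, key=timings.get)
    return history[key]

def kernel():                            # stand-in "instruction set"
    sum(i * i for i in range(10_000))

# Two invented heterogeneous units; the "little" core is modelled as slower.
units = {"big_core": lambda f: f(), "little_core": lambda f: (f(), f())}
history = {}
print(profile_and_schedule(kernel, units, history))   # -> big_core (typically)
```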
Abstract:
A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written to both the L1 and the lower-level cache. If the lower-level cache is not inclusive, a cast-out from the L1 instruction cache may be written to the lower-level cache.
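The fetch-path policy can be sketched as below, assuming (as in the abstract) that a per-thread SPR bit marks the stream transient, that an inclusive lower-level cache is filled on the miss, and that a non-inclusive one is filled on cast-out; the function names and cache model are illustrative.

```python
def handle_fetch_miss(line, spr_transient, l1_icache, lower_cache, lower_inclusive):
    l1_icache.add(line)                  # the L1 instruction cache always gets the line
    if spr_transient:
        return                           # transient stream: never pollute lower levels
    if lower_inclusive:
        lower_cache.add(line)            # inclusive hierarchy: install in both on the miss

def handle_l1_castout(line, spr_transient, lower_cache, lower_inclusive):
    # Non-inclusive hierarchies install non-transient lines on cast-out instead.
    if not spr_transient and not lower_inclusive:
        lower_cache.add(line)

l1, l2 = set(), set()
handle_fetch_miss("0x40", spr_transient=True, l1_icache=l1,
                  lower_cache=l2, lower_inclusive=False)
print(sorted(l1), sorted(l2))            # ['0x40'] [] -- the line lives only in L1
```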
Abstract:
According to one aspect of the present disclosure, a system and computer program product for managing memory access are disclosed. The system includes a plurality of memory controllers each configured to maintain memory databus utilization by a corresponding processor at or below a threshold to maintain memory databus utilization of the system at or below a system threshold. The system also includes a service processor configured to receive memory databus utilization data from the memory controllers and programmed to, in response to determining that memory databus utilization for at least one of the processors is below its threshold, reallocate at least a portion of unused databus utilization from the at least one processor to at least one of the other processors.
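A small sketch of the reallocation step the service processor might perform, with invented utilization figures and a deliberately simple sharing rule (headroom from under-threshold controllers is split evenly among controllers at or above their limit); none of the numbers or the policy detail come from the disclosure.

```python
def reallocate(utilization, per_controller_limit):
    """Lend unused databus headroom to controllers at or above their limit.

    utilization: observed databus utilization per memory controller (0..1).
    Returns the adjusted per-controller limits."""
    limits = {c: per_controller_limit for c in utilization}
    spare = sum(max(0.0, per_controller_limit - u) for u in utilization.values())
    busy = [c for c, u in utilization.items() if u >= per_controller_limit]
    for c in busy:
        limits[c] += spare / len(busy)   # split the donated headroom evenly
    return limits

# Controller B is at its limit while A and C run well under theirs.
print(reallocate({"A": 0.10, "B": 0.30, "C": 0.20}, per_controller_limit=0.30))
```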
Abstract:
A subset of a workload, which includes a total set of dynamic instructions, is identified to use as a trace. Processor unit hardware executes the entire workload in real-time using a particular dataset. The processor unit hardware includes at least one microprocessor and at least one cache. The real-time execution of the workload is monitored to obtain information about how the processor unit hardware executes the workload when the workload is executed using the particular dataset to form actual performance information. Multiple different subsets of the workload are generated. The execution of each one of the subsets by the processor unit hardware is compared with the actual performance information. A result of the comparison is used to select one of the plurality of different subsets that most closely represents the execution of the entire workload using the particular dataset to use as a trace.
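One way to picture the final selection step is as a nearest-match search over candidate subsets, as in the Python sketch below; the metric names and the relative-error distance are assumptions for illustration, not the comparison actually claimed.

```python
def closest_subset(actual_metrics, candidate_metrics):
    """Return the candidate whose measured behaviour best matches the full run."""
    def distance(metrics):
        # Sum of relative errors across the tracked performance metrics.
        return sum(abs(metrics[k] - actual_metrics[k]) / actual_metrics[k]
                   for k in actual_metrics)
    return min(candidate_metrics, key=lambda name: distance(candidate_metrics[name]))

# Invented metrics from executing the full workload and two candidate subsets.
full_run = {"ipc": 1.42, "l2_miss_rate": 0.031}
candidates = {
    "subset_a": {"ipc": 1.10, "l2_miss_rate": 0.050},
    "subset_b": {"ipc": 1.39, "l2_miss_rate": 0.029},
}
print(closest_subset(full_run, candidates))   # -> subset_b
```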
Abstract:
Computer implemented method, system, and computer usable program code for simulating processor operation in a data processing system. An instruction trace is generated, wherein the instruction trace includes markers specified by a user for identifying interval boundaries for at least one interval of the instruction trace. The instruction trace is divided into a plurality of intervals in consideration of the markers, and the plurality of intervals are formed into a plurality of interval clusters, wherein each interval cluster represents one phase of execution of the instruction trace. At least one interval from each of the plurality of interval clusters is selected as a trace sample to provide a plurality of trace samples, wherein each selected interval is of at least a minimum size. A simulation is performed using the plurality of trace samples, and a result of the simulation is provided to the user.
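The sketch below illustrates marker-driven interval formation and a deliberately crude stand-in for phase clustering and sample selection; the feature used for grouping, the cluster count, and the toy trace are all invented for the example.

```python
def split_on_markers(trace, markers):
    # The markers are user-chosen instruction indices used as interval boundaries.
    bounds = [0] + sorted(markers) + [len(trace)]
    return [trace[a:b] for a, b in zip(bounds, bounds[1:]) if b > a]

def pick_samples(intervals, min_size, n_phases=2):
    # Crude stand-in for phase clustering: group intervals by a single feature
    # (their mean value) and keep one interval of at least min_size per group.
    feature = lambda interval: sum(interval) / len(interval)
    ordered = sorted(intervals, key=feature)
    size = max(1, len(ordered) // n_phases)
    groups = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    return [next((iv for iv in group if len(iv) >= min_size), group[0])
            for group in groups]

trace = [1, 1, 1, 9, 9, 9, 1, 1, 9, 9]        # toy "instruction" stream
intervals = split_on_markers(trace, markers=[3, 6, 8])
print(pick_samples(intervals, min_size=2))    # one sample per toy phase
```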
Abstract:
A method to manage power in a computing device comprising a controller assembly and a storage assembly comprising a plurality of data storage devices, by selecting a processor parameter, establishing a threshold processor parameter value, establishing a threshold over-parameter time interval, selecting a data storage device parameter, and establishing a nominal data storage device parameter value. The method determines an actual processor parameter value. If the actual processor parameter value is less than or equal to the threshold processor parameter value, the method operates each of the plurality of data storage devices using the nominal data storage device parameter value. If the actual processor parameter value is greater than the threshold processor parameter value, then the method determines an actual over-parameter time interval. If the actual processor parameter value is greater than the threshold processor parameter value, and if the actual over-parameter time interval is greater than the threshold over-parameter time interval, then the method operates each of the plurality of data storage devices using a data storage device parameter value less than the nominal data storage device parameter value.
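Read as a control loop, the method behaves roughly like the sketch below, where the processor parameter is taken to be power draw and the storage device parameter a spindle speed; both choices, and all the numbers, are illustrative assumptions.

```python
def storage_setting(cpu_value, cpu_threshold, over_time_s, over_time_limit_s,
                    nominal_rpm, reduced_rpm):
    """Choose the operating point applied to every data storage device."""
    if cpu_value <= cpu_threshold:
        return nominal_rpm               # processor parameter within bounds
    if over_time_s > over_time_limit_s:
        return reduced_rpm               # sustained excursion: throttle the drives
    return nominal_rpm                   # brief excursion: keep nominal operation

# Processor power has exceeded its threshold for longer than allowed.
print(storage_setting(cpu_value=130.0, cpu_threshold=120.0,
                      over_time_s=45.0, over_time_limit_s=30.0,
                      nominal_rpm=7200, reduced_rpm=5400))   # -> 5400
```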