Abstract:
A data processing system and computer program product for processing instructions. The instructions are processed by a processor unit while using a first table in a plurality of tables to predict a set of instructions needed by the processor unit after processing of a conditional instruction. An identification is formed that a rate of success in correctly predicting the set of instructions when using the first table is less than a threshold number. A sequence of the instructions being processed by the processor unit is searched for an instruction that matches a marker in a set of markers for identifying when to use the plurality of tables. An identification is formed that the instruction matches the marker. A second table from the plurality of tables, referenced by the marker, is identified. The second table is then used in place of the first table.
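The table-switching flow described above can be pictured with a minimal C sketch. Every struct layout, field name, and the 0.5 success-rate threshold below is a hypothetical stand-in chosen for the example, not anything taken from the disclosure.

#include <stdio.h>
#include <stddef.h>

#define NUM_TABLES 4

typedef struct {
    unsigned hits;      /* correct predictions made with this table */
    unsigned lookups;   /* total predictions made with this table   */
} pred_table_t;

typedef struct {
    unsigned opcode;    /* instruction that acts as the marker      */
    size_t   table_idx; /* prediction table the marker references   */
} marker_t;

/* Returns the index of the table referenced by a marker found in the
 * instruction sequence, or the current index if the current table is
 * still predicting well or no marker is found. */
static size_t select_table(const unsigned *insns, size_t n,
                           const marker_t *markers, size_t n_markers,
                           const pred_table_t *tables, size_t current,
                           double threshold)
{
    double rate = tables[current].lookups
                ? (double)tables[current].hits / tables[current].lookups
                : 1.0;
    if (rate >= threshold)
        return current;                      /* first table is doing fine     */

    for (size_t i = 0; i < n; i++)           /* search the instruction stream */
        for (size_t m = 0; m < n_markers; m++)
            if (insns[i] == markers[m].opcode)
                return markers[m].table_idx; /* switch to the second table    */
    return current;
}

int main(void)
{
    pred_table_t tables[NUM_TABLES] = { { .hits = 10, .lookups = 100 } };
    marker_t markers[] = { { .opcode = 0xB1, .table_idx = 2 } };
    unsigned stream[]  = { 0x01, 0xB1, 0x7F };

    size_t next = select_table(stream, 3, markers, 1, tables, 0, 0.5);
    printf("switching from table 0 to table %zu\n", next);
    return 0;
}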
Abstract:
In an embodiment, a kernel performs autonomic input/output tracing and performance tuning. A first table is provided in a device driver framework and a second table in a kernel of a computer. An input/output device monitoring tool is provided in the device driver framework. A plurality of instructions in the kernel compares each value in the first table with each value in the second table. Responsive to a match of a value in the first table and a value in the second table, the kernel automatically runs a command line to perform a system trace, a component trace, or a tuning task. The first table is populated with a plurality of values calculated from a plurality of data in a plurality of device memories and in a controller memory, and the second table is populated in accordance with a plurality of inputs to a command line interface.
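As an illustration only, a minimal C sketch of the compare-and-trigger step is given below. The table layouts and the command strings are hypothetical placeholders, and the command invocation itself is left as a comment, since a kernel would dispatch a trace or tuning task rather than call system().

#include <stdio.h>
#include <stddef.h>

typedef struct {
    int         value;    /* value taken from command line interface inputs */
    const char *command;  /* trace or tuning command to run on a match      */
} tuning_entry_t;

static void compare_and_run(const int *driver_table, size_t n_driver,
                            const tuning_entry_t *kernel_table, size_t n_kernel)
{
    /* Compare each value in the first table with each value in the second. */
    for (size_t i = 0; i < n_driver; i++)
        for (size_t j = 0; j < n_kernel; j++)
            if (driver_table[i] == kernel_table[j].value) {
                printf("match on %d: would run \"%s\"\n",
                       driver_table[i], kernel_table[j].command);
                /* system(kernel_table[j].command);  trigger the trace or tuning task */
            }
}

int main(void)
{
    /* First table: values calculated from data in device/controller memories. */
    int driver_table[] = { 3, 17, 42 };
    /* Second table: values populated from command line interface inputs. */
    tuning_entry_t kernel_table[] = {
        { 17, "start_component_trace disk" },    /* hypothetical command */
        { 99, "tune_io_parameter queue_depth" }  /* hypothetical command */
    };
    compare_and_run(driver_table, 3, kernel_table, 2);
    return 0;
}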
Abstract:
According to one aspect of the present disclosure, a system and technique for variable cache line size management is disclosed. The system includes a processor and a cache hierarchy, where the cache hierarchy includes a sectored upper level cache and an unsectored lower level cache, and wherein the upper level cache includes a plurality of sub-sectors, each sub-sector having a cache line size corresponding to a cache line size of the lower level cache. The system also includes logic executable to, responsive to determining that a cache line from the upper level cache is to be evicted to the lower level cache: identify referenced sub-sectors of the cache line to be evicted; invalidate unreferenced sub-sectors of the cache line to be evicted; and store the referenced sub-sectors in the lower level cache.
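A minimal C sketch of the eviction step follows; the sub-sector count, line sizes, and bitmask layout are assumptions made for the example, not the disclosed hardware design.

#include <stdio.h>
#include <stdint.h>

#define SUBSECTORS      4     /* sub-sectors per upper-level cache line  */
#define SUBSECTOR_BYTES 128   /* matches the lower-level cache line size */

typedef struct {
    uint8_t data[SUBSECTORS][SUBSECTOR_BYTES];
    uint8_t referenced;       /* one bit per sub-sector */
    uint8_t valid;            /* one bit per sub-sector */
} upper_line_t;

/* Hypothetical lower-level store hook. */
static void lower_cache_store(int sub, const uint8_t *bytes)
{
    (void)bytes;
    printf("storing referenced sub-sector %d (%d bytes) in the lower cache\n",
           sub, SUBSECTOR_BYTES);
}

static void evict_line(upper_line_t *line)
{
    for (int s = 0; s < SUBSECTORS; s++) {
        if (line->referenced & (1u << s))
            lower_cache_store(s, line->data[s]);  /* keep referenced sub-sectors  */
        else
            line->valid &= (uint8_t)~(1u << s);   /* invalidate unreferenced ones */
    }
}

int main(void)
{
    upper_line_t line = { .referenced = 0x5 /* sub-sectors 0 and 2 */, .valid = 0xF };
    evict_line(&line);
    printf("valid mask after eviction: 0x%X\n", line.valid);
    return 0;
}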
Abstract:
A symmetric multi-processor (SMP) system includes an SMP processor and operating system (OS) software that performs automatic SMP lock tracing analysis on an executing application program. System administrators, users or other entities initiate an automatic SMP lock tracing analysis. A particular thread of the executing application program requests and obtains a lock for a memory address pointer. A subsequent thread requests the same memory address pointer lock prior to the particular thread's release of that lock. The subsequent thread begins to spin, waiting for the release of that address pointer lock. When the subsequent thread reaches a predetermined maximum amount of wait time, MAXSPIN, a lock testing tool in the kernel of the OS detects the MAXSPIN condition. The OS performs a test to determine whether the subsequent thread and address pointer lock meet the criteria set during initiation of the automatic lock trace method. The OS initiates an SMP lock trace capture automatically if all of the criteria given as arguments to the lock trace method are met. System administrators, software programmers, users or other entities interpret the results of the SMP lock tracing method, which the OS stores in a trace table, to determine performance improvements for the executing application program.
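A minimal C sketch of the MAXSPIN check is shown below. The MAXSPIN value, the criteria structure, and the lock layout are illustrative assumptions; the kernel's lock testing tool and trace table are reduced to a printf.

#include <stdio.h>
#include <stdbool.h>
#include <stdatomic.h>

#define MAXSPIN 100000        /* hypothetical maximum spin count */

typedef struct {
    atomic_int  owner;        /* 0 = free, otherwise owner thread id  */
    const void *addr;         /* memory address pointer being locked  */
} smp_lock_t;

typedef struct {
    const void *watched_addr; /* criterion: which lock to watch       */
    int         watched_tid;  /* criterion: which waiter to watch     */
} trace_criteria_t;

static bool criteria_met(const smp_lock_t *lk, int tid, const trace_criteria_t *c)
{
    return lk->addr == c->watched_addr && tid == c->watched_tid;
}

static void acquire(smp_lock_t *lk, int tid, const trace_criteria_t *c)
{
    long spins = 0;
    int expected = 0;
    while (!atomic_compare_exchange_weak(&lk->owner, &expected, tid)) {
        expected = 0;
        if (++spins == MAXSPIN && criteria_met(lk, tid, c))
            printf("MAXSPIN hit by thread %d: starting lock trace capture\n",
                   tid);      /* the kernel would record events to a trace table */
    }
}

int main(void)
{
    static int guarded;
    smp_lock_t lk;
    trace_criteria_t crit = { .watched_addr = &guarded, .watched_tid = 2 };
    lk.addr = &guarded;
    atomic_init(&lk.owner, 0);

    acquire(&lk, 1, &crit);   /* the particular thread obtains the lock */
    /* A subsequent thread spinning in acquire() would trip the MAXSPIN check. */
    printf("lock owner: %d\n", atomic_load(&lk.owner));
    return 0;
}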
Abstract:
Sharing a kernel of an operating system among logical partitions, including installing in a partition manager a kernel of a type used by a plurality of logical partitions; installing in the partition manager generic data structures specifying computer resources assigned to each of the plurality of logical partitions; and providing, by the kernel to the logical partitions, kernel services in dependence upon the generic data structures.
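Below is a minimal C sketch of the arrangement: one shared service routine stands in for the shared kernel, and an array of generic per-partition resource structures stands in for those installed in the partition manager. All names and fields are invented for the example.

#include <stdio.h>
#include <stddef.h>

#define NUM_LPARS 3

/* Generic data structure specifying the computer resources assigned to
 * one logical partition. */
typedef struct {
    int    lpar_id;
    size_t memory_mb;
    int    virtual_cpus;
} lpar_resources_t;

/* Structures installed in the partition manager, one per partition. */
static const lpar_resources_t lpar_table[NUM_LPARS] = {
    { 1, 2048, 2 },
    { 2, 4096, 4 },
    { 3, 1024, 1 },
};

/* A kernel service provided in dependence upon the generic structures:
 * here, simply reporting the resources visible to the calling partition. */
static void kernel_service_report(int lpar_id)
{
    for (size_t i = 0; i < NUM_LPARS; i++)
        if (lpar_table[i].lpar_id == lpar_id)
            printf("LPAR %d: %zu MB, %d vCPUs (served by the shared kernel)\n",
                   lpar_id, lpar_table[i].memory_mb, lpar_table[i].virtual_cpus);
}

int main(void)
{
    for (int id = 1; id <= NUM_LPARS; id++)
        kernel_service_report(id);  /* same kernel code, different partition data */
    return 0;
}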
Abstract:
Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
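A minimal C sketch of the matching step follows; the two operating performance modes and the way a workload criterion is expressed are assumptions made purely for illustration.

#include <stdio.h>
#include <stddef.h>

typedef enum { MODE_SINGLE_THREAD, MODE_MAX_THROUGHPUT } perf_mode_t;

typedef struct {
    int         unit_id;
    perf_mode_t mode;        /* mode this multi-core processing unit is configured in */
} mcpu_t;

typedef struct {
    const char *lpar_name;
    perf_mode_t required;    /* performance criterion of the partition's workload */
} lpar_workload_t;

/* Associate a logical partition with the set of units whose configured
 * mode satisfies the partition's performance criterion. */
static void assign_units(const lpar_workload_t *lpar, const mcpu_t *units, size_t n)
{
    printf("%s ->", lpar->lpar_name);
    for (size_t i = 0; i < n; i++)
        if (units[i].mode == lpar->required)
            printf(" unit%d", units[i].unit_id);
    printf("\n");
}

int main(void)
{
    mcpu_t units[] = { { 0, MODE_SINGLE_THREAD }, { 1, MODE_SINGLE_THREAD },
                       { 2, MODE_MAX_THROUGHPUT }, { 3, MODE_MAX_THROUGHPUT } };
    lpar_workload_t lpars[] = {
        { "lpar-latency",    MODE_SINGLE_THREAD },
        { "lpar-throughput", MODE_MAX_THROUGHPUT },
    };
    for (size_t i = 0; i < 2; i++)
        assign_units(&lpars[i], units, 4);
    return 0;
}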
Abstract:
According to one aspect of the present disclosure, a method and technique for managing rollback in a transactional memory environment is disclosed. The method includes, responsive to detecting a begin transaction directive by a processor supporting transactional memory processing, detecting an access to a first memory location that does not need rollback and indicating that the first memory location does not need to be rolled back, and detecting an access to a second memory location and indicating that a rollback will be required. The method also includes, responsive to detecting an end transaction directive after the begin transaction directive and a conflict requiring a rollback, omitting a rollback of the first memory location while performing rollback on the second memory location.
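A minimal C sketch of the idea, as a software analogue rather than the disclosed hardware mechanism: a per-transaction log records whether each accessed location needs rollback, and an abort restores only the entries marked as needing it.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int  *addr;
    int   old_value;
    bool  needs_rollback;     /* false for accesses marked as exempt */
} tx_entry_t;

typedef struct {
    tx_entry_t log[16];
    size_t     n;
} tx_t;

static void tx_write(tx_t *tx, int *addr, int value, bool needs_rollback)
{
    tx->log[tx->n++] = (tx_entry_t){ addr, *addr, needs_rollback };
    *addr = value;
}

static void tx_abort(tx_t *tx)
{
    /* Entries indicated as not needing rollback are simply skipped. */
    for (size_t i = tx->n; i-- > 0; )
        if (tx->log[i].needs_rollback)
            *tx->log[i].addr = tx->log[i].old_value;
    tx->n = 0;
}

int main(void)
{
    int scratch = 1, account = 100;
    tx_t tx = { .n = 0 };

    tx_write(&tx, &scratch, 99, false);  /* first location: no rollback needed  */
    tx_write(&tx, &account, 50, true);   /* second location: rollback required  */
    tx_abort(&tx);                       /* conflict detected before the commit */

    printf("scratch=%d account=%d\n", scratch, account);  /* prints 99 and 100  */
    return 0;
}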
Abstract:
A method, system and program are provided for controlling access to a specified cache level in a cache hierarchy in a multiprocessor system by evaluating cache statistics for a specified application at the specified cache level against predefined criteria, and preventing the specified application from accessing the specified cache level if the specified application does not meet the predefined criteria.
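As a rough illustration, the C sketch below evaluates a per-application hit rate at one cache level against a criterion and revokes access when the criterion is not met; the statistics and the 0.25 threshold are invented for the example.

#include <stdio.h>
#include <stdbool.h>

typedef struct {
    const char *app;
    unsigned    hits;          /* hits at the specified cache level */
    unsigned    accesses;      /* total accesses at that level      */
    bool        may_use_level;
} cache_stats_t;

static void enforce_policy(cache_stats_t *s, double min_hit_rate)
{
    double rate = s->accesses ? (double)s->hits / s->accesses : 0.0;
    s->may_use_level = (rate >= min_hit_rate);   /* below the criterion => bypass */
    printf("%s: hit rate %.2f -> %s the cache level\n", s->app, rate,
           s->may_use_level ? "may use" : "is prevented from using");
}

int main(void)
{
    cache_stats_t apps[] = {
        { "db-server",   900, 1000, true },
        { "stream-copy",  50, 1000, true },  /* streaming workload, little reuse */
    };
    for (int i = 0; i < 2; i++)
        enforce_policy(&apps[i], 0.25);
    return 0;
}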
Abstract:
Mechanisms are provided, for implementation in a data processing system having at least one physical processor and at least one associated cache memory, for allocating cache resources of the at least one cache memory to virtual processors of the data processing system. The mechanisms identify a plurality of high priority virtual processors in the data processing system. The mechanisms further determine a percentage of cache lines of the at least one cache memory to be assigned to high priority virtual processors. Moreover, the mechanisms mark a portion of the cache lines in the at least one cache memory as being evictable by only high priority virtual processors based on the determined percentage of cache lines to be assigned to high priority virtual processors. The marked portion of the cache lines cannot be evicted by lower priority virtual processors having a priority lower than the high priority virtual processors.
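A minimal C sketch of the marking and eviction filter follows; the line count, the 50% reservation, and the flag-per-line representation are assumptions for the example.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

#define NUM_LINES 8

typedef struct {
    int  tag;
    bool high_prio_only;       /* set for the reserved portion of lines */
} cache_line_t;

/* Mark a percentage of the cache lines as evictable only by
 * high-priority virtual processors. */
static void mark_reserved(cache_line_t *lines, size_t n, unsigned percent)
{
    size_t reserved = (n * percent) / 100;
    for (size_t i = 0; i < n; i++)
        lines[i].high_prio_only = (i < reserved);
}

/* Returns a line the requesting virtual processor may evict, or -1 if
 * every candidate is reserved for higher-priority virtual processors. */
static int pick_victim(const cache_line_t *lines, size_t n, bool high_prio)
{
    for (size_t i = 0; i < n; i++)
        if (high_prio || !lines[i].high_prio_only)
            return (int)i;
    return -1;
}

int main(void)
{
    cache_line_t lines[NUM_LINES] = { { 0, false } };
    mark_reserved(lines, NUM_LINES, 50);           /* reserve 50% of the lines  */

    printf("low-priority vCPU evicts line %d\n",
           pick_victim(lines, NUM_LINES, false));  /* skips the reserved lines  */
    printf("high-priority vCPU evicts line %d\n",
           pick_victim(lines, NUM_LINES, true));
    return 0;
}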
Abstract:
According to one aspect of the present disclosure, a method and technique for using processor registers for extending a cache structure is disclosed. The method includes identifying a register of a processor, identifying a cache to extend, allocating the register as an extension of the cache, and setting an address of the register as corresponding to an address space in the cache.
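A minimal C sketch, modelling the register purely in software: a spare register slot is bound to an address in the cache's address space, and lookups for that address are served from the register before falling through to the cache proper. The names and the single-slot layout are assumptions for the example.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uintptr_t addr;      /* address space the register now corresponds to */
    uint64_t  value;     /* cached data held in the register              */
    bool      bound;
} reg_extension_t;

static reg_extension_t spare_reg;   /* the identified processor register */

/* Allocate the register as an extension of the cache for one address. */
static void extend_cache(uintptr_t addr, uint64_t value)
{
    spare_reg = (reg_extension_t){ .addr = addr, .value = value, .bound = true };
}

static bool lookup(uintptr_t addr, uint64_t *out)
{
    if (spare_reg.bound && spare_reg.addr == addr) {
        *out = spare_reg.value;     /* hit in the register extension */
        return true;
    }
    return false;                   /* would fall through to the cache itself */
}

int main(void)
{
    uint64_t data = 0x1234, got = 0;
    extend_cache((uintptr_t)&data, data);   /* bind the register to this address */
    if (lookup((uintptr_t)&data, &got))
        printf("served 0x%llx from the register extension\n",
               (unsigned long long)got);
    return 0;
}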