Abstract:
A method, apparatus, and computer program product for running software on an adapter. In response to a connection of a hardware interface for the adapter with a current host computer, a processor unit in the adapter determines whether a set of protocols for communicating with the current host computer to access resources is present on the adapter. In response to the set of protocols being absent on the adapter, the processor unit obtains the set of protocols from the current host computer. The processor unit identifies a set of available resources in the current host computer for use by the adapter using the set of protocols. The processor unit runs software stored on a set of storage devices in the adapter using the set of available resources identified for use by the adapter.
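The flow in this abstract can be illustrated with a short software model. The following C sketch is only an assumption-laden illustration of the described steps (check for protocols, fetch them from the host if absent, query resources, run the stored software); the type and function names (adapter_t, host_fetch_protocols, and so on) are invented for this example and are not part of any real driver or firmware API.

```c
/* A minimal sketch of the start-up flow described above, assuming a simple
 * software model of the adapter; type and function names are illustrative,
 * not a real driver API. */
#include <stdbool.h>
#include <stddef.h>

typedef struct { int version; } protocol_set_t;
typedef struct { int cpu_share; size_t memory_bytes; } resource_set_t;

typedef struct {
    bool           has_protocols;   /* protocols already stored on the adapter? */
    protocol_set_t protocols;
} adapter_t;

/* Stand-ins for services the current host computer would provide. */
static protocol_set_t host_fetch_protocols(void) { return (protocol_set_t){ 1 }; }
static resource_set_t host_query_resources(protocol_set_t p)
{
    (void)p;
    return (resource_set_t){ .cpu_share = 2, .memory_bytes = 1u << 20 };
}
static void run_adapter_software(resource_set_t r) { (void)r; /* run stored software */ }

/* Invoked when the adapter's hardware interface connects to a host. */
void on_host_connect(adapter_t *adapter)
{
    if (!adapter->has_protocols) {
        /* Protocols absent on the adapter: obtain them from the host. */
        adapter->protocols     = host_fetch_protocols();
        adapter->has_protocols = true;
    }
    /* Identify the resources the host makes available, then run the
     * software stored on the adapter using those resources. */
    resource_set_t avail = host_query_resources(adapter->protocols);
    run_adapter_software(avail);
}
```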
Abstract:
A data processing system and computer program product for processing instructions. The instructions are processed by a processor unit while using a first table in a plurality of tables to predict a set of instructions needed by the processor unit after processing of a conditional instruction. An identification is formed that a rate of success in correctly predicting the set of instructions when using the first table is less than a threshold number. A sequence of the instructions being processed by the processor unit is searched for an instruction that matches a marker in a set of markers for identifying when to use the plurality of tables. An identification is formed that the instruction matches the marker. A second table from the plurality of tables referenced by the marker is identified. The second table is used in place of the first table.
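The table-switching idea can be summarized in a few lines of C. The sketch below is a purely software model under assumed names (predictor_table_t, marker_t, select_table); it is not taken from the patent, and a real predictor would implement this in hardware.

```c
/* A rough sketch of the table-switching idea described above, assuming a
 * software model of the predictor; all names are invented for this example. */
#include <stddef.h>

typedef struct { double hits; double total; } predictor_table_t;
typedef struct { unsigned opcode; size_t table_index; } marker_t;

static double success_rate(const predictor_table_t *t)
{
    return t->total > 0.0 ? t->hits / t->total : 1.0;
}

/* Returns the index of the table to use for subsequent predictions. */
size_t select_table(size_t current,
                    const predictor_table_t *tables, size_t n_tables,
                    const unsigned *instr_stream, size_t n_instr,
                    const marker_t *markers, size_t n_markers,
                    double threshold)
{
    /* Keep the first table while its rate of success meets the threshold. */
    if (success_rate(&tables[current]) >= threshold)
        return current;

    /* Rate of success too low: search the sequence of instructions for one
     * that matches a marker, then switch to the table the marker references. */
    for (size_t i = 0; i < n_instr; i++)
        for (size_t m = 0; m < n_markers; m++)
            if (instr_stream[i] == markers[m].opcode &&
                markers[m].table_index < n_tables)
                return markers[m].table_index;

    return current;   /* no matching marker found: keep the current table */
}
```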
Abstract:
According to one aspect of the present disclosure, a system and technique for variable cache line size management is disclosed. The system includes a processor and a cache hierarchy, where the cache hierarchy includes a sectored upper level cache and an unsectored lower level cache, and wherein the upper level cache includes a plurality of sub-sectors, each sub-sector having a cache line size corresponding to a cache line size of the lower level cache. The system also includes logic executable to, responsive to determining that a cache line from the upper level cache is to be evicted to the lower level cache: identify referenced sub-sectors of the cache line to be evicted; invalidate unreferenced sub-sectors of the cache line to be evicted; and store the referenced sub-sectors in the lower level cache.
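The eviction path described here can be modelled in software. The C sketch below is an illustration only, not hardware logic: it assumes an upper-level line divided into four sub-sectors, each the size of a lower-level line, and all structure and function names are assumptions for the example.

```c
/* Illustrative model of evicting a sectored upper-level cache line into an
 * unsectored lower-level cache: only referenced sub-sectors are stored. */
#include <stdbool.h>
#include <stddef.h>

#define SUB_SECTORS 4   /* assume one upper-level line = 4 lower-level lines */

typedef struct {
    unsigned long tag;
    bool valid[SUB_SECTORS];        /* per-sub-sector valid bits */
    bool referenced[SUB_SECTORS];   /* per-sub-sector reference bits */
} upper_line_t;

/* Stand-in for installing one lower-level-sized line in the lower cache. */
static void lower_cache_store(unsigned long tag, size_t sub_sector)
{
    (void)tag; (void)sub_sector;
}

/* Evict a sectored upper-level line into the unsectored lower-level cache. */
void evict_to_lower(upper_line_t *line)
{
    for (size_t s = 0; s < SUB_SECTORS; s++) {
        if (line->valid[s] && line->referenced[s]) {
            /* Referenced sub-sector: store it in the lower-level cache. */
            lower_cache_store(line->tag, s);
        }
        /* Every sub-sector of the evicted line is invalidated; unreferenced
         * sub-sectors are never written to the lower level. */
        line->valid[s] = false;
    }
}
```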
Abstract:
Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
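As a small illustration of the association step, the C sketch below pairs each logical partition with processing units configured in a matching operating performance mode. The "performance criterion" is reduced here to a simple mode match, and every name is hypothetical; the abstract itself does not prescribe this representation.

```c
/* Sketch of associating a logical partition with the multi-core processing
 * units whose operating performance mode satisfies its workload's criterion. */
#include <stddef.h>

#define MAX_UNITS_PER_PARTITION 8

typedef struct { int perf_mode; } processing_unit_t;

typedef struct {
    int    required_mode;   /* derived from the partition's workload criterion */
    const  processing_unit_t *units[MAX_UNITS_PER_PARTITION];
    size_t n_units;
} partition_t;

void assign_units(partition_t *part,
                  const processing_unit_t *units, size_t n_units)
{
    part->n_units = 0;
    for (size_t i = 0; i < n_units && part->n_units < MAX_UNITS_PER_PARTITION; i++)
        if (units[i].perf_mode == part->required_mode)
            part->units[part->n_units++] = &units[i];
}
```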
Abstract:
According to one aspect of the present disclosure, a method and technique for managing rollback in a transactional memory environment is disclosed. The method includes, responsive to detecting a begin transaction directive by a processor supporting transactional memory processing, detecting an access to a first memory location not needing rollback and indicating that the first memory location does not need to be rolled back, while detecting an access to a second memory location and indicating that a rollback will be required. The method also includes, responsive to detecting an end transaction directive after the begin transaction directive and a conflict requiring a rollback, omitting a rollback of the first memory location while performing rollback on the second memory location.
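The per-location bookkeeping can be sketched in software. The C example below models the transaction log entirely in memory under assumed names (tx_entry_t, tx_access, and so on); real transactional memory would track this in hardware, so this is only a conceptual sketch of skipping rollback for locations marked as not needing it.

```c
/* Software-only sketch of per-location rollback bookkeeping: locations
 * marked as not needing rollback are omitted when a conflict occurs. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int  *addr;
    int   old_value;
    bool  needs_rollback;   /* false => omit rollback for this location */
} tx_entry_t;

#define TX_LOG_MAX 64
static tx_entry_t tx_log[TX_LOG_MAX];
static size_t     tx_len;

/* Begin transaction directive: start with an empty log. */
void tx_begin(void) { tx_len = 0; }

/* Record an access; the caller indicates whether this location would need
 * to be rolled back on a conflict. */
void tx_access(int *addr, int new_value, bool needs_rollback)
{
    if (tx_len < TX_LOG_MAX)
        tx_log[tx_len++] = (tx_entry_t){ addr, *addr, needs_rollback };
    *addr = new_value;
}

/* End transaction directive: on a conflict, roll back only the locations
 * that were marked as needing it. */
void tx_end(bool conflict)
{
    if (!conflict)
        return;
    for (size_t i = 0; i < tx_len; i++)
        if (tx_log[i].needs_rollback)
            *tx_log[i].addr = tx_log[i].old_value;
}
```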
Abstract:
Mechanisms are provided, for implementation in a data processing system having at least one physical processor and at least one associated cache memory, for allocating cache resources of the at least one cache memory to virtual processors of the data processing system. The mechanisms identify a plurality of high priority virtual processors in the data processing system. The mechanisms further determine a percentage of cache lines of the at least one cache memory to be assigned to high priority virtual processors. Moreover, the mechanisms mark a portion of the cache lines in the at least one cache memory as being evictable by only high priority virtual processors based on the determined percentage of cache lines to be assigned to high priority virtual processors. The marked portion of the cache lines cannot be evicted by lower priority virtual processors having a priority lower than the high priority virtual processors.
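The marking and eviction check can be illustrated with a simple array model of the cache. The C sketch below is an assumption-based illustration (cache_line_t, mark_high_priority_lines, may_evict are invented names), showing a percentage of lines reserved so that only high-priority virtual processors may evict them.

```c
/* Sketch of reserving a percentage of cache lines for eviction only by
 * high-priority virtual processors. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    unsigned long tag;
    bool valid;
    bool high_priority_only;   /* set: only high-priority vCPUs may evict */
} cache_line_t;

/* Mark the first (pct / 100) of the cache lines as evictable only by
 * high-priority virtual processors. */
void mark_high_priority_lines(cache_line_t *lines, size_t n, unsigned pct)
{
    size_t reserved = (n * pct) / 100;
    for (size_t i = 0; i < n; i++)
        lines[i].high_priority_only = (i < reserved);
}

/* Eviction check: a lower-priority virtual processor may not evict a line
 * reserved for high-priority virtual processors. */
bool may_evict(const cache_line_t *line, bool requester_is_high_priority)
{
    return requester_is_high_priority || !line->high_priority_only;
}
```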
Abstract:
According to one aspect of the present disclosure, a method and technique for using processor registers for extending a cache structure is disclosed. The method includes identifying a register of a processor, identifying a cache to extend, allocating the register as an extension of the cache, and setting an address of the register as corresponding to an address space in the cache.
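The four steps listed in this abstract (identify a register, identify a cache to extend, allocate the register as an extension, set its address) map directly onto a small software model. The C sketch below uses invented structures (reg_t, cache_t) purely for illustration; an actual implementation would live in processor microarchitecture, not C.

```c
/* Sketch of allocating a spare processor register as an extension of a
 * cache and binding it to an address in the cache's address space. */
#include <stddef.h>
#include <stdint.h>

#define MAX_EXT_REGS 4

typedef struct {
    uint64_t value;
    uint64_t mapped_addr;   /* address in the cache's address space, if any */
    int      in_use;
} reg_t;

typedef struct {
    reg_t  *extension_regs[MAX_EXT_REGS];   /* registers extending the cache */
    size_t  n_ext;
} cache_t;

/* Allocate a free register as an extension of the cache and set its address
 * as corresponding to an address space in the cache. Returns 0 on success. */
int extend_cache_with_register(cache_t *cache, reg_t *regs, size_t n_regs,
                               uint64_t cache_addr)
{
    for (size_t i = 0; i < n_regs; i++) {
        if (!regs[i].in_use && cache->n_ext < MAX_EXT_REGS) {
            regs[i].in_use      = 1;
            regs[i].mapped_addr = cache_addr;                   /* set address */
            cache->extension_regs[cache->n_ext++] = &regs[i];   /* allocate    */
            return 0;
        }
    }
    return -1;   /* no free register available */
}
```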