Abstract:
A method, apparatus, and computer program product for running software on an adapter. In response to a connection of a hardware interface for the adapter with a current host computer, a processor unit in the adapter determines whether a set of protocols for communicating with the current host computer to access resources is present on the adapter. In response to the set of protocols being absent on the adapter, the processor unit obtains the set of protocols from the current host computer. The processor unit identifies a set of available resources in the current host computer for use by the adapter using the set of protocols. The processor unit runs software stored on a set of storage devices in the adapter using the set of available resources identified for use by the adapter.
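The adapter-side flow described in this abstract can be pictured with a short sketch. The C program below is a minimal, hypothetical model, assuming the protocol set and the host's resource information can be represented as plain structures; `fetch_protocols_from_host`, `identify_available_resources`, and the other names are invented for illustration and are not part of the patent.

```c
/* Minimal sketch of the adapter-side flow described above.
 * All names (adapter_t, host_t, fetch_protocols_from_host, ...) are
 * hypothetical; a real adapter would talk to the host over its
 * hardware interface. */
#include <stdio.h>
#include <stddef.h>

typedef struct { const char *name; } protocol_set_t;
typedef struct { const char *id; protocol_set_t protocols; } host_t;
typedef struct {
    protocol_set_t *protocols;       /* NULL until obtained from a host */
    const char     *stored_software; /* software on the adapter's storage */
} adapter_t;

/* Stub: in a real system this would be a transfer over the hardware interface. */
static protocol_set_t *fetch_protocols_from_host(host_t *host) {
    return &host->protocols;
}

/* Stub: query the host, via the protocols, for resources the adapter may use. */
static const char *identify_available_resources(host_t *host, protocol_set_t *p) {
    (void)p;
    return host->id;                 /* e.g. memory regions, DMA channels, ... */
}

static void run_software(const char *software, const char *resources) {
    printf("running \"%s\" using resources of host %s\n", software, resources);
}

/* Called when the adapter's hardware interface connects to a host. */
void on_host_connected(adapter_t *adapter, host_t *host) {
    if (adapter->protocols == NULL)  /* set of protocols absent on the adapter? */
        adapter->protocols = fetch_protocols_from_host(host);
    const char *resources =
        identify_available_resources(host, adapter->protocols);
    run_software(adapter->stored_software, resources);
}

int main(void) {
    host_t host = { "host-A", { "pcie-resource-protocol" } };
    adapter_t adapter = { NULL, "adapter-firmware" };
    on_host_connected(&adapter, &host);
    return 0;
}
```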
Abstract:
An approach is provided to identify a disabled processing core and an active processing core from a set of processing cores included in a processing node. Each of the processing cores is assigned a cache memory. The approach extends a memory map of the cache memory assigned to the active processing core to include the cache memory assigned to the disabled processing core. A first amount of data that is used by a first process is stored by the active processing core to the cache memory assigned to the active processing core. A second amount of data is stored by the active processing core to the cache memory assigned to the disabled processing core using the extended memory map.
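A minimal sketch of this cache-borrowing idea follows, with each core's cache modelled as an ordinary memory region and the "extended memory map" as a pair of pointers; `extend_memory_map` and the other names are hypothetical, not taken from the patent.

```c
/* Illustrative sketch only: it models the cache of each core as a plain
 * memory region and "extends the memory map" by recording a second region
 * that the active core may spill into. Names are invented for illustration. */
#include <stdio.h>
#include <string.h>

#define CACHE_SIZE 64

typedef struct {
    int  enabled;
    char cache[CACHE_SIZE];         /* cache memory assigned to this core */
} core_t;

typedef struct {
    char *primary;                  /* active core's own cache */
    char *extension;                /* borrowed cache of the disabled core */
} memory_map_t;

/* Extend the active core's map to include the disabled core's cache. */
static memory_map_t extend_memory_map(core_t *active, core_t *disabled) {
    memory_map_t map = { active->cache, disabled->cache };
    return map;
}

int main(void) {
    core_t cores[2] = { { 1, {0} }, { 0, {0} } };   /* core 1 is disabled */
    memory_map_t map = extend_memory_map(&cores[0], &cores[1]);

    /* First amount of data goes to the active core's own cache ... */
    strcpy(map.primary, "hot data for process A");
    /* ... second amount goes to the disabled core's cache via the extension. */
    strcpy(map.extension, "spill-over data");

    printf("primary:   %s\nextension: %s\n", map.primary, map.extension);
    return 0;
}
```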
Abstract:
An approach is provided for identifying cache extension sizes that correspond to different partitions running on a computer system. The approach extends a first hardware cache, associated with a first processing core included in the processor's silicon substrate, with a first memory allocation from a system memory area, with the system memory area being external to the silicon substrate and the first memory allocation corresponding to one of the cache extension sizes, which in turn corresponds to one of the partitions running on the computer system. The approach further extends a second hardware cache, associated with a second processing core also included in the processor's silicon substrate, with a second memory allocation from the system memory area, with the second memory allocation corresponding to another of the cache extension sizes, which corresponds to a different partition being executed by the second processing core.
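The per-partition extension sizing might look roughly like the sketch below, assuming a simple configuration table mapping partition to extension size and using `malloc` as a stand-in for carving an allocation out of the off-chip system memory area; all names are illustrative.

```c
/* Sketch of per-partition cache extensions. A small table maps each
 * partition to its configured extension size, and malloc() stands in for
 * an allocation from the system memory area external to the chip. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *partition;
    size_t      extension_size;     /* configured cache extension size */
} partition_cfg_t;

typedef struct {
    int    core_id;
    void  *hw_cache;                /* on-chip hardware cache (not modelled) */
    void  *extension;               /* allocation from system memory */
    size_t extension_size;
} core_cache_t;

/* Extend a core's hardware cache with system memory sized for its partition. */
static void extend_cache(core_cache_t *core, const partition_cfg_t *cfg) {
    core->extension      = malloc(cfg->extension_size);
    core->extension_size = cfg->extension_size;
    printf("core %d (%s): +%zu bytes of cache from system memory\n",
           core->core_id, cfg->partition, cfg->extension_size);
}

int main(void) {
    partition_cfg_t cfgs[] = { { "LPAR1", 4096 }, { "LPAR2", 16384 } };
    core_cache_t cores[]   = { { 0, NULL, NULL, 0 }, { 1, NULL, NULL, 0 } };

    extend_cache(&cores[0], &cfgs[0]);   /* first core, first partition  */
    extend_cache(&cores[1], &cfgs[1]);   /* second core, second partition */

    free(cores[0].extension);
    free(cores[1].extension);
    return 0;
}
```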
Abstract:
The different illustrative embodiments provide a method, apparatus, and computer program product for folding at each affinity level for a partition spanning multiple nodes. In one illustrative embodiment, a method is provided for identifying a number of domains in a number of affinity levels. A lightest loaded domain is identified in the number of domains identified. A number of nodes are identified in the lightest loaded domain identified. A lightest loaded node is identified in the number of nodes. A lightest loaded processing unit on the lightest loaded node is identified and the lightest loaded processing unit is folded.
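The three-step selection (lightest loaded domain, then lightest loaded node within it, then lightest loaded processing unit on that node) can be sketched as nested minimum searches. The example below uses invented structure names and load values and simply marks the chosen unit as folded.

```c
/* Sketch of the selection order described above: lightest-loaded domain,
 * then lightest-loaded node within it, then lightest-loaded processing
 * unit on that node, which is then folded. Data layout is illustrative. */
#include <stdio.h>

#define NUNITS   2
#define NNODES   2
#define NDOMAINS 2

typedef struct { int load; int folded; } unit_t;
typedef struct { unit_t units[NUNITS]; } node_t;
typedef struct { node_t nodes[NNODES]; } domain_t;

static int node_load(const node_t *n) {
    int sum = 0;
    for (int i = 0; i < NUNITS; i++) sum += n->units[i].load;
    return sum;
}

static int domain_load(const domain_t *d) {
    int sum = 0;
    for (int i = 0; i < NNODES; i++) sum += node_load(&d->nodes[i]);
    return sum;
}

void fold_one_unit(domain_t domains[NDOMAINS]) {
    /* 1. identify the lightest loaded domain */
    domain_t *d = &domains[0];
    for (int i = 1; i < NDOMAINS; i++)
        if (domain_load(&domains[i]) < domain_load(d)) d = &domains[i];
    /* 2. identify the lightest loaded node in that domain */
    node_t *n = &d->nodes[0];
    for (int i = 1; i < NNODES; i++)
        if (node_load(&d->nodes[i]) < node_load(n)) n = &d->nodes[i];
    /* 3. identify the lightest loaded processing unit on that node and fold it */
    unit_t *u = &n->units[0];
    for (int i = 1; i < NUNITS; i++)
        if (n->units[i].load < u->load) u = &n->units[i];
    u->folded = 1;
    printf("folded unit with load %d\n", u->load);
}

int main(void) {
    domain_t domains[NDOMAINS] = {
        { { { { {50,0}, {40,0} } }, { { {30,0}, {20,0} } } } },
        { { { { {10,0}, {15,0} } }, { { {25,0}, {35,0} } } } },
    };
    fold_one_unit(domains);
    return 0;
}
```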
Abstract:
Methods, systems, and products for lock tracing at a component level. The method includes associating one or more locks with a component of the operating system; initiating lock tracing for the component; and instrumenting the component-associated locks with lock tracing program instructions in response to initiating lock tracing. The locks are selected from a group of locks configured for use by an operating system and individually comprise locking code. The component lock tracing may be static or dynamic.
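One plausible way to picture component-level instrumentation is a per-lock trace hook that is installed only for locks tagged with the traced component, as in the hypothetical sketch below; the structure and function names are invented, and real lock tracing would live inside the operating system's locking code.

```c
/* Minimal sketch of component-level lock tracing, assuming each lock
 * carries a component tag and an optional trace hook that is installed
 * when tracing is initiated for that component. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    const char *component;               /* component this lock belongs to */
    void (*trace)(const char *lockname); /* NULL until tracing is initiated */
} lock_t;

static void trace_hook(const char *lockname) {
    printf("lock-trace: %s acquired\n", lockname);
}

/* Instrument every lock associated with the given component. */
static void enable_component_lock_tracing(lock_t *locks, int n,
                                          const char *component) {
    for (int i = 0; i < n; i++)
        if (strcmp(locks[i].component, component) == 0)
            locks[i].trace = trace_hook;
}

/* The locking code calls the trace hook only when tracing is enabled. */
static void acquire(lock_t *lock) {
    if (lock->trace)
        lock->trace(lock->name);
    /* ... actual locking code would run here ... */
}

int main(void) {
    lock_t locks[] = {
        { "vfs_mutex",  "filesystem", NULL },
        { "route_lock", "network",    NULL },
    };
    enable_component_lock_tracing(locks, 2, "filesystem");
    acquire(&locks[0]);     /* traced */
    acquire(&locks[1]);     /* silent */
    return 0;
}
```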
Abstract:
Embodiments of the invention are associated with an application process that comprises multiple threads, wherein threads of the process are disposed to run on a data processing system, and each thread can have a user mode or a kernel mode machine state, or both, selectively, when it is running. An embodiment directed to a method comprises the steps of allocating a specified memory location for each of the threads, and responsive to a given thread entering a sleep state, selectively saving the kernel mode machine state of the given thread in the specified memory location for the given thread. The saved machine state comprises the state of the given thread immediately prior to the given thread entering the sleep state. In response to detecting a hang condition in the operation of the process, a debugger is attached to the process to access at least one of the saved user mode machine states. The method further includes analyzing information provided by the at least one accessed machine state to determine the cause of the hang condition, and restoring the original state upon detachment, so the debugger attachment is completely transparent to the target process.
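The per-thread state slot and the later inspection by a debugger can be sketched as follows; `machine_state_t` is a stand-in for the real register context, the "debugger" here is just a routine that reads the saved slots, and every name is hypothetical rather than taken from the patent.

```c
/* Sketch of the per-thread state saving described above. Each thread has a
 * specified memory location (the `saved` field) that captures its machine
 * state immediately before it enters the sleep state; on a hang, an
 * attached debugger can walk these slots to see where threads are blocked. */
#include <stdio.h>

typedef struct { unsigned long pc, sp, lr; } machine_state_t;

typedef struct {
    int             tid;
    machine_state_t saved;      /* specified memory location for this thread */
    int             asleep;
} thread_t;

/* Capture the thread's state immediately prior to entering the sleep state. */
static void enter_sleep(thread_t *t, machine_state_t current) {
    t->saved  = current;
    t->asleep = 1;
}

/* What an attached debugger would do: inspect the saved machine states
 * of sleeping threads to help determine the cause of the hang. */
static void debugger_inspect(thread_t *threads, int n) {
    for (int i = 0; i < n; i++)
        if (threads[i].asleep)
            printf("tid %d slept at pc=%#lx sp=%#lx\n",
                   threads[i].tid, threads[i].saved.pc, threads[i].saved.sp);
}

int main(void) {
    thread_t threads[2] = { { 1, {0,0,0}, 0 }, { 2, {0,0,0}, 0 } };
    enter_sleep(&threads[0], (machine_state_t){ 0x4005a0, 0x7ffd1000, 0x400480 });
    /* a hang is detected ... attach and inspect the saved states */
    debugger_inspect(threads, 2);
    return 0;
}
```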