Abstract:
Methods are provided in which a standby server, a first main server, and a second main server control shared input/output (I/O) adapters in a storage system. The standby server is in communication with the first main server and the second main server, and the storage system is configured to operate as a dual node active system. The methods include activating the standby server in response to receiving a communication from the first main server of a fail mode of the second main server. Systems and physical computer storage media are also provided.
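A minimal sketch, not the patented implementation, of the standby-activation idea: the standby server receives a fail-mode notification from a surviving main server and takes over control of the shared I/O adapters that the failed server owned. All class, method, and adapter names here are illustrative assumptions.

    class MainServer:
        def __init__(self, name):
            self.name = name
            self.failed = False

    class Adapter:
        def __init__(self, ident, owner):
            self.ident = ident
            self.owner = owner

    class StandbyServer:
        def __init__(self, main_a, main_b, shared_adapters):
            self.mains = {main_a.name: main_a, main_b.name: main_b}
            self.shared_adapters = shared_adapters
            self.active = False

        def on_fail_mode_message(self, reporting_server, failed_server):
            # A surviving main server reports that its peer entered a fail mode.
            self.mains[failed_server].failed = True
            self.activate(failed_server)

        def activate(self, failed_server):
            # Activate the standby so the system keeps operating as a dual node active system.
            self.active = True
            for adapter in self.shared_adapters:
                if adapter.owner == failed_server:
                    adapter.owner = "standby"

    adapters = [Adapter(0, "main_1"), Adapter(1, "main_2")]
    standby = StandbyServer(MainServer("main_1"), MainServer("main_2"), adapters)
    standby.on_fail_mode_message("main_1", "main_2")   # main_1 reports that main_2 failed
    assert standby.active and adapters[1].owner == "standby"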
Abstract:
One embodiment of an adapter card in accordance with the invention includes a circuit board connectable to a motherboard of a computer system. A logic chip is connected to the circuit board to provide functionality to the adapter card. One or more programmable devices are connected to the circuit board and store data read by the logic chip upon initialization. This data may include first character data to program the logic chip to have a first character and second character data to program the logic chip to have a second character. A switching mechanism is provided to switch between the first and second character data in response to an external input, thereby causing the logic chip to read one of the first and second character data.
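Purely as illustration, the dual-character behavior can be modeled in software: a programmable device holds two configuration images ("characters"), and an external select input determines which image the logic chip reads at initialization. The names and example character values below are assumptions, not the patent's own.

    class ProgrammableDevice:
        def __init__(self, first_character_data, second_character_data):
            self.images = {0: first_character_data, 1: second_character_data}

    class LogicChip:
        def __init__(self):
            self.character = None

        def initialize(self, device, select):
            # The switching mechanism is modeled as the external 'select' input (0 or 1).
            self.character = device.images[select]

    device = ProgrammableDevice({"role": "first character"}, {"role": "second character"})
    chip = LogicChip()
    chip.initialize(device, select=1)   # external input chooses the second character data
    print(chip.character)               # -> {'role': 'second character'}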
Abstract:
An apparatus, system, and method are disclosed for adapter card failover. A switch module connects a first processor complex to an adapter card through a first port as an owner processor complex. The owner processor complex manages the adapter card except for a second port and receives error messages from the adapter card. The switch module further connects a second processor complex to the adapter card through the second port as a non-owner processor complex. The non-owner processor complex manages the second port. A detection module detects a failure of the first processor complex. A setup module modifies the switch module to logically connect the second processor complex to the adapter card as the owner processor complex and to logically disconnect the first processor complex from the adapter card in response to detecting the failure.
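A minimal sketch, under assumed names, of the owner/non-owner failover flow: the switch module tracks which processor complex owns the adapter card; when a failure of the owner is detected, the surviving complex is logically connected as the owner and the failed complex is logically disconnected. The heartbeat-based detection is a placeholder assumption.

    class SwitchModule:
        def __init__(self, owner, non_owner):
            self.owner = owner          # manages the adapter card except the peer's port
            self.non_owner = non_owner  # manages only its own port
            self.connected = {owner, non_owner}

        def failover(self, failed):
            # Promote the surviving complex to owner and disconnect the failed one.
            if failed == self.owner:
                self.owner, self.non_owner = self.non_owner, None
                self.connected.discard(failed)

    def detect_failure(heartbeats, timeout=3):
        # Hypothetical detection: a complex whose heartbeat age exceeds the timeout has failed.
        return [cpx for cpx, age in heartbeats.items() if age > timeout]

    switch = SwitchModule(owner="complex_1", non_owner="complex_2")
    for failed in detect_failure({"complex_1": 5, "complex_2": 0}):
        switch.failover(failed)
    assert switch.owner == "complex_2" and "complex_1" not in switch.connected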
Abstract:
Updating code of a single processor in a multi-processor system includes halting transactions processed by a first processor in the system while processing of transactions by a second processor in the system is maintained. The first processor then receives new code, and an operating system running on the first processor is terminated, whereby all processes and threads being executed by the first processor are terminated. Execution of a self-reset of the first processor is commenced and interrupts associated with the first processor are disabled. Only those system resources exclusively associated with the first processor are reset, and memory transactions associated with the first processor are disabled. An image of the new code is copied into memory associated with the first processor, registers associated with the first processor are reset, and the new code is booted by the first processor.
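The sequence can be sketched as an ordered driver routine. Every helper below is a stub standing in for a firmware or operating-system operation named in the abstract; none of these function names come from the patent itself, and the second processor is assumed to keep processing transactions untouched throughout.

    def _step(msg):
        print(msg)

    def halt_transactions(cpu):            _step(f"halt transactions on {cpu}")
    def receive_new_code(cpu, image):      _step(f"{cpu} receives new code ({len(image)} bytes)")
    def terminate_os(cpu):                 _step(f"terminate OS on {cpu}: all processes/threads end")
    def begin_self_reset(cpu):             _step(f"{cpu} begins self-reset")
    def disable_interrupts(cpu):           _step(f"disable interrupts for {cpu}")
    def reset_exclusive_resources(cpu):    _step(f"reset resources exclusive to {cpu}")
    def disable_memory_transactions(cpu):  _step(f"disable memory transactions for {cpu}")
    def copy_image(cpu, image):            _step(f"copy {len(image)}-byte image into {cpu} memory")
    def reset_registers(cpu):              _step(f"reset registers of {cpu}")
    def boot(cpu):                         _step(f"{cpu} boots the new code")

    def update_processor_code(target_cpu, new_code_image):
        halt_transactions(target_cpu)
        receive_new_code(target_cpu, new_code_image)
        terminate_os(target_cpu)
        begin_self_reset(target_cpu)
        disable_interrupts(target_cpu)
        reset_exclusive_resources(target_cpu)
        disable_memory_transactions(target_cpu)
        copy_image(target_cpu, new_code_image)
        reset_registers(target_cpu)
        boot(target_cpu)

    update_processor_code("processor_0", new_code_image=b"\x00" * 1024)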
Abstract:
Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache.
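A compact sketch, not the patented code, of that promotion rule: a page coming up from the second cache is inserted at the LRU end of the first cache's LRU list when its second-cache reference count is greater than zero, and at the MRU end otherwise. The use of collections.deque and the names below are illustrative assumptions.

    from collections import deque

    class FirstCache:
        def __init__(self):
            self.lru_list = deque()    # left end = LRU, right end = MRU

        def promote(self, page, reference_count):
            if reference_count > 0:
                self.lru_list.appendleft(page)   # LRU end: demoted sooner
            else:
                self.lru_list.append(page)       # MRU end: retained longer

    cache = FirstCache()
    cache.promote("page_A", reference_count=2)   # goes to the LRU end
    cache.promote("page_B", reference_count=0)   # goes to the MRU end
    print(list(cache.lru_list))                  # ['page_A', 'page_B'], with page_A at the LRU end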
Abstract:
Various embodiments for movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor are provided. In one such embodiment, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes. Additional system and computer program product embodiments are disclosed and provide related advantages.
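An illustrative sketch under assumed names: when a whole data segment is promoted, its requested portion is placed at the MRU end of the higher-level cache's demotion queue, its unrequested portion is placed at the LRU end and is pinned until the write of the whole segment to the lower-level cache completes.

    from collections import deque

    class HigherLevelCache:
        def __init__(self):
            self.demotion_queue = deque()   # left = LRU portion, right = MRU portion
            self.pinned = set()

        def promote_whole_segment(self, segment_id, requested, unrequested):
            self.demotion_queue.append((segment_id, requested))        # requested data at MRU portion
            self.demotion_queue.appendleft((segment_id, unrequested))  # unrequested data at LRU portion
            self.pinned.add((segment_id, unrequested))                 # pinned until lower-level write completes

        def on_lower_level_write_complete(self, segment_id, unrequested):
            self.pinned.discard((segment_id, unrequested))             # now eligible for demotion

    cache = HigherLevelCache()
    cache.promote_whole_segment("seg_7", requested=("blk0", "blk1"), unrequested=("blk2", "blk3"))
    cache.on_lower_level_write_complete("seg_7", unrequested=("blk2", "blk3"))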
Abstract:
The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
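A short sketch of that admission policy: candidate data is admitted to the secondary cache only if its heat metric is at or above an adaptive threshold, and the threshold is nudged by comparing hit metrics of the most recently inserted data against the most recently evicted data. The class name, initial threshold, and step size are assumptions.

    class AdmissionController:
        def __init__(self, threshold=4, step=1):
            self.threshold = threshold
            self.step = step

        def adjust_threshold(self, recently_inserted_hits, recently_evicted_hits):
            if recently_inserted_hits > recently_evicted_hits:
                self.threshold -= self.step     # recently inserted data is paying off: admit more
            elif recently_inserted_hits < recently_evicted_hits:
                self.threshold += self.step     # recently evicted data was hotter: admit less

        def admit(self, heat_metric):
            # Reject candidates below the threshold; admit those at or above it.
            return heat_metric >= self.threshold

    ctl = AdmissionController()
    ctl.adjust_threshold(recently_inserted_hits=10, recently_evicted_hits=3)
    print(ctl.admit(heat_metric=3), ctl.admit(heat_metric=2))   # True False (threshold is now 3)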
Abstract:
Systems and methods for scanning ports for work are provided. One system includes one or more processors, multiple ports, a first tracking mechanism, and a second tracking mechanism for tracking high priority work and low priority work, respectively. The processor(s) is/are configured to perform the below method. One method includes scanning the ports, finding high priority work on a port, and accepting or declining the high priority work. The method further includes changing a designation of the processor to TRUE in the first tracking mechanism if the processor accepts the high priority work such that the processor is allowed to perform the high priority work on the port. Also provided are computer storage mediums including computer code for performing the above method.
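A sketch, with assumed names, of the port-scanning loop: the processor scans each port, and when it finds and accepts high priority work it marks itself TRUE in the high-priority tracking mechanism so that it is allowed to perform that work on the port. The accept/decline policy is left as a caller-supplied assumption.

    def scan_ports(processor, ports, high_priority_tracker, accept_policy):
        for port in ports:
            work = port.get("high_priority_work")
            if work is None:
                continue
            if accept_policy(processor, work):                 # accept or decline the work
                high_priority_tracker[processor] = True        # TRUE: allowed to perform it on the port
                perform(processor, port, work)
            # declined work is left on the port for another processor's scan

    def perform(processor, port, work):
        print(f"{processor} performs {work} on port {port['id']}")

    tracker = {"proc_0": False}
    ports = [{"id": 0, "high_priority_work": None}, {"id": 1, "high_priority_work": "sync"}]
    scan_ports("proc_0", ports, tracker, accept_policy=lambda p, w: True)
    assert tracker["proc_0"] is True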
Abstract:
A method to enable a user mode process to operate in a privileged execution mode is disclosed. Applicants' method provides an operating system comprising a privileged execution mode and a non-privileged execution mode, and a plurality of user mode strings operating in the non-privileged execution mode. The computing device receives a request from a first user mode string to operate in the privileged execution mode to perform one or more designated tasks. Applicants' method authorizes the first user mode string to operate in the privileged execution mode, and the first user mode string performs those one or more designated tasks using the privileged execution mode. Applicants' method continues to permit the first user mode string to operate in the privileged execution mode after completion of the one or more designated tasks.
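A minimal sketch, under an assumed API, of the authorization flow: a user mode string requests privileged execution, is authorized, performs its designated tasks in the privileged mode, and is allowed to remain in the privileged mode after those tasks complete. The authorization check itself is elided because the abstract does not describe it.

    class OperatingSystem:
        PRIVILEGED, NON_PRIVILEGED = "privileged", "non-privileged"

        def __init__(self):
            self.mode = {}    # user mode string name -> current execution mode

        def request_privileged(self, string_name):
            # Authorization details are elided; the abstract only states the request is authorized.
            self.mode[string_name] = self.PRIVILEGED
            return True

        def run_tasks(self, string_name, tasks):
            assert self.mode[string_name] == self.PRIVILEGED
            for task in tasks:
                task()
            # Note: the string is NOT demoted here; it keeps privileged mode after completion.

    os_ = OperatingSystem()
    os_.request_privileged("string_1")
    os_.run_tasks("string_1", [lambda: print("designated task")])
    print(os_.mode["string_1"])    # still 'privileged'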
Abstract:
A sleep function capable of putting a fixed high-priority thread to sleep within a time-window is disclosed. After a sleep request has been made by a fixed high-priority thread via the sleep function, a determination is made whether or not the fixed high-priority thread would be awoken before the sleep duration requested in the sleep request has elapsed. If the fixed high-priority thread would be awoken before the requested sleep duration has elapsed, the number of tasks for the fixed high-priority thread to perform is increased in order to delay the sleep start time of the fixed high-priority thread from a point within the first time-window in which the sleep request was made to an end boundary of that time-window.
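An illustrative sketch of the time-window idea: if the fixed high-priority thread would wake before its requested sleep duration, it is given additional work so that its sleep start is pushed from somewhere inside the current time-window out to the window's end boundary. The window length, helper names, and the simple one-task-per-millisecond filler model are assumptions.

    WINDOW_MS = 10

    def handle_sleep_request(now_ms, requested_sleep_ms, predicted_wake_ms, task_queue, filler_task):
        window_end = (now_ms // WINDOW_MS + 1) * WINDOW_MS    # end boundary of the current window
        woken_early = predicted_wake_ms < now_ms + requested_sleep_ms
        if woken_early:
            # Add tasks until the thread's sleep start is delayed to the window boundary.
            delay_needed = window_end - now_ms
            task_queue.extend([filler_task] * delay_needed)
        return window_end if woken_early else now_ms

    queue = []
    start = handle_sleep_request(now_ms=103, requested_sleep_ms=8, predicted_wake_ms=107,
                                 task_queue=queue, filler_task="unit_of_work")
    print(start, len(queue))   # 110 7 -> the sleep now starts at the window's end boundary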