Abstract:
A procedure controls execution of priority-ordered tasks in a multi-node data processing system. The data processing system includes a node with a software-controlled processor and a hardware-configured queue-controller. The queue-controller includes a plurality of priority-ordered queues, each queue listing tasks having an assigned priority equal to a priority order assigned to the queue. The queue-controller responds to a processor-generated order to queue a first task for execution, by performing a method which includes the steps of: listing said first task on a first queue having an assigned priority that is equal to a priority of said first task; if a second task is listed on a queue having a higher assigned priority, attempting execution of the second task before execution of the first task; if no tasks are listed on a queue having a higher assigned priority than said first queue, attempting execution of a first listed task in the first queue; and upon completion of execution of the task or a stalling of execution of the task, attempting execution of a further task on the first queue only if another order has not been issued to place a task on a queue having a higher assigned priority. The method further handles chained subtasks by attempting execution of each subtask of a task in response to the processor-generated order; and if execution of any subtask does not complete, attempting execution of another task in lieu of a subtask chained to the subtask that did not complete.
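A minimal Python sketch of the queue-controller behaviour described in this abstract follows; the class names, the try_execute callback, and the lower-number-is-higher-priority convention are illustrative assumptions rather than details taken from the abstract, and a stalled task is simply dropped rather than re-queued in this simplification.

```python
# Sketch of the priority-ordered queue controller; names and conventions are
# assumptions for illustration (priority 0 is treated as the highest).
from collections import deque

class Task:
    def __init__(self, name, priority, subtasks=None):
        self.name = name
        self.priority = priority          # index of the queue with equal priority
        self.subtasks = subtasks or []    # chained subtasks, attempted in order

class QueueController:
    def __init__(self, num_priorities):
        self.queues = [deque() for _ in range(num_priorities)]

    def order_queue_task(self, task):
        # Processor-generated order: list the task on the queue whose assigned
        # priority equals the task's priority.
        self.queues[task.priority].append(task)

    def _highest_nonempty(self):
        for level, queue in enumerate(self.queues):
            if queue:
                return level
        return None

    def dispatch(self, try_execute):
        # try_execute(step) returns True on completion, False if the step stalls.
        while (level := self._highest_nonempty()) is not None:
            task = self.queues[level].popleft()
            for step in [task] + task.subtasks:
                if not try_execute(step):
                    # The step stalled: skip the subtasks chained to it and
                    # attempt another task instead.
                    break
            # Re-scanning from the highest-priority queue on each pass means a
            # further task from this queue runs only if no later order has
            # listed a task on a higher-priority queue.
```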
Abstract:
An apparatus for dynamically allocating memory includes a processor, a free buffer pool memory and a control memory which stores control block data structures. The control block data structures enable a segmentation of the free buffer pool memory into a series of free buffer pools, each free buffer pool comprising plural identical size buffers, each succeeding free buffer pool including a larger buffer size than a preceding free buffer pool. A selection size parameter for a given free buffer pool is a value that is larger than the size of the buffers comprising the given free buffer pool, but less than a next larger buffer size in the next of the series of free buffer pools. A memory allocation procedure responds to a request from an executing procedure for allocation of buffer space by: (i) allocating a buffer from a free buffer pool whose associated selection size parameter is a next larger value than the buffer space that was requested; (ii) determining a difference between the allocated buffer size and the requested buffer space to find an unfulfilled amount of the requested buffer space; (iii) allocating a buffer from a free buffer pool whose selection size parameter is a next larger value, among selection size parameters, than the unfulfilled amount; and (iv) repeating steps (ii) and (iii) until the memory allocation procedure determines that there is no unfulfilled amount of the requested buffer space. The apparatus further includes "quickcell" memory which is allocated without use of control block data structures.
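The allocation loop in steps (i) through (iv) can be sketched as follows in Python; the pool representation and the allocate function name are assumptions for illustration, and the sketch assumes the requested size never exceeds the largest selection size parameter.

```python
# Sketch of allocation steps (i)-(iv); pools are assumed to be dicts sorted by
# ascending buffer size, each with 'buffer_size', 'selection_size', and a
# 'free' list of available buffers.
def allocate(request_size, pools):
    allocated = []
    unfulfilled = request_size
    while unfulfilled > 0:
        # (i)/(iii): pick the pool whose selection size parameter is the next
        # larger value than the amount of the request still unfulfilled
        pool = next(p for p in pools
                    if p['selection_size'] >= unfulfilled and p['free'])
        allocated.append(pool['free'].pop())
        # (ii): the chosen buffer may be smaller than the amount it was picked
        # to cover, leaving an unfulfilled remainder for the next iteration
        unfulfilled -= pool['buffer_size']
    # (iv): the loop repeats until no unfulfilled amount remains
    return allocated
```

Because each pool's selection size lies between its own buffer size and the next larger pool's buffer size, a request slightly larger than a buffer size is served by that buffer plus a small remainder buffer rather than by the next larger buffer, which reduces over-allocation compared with always rounding up.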
Abstract:
A distributed data processing system includes a plurality of nodes interconnected by bidirectional communication links. Each node includes a control message line for handling of control messages and a control memory for storing the control messages. Each node further includes a data message line for handling of data messages and a data memory for storing the data messages. A processor in the node causes the data message line to queue and dispatch data messages from the data memory and the control message line to queue and dispatch control messages from the control memory. Each node includes N bidirectional communication links enabling the node to have at least twice as much input/output bandwidth as the control message line and data message line, combined. An input/output switch includes a routing control processor and is coupled between the N bidirectional communication links, the data message line and the control message line. The input/output switch dispatches either a control message or a data message over at least one of the bidirectional communication links in accordance with an output from the routing control processor, thereby enabling each communication link to carry either data or control messages. If a communication link is busy with either a control or a data message, the routing control processor increments to another communication link to enable dispatch of a queued message.
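A short Python sketch of the link-selection step performed by the routing control processor: if the chosen bidirectional link is busy, it increments to another link so that either a control or a data message can be dispatched. The class and method names are illustrative assumptions.

```python
# Sketch of the routing control processor's link selection; names are assumed.
class IOSwitch:
    def __init__(self, num_links):
        self.num_links = num_links
        self.busy = [False] * num_links   # busy state of each bidirectional link
        self.next_link = 0

    def dispatch(self, message):
        # 'message' may be either a control or a data message; the switch
        # treats both alike and tries each link at most once.
        for _ in range(self.num_links):
            link = self.next_link
            self.next_link = (self.next_link + 1) % self.num_links
            if not self.busy[link]:
                self.busy[link] = True
                return link               # message dispatched on this link
        return None                       # all links busy; message stays queued
```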
Abstract:
A method enables a host processor, which employs variable length (VL) records, to communicate with disk storage which employs fixed length (FL) sectors for storage of the VL records. The method comprises the steps of: a) deriving a first control data structure for an update VL record, the first control data structure including information describing segments of the update VL record; b) determining a disk track that includes an FL sector wherein an old VL record commences that corresponds to the update VL record; c) reading each FL sector in the disk track and creating a control data structure which includes information describing each VL record stored in the disk track; d) substituting in a control data structure for the old VL record that corresponds to the update VL record, information regarding update data from the first control data structure; e) recording in the disk track, data indicated by each control data structure determined in steps c) and d); and f) if the old VL record ends at other than a sector break of a FL sector, reblocking VL records into FL sectors which are recorded thereafter on the disk track. The invention also enables a read action to be accomplished in one rotation of a disk even though it commences at an FL sector that is not at the beginning of a VL record to be accessed.
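The track-update steps c) through f) can be sketched as follows in Python, assuming the VL records on the track have already been parsed into (record_id, data) pairs; the sector size and names are assumptions for illustration.

```python
# Sketch of update steps c)-f) over one disk track; names and the sector size
# are assumptions, and parsing of the raw sectors into records is presumed done.
SECTOR = 512

def rewrite_track(track_records, old_record_id, update_data):
    """track_records: ordered list of (record_id, bytes) pairs for one track."""
    # d) substitute the update data for the old VL record that corresponds
    #    to the update VL record
    updated = [(rid, update_data if rid == old_record_id else data)
               for rid, data in track_records]
    # e)/f) reblock the VL records into FL sectors; records need not fall on
    #    sector breaks, so the data after the changed record is re-laid-out
    stream = b''.join(data for _, data in updated)
    stream += b'\x00' * (-len(stream) % SECTOR)      # pad the final sector
    return [stream[i:i + SECTOR] for i in range(0, len(stream), SECTOR)]
```

Because VL records need not end on sector breaks, changing one record's length forces the records after it to be reblocked, which is why step f) rewrites the remainder of the track.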
Abstract:
A method enables a host processor, which employs variable length (VL) records, to transparently communicate with disk storage which employs fixed length (FL) sectors for storage of the VL records. The method comprises the steps of: a) deriving a first control data structure for an update VL record, the first control data structure including information describing segments of the update VL record; b) determining an FL sector wherein an old VL record commences that corresponds to the update VL record; c) if the old VL record commences at other than a sector break of the FL sector, deriving a second control data structure for a portion of a prior VL record that immediately precedes the old VL record and a third control data structure for the old VL record; d) substituting in the third control data structure, information regarding update segments of the update VL record from the first control data structure; and e) recording in the FL sector determined in c), data indicated by the second control data structure and at least a portion of the update VL record, through use of the third control data structure as altered in d).
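A minimal Python sketch of steps b) through e) for a single FL sector, using byte slices in place of the control data structures; the sector size, the function name, and the assumption that the update fits within the sector are illustrative.

```python
# Sketch of a mid-sector record update; names and sector size are assumptions.
SECTOR = 512

def update_sector(sector, old_offset, update_data):
    """sector: the FL sector's bytes; old_offset: byte offset within the
    sector where the old VL record commences (may be non-zero)."""
    # b)/c) the old VL record starts mid-sector, so keep the tail of the prior
    #       VL record that precedes it (the second control data structure)
    prior_portion = sector[:old_offset]
    # d) substitute the update segments for the old record's data
    #    (the third control data structure, as altered)
    new_record = update_data[:SECTOR - old_offset]
    # e) record the data indicated by both structures back into the sector;
    #    any sector bytes beyond the update are carried over unchanged here
    rebuilt = prior_portion + new_record
    return rebuilt + sector[len(rebuilt):]
```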
Abstract:
Conflicts are resolved between competing nodes in a multi-node communications network. After a first node in the network requests an initiation of communications with a target node, the requesting node may simply initiate the requested communications with the target node if the target node is not busy. If the first node determines that the target node is busy, it proceeds to resolve the conflict. Namely, the first node repeats the process of waiting for a first delay and then requesting initiation of communications with the target node. After each unsuccessful attempt, the first delay is successively increased. As an example, the delay may be increased exponentially, with a controlled randomness added. After a predetermined number of unsuccessful attempts, the first node sends one or more queued messages to other nodes. Following this, the first node performs another sequence of attempts to initiate communications with the target node, successively increasing the delay between unsuccessful attempts, as before. After a predetermined number of unsuccessful passes through the foregoing routine, the first node proceeds to take appropriate action, such as initiating an error recovery routine, sending the message via different hardware components, or issuing an error message.
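The retry behaviour described above resembles exponential backoff with jitter; the following Python sketch uses assumed names and constants, and the callbacks target_is_busy and service_queued_messages stand in for the node's actual communication machinery.

```python
# Sketch of the backoff routine: exponentially increasing, jittered delays
# between attempts, with queued messages to other nodes serviced between
# passes. All names and constants are assumptions for illustration.
import random
import time

def contact_target(target_is_busy, service_queued_messages,
                   attempts_per_pass=5, max_passes=3,
                   base_delay=0.01, max_delay=1.0):
    for _ in range(max_passes):
        delay = base_delay
        for _ in range(attempts_per_pass):
            if not target_is_busy():
                return True                    # target free: initiate communications
            # wait, then retry; the delay grows exponentially with jitter
            time.sleep(delay * (1 + random.random()))
            delay = min(delay * 2, max_delay)
        # between passes, send one or more queued messages to other nodes
        service_queued_messages()
    # caller takes appropriate action (error recovery, alternate hardware,
    # or an error message) after the passes are exhausted
    return False
```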
Abstract:
A first logical partition in a first processing complex of a server cluster is operated in an active mode and a second logical partition in the processing complex is operated in a standby mode. Upon detection of a failure in a second processing complex of the server cluster, the standby mode logical partition in the first processing complex is activated to an active mode. In one embodiment, partition resources are transferred from an active mode logical partition to the logical partition activated from standby mode. Other embodiments are described and claimed.
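A minimal Python sketch of the failover step, assuming a simple LogicalPartition object and a single resource count; the names and the transferred fraction are illustrative, not taken from the abstract.

```python
# Sketch of activating a standby partition after a complex failure; all names
# and the resource-transfer policy are assumptions for illustration.
class LogicalPartition:
    def __init__(self, name, mode, resources=0):
        self.name, self.mode, self.resources = name, mode, resources

def on_complex_failure(active_lpar, standby_lpar, transfer_fraction=0.5):
    # activate the standby-mode partition in the surviving complex
    standby_lpar.mode = 'active'
    # one embodiment: transfer partition resources from the active-mode
    # partition to the partition just activated from standby
    moved = int(active_lpar.resources * transfer_fraction)
    active_lpar.resources -= moved
    standby_lpar.resources += moved
    return standby_lpar
```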
Abstract:
Provided are a method, system, and program for destaging a track from cache to a storage device. The destaged track is retained in the cache. Verification is made of whether the storage device successfully completed writing data. In response to verifying that the storage device successfully completed the writing of data, destaged tracks that were destaged before the verification are indicated as eligible for removal from the cache.
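A small Python sketch of the destage-and-verify flow, with assumed names: a destaged track is retained in cache and becomes eligible for removal only after the write to the storage device is verified.

```python
# Sketch of destage, retention, and verification; names are assumptions.
class Cache:
    def __init__(self):
        self.tracks = {}              # track_id -> track data
        self.destaged = set()         # destaged but not yet verified
        self.eligible = set()         # verified; may be removed from cache

    def destage(self, track_id, write_to_device):
        write_to_device(track_id, self.tracks[track_id])
        self.destaged.add(track_id)   # the destaged track is retained in cache

    def verify_writes(self, device_completed_ok):
        if device_completed_ok():
            # tracks destaged before the verification become eligible for removal
            self.eligible |= self.destaged
            self.destaged.clear()
```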
Abstract:
A data storage system provides generalized record caching through a control unit adapted to support track caching in the upper level store of a two level memory. Dynamic reallocation of space between each type of caching in the upper store follows operating patterns of host computer systems using the data storage system. A storage controller cache has a plurality of segments. A directory entry data structure is allocated to each segment. Such allocated directory entries are used to identify tracks as cached. A plurality of unallocated directory entries are also provided. As a record is cached in a segment outside of a track slot, an unallocated directory entry is used to identify a virtual track in cache corresponding to the track of the record in the lower level store. Records from one track can thus appear in several segments outside track slots. Tracking of records to locate those least recently used is done globally over all track slots and record caching segments and locally within individual record caching segments. A mechanism is provided for identifying record slots, as they become least recently used, and dropping them from the upper level store in the face of competing demands for the space. A second mechanism identifies least recently used segments for dropping from the upper level store.
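The directory-entry scheme can be sketched in Python as follows; the class name, the single global LRU order, and the virtual-track representation are simplifying assumptions for illustration, and the separate local LRU tracking within individual record caching segments is omitted for brevity.

```python
# Sketch of allocated vs. unallocated directory entries and a global LRU order;
# names and structure are assumptions for illustration.
from collections import OrderedDict

class StorageCache:
    def __init__(self, num_directory_entries):
        self.free_entries = list(range(num_directory_entries))
        # one global LRU order over track slots and record-caching segments;
        # each directory entry records what kind of cached object it names
        self.lru = OrderedDict()   # entry -> ('track' | 'virtual_track', id)

    def cache_track(self, track_id):
        entry = self.free_entries.pop()
        self.lru[entry] = ('track', track_id)
        return entry

    def cache_record(self, lower_store_track_id):
        # a record cached outside a track slot draws an unallocated directory
        # entry that identifies a virtual track for its lower-level-store track
        entry = self.free_entries.pop()
        self.lru[entry] = ('virtual_track', lower_store_track_id)
        return entry

    def touch(self, entry):
        self.lru.move_to_end(entry)          # mark as most recently used

    def drop_lru(self):
        # when space is needed, the least recently used slot or segment is
        # dropped from the upper level store and its directory entry freed
        entry, _ = self.lru.popitem(last=False)
        self.free_entries.append(entry)
        return entry
```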