Abstract:
A distributed data processing system includes a plurality of nodes interconnected by bidirectional communication links. Each node includes a control message line for handling of control messages and a control memory for storing the control messages. Each node further includes a data message line for handling of data messages and a data memory for storing the data messages. A processor in the node causes the data message line to queue and dispatch data messages from the data memory and the control message line to queue and dispatch control messages from the control memory. Each node includes N bidirectional communication links, enabling the node to have at least twice as much input/output bandwidth as the control message line and data message line combined. An input/output switch includes a routing control processor and is coupled between the N bidirectional communication links, the data message line and the control message line. The input/output switch dispatches either a control message or a data message over at least one of the bidirectional communication links in accordance with an output from the routing control processor, thereby enabling each communication link to carry either data or control messages. If a communication link is busy with either a control or a data message, the routing control processor increments to another communication link to enable dispatch of a queued message.
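
For illustration only, the link-selection behaviour described above can be sketched as follows. The Node class, the N_LINKS value and the preference given to control messages over data messages are assumptions for the sketch, not details taken from the abstract.

    from collections import deque

    N_LINKS = 4  # assumed number of bidirectional communication links per node

    class Node:
        def __init__(self):
            self.link_busy = [False] * N_LINKS   # state of each bidirectional link
            self.control_queue = deque()         # queued control messages (control memory)
            self.data_queue = deque()            # queued data messages (data memory)
            self.next_link = 0                   # routing control processor's link pointer

        def dispatch(self):
            """Send one queued message (control messages preferred here) over any free link."""
            queue = self.control_queue or self.data_queue
            if not queue:
                return None
            # If the current link is busy, increment to another link until a free one is found.
            for _ in range(N_LINKS):
                link = self.next_link
                self.next_link = (self.next_link + 1) % N_LINKS
                if not self.link_busy[link]:
                    message = queue.popleft()
                    self.link_busy[link] = True
                    return (link, message)
            return None  # all links busy; the message stays queued

    node = Node()
    node.data_queue.append("data-msg-0")
    node.link_busy[0] = True
    print(node.dispatch())   # -> (1, 'data-msg-0'): link 0 was busy, so link 1 carries the message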
Abstract:
A procedure controls execution of priority ordered tasks in a multi-nodal data processing system. The data processing system includes a node with a software-controlled processor and a hardware-configured queue-controller. The queue-controller includes a plurality of priority-ordered queues, each queue listing tasks having an assigned priority equal to a priority order assigned to the queue. The queue-controller responds to a processor generated order to queue a first task for execution by performing a method which includes the steps of: listing said first task on a first queue having an assigned priority that is equal to a priority of said first task; if a second task is listed on a queue having a higher assigned priority, attempting execution of the second task before execution of the first task; if no tasks are listed on a queue having a higher assigned priority than said first queue, attempting execution of a first listed task in the first queue; and upon completion of execution of the task or a stalling of execution of the task, attempting execution of a further task on the first queue only if another order has not been issued to place a task on a queue having a higher assigned priority. The method further handles chained subtasks by attempting execution of each subtask of a task in response to the processor generated order; and if execution of any subtask does not complete, attempting execution of another task in lieu of a subtask chained to the subtask that did not complete.
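
A minimal sketch of this queuing discipline, assuming a small fixed number of priority levels and representing chained subtasks as callables that return True when they complete; the names Task, QueueController and run_once are illustrative, not the patent's interface.

    from collections import deque

    class Task:
        def __init__(self, name, subtasks=None):
            self.name = name
            self.subtasks = deque(subtasks or [])  # chained subtasks, executed in order

    class QueueController:
        def __init__(self, levels=4):
            # One queue per priority level; a lower index means a higher assigned priority.
            self.queues = [deque() for _ in range(levels)]

        def enqueue(self, task, priority):
            self.queues[priority].append(task)

        def run_once(self):
            """Attempt the first listed task on the highest-priority non-empty queue."""
            for priority, queue in enumerate(self.queues):
                if queue:
                    task = queue[0]
                    while task.subtasks:
                        subtask = task.subtasks[0]
                        if not subtask():
                            # Subtask did not complete: attempt another task instead of
                            # the subtask chained behind the one that stalled.
                            queue.rotate(-1)
                            return ("stalled", task.name, priority)
                        task.subtasks.popleft()
                    queue.popleft()              # task completed
                    return ("done", task.name, priority)
            return ("idle", None, None)

    qc = QueueController()
    qc.enqueue(Task("low", [lambda: True]), priority=3)
    qc.enqueue(Task("high", [lambda: True, lambda: True]), priority=0)
    print(qc.run_once())   # the higher-priority task is attempted first -> ('done', 'high', 0)
    print(qc.run_once())   # -> ('done', 'low', 3)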
Abstract:
An apparatus for dynamically allocating memory includes a processor, a free buffer pool memory and a control memory which stores control block data structures. The control block data structures enable a segmentation of the free buffer pool memory into a series of free buffer pools, each free buffer pool comprising plural identical size buffers, each succeeding free buffer pool including a larger buffer size than a preceding free buffer pool. A selection size parameter for a given free buffer pool is a value that is larger than the buffer size comprising the given free buffer pool, but less than the next larger buffer size in the next of the series of free buffer pools. A memory allocation procedure responds to a request from an executing procedure for allocation of buffer space by: (i) allocating a buffer from a free buffer pool whose associated selection size parameter is the next larger value than the buffer space that was requested; (ii) determining a difference between the allocated buffer size and the requested buffer space to find an unfulfilled amount of the requested buffer space; (iii) allocating a buffer from a free buffer pool whose selection size parameter is the next larger value, among selection size parameters, than the unfulfilled amount; and (iv) repeating (ii) and (iii) until the memory allocation procedure determines that there is no unfulfilled amount of the requested buffer space. The apparatus further includes a "quickcell" memory which is allocated without use of control block data structures.
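
The allocation loop of steps (i) through (iv) can be sketched as below. The pool sizes and selection size parameters are invented example values, and the oversized selection size on the last pool simply guarantees termination in this toy version.

    # (buffer_size, selection_size) per free buffer pool, ordered smallest to largest.
    POOLS = [(64, 96), (128, 192), (256, 384), (512, 10**9)]

    def allocate(requested):
        """Return the list of buffer sizes allocated to satisfy `requested` bytes."""
        allocated = []
        unfulfilled = requested
        while unfulfilled > 0:
            # Steps (i)/(iii): pick the pool whose selection size parameter is the next
            # larger value than the amount still unfulfilled.
            for buffer_size, selection_size in POOLS:
                if unfulfilled <= selection_size:
                    allocated.append(buffer_size)
                    # Step (ii): the unfulfilled amount is what the allocated buffer did not cover.
                    unfulfilled -= buffer_size
                    break
        return allocated

    print(allocate(300))   # e.g. [256, 64]: a 256-byte buffer, then 44 unfulfilled bytes -> a 64-byte buffer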
Abstract:
A method enables a host processor, which employs variable length (VL) records, to communicate with disk storage which employs fixed length (FL) sectors for storage of the VL records. The method comprises the steps of: a) deriving a first control data structure for an update VL record, the first control data structure including information describing segments of the update VL record; b) determining a disk track that includes a FL sector wherein an old VL record commences that corresponds to the update VL record; c) reading each FL sector in the disk track and creating a control data structure which includes information describing each VL record stored in the disk track; d) substituting, in a control data structure for the old VL record that corresponds to the update VL record, information regarding update data from the first control data structure; e) recording in the disk track, data indicated by each control data structure determined in steps c) and d); and f) if the old VL record ends at other than a sector break of a FL sector, reblocking VL records into FL sectors which are recorded thereafter on the disk track. The invention also enables a read action to be accomplished in one rotation of a disk even though it commences at a FL sector that is not at the beginning of a VL record to be accessed.
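
A very simplified sketch of steps c) through f): read the track's records, substitute the update record for the old one, then reblock every record into fixed-length sectors. The sector size, the byte-string record representation and the function names are assumptions for illustration only.

    SECTOR_SIZE = 16

    def reblock(records, sector_size=SECTOR_SIZE):
        """Pack VL records back-to-back and split them into FL sectors (step f)."""
        stream = b"".join(records)
        # Pad the final sector so every sector on the track remains fixed length.
        pad = (-len(stream)) % sector_size
        stream += b"\x00" * pad
        return [stream[i:i + sector_size] for i in range(0, len(stream), sector_size)]

    def update_track(track_records, old_index, update_record):
        """Substitute the update VL record for the old one (step d), then re-record (steps e-f)."""
        records = list(track_records)
        records[old_index] = update_record
        return reblock(records)

    track = [b"record-A", b"record-B-oldvalue", b"record-C"]
    for sector in update_track(track, old_index=1, update_record=b"record-B-new"):
        print(sector)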
Abstract:
A method enables a host processor, which employs variable length (VL) records, to transparently communicate with disk storage which employs fixed length (FL) sectors for storage of the VL records. The method comprises the steps of: a) deriving a first control data structure for an update VL record, the first control data structure including information describing segments of the update VL record; b) determining an FL sector wherein an old VL record commences that corresponds to the update VL record; c) if the old VL record commences at other than a sector break of the FL sector, deriving a second control data structure for a portion of a prior VL record that immediately precedes the old VL record and a third control data structure for the old VL record; d) substituting in the third control data structure, information regarding update segments of the update VL record from the first control data structure; and e) recording in the FL sector determined in c), data indicated by the second control data structure and at least a portion of the update VL record, through use of the third control data structure as altered in d).
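
A sketch of steps b) through e) for a single FL sector in which the old VL record starts mid-sector: the tail of the prior record is described separately so it can be rewritten unchanged ahead of the update data. The sector size, the dict-based "control data structures" and the example offsets are all assumptions.

    SECTOR_SIZE = 32

    def rewrite_sector(sector, old_offset, update_data):
        """Rewrite one FL sector in which the old VL record starts at `old_offset`."""
        # Second control data structure: the portion of the prior VL record sharing this sector.
        prior_part = {"offset": 0, "data": sector[:old_offset]}
        # Third control data structure, after step d) substitutes the update segments.
        old_record = {"offset": old_offset, "data": update_data}
        # Step e): record the sector from the two control data structures; only the part of
        # the update record that fits in this sector is written here.
        new_sector = prior_part["data"] + old_record["data"]
        return new_sector[:SECTOR_SIZE].ljust(SECTOR_SIZE, b"\x00")

    sector = (b"..prior-tail" + b"OLD-RECORD").ljust(SECTOR_SIZE, b"\x00")
    print(rewrite_sector(sector, old_offset=12, update_data=b"NEW-RECORD-DATA"))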
Abstract:
Conflicts are resolved between competing nodes in a multi-node communications network. After a first node in the network requests an initiation of communications with a target node, the requesting node may simply initiate the requested communications with the target node if the target node is not busy. If the first node determines that the target node is busy, it proceeds to resolve the conflict. Namely, the first node repeats the process of waiting for a first delay and then requesting initiation of communications with the target node. After each unsuccessful attempt, the first delay is successively increased. As an example, the delay may be increased exponentially, with a controlled randomness added. After a predetermined number of unsuccessful attempts, the first node may instead send one or more queued messages to other nodes. Following this, the first node performs another sequence to initiate communications with the target node, successively increasing the delay between unsuccessful attempts, as before. After a predetermined number of unsuccessful passes through the foregoing routine, the first node proceeds to take appropriate action, such as initiating an error recovery routine, sending the message via different hardware components, or issuing an error message.
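
A sketch of this retry discipline, under assumed parameters: the concrete delay values, attempt counts and the helper names target_is_busy, handle_queued_messages and error_recovery are placeholders, not values or interfaces from the abstract.

    import random
    import time

    def try_to_connect(target_is_busy, attempts_per_pass=4, passes=3, base_delay=0.01):
        for _ in range(passes):
            delay = base_delay
            for _ in range(attempts_per_pass):
                if not target_is_busy():
                    return True                # target free: initiate the communications
                time.sleep(delay)
                # Increase the delay exponentially, with a controlled randomness added.
                delay = delay * 2 * random.uniform(0.8, 1.2)
            # Between passes the node may service one or more queued messages to other nodes.
            handle_queued_messages()
        error_recovery()                       # e.g. different hardware path, error message
        return False

    def handle_queued_messages():
        pass                                   # placeholder for servicing other traffic

    def error_recovery():
        print("all passes failed: invoking error recovery")

    print(try_to_connect(target_is_busy=lambda: random.random() < 0.7))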
Abstract:
A multi-nodal data processing system includes a plurality of processing nodes, each node connected to plural other nodes by bidirectional data links. Each node comprises receivers for receiving messages on the bidirectional data links and transmitters for transmitting messages on the bidirectional data links. Each node records child nodes to which a message was transmitted and is further adapted to transmit a lock-up message received from a child node to a parent node, the lock-up message indicating a successful establishment of a message signal path to a destination node. Each node is further adapted to transmit a link cancel signal to another node to close the link in the event of an unsuccessful message transfer attempt over the link. Each node inhibits transmission of a lock-up signal to a parent node until link cancel signals have been received from all child nodes (other than a node from which a lock-up signal was received). A source node (where a message originates) continues transmission of its message, even before a lock-up signal has been received. The destination node which originates the lock-up message terminates a bidirectional data link by an end-of-session signal when it has received an entire message.
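
A simplified sketch of the per-node bookkeeping described above: a node forwards a lock-up message to its parent only once link cancel signals have arrived from every other child. The PathNode class and node names are illustrative; in the system described, this behaviour would reside in the node's link hardware.

    class PathNode:
        def __init__(self, name, children):
            self.name = name
            self.children = set(children)       # child nodes the message was forwarded to
            self.cancelled = set()              # children that sent a link cancel signal
            self.locked_child = None            # child that reported a locked-up path

        def on_link_cancel(self, child):
            self.cancelled.add(child)
            return self._maybe_forward_lockup()

        def on_lockup(self, child):
            self.locked_child = child
            return self._maybe_forward_lockup()

        def _maybe_forward_lockup(self):
            # Inhibit the lock-up until cancels arrive from all children other than
            # the child the lock-up came from.
            others = self.children - {self.locked_child}
            if self.locked_child is not None and others <= self.cancelled:
                return f"{self.name}: forward lock-up to parent"
            return f"{self.name}: lock-up inhibited"

    node = PathNode("n3", children={"n5", "n6", "n7"})
    print(node.on_lockup("n6"))        # inhibited: n5 and n7 have not cancelled yet
    print(node.on_link_cancel("n5"))   # still inhibited
    print(node.on_link_cancel("n7"))   # now the lock-up is forwarded to the parent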
Abstract:
A computing system includes plural nodes that are connected by a communications network. Each node comprises a communications interface that enables an exchange of messages with other nodes. A ready queue is maintained in a node and includes plural message entries, each message entry indicating an output message control data structure. The node further includes memory for storing plural output message control data structures, each including one or more chained further control data structures that define data comprising a message or a portion of a message that is to be dispatched. Control data structures that are chained from an output message control data structure exhibit a sequence dependency. A processor is controlled by the ready queue and enables dispatch of portions of the message designated by an output message control data structure and associated further control structures. The processor prevents dispatch of one portion of a message prior to dispatch of another portion of the message upon which the one portion is dependent, even if message transmissions are interrupted.
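
A sketch of in-order dispatch of chained control data structures: an interrupted message resumes at the first undelivered element, so no portion is sent before the portion it depends on. The ReadyQueue and OutputMessage names and the "budget" interruption model are assumptions for illustration.

    from collections import deque

    class OutputMessage:
        def __init__(self, parts):
            self.parts = list(parts)   # chained control data structures, sequence dependent
            self.next_part = 0         # first portion not yet dispatched

    class ReadyQueue:
        def __init__(self):
            self.entries = deque()

        def add(self, message):
            self.entries.append(message)

        def dispatch(self, budget):
            """Send up to `budget` portions, strictly in chain order, resuming after interrupts."""
            sent = []
            while budget > 0 and self.entries:
                msg = self.entries[0]
                sent.append(msg.parts[msg.next_part])
                msg.next_part += 1
                budget -= 1
                if msg.next_part == len(msg.parts):
                    self.entries.popleft()     # whole message dispatched
            return sent

    rq = ReadyQueue()
    rq.add(OutputMessage(["hdr", "payload-1", "payload-2"]))
    print(rq.dispatch(budget=2))   # ['hdr', 'payload-1'] -- transmission interrupted here
    print(rq.dispatch(budget=2))   # ['payload-2']        -- resumes in order, dependency preserved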
Abstract:
A system that enables pipelining of data to and from a memory includes multiple software control block data structures which indicate amounts of data stored in the memory. An input port device receives data segments of a received data message, stores them in the memory, and updates status information in the software control blocks only when determined quantities of the data segments have been stored. An output port is responsive to a request for transmission of a portion of the received data and to a signal from the input port that at least a first control count of data segments of the received data are present in the memory. The output port then outputs the stored data segments from the memory but discontinues the action if, before the required portion of the received data has been outputted, the software control blocks indicate that no further stored data segments are available for outputting. The input port then updates the software control blocks when newly arrived and stored data segments reach a second control count value, the updating occurring irrespective of whether the determined quantity of the received data has been stored in the memory.
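
A sketch of this pipelining handshake: the input port publishes its progress to a control block only every control-count segments, and the output port drains only what the control block says is available. The threshold value and the Pipeline/ControlBlock names are assumptions.

    class ControlBlock:
        def __init__(self):
            self.segments_available = 0    # what the output port is allowed to read

    class Pipeline:
        def __init__(self, control_count=4):
            self.cb = ControlBlock()
            self.stored = 0                # segments actually written to memory
            self.read = 0                  # segments already output
            self.control_count = control_count

        def input_segment(self):
            """Store one arriving segment; update the control block only at threshold points."""
            self.stored += 1
            if self.stored - self.cb.segments_available >= self.control_count:
                self.cb.segments_available = self.stored

        def output(self, wanted):
            """Output stored segments; discontinue when the control block shows nothing more."""
            sent = 0
            while sent < wanted and self.read < self.cb.segments_available:
                self.read += 1
                sent += 1
            return sent

    p = Pipeline(control_count=4)
    for _ in range(6):
        p.input_segment()
    print(p.output(6))   # 4: only the first control count of segments is visible yet
    for _ in range(2):
        p.input_segment()
    print(p.output(6))   # 4 more become visible once the second control count is reached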
Abstract:
A multi-node data processing system implements a method that assures that plural messages are enabled "fair" access to a data stream. Each node includes apparatus for controlling message transmissions and/or receptions from another node over a communication network. The method comprises the steps of: transmitting a ready message from a first destination node to a source node, the ready message signalling a readiness of the destination node to receive a data message; transmitting a first data message to the first destination node from the source node in response to the ready message; and transmitting a conditional disconnect message from the first destination node to the source node upon receipt of a predetermined amount (i.e. a "slice") of the first data message. The source node responds to the conditional disconnect message by either (1) disconnecting from the first destination node and commencing transmission of a slice of a second data message to a second destination node if, during transmission of the slice of the first data message, the source node has received a ready message from the second destination node; or (2) continuing transmission of the data message to the first destination node until message end or, if a new ready message is received by the source node from a further destination node, following the procedure in (1), whichever occurs first.
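
One possible reading of this slice-based fairness rule is sketched below: on a conditional disconnect, the source rotates to another destination only if that destination's ready message arrived while the current slice was being sent; otherwise it continues to message end. The slice size, the message contents and the transmit function are invented for illustration.

    from collections import deque

    SLICE = 4

    def transmit(messages, ready_order):
        """messages: dest -> list of data units; ready_order: destinations in ready-message order."""
        ready = deque(ready_order)
        log = []
        while ready:
            dest = ready.popleft()
            # Send one slice of this destination's data message.
            slice_data, rest = messages[dest][:SLICE], messages[dest][SLICE:]
            log.append((dest, slice_data))
            messages[dest] = rest
            if rest:
                if ready:                   # a ready message from another destination is pending:
                    ready.append(dest)      # honour the conditional disconnect and rotate
                else:
                    ready.appendleft(dest)  # no other ready node: continue to message end
        return log

    msgs = {"D1": list("AAAAAAAA"), "D2": list("BBBB")}
    print(transmit(msgs, ready_order=["D1", "D2"]))
    # D1 gets one slice, then D2's slice, then D1 finishes: each message gets fair access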