Abstract:
A multi-nodal computing system is connected by a communication network. A first node of the multi-nodal system includes apparatus for transmitting an information transfer request to a second node, the request including identification data that the second node can use to access the selected information. The second node includes memory for storing the requested information and a message output control structure. A processor is responsive to identification data received from the first node to access the selected information that is defined by that data. The processor is further responsive to the information transfer request to insert the identification data received from the first node directly into the message output control structure. The processor then initiates an output operation by employing the identification data in the message output control structure to access the identified information and to communicate it to the first node. In this manner, no processor interrupt (with attendant software intervention) is required to enable the requested information to be transferred: pointers to that information are already included in the message output control structure, and the output mechanism in the second node employs that control structure to access and transmit the requested information.
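The request/response path can be illustrated with a minimal behavioural sketch. The names below (Request, OutputControlBlock, SecondNode, Network) are illustrative assumptions rather than terms from the abstract; the point shown is that the identification data from the request is copied straight into the output control structure, so no interrupt or software step intervenes before the output mechanism runs.

```python
from dataclasses import dataclass

@dataclass
class Request:                 # information transfer request from the first node
    requester: str             # identity of the requesting (first) node
    address: int               # identification data: where the data lives
    length: int                # identification data: how much to send

@dataclass
class OutputControlBlock:      # one entry of the message output control structure
    destination: str
    address: int
    length: int

class Network:                 # stand-in for the communication network
    def send(self, dest, payload):
        print(f"to {dest}: {payload!r}")

class SecondNode:
    def __init__(self, memory: bytes):
        self.memory = memory
        self.output_queue = []                  # message output control structure

    def handle_request(self, req: Request):
        # Identification data goes directly into the output control structure;
        # no processor interrupt or software intervention is needed here.
        self.output_queue.append(
            OutputControlBlock(req.requester, req.address, req.length))

    def output_engine_step(self, network: Network):
        # The output mechanism uses the control structure to find and send data.
        if self.output_queue:
            ocb = self.output_queue.pop(0)
            network.send(ocb.destination,
                         self.memory[ocb.address:ocb.address + ocb.length])
```

A caller would populate `memory`, invoke `handle_request` when a request arrives, and let the output engine drain `output_queue` on its own schedule.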
Abstract:
A system that enables pipelining of data to and from a memory includes multiple control block data structures which indicate amounts of data stored in the memory. An input port device receives data segments of a received data message, stores them in memory, and updates status information in the software control blocks only when determined quantities of the data segments are stored. An output port is responsive to a request for transmission of a portion of the received data and to a signal from the input port that at least a first control count of data segments of the received data are present in memory. The output port then outputs the stored data segments from memory, but discontinues that action if, before the required portion of the received data is output, the software control blocks indicate that no further stored data segments are available for outputting. The input port then updates the software control blocks when newly arrived and stored data segments reach a second control count value, the updating occurring irrespective of whether the determined quantity of the received data has been stored in memory.
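A rough sketch of the two-threshold update scheme follows; the class names and counts are assumptions chosen only for illustration. The input port publishes progress to the control block only when a control count of segments has accumulated, and the output port stops as soon as the control block shows nothing more to send.

```python
class ControlBlock:
    def __init__(self):
        self.segments_visible = 0        # progress visible to the output side

class InputPort:
    def __init__(self, cb: ControlBlock, first_count: int, second_count: int):
        self.cb = cb
        self.stored = []                 # data segments actually in memory
        self.first_count = first_count   # count required before output may start
        self.second_count = second_count # update granularity thereafter

    def receive(self, segment):
        self.stored.append(segment)
        pending = len(self.stored) - self.cb.segments_visible
        threshold = (self.first_count if self.cb.segments_visible == 0
                     else self.second_count)
        # Update the control block only when a control count is reached,
        # irrespective of whether the whole message has arrived.
        if pending >= threshold:
            self.cb.segments_visible = len(self.stored)

class OutputPort:
    def __init__(self, cb: ControlBlock, inp: InputPort):
        self.cb, self.inp, self.sent = cb, inp, 0

    def transmit(self, requested: int, send):
        # Output proceeds while the control block shows stored segments and
        # stops early if none remain visible before `requested` are sent.
        while self.sent < requested and self.sent < self.cb.segments_visible:
            send(self.inp.stored[self.sent])
            self.sent += 1
```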
Abstract:
A computing system includes plural nodes that are connected by a communications network. Each node comprises a communications interface that enables an exchange of messages with other nodes. A ready queue is maintained in a node and includes plural message entries, each message entry indicating an output message control data structure. The node further includes memory for storing plural output message control data structures, each including one or more chained further control data structures that define data comprising a message, or a portion of a message, that is to be dispatched. Control data structures that are chained from an output message control data structure exhibit a sequence dependency. A processor is controlled by the ready queue and enables dispatch of portions of the message designated by an output message control data structure and its associated further control structures. The processor prevents dispatch of one portion of a message prior to dispatch of another portion of the message upon which the first portion is dependent, even if message transmissions are interrupted.
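The ordering guarantee can be sketched as follows, with assumed names (Descriptor, OutMsgCB, ReadyQueue): each ready-queue entry points at an output message control block whose chained descriptors are dispatched strictly in chain order, even if transmission stops partway through.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Descriptor:                     # a chained further control data structure
    data: bytes
    next: "Descriptor | None" = None  # sequence dependency: send in chain order

@dataclass
class OutMsgCB:                       # output message control data structure
    head: Descriptor
    cursor: "Descriptor | None" = None

class ReadyQueue:
    def __init__(self):
        self.entries = deque()        # each entry indicates an OutMsgCB

    def post(self, cb: OutMsgCB):
        cb.cursor = cb.head
        self.entries.append(cb)

    def dispatch_some(self, budget: int, send):
        # Send up to `budget` descriptors.  Even if transmission is interrupted
        # (budget exhausted), only `cursor` advances, so a later portion can
        # never be dispatched before the portion it depends on.
        while budget and self.entries:
            cb = self.entries[0]
            send(cb.cursor.data)
            cb.cursor = cb.cursor.next
            budget -= 1
            if cb.cursor is None:     # whole chain dispatched
                self.entries.popleft()
```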
Abstract:
A procedure controls execution of priority-ordered tasks in a multi-nodal data processing system. The data processing system includes a node with a software-controlled processor and a hardware-configured queue-controller. The queue-controller includes a plurality of priority-ordered queues, each queue listing tasks having an assigned priority equal to the priority order assigned to the queue. The queue-controller responds to a processor-generated order to queue a first task for execution by performing a method which includes the steps of: listing said first task on a first queue having an assigned priority that is equal to a priority of said first task; if a second task is listed on a queue having a higher assigned priority, attempting execution of the second task before execution of the first task; if no tasks are listed on a queue having a higher assigned priority than said first queue, attempting execution of a first listed task in the first queue; and, upon completion of execution of the task or a stalling of execution of the task, attempting execution of a further task on the first queue only if another order has not been issued to place a task on a queue having a higher assigned priority. The method further handles chained subtasks by attempting execution of each subtask of a task in response to the processor-generated order and, if execution of any subtask does not complete, attempting execution of another task in lieu of a subtask chained to the subtask that did not complete.
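A minimal software model of this scheduling behaviour is sketched below; the class and method names are assumptions, and subtasks are modelled as callables that report completion or stalling. Re-invoking `run_once` after each task stands in for the rule that a higher-priority order issued in the meantime is served before the first queue is revisited.

```python
from collections import deque

class QueueController:
    def __init__(self, num_priorities: int):
        self.queues = [deque() for _ in range(num_priorities)]   # 0 = highest

    def enqueue(self, priority: int, subtask_chain):
        # `subtask_chain` is a list of callables; each returns True when it
        # completes and False when it stalls.
        self.queues[priority].append(list(subtask_chain))

    def run_once(self):
        # Each scan starts at the highest priority, so a newly queued
        # higher-priority task is attempted before lower queues continue.
        for q in self.queues:
            if not q:
                continue
            chain = q[0]
            while chain:
                if chain[0]():        # subtask completed: try its successor
                    chain.pop(0)
                else:                 # subtask stalled: defer the chained
                    q.rotate(-1)      # remainder and let another task run
                    return
            q.popleft()               # whole task (all subtasks) finished
            return
```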
Abstract:
An apparatus for dynamically allocating memory includes a processor, a free buffer pool memory and a control memory which stores control block data structures. The control block data structures enable a segmentation of the free buffer pool memory into a series of free buffer pools, each free buffer pool comprising plural identical-size buffers, each succeeding free buffer pool including a larger buffer size than the preceding free buffer pool. A selection size parameter for a given free buffer pool is a value that is larger than the buffer size comprising the given free buffer pool, but less than the next larger buffer size in the next of the series of free buffer pools. A memory allocation procedure responds to a request from an executing procedure for allocation of buffer space by: (i) allocating a buffer from the free buffer pool whose associated selection size parameter is the next larger value than the buffer space that was requested; (ii) determining a difference between the allocated buffer size and the requested buffer space to find an unfulfilled amount of the requested buffer space; (iii) allocating a buffer from the free buffer pool whose selection size parameter is the next larger value, among the selection size parameters, than the unfulfilled amount; and (iv) repeating (ii) and (iii) until the memory allocation procedure determines that there is no unfulfilled amount of the requested buffer space. The apparatus further includes "quickcell" memory which is allocated without use of control block data structures.
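The allocation loop of steps (i) through (iv) can be shown with a short sketch; the pool sizes and selection size parameters below are assumptions, not values from the source.

```python
# (buffer_size, selection_size) pairs in bytes; each selection size lies
# between this pool's buffer size and the next pool's buffer size.
POOLS = [
    (256, 384),
    (512, 768),
    (1024, 1536),
    (2048, 20480),    # largest pool: any remaining request selects it
]

def allocate(requested: int):
    """Return the buffer sizes handed out to satisfy `requested` bytes."""
    buffers = []
    unfulfilled = requested
    while unfulfilled > 0:
        # Steps (i)/(iii): pick the pool whose selection size parameter is
        # the next larger value than the remaining requested space.
        size = next(b for b, s in POOLS if s >= unfulfilled)
        buffers.append(size)
        # Step (ii): recompute the unfulfilled amount; step (iv): repeat.
        unfulfilled -= size
    return buffers

print(allocate(1300))    # [1024, 256, 256] under the assumed pool sizes
```

Because each selection size parameter exceeds the pool's own buffer size, a request falling just above a buffer size is satisfied with that buffer plus a small remainder buffer rather than a much larger one.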
Abstract:
In a multinode communication or multiprocessor network, messages are communicated from one node to another using an adaptive and dynamic routing scheme. The routing scheme includes two-level multi-path routing tables at each node to ensure efficient delivery of the messages. An entry in the level-1 table identifies a group of nodes, and an entry in the level-2 table identifies the address of each node within that group. The routing scheme also includes a deflection counter in each message header to avoid endless rerouting of messages, and an exponential backoff-and-retry policy to avoid deadlocks.
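The two-level lookup, deflection counter and backoff policy are sketched below; the table contents, field names and limits are illustrative assumptions only.

```python
import random
import time

LOCAL_GROUP = "groupA"
LEVEL1 = {"groupB": "link-to-groupB"}                 # group of nodes -> next link
LEVEL2 = {("groupA", 3): "port-3", ("groupA", 7): "port-7"}   # node within group
MAX_DEFLECTIONS = 8

def route(message: dict):
    # Deflection counter: give up rather than reroute a message endlessly.
    if message["deflections"] > MAX_DEFLECTIONS:
        return None
    group, member = message["dest_group"], message["dest_member"]
    if group != LOCAL_GROUP:
        return LEVEL1[group]                          # level-1: toward the group
    return LEVEL2[(group, member)]                    # level-2: node within group

def send_with_backoff(message, try_send, max_attempts=6):
    # Exponential backoff and retry to avoid deadlock on persistently busy paths.
    for attempt in range(max_attempts):
        if try_send(message):
            return True
        time.sleep(random.uniform(0, 0.001 * (2 ** attempt)))
    return False
```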
Abstract:
A multi-node data processing system implements a method that assures that plural messages are enabled "fair" access to a data stream. Each node includes apparatus for controlling message transmissions and/or receptions from another node over a communication network. The method comprises the steps of: transmitting a ready message from a first destination node to a source node, the ready message signalling a readiness of the destination node to receive a data message; transmitting a first data message from the source node to the first destination node in response to the ready message; and transmitting a conditional disconnect message from the first destination node to the source node upon receipt of a predetermined amount (i.e., a "slice") of the first data message. The source node responds to the conditional disconnect message by either (1) disconnecting from the first destination node and commencing transmission of a slice of a second data message to a second destination node, if during transmission of the slice of the first data message the source node has received a ready message from the second destination node; or (2) continuing transmission of the data message to the first destination node until message end or until a new ready message is received by the source node from a further destination node (whereupon the procedure in (1) is followed), whichever occurs first.
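The source-node side of this slice-based fairness scheme is sketched below under assumed names; `SLICE` and the message representation are illustrative, and conditional disconnects are modelled as the check performed at each slice boundary.

```python
from collections import deque

SLICE = 4     # segments per "slice"; value chosen only for illustration

class SourceNode:
    def __init__(self):
        self.ready = deque()              # destinations that have sent "ready"

    def on_ready(self, destination, message_segments):
        self.ready.append((destination, list(message_segments)))

    def run(self, send_segment):
        while self.ready:
            dest, msg = self.ready.popleft()
            sent_in_slice = 0
            while msg:
                send_segment(dest, msg.pop(0))
                sent_in_slice += 1
                # A conditional disconnect arrives after each slice.  Case (1):
                # another destination became ready meanwhile, so disconnect and
                # serve it, re-queuing the remainder.  Case (2): nobody else is
                # ready, so keep transmitting toward message end.
                if sent_in_slice == SLICE and msg:
                    if self.ready:
                        self.ready.append((dest, msg))
                        break
                    sent_in_slice = 0
```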
Abstract:
A data processing system includes a software interrupt handler which controls performance of interrupt actions. The system further includes plural subsystems, each subsystem manifesting an interrupt request upon occurrence of an associated event. Hardware is provided which responds to an interrupt request by issuing an order to construct an interrupt status block (ISB) control data structure with a determined priority ranking. A controller is responsive to the issued order and constructs the ISB data structure. The ISB at least includes a pointer value indicating a next ISB having the same priority ranking, interrupt data identifying an interrupt procedure to be used by the software interrupt handler, and information indicating a source of the interrupt request. The controller arranges the ISB in a queue of ISBs having the same determined priority and signals the software interrupt handler to commence performance of an interrupt action only if the order issued by the hardware requires an immediate interrupt. In such a case, the software interrupt handler responds by reading the contents of the ISB and performing operations in accordance with that data. Otherwise, normal processing resumes. Under normal circumstances, the controller, in executing an interrupt, is not required to inquire of the subsystem which manifested the interrupt request.
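A compact sketch of ISB construction and queuing follows; the field and class names are assumptions, and the per-priority "next ISB" pointer is modelled by queue ordering rather than an explicit pointer field.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class ISB:                        # interrupt status block
    source: str                   # subsystem that manifested the interrupt
    handler_id: int               # identifies the software interrupt procedure
    # the "next ISB of same priority" pointer is modelled by deque ordering

class InterruptController:
    def __init__(self, priorities: int, software_handler):
        self.queues = {p: deque() for p in range(priorities)}
        self.software_handler = software_handler

    def order(self, priority: int, source: str, handler_id: int,
              immediate: bool = False):
        self.queues[priority].append(ISB(source, handler_id))
        if immediate:
            # Only an order requiring an immediate interrupt signals software;
            # the handler reads ISB contents and never queries the subsystem.
            while self.queues[priority]:
                self.software_handler(self.queues[priority].popleft())
        # otherwise normal processing simply resumes and the ISBs wait queued
```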
Abstract:
A high-performance clock synchronizer for transferring digital data across the asynchronous boundary between two independent clock domains operating at hardware-limited clock speeds. The external clock signal latches each incoming data word in a boundary register. An external clock divider produces several prolonged clock signals synchronized to the external clock signal for use in distributing the incoming data words into a bank of several external buffer registers, where each word stabilizes for more than one full internal clock interval before transfer across the asynchronous boundary to a bank of corresponding internal buffer registers synchronized to the internal clock signal. A special logic inserts and deletes pad words to equalize data flow rates. Another special logic reassembles the data words in proper sequence after transfer to the internal buffer register bank. Flag latches are used to avoid asynchronous sampling of more than one bit in each data word.
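The pad-word rate equalization can be illustrated with a purely behavioural sketch (it is not a hardware model): the `PAD` marker and the queue standing in for the buffer register banks are assumptions, and pads inserted on the internal side are deleted again during reassembly.

```python
from collections import deque

PAD = object()                  # pad word inserted to equalize flow rates

boundary = deque()              # stands in for the buffer register banks

def external_clock_tick(word):
    # External domain: latch the incoming word toward the internal side.
    boundary.append(word)

def internal_clock_tick(reassembled: list):
    # Internal domain: take a word if one is available, else insert a pad;
    # pads are deleted again when the word stream is reassembled in order.
    word = boundary.popleft() if boundary else PAD
    if word is not PAD:
        reassembled.append(word)
```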
Abstract:
A distributed data processing system includes a plurality of nodes interconnected by bidirectional communication links. Each node includes a control message line for handling of control messages and a control memory for storing the control messages. Each node further includes a data message line for handling of data messages and a data memory for storing the data messages. A processor in the node causes the data message line to queue and dispatch data messages from the data memory, and the control message line to queue and dispatch control messages from the control memory. Each node includes N bidirectional communication links, enabling the node to have at least twice as much input/output bandwidth as the control message line and data message line combined. An input/output switch includes a routing processor and is coupled between the N bidirectional communication links, the data message line and the control message line. The input/output switch dispatches either a control message or a data message over at least one of the bidirectional communication links in accordance with an output from the routing processor, thereby enabling each communication link to carry either data or control messages. If a communication link is busy with either a control or a data message, the routing processor increments to another communication link to enable dispatch of a queued message.
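The link-selection step of the input/output switch can be sketched as follows; `N`, the class name and the busy-flag representation are assumptions chosen for illustration.

```python
N = 4     # number of bidirectional communication links (assumed)

class IOSwitch:
    def __init__(self):
        self.link_busy = [False] * N
        self.next_link = 0

    def dispatch(self, message, send_on_link):
        # Try each link in turn, starting where the last dispatch left off;
        # if a link is busy, increment to another one.  Control and data
        # messages are treated identically, so any link can carry either.
        for _ in range(N):
            link = self.next_link
            self.next_link = (self.next_link + 1) % N
            if not self.link_busy[link]:
                send_on_link(link, message)
                return link
        return None               # all links busy: message stays queued
```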