Abstract:
A memory controller reads data from a memory bank of synchronous RAM during a small and variable data valid window, by compensating for delays in receiving the data caused by memory loading, chip and card manufacturing process variations, and the like. The memory controller includes a system clock driver to supply the memory bank with a clock reference signal. A sampling clock provides an assortment of sampling clock signals duplicative of the system clock signal, with various delays. A command driver initiates Read operations in the memory bank by relaying Read command signals to the memory bank. In response to the level of memory loading, such as the number of memory modules present in the memory bank, a clock selector directs a selected one of the sampling clock signals to a delay module, which replicates any delay the system clock driver may have. If desired, an additional, user-selectable supplementary delay unit may be used to increase the delay provided by the delay module, thereby increasing or offsetting the delay of the selected sampling clock signal. The delay module provides a delayed clock signal to synchronize receipt of Read data signals from the memory bank at a clocked latch, enabling the latch to receive the Read data signals during the appropriate data valid window. Specifically, the latch is activated by receipt of Read command signals, which may be coordinated, for example, with the rising edge of the delayed clock signal. The latched Read data signals are then available for use by other logic circuitry.
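The clock selection and latching described above can be pictured with a small behavioral model. The C sketch below is purely illustrative; the tap values, NUM_TAPS, and the function names are assumptions, not anything specified by the abstract.

```c
/* Behavioral sketch (hypothetical names): choosing one of several delayed
 * copies of the system clock based on memory loading, then latching read
 * data on the delayed clock edge during the data valid window. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_TAPS 4

/* Delay taps, in picoseconds, of the duplicated sampling clock signals. */
static const unsigned tap_delay_ps[NUM_TAPS] = { 0, 250, 500, 750 };

/* Pick a sampling clock tap from the number of populated memory modules,
 * then add an optional user-selectable supplementary delay. */
unsigned select_sample_delay(unsigned modules_present, unsigned extra_ps)
{
    unsigned idx = modules_present < NUM_TAPS ? modules_present : NUM_TAPS - 1;
    return tap_delay_ps[idx] + extra_ps;   /* total delay applied to the latch clock */
}

/* Latch read data only while a read command is outstanding and the delayed
 * clock presents a rising edge (the data valid window). */
typedef struct {
    uint64_t latched_data;
    bool     read_pending;
} read_latch_t;

void clock_latch(read_latch_t *l, bool delayed_clk_rising, uint64_t dq_bus)
{
    if (l->read_pending && delayed_clk_rising) {
        l->latched_data = dq_bus;   /* data now available to downstream logic */
        l->read_pending = false;
    }
}
```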
Abstract:
An access control or arbitrator for a shared resource, such as a time-slotted bus, groups requests according to priorities of the requests. The time slots are grouped into sets, each set having a number of successive time slots equal to the number of sources supplying access requests having a highest priority. In a highest priority group, each source supplying a highest priority access request is guaranteed access in respective ones of said time slots in each set of time slots. When any time slot is not being used by a high priority request, low priority requests then have access to the unused time slot. Lower priority groups of access requests are handled in accordance with a different algorithm, such as a round robin priority algorithm.
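One way to picture the slot-grant rule is the C sketch below, in which each highest-priority source owns one slot of every set and an unused slot falls to a round-robin among lower-priority requesters. The names and group sizes are assumptions for illustration only.

```c
/* Minimal sketch (hypothetical names): per-slot arbitration in which each
 * high-priority source owns one slot of every set, and unused slots fall
 * back to a round-robin among low-priority requesters. */
#include <stdbool.h>

#define NUM_HI  4   /* sources with highest-priority requests; also slots per set */
#define NUM_LO  8   /* lower-priority sources, served round-robin */

static unsigned lo_rr_next = 0;   /* round-robin pointer for the low-priority group */

/* slot_in_set is 0..NUM_HI-1. Returns the granted source id, or -1 if no
 * request is pending: high-priority ids are 0..NUM_HI-1, low-priority ids
 * are NUM_HI..NUM_HI+NUM_LO-1. */
int grant_slot(unsigned slot_in_set, const bool hi_req[NUM_HI], const bool lo_req[NUM_LO])
{
    /* The slot's owner gets it whenever it is requesting. */
    if (hi_req[slot_in_set])
        return (int)slot_in_set;

    /* Otherwise the unused slot goes to the next low-priority requester. */
    for (unsigned i = 0; i < NUM_LO; i++) {
        unsigned cand = (lo_rr_next + i) % NUM_LO;
        if (lo_req[cand]) {
            lo_rr_next = (cand + 1) % NUM_LO;
            return (int)(NUM_HI + cand);
        }
    }
    return -1;   /* slot stays idle */
}
```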
Abstract:
A system that enables pipelining of data to and from a memory includes multiple control block data structures which indicate amounts of data stored in the memory. An input port device receives data segments of a received data message, stores them in memory, and updates status information in the software control blocks only when determined quantities of the data segments are stored. An output port is responsive to a request for transmission of a portion of the received data and to a signal from the input port that at least a first control count of data segments of the received data are present in memory. The output port then outputs the stored data segments from memory but discontinues the action if, before the required portion of the received data is outputted, the software control blocks indicate that no further stored data segments are available for outputting. The input port then updates the software control blocks when newly arrived and stored data segments reach a second control count value, the updating occurring irrespective of whether the determined quantity of the received data has been stored in memory.
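A minimal sketch of the threshold-based control-block updates, assuming hypothetical field names and batch sizes (FIRST_COUNT, SECOND_COUNT), might look like this in C; it is not the patented data structure, only an illustration of the counting behavior.

```c
/* Sketch under assumed names: a software control block whose segment count
 * the input port advances only at threshold multiples, and which the output
 * port checks before draining stored segments. */
#include <stdbool.h>

#define FIRST_COUNT   8   /* segments needed before output may start */
#define SECOND_COUNT  4   /* batch size for subsequent control-block updates */

typedef struct {
    unsigned published;   /* segment count visible in the control block */
    unsigned arrived;     /* segments actually stored in memory */
    unsigned consumed;    /* segments already output */
} control_block_t;

/* Input port: store a segment, but publish the new count only when a full
 * batch has accumulated, so control-block traffic stays low. */
void input_store_segment(control_block_t *cb)
{
    cb->arrived++;
    unsigned threshold = cb->published ? SECOND_COUNT : FIRST_COUNT;
    if (cb->arrived - cb->published >= threshold)
        cb->published = cb->arrived;
}

/* Output port: emit one stored segment if the control block says one is
 * available; returns false when output must pause until the next update. */
bool output_next_segment(control_block_t *cb)
{
    if (cb->consumed >= cb->published)
        return false;          /* discontinue until more segments are published */
    cb->consumed++;            /* segment handed to the transmit side */
    return true;
}
```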
Abstract:
A distributed data processing system includes a plurality of nodes interconnected by bidirectional communication links. Each node includes a control message line for handling of control messages and a control memory for storing the control messages. Each node further includes a data message line for handling of data messages and a data memory for storing the data messages. A processor in the node causes the data message line to queue and dispatch data messages from the data memory and the control message line to queue and dispatch control messages from the control memory. Each node includes N bidirectional communication links, enabling the node to have at least twice as much input/output bandwidth as the control message line and data message line combined. An input/output switch includes a routing processor and is coupled between the N bidirectional communication links, the data message line and the control message line. The input/output switch dispatches either a control message or a data message over at least one of the bidirectional communication links in accordance with an output from the routing processor, thereby enabling each communication link to carry either data or control messages. If a communication link is busy with either a control or a data message, the routing processor increments to another communication link to enable dispatch of a queued message.
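The link-stepping behavior of the routing processor can be sketched as follows; the struct layout and function name are assumptions made for illustration.

```c
/* Sketch (names assumed): the routing step that sends a queued control or
 * data message over the first non-busy bidirectional link, stepping to the
 * next link when the current one is occupied. */
#include <stdbool.h>

#define N_LINKS 4

typedef struct {
    bool busy[N_LINKS];     /* link currently carrying a message */
    unsigned next;          /* link the routing processor tries first */
} io_switch_t;

/* Returns the link chosen for the message, or -1 if every link is busy
 * and the message stays queued. */
int dispatch_message(io_switch_t *sw)
{
    for (unsigned i = 0; i < N_LINKS; i++) {
        unsigned link = (sw->next + i) % N_LINKS;
        if (!sw->busy[link]) {
            sw->busy[link] = true;          /* message occupies this link */
            sw->next = (link + 1) % N_LINKS;
            return (int)link;
        }
    }
    return -1;   /* all links occupied; message remains queued */
}
```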
Abstract:
A method enables a host processor, which employs variable length (VL) records, to communicate with disk storage which employs fixed length (FL) sectors for storage of the VL records. The method comprises the steps of: a) deriving a first control data structure for an update VL record, the first control data structure including information describing segments of the update VL record; b) determining a disk track that includes an FL sector wherein an old VL record commences that corresponds to the update VL record; c) reading each FL sector in the disk track and creating a control data structure which includes information describing each VL record stored in the disk track; d) substituting in a control data structure for the old VL record that corresponds to the update VL record, information regarding update data from the first control data structure; e) recording in the disk track, data indicated by each control data structure determined in steps c) and d); and f) if the old VL record ends at other than a sector break of an FL sector, reblocking VL records into FL sectors which are recorded thereafter on the disk track. The invention also enables a read action to be accomplished in one rotation of a disk even though it commences at an FL sector that is not at the beginning of a VL record to be accessed.
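A rough C sketch of the kind of control data structure the method relies on, and of the substitution in step d), is shown below. The field names, MAX_SEGS, and substitute_update are hypothetical; the sketch only illustrates pointing the old record's descriptor at the update data before the track is rewritten.

```c
/* Sketch with assumed field names: a per-record control data structure and
 * the substitution step d), which points the old record's descriptor at the
 * update data while keeping its position on the track. */
#include <stddef.h>
#include <stdint.h>

#define MAX_SEGS 16

typedef struct {
    const uint8_t *data;   /* location of this segment's bytes */
    size_t         len;    /* segment length in bytes */
} segment_t;

typedef struct {
    unsigned  record_id;        /* which VL record on the track this describes */
    unsigned  start_sector;     /* FL sector where the record commences */
    size_t    start_offset;     /* byte offset inside that sector */
    unsigned  nsegs;
    segment_t segs[MAX_SEGS];   /* segments making up the record image */
} vl_control_t;

/* Step d): replace the old record's segment list with the update record's
 * segments; the track position fields are left unchanged. */
void substitute_update(vl_control_t *old_cb, const vl_control_t *update_cb)
{
    old_cb->nsegs = update_cb->nsegs;
    for (unsigned i = 0; i < update_cb->nsegs; i++)
        old_cb->segs[i] = update_cb->segs[i];
}
```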
Abstract:
A multi-node data processing system implements a method that assures plural messages "fair" access to a data stream. Each node includes apparatus for controlling message transmissions and/or receptions from another node over a communication network. The method comprises the steps of: transmitting a ready message from a first destination node to a source node, the ready message signalling a readiness of the destination node to receive a data message; transmitting a first data message to the first destination node from the source node in response to the ready message; and transmitting a conditional disconnect message from the first destination node to the source node upon receipt of a predetermined amount (i.e. a "slice") of the first data message. The source node responds to the conditional disconnect message by either (1) disconnecting from the first destination node and commencing transmission of a slice of a second data message to a second destination node, if during transmission of the slice of the first data message the source node has received a ready message from the second destination node; or (2) continuing transmission of the data message to the first destination node until message end or, if a new ready message is received by the source node from a further destination node, following the procedure in (1), whichever occurs first.
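The source node's decision at a slice boundary can be sketched in C as below; the state fields and function names are assumptions used only to illustrate the choice between disconnecting and continuing.

```c
/* Sketch (names assumed): the source node's handling of a conditional
 * disconnect received after a slice of the current data message. */
#include <stdbool.h>

typedef struct {
    bool sending;              /* a data message is in progress */
    bool pending_ready;        /* a ready message from another destination arrived */
    int  current_dest;         /* destination of the message now being sent */
    int  next_dest;            /* destination that most recently signalled ready */
} source_state_t;

/* A ready message from some destination is noted; it may trigger a switch
 * at the next slice boundary. */
void on_ready(source_state_t *s, int dest)
{
    s->pending_ready = true;
    s->next_dest = dest;
}

/* Conditional disconnect after a slice: switch destinations if another
 * ready arrived during the slice, otherwise keep sending until message end
 * or until a later ready message causes a switch. */
void on_conditional_disconnect(source_state_t *s)
{
    if (s->pending_ready) {
        s->current_dest = s->next_dest;   /* disconnect and start a slice elsewhere */
        s->pending_ready = false;
    }
    /* else: continue transmitting the current message */
}
```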
Abstract:
An interconnection network comprises a pair of backplanes for receiving X pluggable node cards. The pair of backplanes include X backplane connector groups, each backplane connector group adapted to receive mating connectors from a pluggable node card. Each backplane connector group includes X/2 connectors. A first backplane includes first permanent wiring which interconnects a first subset of pairs of connectors between backplane connector groups. A second backplane includes second permanent wiring which interconnects a second subset of pairs of connectors between backplane connector groups. The first permanent wiring and second permanent wiring connect complementary subsets of pairs of the connectors. A plurality of node cards, each including a card connector group, pluggably mate with the backplane connector groups. Each node card further includes a frontal connector that is adapted to receive a cable interconnection. Each node card includes a processor and a switch module which simultaneously connects the processor to at least plural connectors of a backplane connector group.
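The abstract does not state which pairs of connectors each backplane wires, so the split below is only an illustrative way of dividing pairwise links into two complementary subsets (here by a parity rule), not the patented wiring pattern.

```c
/* Illustrative only: one way to split pairwise connections between two
 * backplanes into complementary subsets. The parity rule is an assumption,
 * chosen merely to show that every pair lands on exactly one backplane. */
#include <stdio.h>

#define X 8   /* number of pluggable node cards / backplane connector groups */

int main(void)
{
    /* Enumerate each pair of connector groups once and assign the link to
     * backplane 1 or backplane 2 so the two subsets are complementary. */
    for (int i = 0; i < X; i++) {
        for (int j = i + 1; j < X; j++) {
            int backplane = ((i + j) % 2 == 0) ? 1 : 2;
            printf("group %d <-> group %d : backplane %d\n", i, j, backplane);
        }
    }
    return 0;
}
```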
Abstract:
A method enables a host processor, which employs variable length (VL) records, to transparently communicate with disk storage which employs fixed length (FL) sectors for storage of the VL records. The method comprises the steps of: a) deriving a first control data structure for an update VL record, the first control data structure including information describing segments of the update VL record; b) determining an FL sector wherein an old VL record commences that corresponds to the update VL record; c) if the old VL record commences at other than a sector break of the FL sector, deriving a second control data structure for a portion of a prior VL record that immediately precedes the old VL record and a third control data structure for the old VL record; d) substituting in the third control data structure, information regarding update segments of the update VL record from the first control data structure; and e) recording in the FL sector determined in b), data indicated by the second control data structure and at least a portion of the update VL record, through use of the third control data structure as altered in d).
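The sector-break test in step c) is the pivot of this variant; a tiny C sketch of that check, with an assumed sector size and function name, is given below.

```c
/* Sketch with assumed names: step c)'s check for whether the old VL record
 * begins mid-sector, which determines whether a descriptor for the tail of
 * the preceding record must also be built before the sector is rewritten. */
#include <stdbool.h>
#include <stddef.h>

#define SECTOR_SIZE 512u

/* True when the old record starts somewhere other than a sector break, so
 * the prior record's trailing bytes share the sector and need their own
 * control data structure to be preserved during the rewrite. */
bool needs_prior_descriptor(size_t old_record_start_byte)
{
    return (old_record_start_byte % SECTOR_SIZE) != 0;
}
```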
Abstract:
A common macro interface between chips that have design features in common and communicate with each other. The common macro interface (CMI) uses VHDL (VHSIC Hardware Description Language), the industry standard hardware design language. A common protocol is provided to resolve communication problems and comprises four signals: request, acknowledge request, data acknowledge, and read/write. A freeway system within the interfaces facilitates parallel and pipelined processing, and an arbiter (also called a scheduler) is placed in front of every slave resource to control the traffic independently and to prevent traffic collisions from locking the freeway. The freeway is unique for each integrated circuit. Accordingly, macros may be moved from chip to chip without requiring complete system modifications, and the effort involved in designing macros common to several chips may be shared.
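The four-signal handshake can be modelled behaviorally; the C sketch below polls a hypothetical bus structure for one master-side read and is not the CMI's actual VHDL definition, whose signal timings the abstract does not give.

```c
/* Behavioral sketch of the four-signal handshake named in the abstract
 * (request, acknowledge request, data acknowledge, read/write); the struct
 * and function names are assumptions, not the CMI definition. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool req;        /* master asserts to start a transfer */
    bool req_ack;    /* slave acknowledges the request */
    bool data_ack;   /* slave signals the data phase is complete */
    bool rw;         /* 1 = read, 0 = write, in this sketch */
    uint32_t data;   /* shared data bus */
} cmi_bus_t;

/* One master-side read transaction, polled cycle by cycle. Returns true
 * with *out filled in once the slave's data acknowledge is seen. */
bool cmi_master_read_poll(cmi_bus_t *bus, uint32_t *out)
{
    if (!bus->req) {                 /* phase 1: raise request for a read */
        bus->req = true;
        bus->rw  = true;
        return false;
    }
    if (bus->req_ack && bus->data_ack) {   /* phase 2: slave accepted and answered */
        *out = bus->data;
        bus->req = false;            /* release the bus for the arbiter */
        return true;
    }
    return false;                    /* still waiting on the slave */
}
```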
Abstract:
In a programmed machine, such as a peripheral controller, programmed operations are executed in one of several operational contexts. Each context may be initiated by a corresponding interruption signal. Any context which has been activated remains active until quiesced by program execution. One of the active contexts is a current context in which all instruction executions are currently occurring. In each cycle of the programmed machine, all active contexts and received and stored interruption signals, each for respective ones of the contexts, are compared to find the highest priority context. Such highest priority context is compared with the current context's priority for determining whether or not the programmed machine should change current contexts.
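A per-cycle selection of the highest priority context might be sketched as follows in C; the bitmask representation, the fixed priority order (lower number = higher priority), and the names are assumptions for illustration.

```c
/* Sketch (names assumed): the per-cycle check that folds pending interruption
 * signals into the active-context set, finds the highest-priority active
 * context, and decides whether to switch away from the current context. */
#include <stdint.h>

#define NUM_CONTEXTS 8   /* context 0 is highest priority in this sketch */

typedef struct {
    uint8_t active;       /* bit per context: activated and not yet quiesced */
    uint8_t interrupts;   /* bit per context: received and stored interruption signals */
    int     current;      /* context whose instructions are executing now */
} machine_t;

/* Returns the context that should execute next cycle. */
int select_context(machine_t *m)
{
    m->active |= m->interrupts;      /* an interruption activates its context */
    m->interrupts = 0;

    for (int c = 0; c < NUM_CONTEXTS; c++) {     /* lowest number = highest priority */
        if (m->active & (1u << c)) {
            if (c < m->current)
                m->current = c;      /* higher-priority context wins the comparison */
            else if (!(m->active & (1u << m->current)))
                m->current = c;      /* current context quiesced; fall to next active one */
            return m->current;
        }
    }
    return m->current;               /* no active contexts; keep current */
}
```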