Abstract:
A system interface having: a plurality of front end directors adapted for coupling to a host computer/server; a plurality of back end directors adapted for coupling to a bank of disk drives; a data transfer section having cache memory; a cache memory manager; and a message network. The cache memory is coupled to the plurality of front end and back end directors. The message network operates independently of the data transfer section and is coupled to the plurality of front end and back end directors. The front end and back end directors control data transfer between the host computer/server and the bank of disk drives in response to messages passing between the front end directors and the back end directors through the message network, thereby facilitating data transfer between the host computer/server and the bank of disk drives. The data passes through the cache memory in the data transfer section as such data passes between the host computer and the bank of disk drives. The system includes a cache memory manager having therein a memory for storing a map maintaining a relationship between data stored in the cache memory and data stored in the disk drives. The cache memory manager provides an interface among the host computer, the bank of disk drives and the cache memory for determining for the directors whether data to be read from, or written to, the disk drives resides in the cache memory. With such an arrangement, the cache memory in the data transfer section is not burdened with the task of transferring the director messages; rather, a message network operating independently of the data transfer section is provided for such messaging, thereby increasing the operating bandwidth of the system interface. Further, the cache memory is no longer burdened with the task of evaluating whether data to be read from, or written to, the disk drives resides in the cache memory. The cache memory manager, the plurality of front end directors, the plurality of back end directors and the cache memory are interconnected through a packet switching network.
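As an illustration only, a minimal C sketch of the kind of map the cache memory manager might maintain, assuming a chained hash table keyed by disk block address; the structure names, bucket count and hash are invented for the example and are not taken from the abstract.

    /* Sketch (hypothetical names): resolving whether a disk block already
     * resides in cache, so a director knows whether to go to cache or disk. */
    #include <stdint.h>
    #include <stddef.h>

    #define MAP_BUCKETS 4096
    #define SLOT_NONE   ((uint32_t)-1)

    struct map_entry {
        uint64_t disk_block;        /* logical block address on the disk drives  */
        uint32_t cache_slot;        /* index of the cache memory slot holding it */
        struct map_entry *next;     /* collision chain                           */
    };

    struct cache_map {
        struct map_entry *bucket[MAP_BUCKETS];
    };

    /* Returns the cache slot for disk_block, or SLOT_NONE on a miss. */
    uint32_t cache_map_lookup(const struct cache_map *m, uint64_t disk_block)
    {
        const struct map_entry *e = m->bucket[disk_block % MAP_BUCKETS];
        for (; e != NULL; e = e->next)
            if (e->disk_block == disk_block)
                return e->cache_slot;
        return SLOT_NONE;
    }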
Abstract:
A high availability computer system and methodology including a backplane having at least one backplane communication bus and a diagnostic bus, and a plurality of motherboards, each interfacing to the diagnostic bus. Each motherboard also includes a memory system including main memory distributed among the plurality of motherboards and a memory controller module for accessing said main memory and interfacing to said communication bus. Each motherboard also includes at least one daughterboard detachably connected thereto. The motherboard further includes a backplane diagnostic bus interface mechanism interfacing each of the motherboards to the backplane diagnostic bus; a microcontroller for processing information and providing outputs; and a test bus controller mechanism including registers therein. The system further includes a scan chain that electrically interconnects functionalities mounted on each motherboard and each of the at least one daughterboard to the test bus controller; and an applications program for execution with said microcontroller. The applications program includes instructions and criteria to automatically test the functionalities and electrical connections and interconnections, to automatically determine the presence of one or more faulted components and to automatically functionally remove the faulted component(s) from the computer system. Also featured is a balanced clock tree circuit that automatically and selectively supplies certain clock pulses to the logical flip-flops of an ASIC. The system further includes redundant clock generation and distribution circuitry that automatically fails over to the redundant clock circuitry in the event of a failure of the normal clock source.
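A hedged C sketch of the kind of diagnostic loop the applications program could run on the microcontroller: test each functionality over the scan chain and functionally remove whatever fails. The component table and function names are assumptions for illustration.

    /* Illustrative only: automated test-and-deconfigure loop. */
    #include <stdbool.h>
    #include <stddef.h>

    struct component {
        const char *name;
        bool (*scan_test)(void);    /* drives the test bus controller / scan chain */
        bool configured;            /* false once functionally removed             */
    };

    /* Test every configured component; functionally remove any that fails.
     * Returns the number of components removed from the configuration. */
    size_t run_diagnostics(struct component *c, size_t n)
    {
        size_t removed = 0;
        for (size_t i = 0; i < n; i++) {
            if (!c[i].configured)
                continue;
            if (!c[i].scan_test()) {
                c[i].configured = false;   /* deconfigure the faulted component */
                removed++;
            }
        }
        return removed;
    }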
Abstract:
A queuing system having at least one input/output (I/O) interface with an outbound queue. A plurality of processing units is coupled to the at least one I/O interface. Each one of the processing units is coupled to a corresponding processing unit memory. Each one of the processing unit memories has an inbound queue for such coupled processing unit. The at least one I/O interface outbound queue stores outbound information being returned to the I/O interface after being processed by one of the processing units. The I/O interface creates queue indices for storage in the inbound queues of the processing unit memories. The I/O interface includes a translation table, such table storing, at a location therein, a producer index and a consumer index for the plurality of processing units.
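A minimal C sketch of the producer/consumer index bookkeeping described above, assuming free-running 32-bit indices and a fixed ring depth; the field names, depth and translation-table layout are illustrative assumptions.

    /* Sketch: translation-table indices for one processing unit's queues. */
    #include <stdint.h>
    #include <stdbool.h>

    #define QUEUE_DEPTH 256          /* entries per inbound/outbound queue */

    struct queue_indices {
        uint32_t producer;           /* advanced by the side that posts entries   */
        uint32_t consumer;           /* advanced by the side that removes entries */
    };

    struct translation_entry {
        struct queue_indices inbound;    /* queue in the processing unit memory */
        struct queue_indices outbound;   /* queue back toward the I/O interface */
    };

    static bool queue_full(const struct queue_indices *q)
    {
        return q->producer - q->consumer >= QUEUE_DEPTH;
    }

    /* I/O interface posts a queue index into a processing unit's inbound queue. */
    static bool post_inbound(struct translation_entry *t, uint32_t ring[], uint32_t idx)
    {
        if (queue_full(&t->inbound))
            return false;                         /* back-pressure */
        ring[t->inbound.producer % QUEUE_DEPTH] = idx;
        t->inbound.producer++;                    /* producer index owned by I/O side */
        return true;
    }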
Abstract:
A data processing system having separate kernel, vertical and horizontal microcode, separate loading of vertical microcode and a permanently resident kernel microcode, and a soft console with dual levels of capability. The system includes a processor having dual ALC and microcode processors, and an instruction processor. Also included are a processor incorporating a multifunction processor memory, a multifunction nibble shifter, and a high-speed look-aside memory control. Adaptive microcode control means are disclosed in which microinstruction sequencing is a function of the current microinstruction and the current machine state.
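A toy C sketch, not the disclosed hardware, showing one way microinstruction sequencing can be made a function of the current microinstruction and the current machine state; the field widths and branch-select scheme are assumptions.

    /* Sketch: next microaddress from the current microword plus machine state. */
    #include <stdint.h>

    struct microinstruction {
        uint16_t next_base;          /* base next-address field (assumed even)   */
        uint8_t  branch_select;      /* which machine-state condition to test    */
    };

    struct machine_state {
        uint8_t condition_codes;     /* e.g. carry, zero, overflow flags         */
    };

    /* The selected condition bit chooses between next_base and next_base + 1. */
    uint16_t next_microaddress(const struct microinstruction *mi,
                               const struct machine_state *ms)
    {
        uint8_t cond = (ms->condition_codes >> mi->branch_select) & 1u;
        return mi->next_base | cond;
    }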
Abstract:
A data processing system which handles thirty-two bit logical addresses that can be derived from either sixteen bit logical addresses or thirty-two bit logical addresses, the latter being translated into physical addresses by unique translation means. The system includes means for decoding macro-instructions of both a basic and an extended instruction set, each macro-instruction containing in itself selected bit patterns which uniquely identify which type of instruction is to be decoded. The decoded macro-instructions provide the starting address of one or more micro-instructions, which address is supplied to a unique micro-instruction sequencing unit that appropriately decodes a selected field of each micro-instruction to obtain each successive micro-instruction. The system uses hierarchical memory storage with eight storage segments (rings), access to the rings being controlled in a privileged manner according to different levels of privilege. The memory system uses a bank of main memory modules which interface with the central processor system via a dual-port cache memory, block data transfers between the main memory and the cache memory being controlled by a bank controller unit.
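A simplified C sketch of the ring-based privilege check implied above, assuming the classic convention that a lower ring number is more privileged and that the ring number occupies the upper bits of a thirty-two bit logical address; both assumptions are for illustration only.

    /* Sketch: privileged access control across eight storage segments (rings). */
    #include <stdint.h>
    #include <stdbool.h>

    #define RING_COUNT 8

    /* Lower ring number = higher privilege (assumed convention). */
    static bool ring_access_allowed(uint8_t current_ring, uint8_t target_ring)
    {
        if (current_ring >= RING_COUNT || target_ring >= RING_COUNT)
            return false;
        return current_ring <= target_ring;
    }

    /* Hypothetical split: ring number carried in bits 31..29 of the address. */
    static uint8_t ring_of(uint32_t logical_address)
    {
        return (uint8_t)((logical_address >> 29) & 0x7u);
    }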
Abstract:
A data storage system having a protocol controller for converting packets between the PCIE format used by a storage processor and the Rapid IO format used by a packet switching network. The controller includes a PCIE end point for transferring atomic operation (DSA) requests; a data pipe section having a plurality of data pipes for passing user data; and a message engine section for passing messages among the plurality of storage processors. An acceleration path controller bypasses a DSA buffer in the absence of congestion on the network. Packets fed to the PCIE end point include an address portion having a code indicating an atomic operation. An encoder converts the code from the PCIE format into the same atomic operation in SRIO format. Each one of a plurality of CPUs is adapted to perform a second DSA request during execution of a first DSA request.
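A hedged C sketch of the two decisions the controller makes: mapping an atomic-operation code carried in the PCIE address portion onto the equivalent SRIO atomic operation, and selecting the acceleration path only when the network is uncongested. The bit positions and operation encodings are invented for the example.

    /* Sketch with assumed encodings; not the patented format. */
    #include <stdint.h>
    #include <stdbool.h>

    enum atomic_op { ATOMIC_ADD, ATOMIC_SWAP, ATOMIC_CMP_SWAP, ATOMIC_INVALID };

    /* Extract a hypothetical op-code field from the PCIE address portion. */
    static enum atomic_op decode_pcie_atomic(uint64_t pcie_address)
    {
        switch ((pcie_address >> 40) & 0x3u) {
        case 0:  return ATOMIC_ADD;
        case 1:  return ATOMIC_SWAP;
        case 2:  return ATOMIC_CMP_SWAP;
        default: return ATOMIC_INVALID;
        }
    }

    /* Use the fast path (bypassing the DSA buffer) only when the packet
     * switching network reports no congestion. */
    static bool use_acceleration_path(bool network_congested)
    {
        return !network_congested;
    }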
Abstract:
A very fast, memory efficient, highly expandable, highly efficient CCNUMA processing system based on a hardware architecture that minimizes system bus contention, maximizes processing forward progress by maintaining strong ordering and avoiding retries, and implements a full-map directory structure cache coherency protocol. A Cache Coherent Non-Uniform Memory Access (CCNUMA) architecture is implemented in a system comprising a plurality of integrated modules, each consisting of a motherboard and two daughterboards. The daughterboards, which plug into the motherboard, each contain two Job Processors (JPs), cache memory, and input/output (I/O) capabilities. Located directly on the motherboard are additional integrated I/O capabilities in the form of two Small Computer System Interfaces (SCSI) and one Local Area Network (LAN) interface. The motherboard includes main memory, a memory controller (MC) and directory DRAMs for cache coherency. The motherboard also includes GTL backpanel interface logic, system clock generation and distribution logic, and local resources including a micro-controller for system initialization. A crossbar switch connects the various logic blocks together. A fully loaded motherboard contains two JP daughterboards, two PCI expansion boards, and up to 512 MB of main memory. Each daughterboard contains two 50 MHz Motorola 88110 JP complexes, each having an associated 88410 cache controller and a 1 MB level 2 cache. A single 16 MB third-level write-through cache is also provided and is controlled by a third-level cache controller.
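A minimal C sketch of a full-map directory entry of the kind such a cache coherency protocol implies: one presence bit per caching node plus a line state held in the directory DRAMs. Field sizes and state names are assumptions.

    /* Sketch: full-map directory entry and sharer bookkeeping. */
    #include <stdint.h>

    enum line_state { LINE_UNOWNED, LINE_SHARED, LINE_MODIFIED };

    struct directory_entry {
        uint16_t presence;       /* bit i set if node i holds a copy of the line */
        uint8_t  state;          /* enum line_state                              */
        uint8_t  owner;          /* valid when state == LINE_MODIFIED            */
    };

    /* On a read miss from `node`, record the new sharer; invalidations on a
     * write would walk the presence bits (omitted here). */
    static void directory_add_sharer(struct directory_entry *e, unsigned node)
    {
        e->presence |= (uint16_t)(1u << node);
        if (e->state == LINE_UNOWNED)
            e->state = LINE_SHARED;
    }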
Abstract:
A data processing system which includes a floating point computation unit (FPU) which interfaces with a central processing unit (CPU), in which the CPU supplies a dispatch control signal to inform the FPU that it is about to execute a floating point macroinstruction and, during the same operating cycle, supplies a dispatch address which includes the starting address of the corresponding floating point microinstructions. A buffer memory is provided in the FPU to store the starting address of one decoded macroinstruction while a sequence of microinstructions for a previously decoded macroinstruction is being executed by the FPU. When a starting address is already resident in the buffer, the FPU supplies a control signal to prevent the CPU from supplying a further dispatch address until the buffer is empty. Other control signals for synchronizing the CPU and FPU operations and data transfers are also provided.
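A behavioral C sketch, not register-transfer logic, of the single-entry dispatch buffer described above: the FPU holds one pending starting address and signals the CPU to withhold further dispatch addresses until the buffer empties. The names are invented for the example.

    /* Sketch: one-deep dispatch buffer with a hold signal back to the CPU. */
    #include <stdint.h>
    #include <stdbool.h>

    struct dispatch_buffer {
        uint16_t start_address;      /* starting microinstruction address */
        bool     full;               /* drives the hold signal to the CPU */
    };

    /* CPU side: attempt to dispatch; returns false (hold asserted) if busy. */
    static bool dispatch(struct dispatch_buffer *b, uint16_t start_address)
    {
        if (b->full)
            return false;            /* CPU must wait until the buffer empties */
        b->start_address = start_address;
        b->full = true;
        return true;
    }

    /* FPU side: take the pending address when the current microsequence ends. */
    static bool take_pending(struct dispatch_buffer *b, uint16_t *out)
    {
        if (!b->full)
            return false;
        *out = b->start_address;
        b->full = false;             /* deasserts hold; CPU may dispatch again */
        return true;
    }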
Abstract:
A multiprocessor computing system is disclosed which includes a system bus, a plurality of processing units and a plurality of synchronous input/output channel controllers. A plurality of priority lines, each corresponding to a processing unit, are provided through each input/output channel controller in order of priority. A synchronizing signal is generated at the same time in each input/output channel controller in response to the end of an address phase on the system bus. A latch is provided in each input/output channel controller which responds to the synchronizing signal by storing the condition of the priority lines and whether an interrupt is pending. In response to a broadcast interrupt origin request instruction from a processing unit, all input/output channel controllers respond at the same time, but only the one with the priority interrupt for the requesting processing unit gives a non-zero response.
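A simplified C sketch of the response rule above: each controller latches the priority lines and its own pending-interrupt state on the synchronizing signal, and only the controller with a pending interrupt and no higher-priority line asserted in its latched snapshot returns a non-zero response. Signal names, field names and bit ordering are assumptions.

    /* Sketch: synchronous priority resolution across channel controllers. */
    #include <stdint.h>
    #include <stdbool.h>

    struct channel_controller {
        uint8_t my_position;         /* place in the priority chain (0 = highest) */
        uint8_t latched_lines;       /* priority lines captured at the sync signal */
        bool    interrupt_pending;   /* latched together with the priority lines   */
        uint8_t origin_code;         /* non-zero identifier returned on a win      */
    };

    /* Response to a broadcast interrupt-origin request: answer non-zero only if
     * this controller has a pending interrupt and no higher-priority line
     * (a lower bit position here) was asserted in the latched snapshot. */
    static uint8_t respond(const struct channel_controller *c)
    {
        if (!c->interrupt_pending)
            return 0;
        uint8_t higher_mask = (uint8_t)((1u << c->my_position) - 1u);
        if (c->latched_lines & higher_mask)
            return 0;
        return c->origin_code;
    }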
Abstract:
A data storage system having a protocol controller for converting packets between the PCIE format used by a storage processor and the Rapid IO format used by a packet switching network. The controller includes a PCIE end point for transferring atomic operation (DSA) requests; a data pipe section having a plurality of data pipes for passing user data; and a message engine section for passing messages among the plurality of storage processors. An acceleration path controller bypasses a DSA buffer in the absence of congestion on the network. Packets fed to the PCIE end point include an address portion having a code indicating an atomic operation. An encoder converts the code from the PCIE format into the same atomic operation in SRIO format. Each one of a plurality of CPUs is adapted to perform a second DSA request during execution of a first DSA request.
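A small C sketch of the overlap noted in the last sentence, assuming each CPU tracks two DSA request slots so a second request can be issued while the first is still executing; the slot structure and interface are invented for illustration.

    /* Sketch: two in-flight DSA requests per CPU. */
    #include <stdbool.h>
    #include <stdint.h>

    #define DSA_SLOTS_PER_CPU 2

    struct dsa_slot {
        bool     in_flight;
        uint64_t request_id;
    };

    struct cpu_dsa_context {
        struct dsa_slot slot[DSA_SLOTS_PER_CPU];
    };

    /* Issue a DSA request if either slot is free; returns the slot index or -1. */
    static int dsa_issue(struct cpu_dsa_context *ctx, uint64_t request_id)
    {
        for (int i = 0; i < DSA_SLOTS_PER_CPU; i++) {
            if (!ctx->slot[i].in_flight) {
                ctx->slot[i].in_flight = true;
                ctx->slot[i].request_id = request_id;
                return i;            /* second request may overlap the first */
            }
        }
        return -1;                   /* both slots busy */
    }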