Abstract:
In a processing system any response to an interrupt acknowledge cycle is deferred until the transfer of buffered data to be written from an agent on a subsystem I/O bus to main memory of the system is assured. To expedite system operation, data to be written to main memory by an agent on an I/O bus is buffered in an interface circuit. As soon as the data is buffered, the I/O bus agent is released and interrupts a processor on the system bus indicating completion of the data write. A tightly coupled interrupt controller is used so that the agent does not need to own the I/O or system bus to generate the interrupt. The interrupted processor issues an interrupt acknowledge (IAK) cycle on the system bus to receive an interrupt vector from the interrupt controller. The interface circuit recognizes the IAK cycle and generates a retry signal for the processor if buffered data remains in the interface circuit. In response to the retry signal, the processor is taken off the system bus and not allowed to regain the system bus until the buffered data is written to main memory. A bus busy signal is raised and will not be lowered until the data is written to main memory. When the busy signal is lowered, the processor regains the system bus and receives an interrupt vector from the interrupt controller. I/O bus ownership is locked until the interrupted processor has received an interrupt vector and the IAK cycle is complete. If no buffered data remains in the interface circuit, no retry signal is generated. The interrupt controller waits a predefined period of time for a retry signal and if none is detected, the interrupt controller issues an appropriate interrupt vector to complete the IAK cycle. For multiple I/O buses, preferably only one interface circuit retries processors issuing IAK cycles.
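The retry decision itself is simple to state in code. The following is a minimal C sketch, not the patented hardware, modelling an interface circuit that retries an IAK cycle while posted write data is still buffered and drops its bus-busy indication once the buffer drains; all names and the buffer depth are assumptions.

```c
/* Minimal sketch (assumed names, not the patented circuit) of the retry
 * decision: an interrupt-acknowledge (IAK) cycle is retried while buffered
 * write data has not yet reached main memory. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int buffered_writes;   /* posted writes not yet flushed to main memory */
    bool bus_busy;         /* raised while a flush is pending              */
} iface_t;

/* Called when the interface circuit recognizes an IAK cycle on the system bus. */
static bool iak_needs_retry(iface_t *ifc)
{
    if (ifc->buffered_writes > 0) {
        ifc->bus_busy = true;  /* keep the processor off the system bus */
        return true;           /* assert RETRY to the processor         */
    }
    return false;              /* no retry: the controller may supply the vector */
}

/* Flush one posted write to main memory; drop busy when the buffer empties. */
static void flush_one_write(iface_t *ifc)
{
    if (ifc->buffered_writes > 0 && --ifc->buffered_writes == 0)
        ifc->bus_busy = false; /* processor may re-arbitrate and retry the IAK */
}

int main(void)
{
    iface_t ifc = { .buffered_writes = 2, .bus_busy = false };

    while (iak_needs_retry(&ifc)) {       /* processor keeps retrying the IAK */
        printf("IAK retried, %d write(s) still buffered\n", ifc.buffered_writes);
        flush_one_write(&ifc);
    }
    printf("IAK completes: interrupt vector delivered\n");
    return 0;
}
```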
Abstract:
A retry scheme for optimizing use of a first bus in a computer system which includes a plurality of bus masters connected through the first bus to an interface circuit and a second bus. The interface circuit includes logic for generating a busy signal when the second bus is in a busy state and logic for generating a retry signal when the interface circuit is addressed by a bus master while the second bus is in a busy state. Each bus master includes logic for receiving the retry signal and relinquishing control of the first bus upon receipt of the retry signal from the interface circuit. A bus arbiter includes logic for receiving the busy signal and preventing any bus master seeking access to the second bus from participating in arbitration for control of the first bus until the busy signal has been negated. Thus, for the duration of the busy signal, the first bus may be controlled by any bus master not requiring access to the second bus, and upon negation of the busy signal all bus masters are again permitted to compete for ownership of the first bus.
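As a rough illustration, the following C sketch models the arbitration filter under assumed names: masters that need the second bus are held out of arbitration for the first bus while the busy signal is asserted, while other masters may still win the bus. The priority policy shown (lowest index first) is an assumption; the abstract does not fix one.

```c
/* Minimal sketch of the arbitration filter: while the second bus is busy,
 * masters that need it are excluded from arbitration for the first bus. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_MASTERS 4

typedef struct {
    bool requesting;        /* master wants the first bus            */
    bool needs_second_bus;  /* transfer targets the second bus       */
} master_t;

/* Returns the winning master index, or -1 if nobody may own the bus. */
static int arbitrate(const master_t m[], bool second_bus_busy)
{
    for (int i = 0; i < NUM_MASTERS; i++) {
        if (!m[i].requesting)
            continue;
        if (second_bus_busy && m[i].needs_second_bus)
            continue;       /* held out of arbitration until BUSY is negated */
        return i;
    }
    return -1;
}

int main(void)
{
    master_t m[NUM_MASTERS] = {
        { true,  true  },   /* wants the second bus: blocked while it is busy */
        { true,  false },   /* local transfer only: may still use the bus     */
        { false, false },
        { true,  true  },
    };

    printf("second bus busy -> winner: master %d\n", arbitrate(m, true));
    printf("busy negated    -> winner: master %d\n", arbitrate(m, false));
    return 0;
}
```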
Abstract:
Multiple processor systems are configured to include at least two system or memory buses, with at least two processors coupled to each of the system buses, and at least two I/O buses coupled to the system buses. Each I/O bus provides multiple expansion slots hosting up to a corresponding number of I/O bus agents at the cost of a single system bus load. Each of the system and I/O buses is independently arbitrated to define decoupled bus systems for the multiple processor systems of the present invention. Main memory for the systems is made up of at least two memory interleaves, each of which can be simultaneously accessed through the system buses. Each of the I/O buses is interfaced to the system buses by an I/O interface circuit which buffers data written to and read from the main memory or memory interleaves by I/O bus agents.
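As one way to picture the interleaving, the C sketch below maps addresses to interleaves with a simple low-order scheme; the two-way split and the 64-byte granularity are assumptions, since the abstract does not specify the mapping.

```c
/* Minimal sketch, assuming a low-order interleave, of how an address might
 * select one of the memory interleaves that can be accessed simultaneously. */
#include <stdint.h>
#include <stdio.h>

#define NUM_INTERLEAVES 2u          /* "at least two memory interleaves" */
#define LINE_SHIFT      6u          /* assumed 64-byte line granularity  */

static unsigned interleave_of(uint64_t addr)
{
    return (unsigned)((addr >> LINE_SHIFT) % NUM_INTERLEAVES);
}

int main(void)
{
    uint64_t addrs[] = { 0x0000, 0x0040, 0x0080, 0x00C0 };

    /* Consecutive lines alternate, so both interleaves can work in parallel. */
    for (unsigned i = 0; i < 4; i++)
        printf("address 0x%04llx -> interleave %u\n",
               (unsigned long long)addrs[i], interleave_of(addrs[i]));
    return 0;
}
```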
Abstract:
Multiple subsystem I/O (Input/Output) buses are coupled to one or more system buses of a computer system by interface circuits which perform the necessary decoding of memory space and I/O space so that portions of the memory space and the I/O space can be allocated to each I/O bus. The interface circuits also translate fixed addresses within each I/O bus to permit proper operation of the I/O buses with the computer system. The interface circuits are programmed by the computer system to define the allocated memory spaces and I/O spaces for the corresponding I/O buses. Programming of the I/O buses is performed at the time of system configuration by writing appropriate values into configuration registers incorporated into each of the interface circuits.
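The decode-and-translate step can be sketched as follows in C; the register names, window values, and translation rule are invented for illustration and stand in for whatever configuration software actually programs.

```c
/* Minimal sketch (invented register names) of the decode and translate step:
 * configuration registers define the memory window allocated to one I/O bus,
 * and addresses inside it are offset into that bus's fixed local range. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t mem_base;   /* base of the window allocated to this I/O bus  */
    uint32_t mem_limit;  /* last address of the window                    */
    uint32_t translate;  /* offset applied to reach the bus's fixed range */
} config_regs_t;

/* Decode a system-bus address; translate it if this interface claims it. */
static bool claim_and_translate(const config_regs_t *cfg,
                                uint32_t sys_addr, uint32_t *io_addr)
{
    if (sys_addr < cfg->mem_base || sys_addr > cfg->mem_limit)
        return false;                        /* not ours: another bus decodes it */
    *io_addr = sys_addr - cfg->mem_base + cfg->translate;
    return true;
}

int main(void)
{
    /* Values that configuration software might program; purely illustrative. */
    config_regs_t cfg = { 0xC000000u, 0xC0FFFFFu, 0x0000000u };
    uint32_t io_addr;

    if (claim_and_translate(&cfg, 0xC001234u, &io_addr))
        printf("system 0xC001234 -> I/O bus 0x%07X\n", (unsigned)io_addr);
    if (!claim_and_translate(&cfg, 0xD000000u, &io_addr))
        printf("0xD000000 is outside this bus's window\n");
    return 0;
}
```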
Abstract:
A buffer management scheme for optimally configuring a data buffer within a computer system which includes a plurality of bus masters connected through a Micro Channel bus and the data buffer to a shared resource, such as memory. The scheme decodes unique four-bit Micro Channel arbitration values assigned to the bus masters to retrieve buffer configuration parameters stored within a register file containing different configuration parameters for each bus master. The data buffer is dynamically configured for optimal performance with each bus master having control of the Micro Channel bus in accordance with the parameter data retrieved from the register file.
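A minimal C sketch of the lookup follows: the winning master's four-bit arbitration value indexes a register file of per-master buffer parameters. The parameter fields and example values are assumptions, not taken from the abstract.

```c
/* Minimal sketch: the 4-bit Micro Channel arbitration value of the winning
 * bus master indexes a register file of per-master buffer parameters. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    unsigned buffer_depth;   /* entries allocated to this master  */
    unsigned burst_len;      /* preferred transfer burst length   */
    unsigned prefetch;       /* 1 = read-ahead enabled            */
} buf_cfg_t;

/* One entry per possible 4-bit arbitration value (0x0..0xF). */
static buf_cfg_t reg_file[16];

static void configure_buffer(uint8_t arb_value)
{
    const buf_cfg_t *cfg = &reg_file[arb_value & 0x0F];
    printf("master 0x%X: depth=%u burst=%u prefetch=%u\n",
           arb_value, cfg->buffer_depth, cfg->burst_len, cfg->prefetch);
    /* a real implementation would now reprogram the data buffer hardware */
}

int main(void)
{
    reg_file[0x3] = (buf_cfg_t){ 32, 8, 1 };   /* e.g. a streaming DMA master  */
    reg_file[0x9] = (buf_cfg_t){ 4, 1, 0 };    /* e.g. a register-style master */

    configure_buffer(0x3);   /* reconfigure when master 0x3 wins arbitration */
    configure_buffer(0x9);   /* and again when master 0x9 takes the bus      */
    return 0;
}
```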
Abstract:
A hardware method to concurrently obtain memory access locality information for a large number of contiguous sections of system memory (pages) for the purposes of optimizing memory and process assignments in a multiple-node NUMA architecture computer system including a distributed system memory. Page access monitoring logic is included within each processing node which contains a portion of shared system memory. This page access monitoring logic maintains a plurality of page access counters, each page access counter corresponding to a different memory page address within the shared system memory. Whenever the processing node generates a transaction requiring access to a memory address within system memory, the page access monitoring logic increments a count value contained within the page access counter corresponding to the memory address to which access is sought. Thus, a record of memory access patterns is created which can be used to optimize memory and process assignments in the computer system.
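The counting step reduces to a small piece of logic, sketched below in C; the page size, counter width, and number of monitored pages are assumptions not fixed by the abstract.

```c
/* Minimal sketch of the per-page counting step: each node-local transaction
 * address is mapped to a page, and that page's counter is incremented. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u                    /* assumed 4 KiB pages          */
#define NUM_PAGES  1024u                  /* pages monitored on this node */

static uint32_t page_access_count[NUM_PAGES];

/* Called by the (modelled) page access monitoring logic for every
 * transaction the node issues to shared system memory. */
static void record_access(uint64_t phys_addr)
{
    uint64_t page = phys_addr >> PAGE_SHIFT;
    if (page < NUM_PAGES)
        page_access_count[page]++;        /* saturation handling omitted */
}

int main(void)
{
    record_access(0x0000);                /* page 0         */
    record_access(0x0FFF);                /* still page 0   */
    record_access(0x1000);                /* page 1         */

    /* Counts like these feed later memory and process placement decisions. */
    printf("page 0: %u accesses, page 1: %u accesses\n",
           (unsigned)page_access_count[0], (unsigned)page_access_count[1]);
    return 0;
}
```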
Abstract:
Disclosed is a tubular microporous prosthesis formed by rolling a flexible sheet around a longitudinal axis. Preferably, the prosthesis is self-expandable under the radially outwardly directed spring bias of the rolled sheet. At least a portion of the sheet may be provided with a coating to affect the physical and/or biological properties of the prosthesis.
Abstract:
A multiprocessor computing apparatus that includes a mechanism for favoring at least one processor over another processor to achieve more equitable access to cached data. Logic for detecting when, for example, a remote and a local processor are attempting to access data from the cache of another local processor is disclosed. Logic that provides an advantage to the remote processor in a manner that achieves fairer access among the various processors is also disclosed.
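A minimal C sketch of the favoring rule follows, under the assumption that the advantage is a simple grant preference when a remote and a local requester contend in the same cycle; the actual mechanism may differ.

```c
/* Minimal sketch (assumed policy): when a remote and a local processor
 * contend for another local processor's cached data, the remote requester
 * is granted first so it is not starved by the faster local path. */
#include <stdio.h>

typedef enum { REQ_NONE, REQ_LOCAL, REQ_REMOTE } requester_t;

/* Decide who wins the contended cache access this cycle. */
static requester_t grant(int local_pending, int remote_pending)
{
    if (remote_pending)
        return REQ_REMOTE;    /* advantage given to the remote processor */
    if (local_pending)
        return REQ_LOCAL;
    return REQ_NONE;
}

int main(void)
{
    printf("both pending -> %s wins\n",
           grant(1, 1) == REQ_REMOTE ? "remote" : "local");
    printf("local only   -> %s wins\n",
           grant(1, 0) == REQ_LOCAL ? "local" : "remote");
    return 0;
}
```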
Abstract:
A deadlock-avoidance system for a computer. In a multi-bus, multi-processor computer, one processor may request a lock on a bus, to execute a locked cycle, thereby blocking all other processors, and other agents, from access to the bus. In addition, a conflicting agent may, in effect, lock a resource which is needed by the processor to complete the cycle for which the lock was requested. These two locks can create a deadlock situation which stalls the computer: the processor and the conflicting agent have each locked a resource needed by the other. Under the invention, when a locked cycle is requested by a processor, all other operations are suspended in the computer. Then queues standing in memory controllers are emptied. If a process requested by an agent occupies a resource, such as a bridge, required by the requested locked cycle, that resource is freed. Then the locked cycle is executed.
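The sequence can be summarized in a short C sketch; the function names and the draining model are stand-ins for the hardware steps described above.

```c
/* Minimal sketch of the locked-cycle sequence: suspend other traffic, drain
 * memory-controller queues (freeing any bridge or other resource a pending
 * agent request holds), then execute the locked cycle. */
#include <stdio.h>

typedef struct {
    int queued_requests;     /* agent requests standing in a memory controller */
} mem_ctrl_t;

static void suspend_other_operations(void)  { puts("all other operations suspended"); }
static void resume_other_operations(void)   { puts("other operations resumed"); }

static void drain(mem_ctrl_t *mc)
{
    while (mc->queued_requests > 0)
        mc->queued_requests--;           /* completing a request frees whatever
                                            resource (e.g. a bridge) it held  */
    puts("memory controller queues emptied");
}

static void run_locked_cycle(mem_ctrl_t *mc)
{
    suspend_other_operations();          /* step 1: quiesce the system         */
    drain(mc);                           /* step 2: no agent can now hold a
                                            resource the locked cycle needs    */
    puts("locked cycle executed");       /* step 3: safe to take the bus lock  */
    resume_other_operations();
}

int main(void)
{
    mem_ctrl_t mc = { .queued_requests = 3 };
    run_locked_cycle(&mc);
    return 0;
}
```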
Abstract:
A system for booting a multi-processor computer. If a normal boot attempt fails, different processors are selected, one at a time, for performing the boot routine. During the boot routine, all other processors are held inactive. After boot, processors are tested for health. Non-healthy processors are held inactive, and healthy processors are activated as usual.
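A minimal C sketch of the fallback flow follows; the firmware routines, processor count, and which processor is faulty are stand-ins for illustration.

```c
/* Minimal sketch of the fallback boot flow: if the normal boot fails, each
 * processor is tried alone as the boot processor; after boot, unhealthy
 * processors stay inactive.  All functions here are stand-ins. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_CPUS 4

/* Stand-ins for firmware routines; CPU 0 is pretended to be faulty. */
static bool try_boot_with(int cpu)  { return cpu != 0; }
static bool is_healthy(int cpu)     { return cpu != 0; }

int main(void)
{
    int boot_cpu = -1;

    /* Normal boot first, then fall back to the other processors one at a
     * time, holding every other processor inactive during each attempt. */
    for (int cpu = 0; cpu < NUM_CPUS && boot_cpu < 0; cpu++)
        if (try_boot_with(cpu))
            boot_cpu = cpu;

    if (boot_cpu < 0) {
        puts("no processor could boot the system");
        return 1;
    }
    printf("booted on CPU %d\n", boot_cpu);

    /* After boot, test each processor; only healthy ones are activated. */
    for (int cpu = 0; cpu < NUM_CPUS; cpu++)
        printf("CPU %d: %s\n", cpu, is_healthy(cpu) ? "activated" : "held inactive");
    return 0;
}
```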