Abstract:
The logic cards of a main storage unit or computer logic, which receive request operations for access to portions of the memory or logic, are divided into banks or elements. When a request operation attempts to access one of the elements, a return busy signal is raised from that element. The present invention generates a predicted busy signal that occurs during the same time the return busy signal should be active. The return busy signal and the predicted busy signal are compared in novel circuitry to verify that the element is in fact performing an operation during the predetermined time slot allowed for performance of the requested operation. Fault signals for bank invalidation are stored in internal check trap circuitry for future reference when the requestor raises a subsequent request operation.
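A minimal sketch in Python of the comparison described above, assuming a per-element fault record; the names check_element_busy, faulted_elements, and element_id are illustrative rather than taken from the patent. The predicted busy signal is compared against the element's return busy signal for the same time slot, and a mismatch is latched for later bank invalidation.

    def check_element_busy(predict_busy, return_busy, faulted_elements, element_id):
        """Compare the return busy signal raised by an element against the
        predicted busy signal generated for the same time slot.  A mismatch
        means the element is not performing the requested operation in its
        allotted slot, so the element is recorded for bank invalidation
        when the requestor raises a subsequent request."""
        if predict_busy != return_busy:
            faulted_elements.add(element_id)
        return element_id in faulted_elements

    # Example: element 3 raises no busy signal in a slot where one was predicted.
    faulted = set()
    check_element_busy(predict_busy=True, return_busy=False,
                       faulted_elements=faulted, element_id=3)   # -> True
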
Abstract:
A dynamic random access memory (DRAM) refreshing scheme utilizes at least two separate refresh channels. Each of the channels consists of a pair of identical counters which are coupled through two different types of timing chains. One of the timing chains is associated with one of the counters and generates a refresh request signal, while the other timing chain generates a refresh error signal. As long as the refresh error signal matches the refresh request signal, no error is present, and a validated refresh request signal will be generated from that channel and supplied to an OR gate to refresh all of the memory banks of the memory. Whenever a mismatch occurs between the refresh error signal and the refresh request signal for one of the refresh channels, the validated refresh request signal for that channel will be inoperable, and continued refreshing operation of the memory depends on the supply of validated refresh request signals through the other channel, in which the refresh request signal and the refresh error signal still match.
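A minimal sketch of the per-channel cross-check, assuming each refresh channel is represented as a (refresh_request, refresh_error) pair of boolean samples for the current cycle; the function name and data layout are assumptions made for illustration.

    def validated_refresh(channels):
        """A channel's refresh request is validated only when its refresh
        error signal matches its refresh request signal; the memory banks
        are refreshed if any channel supplies a validated request (the OR
        gate described in the abstract)."""
        return any(req for req, err in channels if req == err)

    # Example: channel 0 is healthy and requesting a refresh, while channel 1
    # has a request/error mismatch, so refreshing continues via channel 0.
    refresh_all_banks = validated_refresh([(True, True), (True, False)])  # True
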
Abstract:
A dual priority hold register enables the transfer of data to memory ports having serial priority in accordance with two stages of priority. First, all latches of the high priority sector of the register are cleared. Then, the highest priority latch of the low priority sector of the register is cleared, while the latches of the high priority sector are loaded with further data. Following clearance of the low priority latch, all latches of the high priority sector are cleared once again, followed by clearance of the next highest priority latch of the low priority sector while the high priority sector is loaded once again. The sequence is repeated until both the high and low priority sectors of the register are clear.
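A small Python model of this two-stage clearing sequence, assuming each sector is represented as a list with index 0 as the highest priority latch; the function and parameter names are illustrative.

    def drain_dual_priority(high, low, further_batches):
        """Repeatedly clear all latches of the high priority sector, then
        clear the highest-priority latch of the low priority sector while
        the high priority sector is reloaded, until both sectors are clear.
        `further_batches` is an iterator yielding the data loaded into the
        high priority sector after each low-priority clearance."""
        transfer_order = []
        while high or low:
            transfer_order.extend(high)            # clear entire high priority sector
            high = []
            if low:
                transfer_order.append(low.pop(0))  # clear highest-priority low latch
                high = list(next(further_batches, []))  # reload high priority sector
        return transfer_order

    # Example: yields ['H1', 'H2', 'L1', 'H3', 'L2'] -- whole high sector, one
    # low latch, the reloaded high sector, then the next low latch.
    drain_dual_priority(['H1', 'H2'], ['L1', 'L2'], iter([['H3'], []]))
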
Abstract:
A bus architecture includes address lines, data lines, and control signals to allow a processor to communicate with a VLSI gate array. The address lines are interpreted by the VLSI gate array to select either multi-bit registers or single-bit designators resident on the VLSI gate array, depending on which control signal is received from the processor. Dual address decode logic on the VLSI gate array senses control signals indicating a request to read from a register, write to a register, or set, clear, or test a designator, and decodes the received address to select the appropriate storage location for the requested function.
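A sketch of the dual decode in Python, assuming hypothetical control-signal codes and dictionary-backed storage; the actual encodings and register layout of the gate array are not specified in the abstract.

    # Hypothetical control-signal codes, for illustration only.
    READ_REG, WRITE_REG, SET_DES, CLEAR_DES, TEST_DES = range(5)

    def decode_access(control, address, registers, designators, data=None):
        """The same address lines select either a multi-bit register or a
        single-bit designator, depending on the accompanying control signal."""
        if control == READ_REG:
            return registers[address]
        if control == WRITE_REG:
            registers[address] = data
            return None
        if control == SET_DES:
            designators[address] = 1
        elif control == CLEAR_DES:
            designators[address] = 0
        elif control == TEST_DES:
            return designators[address]
        return None
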
Abstract:
A bus architecture and associated circuitry for providing communication between processors and multiple gate arrays, whereby the size of the data being transferred may be either full words of 32 or 36 bits per word, or half words of 16 or 18 bits per word. Parity generation logic operates on the data to be sent over the bus to generate a parity value from the correct data bits depending on the selected data word size. Parity checking logic operates on the data received from the bus to check the parity of the correct data bits depending on the selected data word size.
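A minimal sketch of size-dependent parity generation and checking, assuming the data is presented as a list of bits with the valid bits first; the bit ordering and the example word sizes are illustrative assumptions.

    def generate_parity(word_bits, word_size):
        """XOR-reduce only the bits that are valid for the selected word size
        (e.g. 36 bits for a full word, 18 bits for a half word)."""
        parity = 0
        for bit in word_bits[:word_size]:
            parity ^= bit
        return parity

    def check_parity(word_bits, word_size, received_parity):
        """Recompute parity over the valid bits of the received data and
        compare it with the parity bit received over the bus."""
        return generate_parity(word_bits, word_size) == received_parity
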
Abstract:
A method and apparatus for detecting stuck faults in a signal line used to communicate a branch condition for executing conditional branch instructions in a data processing system containing a programmable microprocessor and multiple VLSI gate arrays connected by a bi-directional bus, whereby the branch condition is obtained from a storage location resident on a VLSI gate array executing asynchronously and externally to the microprocessor. The branch condition is fetched and evaluated in parallel with the fetching of the branch target address and the incrementing of the program counter. The microprocessor changes instruction sequence control depending on the result of the branch condition evaluation. The branch condition is sent to the microprocessor as a signal pulse of a specified duration at a particular time, rather than by changing the level of the signal, thereby allowing communication of the branch condition over only one signal line while still providing for detection of faults in the VLSI gate array or faults inherent in the signal line.
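A rough Python model of why pulse signalling allows fault detection on a single line; the window, pulse duration, and return values are assumptions made only to illustrate the idea, since the exact encoding and timing used by the patent are not given in the abstract.

    def receive_branch_condition(samples, window, pulse_len):
        """The branch condition is expected as a pulse of `pulse_len` cycles
        starting at `window[0]`.  A line stuck high shows activity outside
        the window, and a pulse of the wrong shape fails the check; in the
        patent the timing of the pulse also distinguishes a stuck-low line,
        which this simplified model does not attempt."""
        start, end = window
        pulse = samples[start:start + pulse_len]
        outside = samples[:start] + samples[start + pulse_len:end]
        if all(pulse) and not any(outside):
            return "condition asserted"
        if not any(samples[:end]):
            return "condition not asserted"
        return "fault detected"        # stuck-high line or malformed pulse
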
Abstract:
A system and method for updating partial blocks of file data stored in non-volatile storage within a file cache system connected to a host computer system. A first buffer and a last buffer receive from the non-volatile storage the existing portions of the blocks that are to be retained. A write buffer receives new data of a size not equal to an integral multiple of a block from the host computer system. The new data is merged under hardware control with the existing portions contained in the first buffer and the last buffer, thereby updating the cached file.
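A sketch of the merge in Python, assuming an illustrative block size and that the new data begins at a byte offset within the first affected block; the buffer names follow the abstract, but the layout details are assumptions.

    BLOCK = 512  # assumed block size in bytes; the actual size is not specified

    def merge_partial_blocks(first_buf, last_buf, new_data, start_offset):
        """Merge new data from the write buffer with the retained head of the
        first block (in first_buf) and the retained tail of the last block
        (in last_buf), producing whole blocks that can be written back."""
        head = first_buf[:start_offset]                       # retained old head
        end_offset = (start_offset + len(new_data)) % BLOCK   # where new data ends
        tail = last_buf[end_offset:] if end_offset else b""   # retained old tail
        merged = head + new_data + tail
        assert len(merged) % BLOCK == 0                       # now block-aligned
        return merged
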
Abstract:
A counter system having associated counter error detection circuitry that utilizes the current parity, the previous parity, and a predicted parity for evaluating counter operation is described. In successive count cycles, a predicted parity is utilized; during the next subsequent count cycle it is stored in a flip-flop as the current parity, and in the next subsequent count cycle it is stored in a second flip-flop as the previous parity. Circuits are described for performing the parity check and parity prediction functions. The previous parity, the current parity, and the predicted parity will not all be alike for any binary counter that operates properly. Circuitry is described that holds and compares the parity of the count, the current parity, and the previous parity during each counter advance cycle and provides an error signal when the counter is detected to be stuck.
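A small Python model of the parity bookkeeping, assuming a simple binary up-counter; the helper names are illustrative. It also shows one way the predicted parity can be formed: incrementing flips the trailing run of one-bits plus one zero-bit, so the parity flips exactly when that number of changed bits is odd.

    def parity(x):
        """Parity (XOR of all bits) of a non-negative integer."""
        p = 0
        while x:
            p ^= x & 1
            x >>= 1
        return p

    def predict_next_parity(count):
        """Predicted parity of count + 1, derived from the current count."""
        trailing_ones = 0
        while (count >> trailing_ones) & 1:
            trailing_ones += 1
        changed_bits = trailing_ones + 1
        return parity(count) ^ (changed_bits & 1)

    def stuck_counter_check(previous_parity, current_parity, count):
        """For a properly advancing counter, the previous parity, the current
        parity, and the parity of the new count are never all alike; three
        alike therefore signals a stuck counter.  Returns the error flag and
        the two parities to hold for the next advance cycle."""
        new_parity = parity(count)
        error = previous_parity == current_parity == new_parity
        return error, current_parity, new_parity
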
Abstract:
A high performance pipelined virtual first-in first-out stack structure having a data stack portion and a split control stack portion is described. The stack structure is intended for use in a pipelined high performance storage unit that can pipeline up to R input requests without having received an acknowledge that a request has been honored. The data stack incorporates R+1 data stack registers to provide over-write protection, ensuring that at least R data stack registers are protected from over-write. The split control stack utilizes even address and odd address stack registers. Memory bank request signals are stored sequentially and alternately between the even address and odd address stack registers. An even address read pointer and an odd address read pointer, under control of a read pointer control circuit, alternate the selection for read-out sequentially between the even address and odd address stack registers, such that decoding of the memory bank request signals for the next reference can be interleaved with completion of the decoding and prioritization of the current stack register. Advancement of the stack register addresses at which writing will take place is under control of a request signal. Control of the read pointers for the data stack and the split control stack is responsive to bank acknowledge signals received by the read pointer control circuits.
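A toy Python model of the split control stack, assuming small register files and simple modular pointers; the class and field names are illustrative rather than taken from the patent.

    class SplitControlStack:
        """Bank request entries are written alternately to even-address and
        odd-address register files; two read pointers alternate on read-out
        so decoding of the next entry can be interleaved with completion of
        the current one."""
        def __init__(self, depth=8):
            self.even = [None] * depth
            self.odd = [None] * depth
            self.write_count = 0      # advanced by each request signal
            self.read_count = 0       # advanced by each bank acknowledge
            self.even_rp = 0          # even-address read pointer
            self.odd_rp = 0           # odd-address read pointer

        def write(self, bank_request):
            side = self.even if self.write_count % 2 == 0 else self.odd
            side[(self.write_count // 2) % len(side)] = bank_request
            self.write_count += 1

        def read(self):
            """Called when a bank acknowledge advances the read pointers."""
            if self.read_count % 2 == 0:
                entry = self.even[self.even_rp]
                self.even_rp = (self.even_rp + 1) % len(self.even)
            else:
                entry = self.odd[self.odd_rp]
                self.odd_rp = (self.odd_rp + 1) % len(self.odd)
            self.read_count += 1
            return entry
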
Abstract:
An apparatus for efficiently detecting errors in a system having a plurality of memory devices. The present invention uses a single-bit parity configuration to detect common data errors caused by faulty memory devices, including multiple data errors within one memory device. This is accomplished by effectively turning a multiple-bit error detection situation into a single-bit error detection situation. Thus, instead of allocating a contiguous block of bits to the same memory unit, the present invention allocates bits across all memory units in a round-robin fashion. The parity domains are defined such that multiple errors within one SRAM can be detected despite using only a single-bit parity configuration.
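A sketch of the round-robin allocation and per-domain parity check in Python; the domain size (one bit per device) and the data layout are assumptions chosen to illustrate why a multi-bit failure confined to one SRAM never corrupts more than one bit of any domain.

    def build_parity_domains(total_bits, num_devices):
        """Bit i of the word is supplied by device i % num_devices, so each
        group of num_devices consecutive bits contains exactly one bit from
        every device and forms one parity domain."""
        return [list(range(start, min(start + num_devices, total_bits)))
                for start in range(0, total_bits, num_devices)]

    def check_domains(word_bits, parity_bits, domains):
        """Return the domains whose single parity bit no longer matches.
        Errors confined to one device corrupt at most one bit per domain,
        so they always flip that domain's parity and are detected."""
        errors = []
        for d, bits in enumerate(domains):
            p = 0
            for i in bits:
                p ^= word_bits[i]
            if p != parity_bits[d]:
                errors.append(d)
        return errors
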