Abstract:
The link round trip delay between two switches in a Fibre Channel network may be determined by sending a particular timing signal value from an originating switch to a responding switch. The responding switch may store the timing signal value in an “echo” register for comparison to subsequently received timing signals. The originating switch may then send the pre-selected timing signal to the responding switch while simultaneously starting a timer. When the responding switch receives the timing signal, it may compare the value of the received signal to that stored in its echo register. If the value is the same, the responding switch may retransmit—i.e., echo—the timing signal to the originating switch. When the originating switch receives the echoed timing signal, it may stop its timer and compute the link round trip delay time. The computed link round trip delay time between the originating switch and the responding switch may be advantageously used in fabric routing algorithms.
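As an illustration only, the C sketch below models the echo-register exchange as direct function calls within one process; the structure names, the timing-signal value, and the use of a monotonic host clock are assumptions made for the example and are not taken from the disclosure.

```c
/* Minimal sketch of the echo-register round-trip measurement, assuming a
 * hypothetical in-process "link" modeled as direct function calls. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Responding switch: holds the pre-selected value in its "echo" register. */
struct rtt_responder {
    uint32_t echo_register;
};

/* Echoes the timing signal only when it matches the value previously
 * stored in the echo register. */
static int responder_receive(struct rtt_responder *r, uint32_t timing_signal,
                             uint32_t *echoed)
{
    if (timing_signal == r->echo_register) {
        *echoed = timing_signal;   /* retransmit, i.e. echo, the signal */
        return 1;
    }
    return 0;                      /* ignore non-matching signals */
}

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const uint32_t timing_signal = 0xA5A5A5A5u;   /* illustrative value */
    struct rtt_responder resp;
    uint32_t echoed;

    /* Step 1: originator programs the responder's echo register. */
    resp.echo_register = timing_signal;

    /* Step 2: originator sends the timing signal and starts its timer. */
    double t_start = now_seconds();
    int got_echo = responder_receive(&resp, timing_signal, &echoed);

    /* Step 3: on receiving the echoed signal, stop the timer and compute
     * the link round-trip delay. */
    if (got_echo) {
        double rtt = now_seconds() - t_start;
        printf("link round-trip delay: %.9f s\n", rtt);
    }
    return 0;
}
```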
Abstract:
An arbitration scheme for a computer system in which a digital signal processor resides on the computer system's memory bus without requiring a block of dedicated static random access memory. An arbitration cycle is divided into 10 slices, of which 5 slices in each arbitration loop are provided to the digital signal processor. Two slices each are provided to the system's I/O interface and to the peripheral bus controller. A final slice is provided to the system's CPU. In a default state, when no resource is requesting the system memory bus, the memory bus is parked on the CPU. The arbitration scheme provides sufficient bandwidth for real-time signal processing by the digital signal processor operating from the system's dynamic random access memory, while also providing sufficient bandwidth for a local area network interface through the system's I/O interface.
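A hedged sketch of the slice allocation follows. The slot ordering and the requester names are assumptions for the example; the abstract fixes only the slice counts (5 for the DSP, 2 each for the I/O interface and peripheral bus controller, 1 for the CPU) and the default of parking the bus on the CPU.

```c
/* Illustrative slot-table model of the 10-slice arbitration loop. */
#include <stdio.h>

enum requester { DSP, IO, PBC, CPU, NUM_REQ };

/* One arbitration loop of 10 slices: the DSP owns 5 slices, the I/O
 * interface and the peripheral bus controller own 2 each, the CPU owns 1.
 * The interleaving shown here is an assumption. */
static const enum requester slot_table[10] = {
    DSP, IO, DSP, PBC, DSP, IO, DSP, PBC, DSP, CPU
};

/* Grant the bus for one slice.  If the slot's owner is not requesting,
 * the bus is parked on the CPU by default. */
static enum requester arbitrate(int slice, const int requesting[NUM_REQ])
{
    enum requester owner = slot_table[slice % 10];
    return requesting[owner] ? owner : CPU;
}

int main(void)
{
    static const char *names[NUM_REQ] = { "DSP", "IO", "PBC", "CPU" };
    int requesting[NUM_REQ] = { 1, 1, 0, 0 };   /* DSP and I/O active */

    for (int slice = 0; slice < 10; slice++)
        printf("slice %d -> %s\n", slice, names[arbitrate(slice, requesting)]);
    return 0;
}
```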
Abstract:
A data bus arbiter for supporting pipelined transactions employs a circular FIFO for storing bus requests. The arbiter includes two pointers which reference the entries of the FIFO. A first pointer is incremented upon detection of the end of a bus cycle. A second pointer is incremented when a new bus cycle is started.
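A minimal C sketch of such a circular FIFO is shown below, assuming a fixed depth and integer master identifiers; the names and the depth are illustrative.

```c
/* Circular FIFO ordering pipelined data-bus tenures. */
#include <stdio.h>

#define FIFO_DEPTH 8u   /* illustrative depth */

struct bus_fifo {
    int entries[FIFO_DEPTH];
    unsigned start_ptr;   /* the "second" pointer: incremented when a new
                             bus cycle is started                         */
    unsigned end_ptr;     /* the "first" pointer: incremented upon
                             detection of the end of a bus cycle          */
};

/* A new bus cycle starts: record which master will own the data bus. */
static void cycle_started(struct bus_fifo *f, int master_id)
{
    f->entries[f->start_ptr % FIFO_DEPTH] = master_id;
    f->start_ptr++;
}

/* End of a bus cycle detected: retire the oldest outstanding entry and
 * return the master whose data tenure just completed. */
static int cycle_ended(struct bus_fifo *f)
{
    return f->entries[f->end_ptr++ % FIFO_DEPTH];
}

int main(void)
{
    struct bus_fifo f = { .start_ptr = 0, .end_ptr = 0 };

    cycle_started(&f, 1);     /* master 1 begins a bus cycle            */
    cycle_started(&f, 2);     /* master 2 pipelines a second request    */
    printf("retired master %d\n", cycle_ended(&f));   /* prints 1       */
    printf("retired master %d\n", cycle_ended(&f));   /* prints 2       */
    return 0;
}
```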
Abstract:
Circuit arrangements and methods are disclosed for upgrading an 040-based personal computer system using an optional, peripheral add-in card. In one embodiment, the present invention comprises a PowerPC-based microprocessor, such as the MPC601, having one megabyte of on-board direct mapped level 2 external cache memory arranged as tag and data blocks. The PowerPC-based board is inserted into a processor-direct data path sharing the data and address bus with the 040 microprocessor. System random access memory (RAM), I/O, and other functional blocks are present on the main board comprising the 040-based computer. The MPC601 is coupled via address and data buses to the tag cache, a bus translation unit (BTU), a read only memory (ROM) storing the operating system code for the PowerPC microprocessor, the data cache, a dual frequency clock buffer, and other interface components such as a processor-direct data path including address and data latches. When the computer is turned on, the BTU coupled to the data bus sequentially clears all valid bits in the tag cache, whereafter the cache and memory map are enabled. The 040 processor on the main board is disabled after power-up by using the 040 JTAG test port after inactivating the power-on fast reset. By shifting in appropriate RESET, TCK, and TMS patterns, the 040 will be placed in a nonfunctional, high impedance state. However, DRAM present on the motherboard may be accessed by the 601 after a cache miss. DRAM is accessed via a 601-040 transaction translation operation within the BTU, wherein coded tables map the MPC601 transaction into the appropriate 040 transaction.
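The sketch below illustrates only the power-on tag invalidation step, assuming the tag store is visible as an array of entries with a valid bit; the entry layout and the 32-byte line size are assumptions, not taken from the disclosure (the abstract fixes only the 1 MB direct-mapped geometry).

```c
/* Hedged sketch of the BTU's power-on tag invalidation pass. */
#include <stdint.h>
#include <stddef.h>

#define CACHE_SIZE   (1u << 20)   /* 1 MB direct-mapped L2 (per abstract) */
#define LINE_SIZE    32u          /* assumed line size                    */
#define NUM_TAGS     (CACHE_SIZE / LINE_SIZE)

struct tag_entry {
    uint32_t tag;
    uint8_t  valid;               /* 1 = line holds valid data            */
};

/* At power-on the BTU walks the tag store and clears every valid bit;
 * only afterwards are the cache and the memory map enabled. */
static void btu_invalidate_tags(struct tag_entry *tags, size_t n)
{
    for (size_t i = 0; i < n; i++)
        tags[i].valid = 0;
}

static struct tag_entry tag_store[NUM_TAGS];

int main(void)
{
    btu_invalidate_tags(tag_store, NUM_TAGS);
    /* ...cache and memory map would be enabled after this point... */
    return 0;
}
```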
Abstract:
Systems and methods are described which provide a minimized address tenure to create more efficient memory transactions in cases where the address is not needed for longer than the initial clock cycle in which it is used. The exceptions, for example those wherein the address is needed later during the transaction to perform a cache operation, are handled by reasserting the address using the cache controller. In this way, memory transactions are made more efficient without the external latches conventionally used to preserve the deasserted address.
Abstract:
According to the present invention, each successive refresh to the multiple banks of a DRAM array is staggered by one clock period. Thus, the time required to refresh one row in each DRAM of each bank at 10 MHz, for example, is equal to 0.7 µsec., or 4.4% of the total allowable maximum time between refresh cycles. This staggered refresh technique avoids large power supply current spikes while minimizing the effect on memory access bandwidth.
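The figures can be reproduced with a small worked example. The bank count (7) and the 16 µsec maximum refresh interval used below are assumptions chosen so that the example matches the abstract's numbers; the abstract itself fixes only the 10 MHz clock, the one-clock stagger, the 0.7 µsec total, and the 4.4% figure.

```c
/* Worked-number sketch of the staggered-refresh timing. */
#include <stdio.h>

int main(void)
{
    const double clock_hz           = 10e6;           /* 10 MHz memory clock   */
    const double clock_period_s     = 1.0 / clock_hz; /* 100 ns                */
    const int    num_banks          = 7;              /* assumed bank count    */
    const double refresh_interval_s = 16e-6;          /* assumed max interval  */

    /* Each bank's refresh is offset one clock period from the previous one,
     * so the whole staggered burst occupies num_banks clock periods. */
    double burst_s  = num_banks * clock_period_s;     /* 0.7 us                */
    double fraction = burst_s / refresh_interval_s;   /* ~0.044                */

    for (int b = 0; b < num_banks; b++)
        printf("bank %d refresh starts at %.0f ns\n", b, b * clock_period_s * 1e9);

    printf("staggered refresh burst: %.2f us (%.1f%% of the refresh interval)\n",
           burst_s * 1e6, fraction * 100.0);
    return 0;
}
```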
Abstract:
Cache memory is managed to update the data stored in the cache regardless of whether the address being operated upon is designated as cache inhibited. In this way, the contents of the cache are coherent with main memory so that when the processor redesignates a noncacheable range of addresses to be cacheable, the cache does not need to be flushed. Read operations follow cache inhibit faithfully.
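A toy C sketch of this policy follows, assuming a single-line cache and a flat array for main memory; all names are illustrative. Writes update the cached copy even when the address is marked cache-inhibited, while reads honor the cache-inhibit designation.

```c
/* Toy model: writes keep the cache coherent regardless of cache inhibit;
 * reads follow cache inhibit faithfully. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define MEM_WORDS 256

static uint32_t main_memory[MEM_WORDS];

struct cache_line {
    bool     valid;
    uint32_t addr;
    uint32_t data;
};
static struct cache_line cache;   /* toy one-line cache */

static void bus_write(uint32_t addr, uint32_t data, bool cache_inhibited)
{
    main_memory[addr % MEM_WORDS] = data;
    /* Update the cached copy regardless of the cache-inhibit designation,
     * so the cache never goes stale and never needs to be flushed when the
     * range is later redesignated cacheable. */
    if (cache.valid && cache.addr == addr)
        cache.data = data;
    (void)cache_inhibited;
}

static uint32_t bus_read(uint32_t addr, bool cache_inhibited)
{
    /* Reads follow cache inhibit faithfully: inhibited reads bypass the
     * cache and come from main memory. */
    if (!cache_inhibited && cache.valid && cache.addr == addr)
        return cache.data;
    uint32_t data = main_memory[addr % MEM_WORDS];
    if (!cache_inhibited)   /* allocate only on a cacheable read */
        cache = (struct cache_line){ .valid = true, .addr = addr, .data = data };
    return data;
}

int main(void)
{
    uint32_t before = bus_read(0x10, false);   /* cacheable read, allocates   */
    bus_write(0x10, 0x1234, true);             /* inhibited write still keeps
                                                  the cached copy coherent    */
    printf("%u -> %u\n", before, bus_read(0x10, false));
    return 0;
}
```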
Abstract:
An arbiter employs both an address bus arbiter and a data bus arbiter for supporting pipelined, split bus transactions. The address arbiter may be implemented using a state machine. First through third states of the state machine grant the address bus to respective first through third bus masters, each having a different priority associated therewith. Idle states are interposed between the grant states. The data bus arbiter may be implemented using a circular FIFO having a plurality of pointers to keep track of present and future bus masters using the data bus.
Abstract:
An address bus arbiter is implemented using a state machine. First through third states of the state machine grant the address bus to respective first through third bus masters, each having a different priority associated therewith. Idle states are interposed between the grant states. The idle state may be reached from one of the bus grant states when a cache controller initiates a tag invalidation cycle or a cache allocation cycle. The idle state may also be reached when a first bus master commences a transaction cycle with a second bus master.
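A skeleton of such a state machine is sketched below, assuming three fixed-priority requesters and a single idle state re-entered between grants; the state names and the priority ordering are illustrative, not taken from the disclosure.

```c
/* Skeleton of the grant/idle address-bus arbiter state machine. */
#include <stdio.h>
#include <stdbool.h>

enum arb_state { IDLE, GRANT_M1, GRANT_M2, GRANT_M3 };

struct arb_inputs {
    bool req[3];            /* bus requests from masters 1..3             */
    bool cache_cycle;       /* tag invalidation or cache allocation cycle */
    bool master_to_master;  /* one master is addressing another master    */
};

static enum arb_state next_state(enum arb_state s, const struct arb_inputs *in)
{
    switch (s) {
    case IDLE:
        /* Highest-priority requester wins the address bus. */
        if (in->req[0]) return GRANT_M1;
        if (in->req[1]) return GRANT_M2;
        if (in->req[2]) return GRANT_M3;
        return IDLE;
    case GRANT_M1: case GRANT_M2: case GRANT_M3: {
        int m = s - GRANT_M1;
        /* Leave the grant for the interposed idle state when the master
         * drops its request, when the cache controller starts a tag
         * invalidation or cache allocation cycle, or when one master
         * begins a transaction cycle with another master. */
        if (!in->req[m] || in->cache_cycle || in->master_to_master)
            return IDLE;
        return s;
    }
    }
    return IDLE;
}

int main(void)
{
    struct arb_inputs in = { .req = { false, true, false } };
    enum arb_state s = next_state(IDLE, &in);
    printf("next state: %d\n", s);   /* 2 == GRANT_M2 */
    return 0;
}
```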