Abstract:
A hardware design technique allows checking of the design system language (DSL) specification of an element and the schematics of large macros with embedded arrays and registers. The hardware organization reduces CPU time for logical verification by an exponential order of magnitude without blowing up the verification process or logic simulation. The hardware is organized at the horizontal word level rather than the bit level. A memory array cell comprises a pair of cross-coupled inverters forming a first latch for storing data. The first latch has an output connected to a read bit line. True and complement write word and bit lines are input to the first latch. A first set of pass gates connects between the true and complement write word and bit line inputs and the input of said first latch. The first set of pass gates is responsive to a first clock via a second pass gate. A pair of cross-coupled inverters forms a second latch of a Level Sensitive Scan Design (LSSD). The second latch has an output connected to an LSSD output for design verification. A second pass gate connects between the output of the first set of pass gates and the input of said first latch. The second pass gate is responsive to said first clock. A third pass gate connects between the output of said first latch and the input of said second latch. The third pass gate is responsive to a second clock. The first and second clocks are responsive to a black-boxing process for incremental verification.
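The clocked data path of the cell can be modeled behaviorally. The following Python sketch is a hypothetical illustration, not the patented circuit: the first latch, the LSSD scan latch, and the clock-gated pass gates are treated as simple state updates, so the write path drives the first latch only while the first clock is asserted, and the scan latch samples the first latch only while the second clock is asserted. Class and signal names are assumptions for the sketch.

```python
class MemoryArrayCell:
    """Behavioral sketch of the described cell (hypothetical model).

    - first_latch: data storage latch, drives the read bit line.
    - second_latch: LSSD scan latch, drives the LSSD output for verification.
    - Pass gates are modeled as conditional updates gated by clk1 / clk2.
    """

    def __init__(self):
        self.first_latch = 0   # cross-coupled inverter pair storing the data bit
        self.second_latch = 0  # LSSD scan latch

    def tick(self, write_true, write_comp, clk1, clk2):
        # First set of pass gates + second pass gate: the true/complement
        # write word/bit line value reaches the first latch only when clk1 is high.
        if clk1:
            assert write_true != write_comp, "true and complement lines must differ"
            self.first_latch = write_true

        # Third pass gate: the LSSD latch samples the first latch when clk2 is high.
        if clk2:
            self.second_latch = self.first_latch

    @property
    def read_bit_line(self):
        return self.first_latch

    @property
    def lssd_output(self):
        return self.second_latch


# Example: write a 1 under the first clock, then scan it out under the second
# clock (a black-boxing flow would exercise clk1/clk2 separately per increment).
cell = MemoryArrayCell()
cell.tick(write_true=1, write_comp=0, clk1=1, clk2=0)
cell.tick(write_true=0, write_comp=1, clk1=0, clk2=1)  # clk1 low: data held
print(cell.read_bit_line, cell.lssd_output)  # -> 1 1
```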
Abstract:
In a microprocessor having a load/store unit and prefetch hardware, the prefetch hardware includes a prefetch queue containing entries indicative of allocated data streams. A prefetch engine receives an address associated with a store instruction executed by the load/store unit. The prefetch engine determines whether to allocate an entry in the prefetch queue corresponding to the store instruction by comparing entries in the queue to a window of addresses encompassing multiple cache blocks, where the window of addresses is derived from the received address. Specifically, the prefetch engine compares entries in the prefetch queue to a window of 2M contiguous cache blocks. The prefetch engine suppresses allocation of a new entry when any entry in the prefetch queue falls within the address window, and further suppresses allocation when the data address of the store instruction equals an address in a border area of the address window.
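The allocation filter reduces to an address comparison against the window. The Python sketch below is a hypothetical illustration only: the cache block size, the value of M, the aligned-window derivation, and the treatment of the border area as the outermost block on each side are all assumptions, not details taken from the abstract.

```python
CACHE_BLOCK_SIZE = 128  # bytes per cache block (assumed value)
M = 4                   # half of the window width in cache blocks (assumed value)


def block_of(addr):
    """Cache block index containing a byte address."""
    return addr // CACHE_BLOCK_SIZE


def window_of(addr):
    """Aligned window of 2M contiguous cache blocks containing addr (assumed alignment)."""
    lo = (block_of(addr) // (2 * M)) * (2 * M)
    return lo, lo + 2 * M - 1


def should_allocate(store_addr, prefetch_queue):
    """Decide whether a store should allocate a new prefetch-queue entry.

    prefetch_queue holds the data addresses of already-allocated streams.
    """
    window_lo, window_hi = window_of(store_addr)

    # Suppress if any existing entry already falls inside the window: the store
    # is likely part of a stream that is already being prefetched.
    for entry_addr in prefetch_queue:
        if window_lo <= block_of(entry_addr) <= window_hi:
            return False

    # Suppress if the store address sits in the border area of the window
    # (assumed here to be the outermost cache block on each side).
    if block_of(store_addr) in (window_lo, window_hi):
        return False

    return True


# Example: a store near an already-allocated stream does not allocate again.
queue = [0x1000]                       # one allocated stream
print(should_allocate(0x1080, queue))  # same window as existing entry -> False
print(should_allocate(0x9100, queue))  # interior of a far-away window -> True
```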
Abstract:
An apparatus for integer exception register (XER) renaming, and methods of using the same, are provided. In a central processing unit (CPU) having a pipelined architecture, integer instructions that use or update the XER may be executed out of order using the XER renaming mechanism. Any instruction that updates the XER has an associated instruction identifier (IID) stored in a register. Subsequent instructions that use data in the XER use the stored IID to determine when the XER data has been updated by the execution of the instruction corresponding to that IID. As each instruction that updates XER data is executed, the data is stored in an XER rename buffer. Instructions using XER data then obtain the updated, valid XER data from the rename buffer. In this way, these instructions can obtain valid XER data prior to completion of the preceding instructions. The XER data is retrieved from the XER rename buffer by indexing into the buffer with an index derived from the stored IID. Because the updated XER data is available in the rename buffer before the updating instruction completes, out-of-order execution of instructions that use or update XER data is realized.
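A rename buffer of this kind can be sketched as a small table indexed by instruction identifier. The Python model below is a hypothetical illustration, assuming a buffer size, an IID-to-index mapping, and method names not specified in the abstract: an updating instruction writes its XER result into a rename-buffer slot derived from its IID, a consumer holding that IID reads the slot once it is marked valid, and the architected XER is written back only at completion.

```python
from dataclasses import dataclass

RENAME_ENTRIES = 8  # number of XER rename-buffer slots (assumed value)


@dataclass
class XerEntry:
    valid: bool = False
    data: int = 0       # e.g. packed SO/OV/CA bits plus byte count


class XerRenameBuffer:
    """Hypothetical model of XER renaming with IID-indexed rename slots."""

    def __init__(self):
        self.entries = [XerEntry() for _ in range(RENAME_ENTRIES)]
        self.architected_xer = 0  # committed (architected) XER value

    @staticmethod
    def index(iid):
        # Index derived from the stored IID (here: a simple modulo mapping).
        return iid % RENAME_ENTRIES

    def write(self, iid, xer_value):
        """An updating instruction executes: its result goes to the rename buffer."""
        slot = self.entries[self.index(iid)]
        slot.data, slot.valid = xer_value, True

    def read(self, producer_iid):
        """A consumer with the producer's stored IID reads the renamed XER.

        Returns None if the producer has not executed yet (consumer must wait).
        """
        slot = self.entries[self.index(producer_iid)]
        return slot.data if slot.valid else None

    def complete(self, iid):
        """At completion, commit the renamed value to the architected XER."""
        slot = self.entries[self.index(iid)]
        self.architected_xer = slot.data
        slot.valid = False  # slot can be reused


# Example: a consumer obtains valid XER data before the producer completes.
buf = XerRenameBuffer()
buf.write(iid=5, xer_value=0b001)   # producer executes out of order
print(buf.read(producer_iid=5))     # -> 1, available prior to completion
buf.complete(iid=5)                 # architected XER updated at completion
print(buf.architected_xer)          # -> 1
```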