Abstract:
The present invention provides a delay-locked loop (DLL). The DLL comprises a phase-frequency detector (PFD) for receiving a reference signal, a charge pump coupled to the PFD, and a loop filter coupled to the charge pump and the PFD. The DLL additionally includes delay line means coupled to the charge pump and the loop filter; the delay line means provides a feedback signal to the PFD. The DLL further includes monitor means coupled to the PFD, the charge pump and the loop filter. The monitor means detects when a voltage across the loop filter is at a predetermined level; when the voltage reaches the predetermined level, the monitor means causes the PFD to enter a pump-down mode until the feedback signal is aligned with the reference signal. An advantage of the present invention is that DLL loop tracking failures caused by a stuck condition are reliably avoided. Specifically, the DLL in accordance with the present invention can reliably recover from the stuck condition in which the adjustable delay is at its lower limit while the PFD continues to assert the UP control signal. Additionally, the DLL is cost effective and is easily implemented utilizing existing processes.
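As a rough illustration of the recovery behavior described above, the following C sketch models the monitor's stuck-condition handling. The names (dll_state, V_STUCK, V_STEP, dll_step), the assumption that the UP signal charges the loop filter, and the simple voltage-stepping model are illustrative only; they are not the patented circuit.

```c
#include <stdbool.h>

#define V_STUCK 1.8   /* assumed predetermined loop-filter voltage level */
#define V_STEP  0.05  /* assumed charge-pump step per reference cycle */

typedef struct {
    double v_ctrl;      /* voltage across the loop filter */
    bool   pump_down;   /* monitor-forced pump-down mode */
} dll_state;

/* One reference-clock iteration of the control loop. `up_request` is the PFD's
 * UP indication; `aligned` is true once the feedback edge lines up with the
 * reference edge. */
void dll_step(dll_state *s, bool up_request, bool aligned)
{
    if (!s->pump_down && s->v_ctrl >= V_STUCK && up_request) {
        /* Delay is at its lower limit but the PFD still asserts UP:
         * the monitor forces pump-down mode to escape the stuck condition. */
        s->pump_down = true;
    }

    if (s->pump_down) {
        s->v_ctrl -= V_STEP;          /* discharge the loop filter */
        if (aligned)
            s->pump_down = false;     /* resume normal PFD operation */
    } else {
        s->v_ctrl += up_request ? V_STEP : -V_STEP;
    }
}
```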
Abstract:
A random access memory circuit comprises a plurality of memory cells and at least one decoder coupled to the memory cells, the decoder being configurable for receiving an input address and for accessing one or more of the memory cells in response thereto. The random access memory circuit further comprises a plurality of sense amplifiers operatively coupled to the memory cells, the sense amplifiers being configurable for determining a logical state of one or more of the memory cells. A controller coupled to at least a portion of the sense amplifiers is configurable for selectively operating in at least one of a first mode and a second mode. In the first mode of operation, the controller enables one of the sense amplifiers corresponding to the input address and disables the sense amplifiers not corresponding to the input address. In the second mode of operation, the controller enables substantially all of the sense amplifiers. The memory circuit advantageously provides an adaptable latency by controlling the mode of operation of the circuit.
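To make the two modes concrete, here is a minimal C sketch of the enable logic; NUM_SENSE_AMPS, sense_mode, and drive_sense_amps are hypothetical names, and the actual controller is hardware rather than software.

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_SENSE_AMPS 8   /* assumed number of sense amplifiers */

typedef enum { MODE_SELECTIVE, MODE_ALL } sense_mode;

/* Drive the sense-amplifier enable lines according to the selected mode.
 * The selective mode saves power; enabling all amplifiers trades power for
 * lower latency on accesses whose column is not yet known. */
void drive_sense_amps(sense_mode mode, size_t addr_col,
                      bool enable[NUM_SENSE_AMPS])
{
    for (size_t i = 0; i < NUM_SENSE_AMPS; i++) {
        if (mode == MODE_SELECTIVE)
            enable[i] = (i == addr_col);   /* first mode: only the addressed amp */
        else
            enable[i] = true;              /* second mode: substantially all amps */
    }
}
```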
Abstract:
The disclosure relates to accessing memory content with a high temporal locality of reference. An embodiment of the disclosure stores the content in a data buffer, determines that the content of the data buffer has a high temporal locality of reference, and accesses the data buffer for each operation targeting the content instead of a cache storing the content.
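A minimal C sketch of the idea, assuming a hypothetical hit-count threshold (HOT_THRESHOLD) as the detector of high temporal locality; the structure and function names are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

#define HOT_THRESHOLD 4   /* assumed access count marking high temporal locality */

typedef struct {
    uint64_t tag;        /* identifies the content held in the buffer */
    unsigned hits;       /* recent accesses to this content */
    bool     hot;        /* content flagged as high temporal locality */
    uint8_t  data[64];
} data_buffer;

/* Once the buffered content is flagged hot, every operation targeting it is
 * served from the data buffer rather than from the cache that also holds it. */
uint8_t *access_content(data_buffer *buf, uint64_t tag, uint8_t *cache_copy)
{
    if (buf->tag == tag) {
        if (++buf->hits >= HOT_THRESHOLD)
            buf->hot = true;          /* high temporal locality detected */
        if (buf->hot)
            return buf->data;         /* bypass the cache */
    }
    return cache_copy;                /* otherwise fall back to the cache copy */
}
```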
Abstract:
Systems and methods are disclosed for maintaining an instruction cache including extended cache lines and page attributes for main cache line portions of the extended cache lines and, at least for one or more predefined potential page-crossing instruction locations, additional page attributes for extra data portions of the corresponding extended cache lines. In addition, systems and methods are disclosed for processing page-crossing instructions fetched from an instruction cache having extended cache lines.
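The data layout can be sketched in C as follows; the line and extra-portion sizes, the page_attr fields, and the fetch_permitted check are assumptions used only to illustrate how separate attributes for the extra data portion come into play for a page-crossing instruction.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES  16   /* assumed main cache line size */
#define EXTRA_BYTES  4   /* assumed extra data portion spilling into the next line */

/* Hypothetical page attributes recorded when the line is filled. */
typedef struct {
    bool executable;
    bool cacheable;
} page_attr;

/* One extended cache line: the main portion plus a few extra bytes so that an
 * instruction starting near the end of the line can be fetched whole. The
 * extra portion may come from a different page, so it carries its own
 * attributes for the predefined potential page-crossing locations. */
typedef struct {
    uint8_t   main_bytes[LINE_BYTES];
    uint8_t   extra_bytes[EXTRA_BYTES];
    page_attr main_attr;        /* page attributes of the main portion */
    page_attr extra_attr;       /* additional attributes for the extra portion */
    bool      extra_attr_valid;
} extended_line;

/* A page-crossing instruction must be permitted by both sets of attributes. */
bool fetch_permitted(const extended_line *l, bool crosses_page)
{
    if (!crosses_page)
        return l->main_attr.executable;
    return l->main_attr.executable &&
           l->extra_attr_valid && l->extra_attr.executable;
}
```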
Abstract:
The disclosure is directed to a weakly-ordered processing system and method for enforcing strongly-ordered memory access requests in a weakly-ordered processing system. The processing system includes a plurality of memory devices and a plurality of processors. Each of the processors is configured to generate memory access requests to one or more of the memory devices, with each of the memory access requests having an attribute that can be asserted to indicate a strongly-ordered request. The processing system further includes a bus interconnect configured to interface the processors to the memory devices, the bus interconnect being further configured to enforce ordering constraints on the memory access requests based on the attributes.
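A simplified C sketch of the interconnect-side decision, assuming a per-device count of outstanding requests; the mem_request fields and the may_issue rule are illustrative and far simpler than a real bus interconnect's ordering logic.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_DEVICES 4   /* assumed number of memory devices */

/* Hypothetical memory access request as seen by the bus interconnect. */
typedef struct {
    unsigned cpu_id;
    unsigned target_device;
    uint64_t addr;
    bool     strongly_ordered;   /* attribute asserted by the requesting processor */
} mem_request;

/* Assumed interconnect bookkeeping: outstanding requests per memory device. */
static unsigned outstanding[NUM_DEVICES];

/* Ordering constraint: a request whose strongly-ordered attribute is asserted
 * is held back until earlier requests to all memory devices have drained,
 * while weakly-ordered requests may issue immediately. */
bool may_issue(const mem_request *r)
{
    if (!r->strongly_ordered)
        return true;
    for (unsigned d = 0; d < NUM_DEVICES; d++)
        if (outstanding[d] != 0)
            return false;
    return true;
}
```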
Abstract:
Address translation performance within a processor is improved by identifying an address that causes a boundary crossing between different pages in memory and linking address translation information associated with both memory pages. According to one embodiment, the processor comprises circuitry configured to recognize an access to a memory region crossing a page boundary between first and second memory pages. The circuitry is also configured to link address translation information associated with the first and second memory pages. Thus, responsive to a subsequent access to the same memory region, the address translation information associated with the first and second memory pages is retrievable based on a single address translation.
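The linking idea can be sketched in C as a TLB entry that also carries the translation of the following page; PAGE_SHIFT, the tlb_entry layout, and translate_crossing are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Hypothetical TLB entry extended with a link to the translation of the
 * following page, filled when a page-crossing access is first recognized. */
typedef struct {
    uint64_t vpn;          /* virtual page number of the first page */
    uint64_t ppn;          /* physical page number of the first page */
    uint64_t linked_ppn;   /* translation of the second page, if linked */
    bool     linked;
} tlb_entry;

/* On a later access to the same region, a single lookup of the first page
 * yields physical addresses for both pages of the crossing access. */
bool translate_crossing(const tlb_entry *e, uint64_t vaddr, unsigned len,
                        uint64_t *pa_first, uint64_t *pa_second)
{
    uint64_t offset  = vaddr & (PAGE_SIZE - 1);
    bool     crosses = (offset + len) > PAGE_SIZE;

    if (e->vpn != (vaddr >> PAGE_SHIFT))
        return false;                              /* TLB miss */
    *pa_first = (e->ppn << PAGE_SHIFT) | offset;
    if (crosses) {
        if (!e->linked)
            return false;                          /* second page not linked yet */
        *pa_second = e->linked_ppn << PAGE_SHIFT;  /* start of the second page */
    }
    return true;
}
```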
Abstract:
A processor implements an apparatus and a method for predicting an indirect branch address. A target address generated by an instruction is automatically identified. A predicted next program address is prepared based on the target address before an indirect branch instruction utilizing the target address is speculatively executed. The apparatus suitably employs a register for holding an instruction memory address that is specified by a program as a predicted indirect address of an indirect branch instruction. The apparatus also employs a next program address selector that selects the predicted indirect address from the register as the next program address for use in speculatively executing the indirect branch instruction.
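A behavioral C sketch of the selector, assuming a single architected register and a fixed 4-byte instruction width; set_branch_target and select_next_pc are hypothetical names.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical architected register holding the program-specified predicted
 * target of an upcoming indirect branch. */
static uint32_t predicted_indirect_target;

/* The instruction that generates the target address also updates the register,
 * before the indirect branch that uses it is speculatively executed. */
void set_branch_target(uint32_t target)
{
    predicted_indirect_target = target;
}

/* Next-program-address selector: for an indirect branch, speculate with the
 * register contents instead of stalling until the branch resolves. */
uint32_t select_next_pc(uint32_t pc, bool is_indirect_branch)
{
    if (is_indirect_branch)
        return predicted_indirect_target;
    return pc + 4;   /* assumed fixed 4-byte instruction width */
}
```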
Abstract:
Techniques and methods are used to control the allocation, to a higher level cache, of cache lines displaced from a lower level cache. Allocation is prevented for displaced cache lines that are determined to be redundant in the next level cache, whereby castouts are controlled. To such ends, a line is selected to be displaced from a lower level cache. Information associated with the selected line is identified which indicates that the selected line is already present in a higher level cache. Based on the identified information, allocation of the selected line in the higher level cache is prevented.
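The decision can be sketched in C as follows, assuming the presence information is a single per-line bit recorded at fill time; the l1_line layout and the clean-line redundancy test are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical lower-level (L1) line tracking whether the higher-level (L2)
 * cache also holds a copy; the bit is recorded when the line is filled. */
typedef struct {
    uint64_t tag;
    bool     dirty;
    bool     present_in_l2;
} l1_line;

/* Placeholder for writing a victim into the higher-level cache. */
static void l2_allocate(const l1_line *victim) { (void)victim; }

/* When a line is selected for displacement from the lower level cache, the
 * castout is suppressed if the identified information shows the line is
 * already present (and unmodified) in the higher level cache. */
void displace_from_l1(const l1_line *victim)
{
    bool redundant = victim->present_in_l2 && !victim->dirty;
    if (!redundant)
        l2_allocate(victim);   /* otherwise the allocation is prevented */
}
```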
Abstract:
A processor provides two-level interrupt servicing. In one embodiment, the processor comprises a storage device and an interrupt handler. The storage device is configured to store an interrupt identifier corresponding to an interrupt request. The interrupt handler is configured to recognize the interrupt request, initiate a common interrupt service routine responsive to recognizing the interrupt request and subsequently initiate an interrupt service routine corresponding to the stored interrupt identifier.
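A minimal C sketch of the two-level flow, assuming a latched interrupt identifier and a table of second-level routines; names such as isr_table and handle_interrupt are hypothetical.

```c
#include <stdint.h>

#define NUM_IRQS 8   /* assumed number of interrupt sources */

/* Hypothetical storage device: latches the identifier of the pending request. */
static volatile uint32_t latched_irq_id;

/* Table of second-level service routines, indexed by interrupt identifier. */
static void (*isr_table[NUM_IRQS])(void);

/* First-level routine: work shared by all interrupts
 * (e.g. saving state, acknowledging the interrupt controller). */
static void common_isr(void)
{
    /* common prologue for every interrupt */
}

/* Two-level servicing: on recognizing a request, run the common routine first,
 * then the routine selected by the stored interrupt identifier. */
void handle_interrupt(uint32_t irq_id)
{
    latched_irq_id = irq_id;         /* store the interrupt identifier */
    common_isr();                    /* first level: common service routine */
    if (latched_irq_id < NUM_IRQS && isr_table[latched_irq_id])
        isr_table[latched_irq_id](); /* second level: specific routine */
}
```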
Abstract:
In a pipelined processor, a pre-decoder in advance of an instruction cache calculates the branch target address (BTA) of PC-relative and absolute address branch instructions. The pre-decoder compares the BTA with the branch instruction address (BIA) to determine whether the target and instruction are in the same memory page. A branch target same page (BTSP) bit indicating this condition is written to the cache and associated with the instruction. When the branch is executed and evaluated as taken, a TLB access to check permission attributes for the BTA is suppressed if the BTA is in the same page as the BIA, as indicated by the BTSP bit. This reduces power consumption, as the TLB access is suppressed and the BTA/BIA comparison is only performed once, when the branch instruction is first fetched. Additionally, the pre-decoder removes the BTA/BIA comparison from the BTA generation and selection critical path.
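A behavioral C sketch of the pre-decode comparison and the resulting TLB suppression, assuming 4 KB pages and a PC-relative branch; compute_btsp and tlb_lookup_needed are illustrative names.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12   /* assumed 4 KB pages */

/* Pre-decode: compute the branch target address of a PC-relative branch and
 * record whether it falls in the same page as the branch instruction itself.
 * The returned value is the BTSP bit stored alongside the cached instruction. */
bool compute_btsp(uint32_t bia, int32_t pc_relative_offset, uint32_t *bta)
{
    *bta = bia + (uint32_t)pc_relative_offset;
    return (*bta >> PAGE_SHIFT) == (bia >> PAGE_SHIFT);   /* BTSP bit */
}

/* Execute stage: if the taken branch stays in the same page, its permission
 * attributes are already known, so the TLB access can be suppressed. */
bool tlb_lookup_needed(bool branch_taken, bool btsp_bit)
{
    return branch_taken && !btsp_bit;
}
```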