Abstract:
A frequency shift detection circuit for detecting a frequency shift between a first signal and a second signal includes two or more delay circuits coupled to one another in series and two or more comparison logic circuits. The first delay circuit in the series receives one of the first and second signals and produces a delayed replica. Each of the other delay circuits receives the delayed replica produced by the previous delay circuit in the series and produces a further delayed replica. Thus, the signal produced by each delay circuit is delayed from the original signal by a different amount. Each comparison logic circuit receives one of the delayed replicas and receives the other one of the first and second signals, i.e., the one that is not received by the delay circuits. In response, the comparison logic circuit produces a frequency shift detection signal when it detects a phase difference between that other one of said first and second signals and the delayed replica. By selecting or tapping one of the outputs of the comparison logic circuits, a user can select the detection granularity.
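To make the delay-chain idea concrete, the following is a minimal behavioral sketch in C, not the patented circuit itself: two clocks are modeled as sampled square waves, a chain of unit delays produces progressively delayed replicas of one signal, and each comparison stage flags any phase mismatch against the other signal. The stage count, unit delay, and signal periods are illustrative assumptions.

```c
/* Behavioral sketch only: delay chain plus per-stage phase comparison.
 * STAGES, UNIT_DELAY and the signal periods are assumed values. */
#include <stdio.h>
#include <stdbool.h>

#define SAMPLES    64
#define STAGES     4      /* number of series delay circuits (assumed)  */
#define UNIT_DELAY 1      /* delay per stage, in samples (assumed)      */

static int square(int t, int period) { return (t % period) < period / 2; }

int main(void) {
    bool shift_detected[STAGES] = { false };

    for (int t = STAGES * UNIT_DELAY; t < SAMPLES; t++) {
        int b = square(t, 8);                               /* signal B, period 8  */
        for (int s = 0; s < STAGES; s++) {
            /* replica of signal A after (s+1) unit delays */
            int a_delayed = square(t - (s + 1) * UNIT_DELAY, 10);  /* A, period 10 */
            if (a_delayed != b)                             /* phase mismatch seen */
                shift_detected[s] = true;
        }
    }

    /* "Tapping" a stage selects the detection granularity. */
    for (int s = 0; s < STAGES; s++)
        printf("tap %d: %s\n", s, shift_detected[s] ? "shift detected" : "no shift");
    return 0;
}
```

Tapping a later stage corresponds to a larger accumulated delay, and hence a different detection granularity.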
Abstract:
A method and a phase selector circuit are provided for removing an external frequency divider and clock formation circuit from the feedback path of a phase locked loop and for synchronizing the external frequency divider with the reference clock of the phase locked loop. A reference clock signal is applied to the phase locked loop. An output of the phase locked loop is coupled through a predefined delay and provides a delayed feedback clock signal input to the phase locked loop. The external frequency divider is located at the output of the phase locked loop external to the predefined delay and outside the feedback clock signal path of the phase locked loop. A phase selector circuit identifies a correct phase of the reference clock signal and starts the external frequency divider. The phase selector circuit includes an edge detector, a synchronization divider, and a reset machine.
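A rough behavioral sketch, under assumed names and division ratios, of how such a phase selector could behave: an edge detector watches the reference clock, a synchronization divider counts edges, and a small reset machine releases the external divider once the count completes.

```c
/* Behavioral sketch only; SYNC_DIV and the stimulus waveform are assumptions. */
#include <stdio.h>
#include <stdbool.h>

#define SYNC_DIV 4   /* synchronization divider ratio (assumed) */

typedef struct {
    bool prev_ref;        /* last sampled reference clock level   */
    int  sync_count;      /* synchronization divider counter      */
    bool divider_running; /* external divider released from reset */
} phase_selector_t;

/* Called once per PLL output clock cycle with the current reference level. */
static void phase_selector_step(phase_selector_t *ps, bool ref_clk) {
    bool rising_edge = ref_clk && !ps->prev_ref;     /* edge detector        */
    ps->prev_ref = ref_clk;

    if (!ps->divider_running && rising_edge) {
        if (++ps->sync_count == SYNC_DIV) {          /* sync divider full    */
            ps->divider_running = true;              /* reset machine: start */
            printf("external divider started in phase with reference\n");
        }
    }
}

int main(void) {
    phase_selector_t ps = { false, 0, false };
    /* Reference clock toggling every two steps, purely for illustration. */
    for (int t = 0; t < 20; t++)
        phase_selector_step(&ps, (t / 2) % 2);
    return 0;
}
```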
Abstract:
A method and apparatus are provided for target addressing and translation in a non-uniform memory environment with user defined target tags. The apparatus for target addressing and translation includes a processor and a first address translation unit coupled to the processor. The first address translation unit translates an effective address (EA) to a real address (RA). The first address translation unit includes a target tag associated with each address translation. A second address translation unit translates a real address (RA) to a target address (TA). The second address translation unit includes a target tag associated with each address translation. A cache includes a cache directory, and a target tag is stored in the cache directory with each cache fill.
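A minimal sketch, with invented structure and field names, illustrating the two-step path: one entry translates an effective address (EA) to a real address (RA), a second entry translates the RA to a target address (TA), each carrying a target tag, and a cache fill copies the tag into the cache directory.

```c
/* Sketch only; entry layouts and example addresses are assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t ea, ra; uint8_t target_tag; } erat_entry_t; /* EA -> RA */
typedef struct { uint64_t ra, ta; uint8_t target_tag; } rtat_entry_t; /* RA -> TA */
typedef struct { uint64_t ra;     uint8_t target_tag; } cache_dir_entry_t;

int main(void) {
    erat_entry_t erat = { 0x1000, 0x8000, 3 };          /* illustrative values */
    rtat_entry_t rtat = { 0x8000, 0x20008000ULL, 3 };
    cache_dir_entry_t dir;

    uint64_t ea = 0x1000;
    uint64_t ra = (ea == erat.ea) ? erat.ra : 0;        /* first translation unit  */
    uint64_t ta = (ra == rtat.ra) ? rtat.ta : 0;        /* second translation unit */

    dir.ra = ra;                                        /* cache fill: directory   */
    dir.target_tag = rtat.target_tag;                   /* stores the target tag   */

    printf("EA 0x%llx -> RA 0x%llx -> TA 0x%llx (tag %u)\n",
           (unsigned long long)ea, (unsigned long long)ra,
           (unsigned long long)ta, dir.target_tag);
    return 0;
}
```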
Abstract:
A method and apparatus are provided for implementing direct memory access (DMA) with dataflow blocking for users processing data communications in a communications system. A DMA starting address register receives an initial DMA starting address and a DMA length register receives an initial DMA length. A DMA state machine receives a control input for starting the DMA. The DMA state machine updates the DMA starting address to provide a current DMA starting address. The DMA state machine loads a DMA ending address. A DMA blocking logic receives the current DMA starting address and the DMA ending address and blocks received memory requests only within a current active DMA region.
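The blocking decision itself reduces to an address-range check against the current active region. A sketch with assumed register and field names:

```c
/* Sketch only; struct layout and example addresses are assumptions. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    uint64_t start;   /* current DMA starting address (updated by state machine) */
    uint64_t end;     /* DMA ending address                                       */
    bool     active;  /* DMA state machine has been started                       */
} dma_region_t;

/* Returns true if the request must be held by the DMA blocking logic. */
static bool dma_blocks_request(const dma_region_t *dma, uint64_t req_addr) {
    return dma->active && req_addr >= dma->start && req_addr <= dma->end;
}

int main(void) {
    dma_region_t dma = { .start = 0x4000, .end = 0x4FFF, .active = true };
    uint64_t requests[] = { 0x1000, 0x4000, 0x4800, 0x5000 };
    for (int i = 0; i < 4; i++)
        printf("req 0x%llx: %s\n", (unsigned long long)requests[i],
               dma_blocks_request(&dma, requests[i]) ? "blocked" : "allowed");
    return 0;
}
```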
Abstract:
A method handles concurrent address translation cache misses and hits under those misses while maintaining command order based upon virtual channel. Commands are stored in a command processing unit that maintains ordering of the commands. A command buffer index (CBI) is assigned to each address being sent from the command processing unit to an address translation unit. When an address translation cache miss occurs, a memory fetch request is sent. The CBI is passed back to the command processing unit with a signal to indicate that the fetch request has completed. The command processing unit uses the CBI to locate the command and address to be reissued to the address translation unit.
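A small sketch, with an assumed buffer size, showing how a command buffer index (CBI) can tie a stored command to a later fetch-completion signal so the command and address can be reissued:

```c
/* Sketch only; buffer depth and the example address are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define NUM_BUFFERS 8

typedef struct { uint64_t address; int valid; } command_slot_t;

static command_slot_t command_buffer[NUM_BUFFERS];   /* indexed by CBI */

/* Command processing unit: store a command, return its CBI. */
static int issue_command(uint64_t address) {
    for (int cbi = 0; cbi < NUM_BUFFERS; cbi++) {
        if (!command_buffer[cbi].valid) {
            command_buffer[cbi] = (command_slot_t){ address, 1 };
            return cbi;
        }
    }
    return -1;  /* no free slot */
}

/* Called when the translation fetch for a missed entry has completed. */
static void fetch_complete(int cbi) {
    printf("reissue addr 0x%llx to translation unit (CBI %d)\n",
           (unsigned long long)command_buffer[cbi].address, cbi);
    command_buffer[cbi].valid = 0;
}

int main(void) {
    int cbi = issue_command(0xCAFE000);  /* address misses in the cache */
    fetch_complete(cbi);                 /* miss handled, reissue       */
    return 0;
}
```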
Abstract:
A method and a cache control circuit for replacing a cache line using an alternate pseudo least-recently-used (PLRU) algorithm with a victim cache coherency state, and a design structure on which the subject cache control circuit resides are provided. When a requirement for replacement in a congruence class is identified, a first PLRU cache line for replacement and an alternate PLRU cache line for replacement in the congruence class are calculated. When the first PLRU cache line for replacement is in the victim cache coherency state, the alternate PLRU cache line is picked for use.
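A sketch with an assumed 4-way tree-PLRU encoding and illustrative coherency states: the first PLRU choice is taken unless that line is in the victim state, in which case an alternate PLRU choice is used instead.

```c
/* Sketch only; the tree-PLRU bit encoding and state names are assumptions. */
#include <stdio.h>

enum state { INVALID, SHARED, MODIFIED, VICTIM };   /* VICTIM: being cast out */

typedef struct { enum state st; } line_t;

/* 4-way tree PLRU: bits b0 (top), b1/b2 (leaves) select the LRU way. */
static int plru_way(int b0, int b1, int b2, int alternate) {
    int top = alternate ? !b0 : b0;         /* alternate choice flips the top bit */
    return top ? (b2 ? 3 : 2) : (b1 ? 1 : 0);
}

static int choose_replacement(line_t class4[4], int b0, int b1, int b2) {
    int first = plru_way(b0, b1, b2, 0);
    if (class4[first].st != VICTIM)
        return first;
    return plru_way(b0, b1, b2, 1);         /* first choice busy: use alternate */
}

int main(void) {
    line_t cls[4] = { {SHARED}, {VICTIM}, {MODIFIED}, {INVALID} };
    /* PLRU bits pointing at way 1, which is in the victim state. */
    int way = choose_replacement(cls, 0, 1, 0);
    printf("replace way %d\n", way);
    return 0;
}
```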
Abstract:
In a first aspect, a first method of issuing a command on a bus of a system is provided. The first method includes the steps of (1) receiving a first functional memory command in the system; (2) receiving a command to force the system to execute functional memory commands in order; (3) receiving a second functional memory command in the system; and (4) employing a dependency matrix to indicate that the second functional memory command requires access to the same address as the first functional memory command, whether or not the second functional memory command actually has an ordering dependency on the first functional memory command. The dependency matrix is adapted to store data indicating whether a functional memory command received by the system has an ordering dependency on one or more functional memory commands previously received by the system. Numerous other aspects are provided.
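A sketch, under assumed widths, of how a dependency matrix might enforce such ordering: each row is a bitmask of older commands a new command must wait for, and the force-order control simply marks the new command as dependent on every outstanding command.

```c
/* Sketch only; matrix width and the issue/complete interface are assumptions. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_CMDS 8

static uint8_t dep_matrix[MAX_CMDS];   /* row i: bitmask of commands i waits on */
static uint8_t outstanding;            /* bitmask of commands not yet completed */

static void receive_command(int slot, bool force_order) {
    dep_matrix[slot] = force_order ? outstanding : 0;  /* depend on all older cmds */
    outstanding |= (uint8_t)(1u << slot);
}

static bool can_issue(int slot) { return dep_matrix[slot] == 0; }

static void command_done(int slot) {
    outstanding &= (uint8_t)~(1u << slot);
    for (int i = 0; i < MAX_CMDS; i++)
        dep_matrix[i] &= (uint8_t)~(1u << slot);       /* clear dependency column */
}

int main(void) {
    receive_command(0, false);            /* first functional memory command   */
    receive_command(1, true);             /* forced to order behind command 0  */
    printf("cmd1 issuable before cmd0 completes? %s\n", can_issue(1) ? "yes" : "no");
    command_done(0);
    printf("cmd1 issuable after cmd0 completes?  %s\n", can_issue(1) ? "yes" : "no");
    return 0;
}
```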
Abstract:
In a first aspect, a first method of issuing a command on a bus is provided. The first method includes the steps of (1) receiving a first command associated with a first address; (2) delaying the issue of the first command on the bus for a time period; (3) if a second command associated with a second address contiguous with the first address is not received before the time period elapses, issuing the first command on the bus after the time period elapses; and (4) if the second command associated with the second address contiguous with the first address is received before the first command is issued on the bus, combining the first and second commands into a combined command associated with the first address. Numerous other aspects are provided.
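A sketch, with invented names and sizes, of the hold-and-combine behavior: the first command is held for a short window, and a contiguous second command arriving inside that window is merged into one combined command issued at the first address.

```c
/* Sketch only; HOLD_CYCLES, CMD_SIZE and the example addresses are assumptions. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define HOLD_CYCLES 4   /* delay window, in cycles (assumed)        */
#define CMD_SIZE    64  /* bytes covered by one command (assumed)   */

typedef struct { uint64_t addr; uint32_t len; bool valid; int timer; } pending_cmd_t;

static pending_cmd_t pending;

static void issue_on_bus(uint64_t addr, uint32_t len) {
    printf("issue: addr=0x%llx len=%u\n", (unsigned long long)addr, len);
}

static void receive_command(uint64_t addr) {
    if (pending.valid && addr == pending.addr + pending.len) {
        pending.len += CMD_SIZE;          /* contiguous: combine, keep first addr */
        return;
    }
    if (pending.valid) issue_on_bus(pending.addr, pending.len);
    pending = (pending_cmd_t){ addr, CMD_SIZE, true, HOLD_CYCLES };
}

static void clock_tick(void) {
    if (pending.valid && --pending.timer == 0) {      /* window elapsed: issue */
        issue_on_bus(pending.addr, pending.len);
        pending.valid = false;
    }
}

int main(void) {
    receive_command(0x1000);   /* first command held for HOLD_CYCLES     */
    clock_tick();
    receive_command(0x1040);   /* contiguous second command: combined    */
    for (int i = 0; i < HOLD_CYCLES; i++) clock_tick();
    return 0;
}
```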
Abstract:
A first first-in-first-out (FIFO) memory may receive first processor input from a first processor group that includes a first processor. The first processor group is configured to execute program code based on the first processor input that includes a set of input signals, a clock signal, and corresponding data. The first FIFO may store the first processor input and may output the first processor input to a second FIFO memory and to a second processor according to a first delay. The second FIFO memory may store the first processor input and may output the first processor input to a third processor according to a second delay. The second processor may execute at least a first portion of the program code and the third processor may execute at least a second portion of the program code responsive to the first processor input.
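A behavioral sketch with assumed FIFO depths and delays: input captured from the first processor group passes through two FIFOs, reaching the second processor after the first delay and the third processor after an additional delay, so each can run its portion of the program code against the same input stream.

```c
/* Sketch only; the delay values, FIFO depth and stimulus are assumptions. */
#include <stdio.h>

#define FIFO1_DELAY 2   /* first delay, in steps (assumed)  */
#define FIFO2_DELAY 3   /* second delay, in steps (assumed) */
#define STEPS       8

typedef struct { int data[16]; int head, tail, count; } fifo_t;

static void fifo_push(fifo_t *f, int v) { f->data[f->tail++ % 16] = v; f->count++; }
static int  fifo_pop(fifo_t *f)         { f->count--; return f->data[f->head++ % 16]; }

int main(void) {
    fifo_t fifo1 = {0}, fifo2 = {0};

    for (int t = 0; t < STEPS; t++) {
        fifo_push(&fifo1, t);                       /* input from first processor group */
        if (fifo1.count > FIFO1_DELAY) {
            int v = fifo_pop(&fifo1);               /* delivered to second processor    */
            printf("t=%d: processor 2 sees input %d\n", t, v);
            fifo_push(&fifo2, v);                   /* and forwarded into second FIFO   */
        }
        if (fifo2.count > FIFO2_DELAY) {
            int v = fifo_pop(&fifo2);               /* delivered to third processor     */
            printf("t=%d: processor 3 sees input %d\n", t, v);
        }
    }
    return 0;
}
```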