Abstract:
This invention provides a means to definitively establish the occurrence of the various clock edges used in a design, balancing clock edges at various locations within an integrated circuit. Clocks entering from outside sources can be a source of on-chip variation (OCV), resulting in unacceptable clock edge skew. The present invention places the various clock dividers on the chip at the remote locations where these clocks are used, which minimizes the uncertainty of edge occurrence.
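A minimal behavioral sketch in Python of the idea (the delays, OCV sigma, and trial count are illustrative assumptions, not values from the invention): when one balanced fast clock is routed and divided locally at each remote block, the route variation is shared by both divided clocks and largely cancels, whereas dividing at the source and routing each divided clock separately lets the per-route OCV accumulate as skew.

import random

def routed_edge(nominal_delay: float, ocv_sigma: float) -> float:
    """Arrival time of a clock edge after an on-chip route subject to OCV."""
    return nominal_delay + random.gauss(0.0, ocv_sigma)

def worst_skew_central_divide(trials: int = 10000) -> float:
    """Divide at the clock source, then route each divided clock separately."""
    worst = 0.0
    for _ in range(trials):
        edge_a = routed_edge(nominal_delay=1.0, ocv_sigma=0.05)  # route to block A
        edge_b = routed_edge(nominal_delay=1.0, ocv_sigma=0.05)  # route to block B
        worst = max(worst, abs(edge_a - edge_b))
    return worst

def worst_skew_remote_divide(trials: int = 10000) -> float:
    """Route one balanced fast clock and divide locally at each remote block."""
    worst = 0.0
    for _ in range(trials):
        shared = routed_edge(nominal_delay=1.0, ocv_sigma=0.05)   # common balanced route
        edge_a = shared + random.gauss(0.02, 0.005)               # local divider, block A
        edge_b = shared + random.gauss(0.02, 0.005)               # local divider, block B
        worst = max(worst, abs(edge_a - edge_b))
    return worst

print("central divide, worst skew:", worst_skew_central_divide())
print("remote divide,  worst skew:", worst_skew_remote_divide())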
Abstract:
A system has memory resources accessible by a central processing unit (CPU). One or more transaction requests are initiated by the CPU for access to one or more of the memory resources. Initiation of transaction requests is ceased for a period of time. The memory resources are monitored to determine when all of the transaction requests initiated by the CPU have been completed. An idle signal accessible by the CPU is provided that is asserted when all of the transaction requests initiated by the CPU have been completed.
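A minimal sketch in Python of the monitoring scheme (the class and method names are hypothetical, not from the invention): outstanding transaction requests are tracked as they are issued and completed, and the idle signal asserts only once every issued request has completed.

class MemoryIdleMonitor:
    def __init__(self) -> None:
        self.outstanding = set()   # IDs of in-flight transaction requests
        self.idle = True           # the idle signal readable by the CPU

    def issue(self, transaction_id: int) -> None:
        """The CPU initiates a transaction request to a memory resource."""
        self.outstanding.add(transaction_id)
        self.idle = False

    def complete(self, transaction_id: int) -> None:
        """A memory resource reports a transaction request as completed."""
        self.outstanding.discard(transaction_id)
        if not self.outstanding:
            self.idle = True       # all initiated requests have completed

# Usage: cease initiating new requests, then poll the idle signal.
monitor = MemoryIdleMonitor()
monitor.issue(1)
monitor.issue(2)
monitor.complete(1)
monitor.complete(2)
assert monitor.idle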
Abstract:
In an embodiment of the invention, an integrated circuit includes a pipelined memory array and a memory control circuit. The pipelined memory array contains a plurality of memory banks. Based in part on read access time information for a memory bank, the memory control circuit is configured to select the number of clock cycles used for read latency.
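A minimal sketch in Python of one plausible selection rule (the function and parameter names are assumptions for illustration): the controller picks the smallest whole number of clock cycles that covers the bank's read access time.

import math

def read_latency_cycles(bank_access_time_ns: float, clock_period_ns: float) -> int:
    """Smallest whole number of clock cycles covering the bank's read access time."""
    return max(1, math.ceil(bank_access_time_ns / clock_period_ns))

# Example: a bank with a 2.6 ns access time on a 1.25 ns (800 MHz) clock
# would be read with 3 cycles of latency.
print(read_latency_cycles(bank_access_time_ns=2.6, clock_period_ns=1.25))  # -> 3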
Abstract:
A queuing requester provides access to a memory system. Transaction requests are received from two or more requesters for access to the memory system. Each transaction request includes an associated priority value. A request queue is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of the oldest transaction request in the request queue; otherwise, the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue is then provided to the memory system with the selected priority value. An arbitration contest with other requesters for access to the memory system uses the selected priority value.
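A minimal sketch in Python of the queue behavior described above (the class and field names are hypothetical): the oldest pending transaction is always the one forwarded, but it arbitrates with the highest priority value present anywhere in the request queue whenever that value exceeds its own.

from __future__ import annotations
from collections import deque
from dataclasses import dataclass

@dataclass
class Transaction:
    request_id: int
    priority: int              # higher value means more urgent

class QueuingRequester:
    def __init__(self) -> None:
        self.queue: deque[Transaction] = deque()

    def enqueue(self, txn: Transaction) -> None:
        self.queue.append(txn)

    def next_request(self) -> tuple[Transaction, int] | None:
        """Return the oldest transaction and the priority value to arbitrate with."""
        if not self.queue:
            return None
        oldest = self.queue.popleft()
        highest_pending = max((t.priority for t in self.queue), default=oldest.priority)
        # Elevate only when a younger pending request is more urgent than the oldest one.
        selected_priority = max(oldest.priority, highest_pending)
        return oldest, selected_priority

# Usage: the oldest, low-priority request is forwarded with an elevated priority.
q = QueuingRequester()
q.enqueue(Transaction(request_id=1, priority=2))
q.enqueue(Transaction(request_id=2, priority=7))
txn, priority = q.next_request()
assert txn.request_id == 1 and priority == 7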
Abstract:
In an embodiment of the invention, a pipelined memory bank is tested by scanning test patterns into an integrated circuit. Test data is formed from the test patterns and shifted into a scan-in chain in the pipelined memory bank. The test data in the scan-in chain is launched into the inputs of the pipelined memory bank during a first clock cycle. Data from the outputs of the pipelined memory bank is captured in a scan-out chain during a second clock cycle, where the time between the first and second clock cycles is equal to or greater than the read latency of the memory bank.
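A minimal sketch in Python of the launch/capture timing (a toy behavioral model, not the claimed test circuitry): a read launched into a pipelined bank becomes visible at the outputs only after the bank's read latency, so the capture cycle must trail the launch cycle by at least that many clocks.

class PipelinedMemoryBank:
    """Toy model: a launched read appears at the outputs after read_latency clocks."""
    def __init__(self, data, read_latency):
        self.data = data
        self.read_latency = read_latency
        self.stages = [None] * read_latency   # pipeline registers
        self.pending = None                   # value entering the pipeline this cycle

    def launch_read(self, address):
        """Launch: the scan-in data drives a read at the bank inputs."""
        self.pending = self.data.get(address, 0)

    def clock(self):
        """Advance the read one pipeline stage per clock cycle."""
        self.stages = [self.pending] + self.stages[:-1]
        self.pending = None

    def output(self):
        return self.stages[-1]                # sampled into the scan-out chain at capture

def scan_launch_capture_test(bank, address, expected):
    bank.launch_read(address)                 # first clock cycle: launch
    for _ in range(bank.read_latency):        # wait at least the read latency
        bank.clock()
    return bank.output() == expected          # second clock cycle: capture and compare

bank = PipelinedMemoryBank(data={0x10: 0xAB}, read_latency=3)
assert scan_launch_capture_test(bank, address=0x10, expected=0xAB)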
Abstract:
The level two memory of this invention supports coherency data transfers with the level one cache as well as DMA data transfers. The width of DMA transfers is 16 bytes. The width of level one instruction cache transfers is 32 bytes. The width of level one data cache transfers is 64 bytes. The width of level two allocations is 128 bytes. DMA transfers are interspersed with CPU traffic and have similar requirements for efficient throughput and reduced latency. An additional challenge is that these two data streams (CPU and DMA) require access to the level two memory at the same time. This invention is a banking technique for the level two memory that facilitates efficient data transfers.
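A minimal sketch in Python of one possible banking arrangement (the bank count and interleave are assumptions for illustration, not the claimed organization): addresses are interleaved across banks at the DMA width, so a CPU stream and a DMA stream can be serviced in the same cycle whenever their accesses fall in disjoint sets of banks.

NUM_BANKS = 4                 # illustrative bank count
BANK_WIDTH_BYTES = 16         # illustrative bank port width (matches the DMA width)

def bank_of(address: int) -> int:
    """Low-order interleave: consecutive 16-byte chunks map to consecutive banks."""
    return (address // BANK_WIDTH_BYTES) % NUM_BANKS

def banks_for_access(address: int, width_bytes: int) -> set:
    """Set of banks touched by one access of the given width."""
    return {bank_of(address + offset) for offset in range(0, width_bytes, BANK_WIDTH_BYTES)}

def can_proceed_together(cpu_addr: int, cpu_width: int, dma_addr: int, dma_width: int) -> bool:
    """CPU and DMA accesses proceed in the same cycle if they touch disjoint banks."""
    return banks_for_access(cpu_addr, cpu_width).isdisjoint(
        banks_for_access(dma_addr, dma_width))

# Example: a 32-byte level one instruction fetch in banks 0-1 alongside a
# 16-byte DMA transfer in bank 2 can be serviced concurrently.
print(can_proceed_together(cpu_addr=0x00, cpu_width=32, dma_addr=0x20, dma_width=16))  # -> True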