Abstract:
A system and method for pre-fetching data signals is disclosed. According to one aspect of the invention, an Instruction Processor (IP) generates requests to access data signals within a cache. Predetermined ones of the requests are provided to pre-fetch control logic, which determines whether the data signals are available within the cache. If not, the data signals are retrieved from another memory within the data processing system and are stored to the cache. According to one aspect, the rate at which pre-fetch requests are generated may be programmably selected to match the rate at which the associated requests to access the data signals are provided to the cache. In another embodiment, the pre-fetch control logic receives the information used to generate pre-fetch requests over a dedicated interface that couples the pre-fetch control logic to the IP.
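As a rough illustration of the pacing described above, the following Python sketch models pre-fetch control logic that queues cache misses and services them at a programmable rate. It is a minimal sketch only; every name in it (`PrefetchController`, `prefetch_rate`, the dictionary-backed cache and backing memory) is an assumption made for illustration, not a detail of the disclosed hardware.

```python
class PrefetchController:
    """Toy model: check the cache for requested lines and fetch
    misses from backing memory at a programmable rate."""

    def __init__(self, cache, backing_memory, prefetch_rate=1):
        self.cache = cache              # dict: address -> data
        self.backing = backing_memory   # dict: address -> data
        self.rate = prefetch_rate       # pre-fetches issued per IP request
        self.queue = []                 # pending pre-fetch addresses

    def observe_request(self, address):
        """Called for predetermined IP requests; queues a pre-fetch
        if the line is not already resident in the cache."""
        if address not in self.cache:
            self.queue.append(address)
        self.issue_prefetches()

    def issue_prefetches(self):
        """Issue up to `rate` pre-fetches, pacing them to match the
        rate at which the IP presents requests to the cache."""
        for _ in range(min(self.rate, len(self.queue))):
            addr = self.queue.pop(0)
            self.cache[addr] = self.backing[addr]


backing = {a: f"line-{a}" for a in range(8)}
cache = {}
ctl = PrefetchController(cache, backing, prefetch_rate=1)
ctl.observe_request(3)
ctl.observe_request(5)
print(sorted(cache))  # -> [3, 5]
```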
Abstract:
An improved system and method are provided for initializing memory in a data processing system. According to one aspect of the invention, a “page zero” instruction is provided that may be executed by an Instruction Processor (IP) to initiate memory initialization. Upon instruction execution, the IP issues one or more page zero requests using a background interface of the IP. In one embodiment, each request results in the initialization of a page of memory. While page zero requests are issued over the background interface, the IP may continue issuing other read and write requests to memory over a primary interface of the IP.
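The split between the primary and background interfaces can be made concrete with a short Python sketch. The model below, including the `InstructionProcessor` class, its method names, and the assumed page size `PAGE_WORDS`, is hypothetical and intended only to show how ordinary reads may proceed while page zero requests wait on the background path.

```python
from collections import deque

PAGE_WORDS = 4096  # assumed page size in words, for illustration only

class InstructionProcessor:
    """Toy model: page zero requests travel over a background
    interface while reads continue over the primary interface."""

    def __init__(self, memory):
        self.memory = memory        # dict: address -> word
        self.background = deque()   # queued page zero requests

    def page_zero(self, page_number):
        """Hypothetical 'page zero' instruction: queue one
        background request per page to be initialized."""
        self.background.append(page_number)

    def read(self, address):
        """Primary-interface read; usable while page zero requests
        are still pending on the background interface."""
        return self.memory.get(address, 0)

    def service_background(self):
        """Complete one pending page zero request by initializing
        an entire page of memory to zero."""
        if self.background:
            base = self.background.popleft() * PAGE_WORDS
            for offset in range(PAGE_WORDS):
                self.memory[base + offset] = 0


memory = {0: 7, 1: 9, PAGE_WORDS: 42}
ip = InstructionProcessor(memory)
ip.page_zero(0)                  # queue initialization of page 0
print(ip.read(PAGE_WORDS))       # primary interface still usable: 42
ip.service_background()
print(ip.read(0), ip.read(1))    # page 0 now zeroed: 0 0
```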
Abstract:
A system and method are provided to selectively flush data from a cache memory to a main memory irrespective of the replacement algorithm that is used to manage the cache data. According to one aspect of the invention, novel “page flush” and “cache line flush” instructions are provided to flush a page and a cache line of memory data, respectively, from a cache to a main memory. In one embodiment, these instructions are included within the hardware instruction set of an Instruction Processor (IP). According to another aspect of the invention, flush operations are initiated using a background interface that interconnects the IP with its associated cache memory. A primary interface that also interconnects the IP to the cache memory is used to simultaneously issue higher-priority requests so that processor throughput is increased.
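A minimal Python sketch of the flush semantics follows, assuming a dictionary-backed write-back cache. The page geometry (`LINES_PER_PAGE`) and the method names `flush_line` and `flush_page` are assumptions chosen to mirror the two instructions; they do not reflect the actual instruction encoding or cache organization.

```python
LINES_PER_PAGE = 512  # assumed page geometry, for illustration

class Cache:
    """Toy write-back cache supporting explicit flush operations,
    independent of any replacement algorithm."""

    def __init__(self, main_memory):
        self.main = main_memory
        self.lines = {}   # line address -> (data, dirty flag)

    def write(self, line_addr, data):
        self.lines[line_addr] = (data, True)   # mark line dirty

    def flush_line(self, line_addr):
        """Model of a 'cache line flush': write one modified line
        back to main memory, irrespective of replacement state."""
        if line_addr in self.lines:
            data, dirty = self.lines.pop(line_addr)
            if dirty:
                self.main[line_addr] = data

    def flush_page(self, page_number):
        """Model of a 'page flush': flush every resident line that
        falls within the given page."""
        start = page_number * LINES_PER_PAGE
        for addr in range(start, start + LINES_PER_PAGE):
            self.flush_line(addr)


main = {}
cache = Cache(main)
cache.write(3, "dirty-line")
cache.flush_page(0)
print(main)  # -> {3: 'dirty-line'}
```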
Abstract:
A mechanism to selectively leak data signals from a cache memory is provided. According to one aspect of the invention, an Instruction Processor (IP) is coupled to generate requests to access data signals within the cache. Some requests include a leaky designator, which is activated if the associated data signals are considered “leaky”. These data signals are flushed from the cache memory after a predetermined delay has occurred. The delay is provided to allow the IP to complete any subsequent requests for the same data before the flush operation is performed, thereby preventing memory thrashing. Pre-fetch logic may also be provided to pre-fetch the data signals associated with the requests. In one embodiment, the rate at which data signals are flushed from cache memory is programmable, and is based on the rate at which requests are processed for pre-fetch purposes.
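The delayed-flush behavior can be sketched in Python as shown below. The delay value, the per-line countdown, and the class name `LeakyCache` are illustrative assumptions about one possible model of the mechanism; in particular, re-accessing a leaky line restarts its countdown, which is the property that prevents thrashing.

```python
class LeakyCache:
    """Toy model: lines marked 'leaky' are flushed after a
    predetermined delay, giving the IP time to reuse them first."""

    FLUSH_DELAY = 3  # assumed delay, in request cycles

    def __init__(self, main_memory):
        self.main = main_memory
        self.lines = {}       # address -> data
        self.countdown = {}   # address -> cycles until flush

    def access(self, address, data, leaky=False):
        self.lines[address] = data
        if leaky:
            # (Re)start the delay; a later request for the same data
            # postpones the flush and so avoids memory thrashing.
            self.countdown[address] = self.FLUSH_DELAY
        self.tick()

    def tick(self):
        """Age every pending leaky line; flush those whose
        delay has expired."""
        for addr in list(self.countdown):
            self.countdown[addr] -= 1
            if self.countdown[addr] == 0:
                self.main[addr] = self.lines.pop(addr)
                del self.countdown[addr]


main = {}
cache = LeakyCache(main)
cache.access(7, "temp", leaky=True)
cache.tick()
cache.tick()
print(main)  # -> {7: 'temp'}, flushed after the delay
```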
Abstract:
A cache arrangement of a data processing system provides a cache flush operation initiated by a command from a maintenance processor. The cache arrangement includes a cache memory, a mode register, and a controller. The mode register is settable by the maintenance processor to one of first and second values. In response to the command, the controller selectively writes all of the modified information in the cache memory to the system memory. Also in response to the command, all of the information in the cache memory is invalidated if the mode register is set to the second value. In one embodiment, if the mode register is set to the first value, only the modified information is invalidated. The second value may be utilized to efficiently reassign one or more cache memories to a new partition.
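One reading of the two mode-register settings is captured by the Python sketch below. The constants `MODE_FLUSH_ONLY` and `MODE_FLUSH_INVALIDATE`, and the dictionary representation of cache lines, are assumptions made for illustration and do not reflect the register encoding used by the invention.

```python
MODE_FLUSH_ONLY = 1        # write back modified data; keep clean copies
MODE_FLUSH_INVALIDATE = 2  # write back, then invalidate everything

class FlushController:
    """Toy controller: a maintenance-processor command flushes the
    cache; the mode register selects what is invalidated."""

    def __init__(self, cache_lines, system_memory):
        self.lines = cache_lines     # address -> (data, dirty flag)
        self.memory = system_memory
        self.mode = MODE_FLUSH_ONLY

    def flush_command(self):
        """Write all modified information back to system memory. With
        the second value, invalidate every line afterwards (e.g. before
        reassigning the cache memory to a new partition)."""
        for addr, (data, dirty) in list(self.lines.items()):
            if dirty:
                self.memory[addr] = data
                del self.lines[addr]      # modified data is invalidated
            elif self.mode == MODE_FLUSH_INVALIDATE:
                del self.lines[addr]      # clean data is also invalidated


memory = {}
lines = {1: ("old", False), 2: ("new", True)}
ctl = FlushController(lines, memory)
ctl.mode = MODE_FLUSH_INVALIDATE
ctl.flush_command()
print(memory, lines)  # -> {2: 'new'} {}: cache empty, ready to reassign
```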
Abstract:
A system and method are disclosed for prioritizing requests received from multiple requesters for presentation to a shared resource. The system includes logic that implements multiple priority schemes. This logic may be programmably configured to associate each of the requesters with any of the priority schemes. The priority scheme that is associated with a requester controls how that requester submits requests to the shared resource. The requests that have been submitted by any of the requesters in this manner are then processed in a predetermined order. This order is established using an absolute priority assigned to each of the requesters. The order may further be determined by assigning one or more requesters a priority that is relative to another requester. Both the absolute and relative priority assignments are programmable.
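The combination of absolute and relative priority assignments might be modeled as in the following Python sketch, where `set_relative` programs one requester to be serviced just after another. All class and method names are hypothetical, and the sketch deliberately omits the per-requester priority schemes that govern how requests are submitted.

```python
class Arbiter:
    """Toy arbiter: each requester is programmably assigned an
    absolute priority, or a priority relative to another requester."""

    def __init__(self):
        self.absolute = {}   # requester -> absolute priority (0 = highest)
        self.relative = {}   # requester -> requester it follows
        self.pending = []    # submitted (requester, request) pairs

    def set_absolute(self, requester, priority):
        self.absolute[requester] = priority

    def set_relative(self, requester, relative_to):
        """Program `requester` to be serviced just after `relative_to`."""
        self.relative[requester] = relative_to

    def submit(self, requester, request):
        self.pending.append((requester, request))

    def effective_priority(self, requester):
        # Follow relative assignments back to an absolute priority,
        # biasing slightly downward so the follower comes second.
        if requester in self.relative:
            return self.effective_priority(self.relative[requester]) + 0.5
        return self.absolute[requester]

    def drain(self):
        """Process all submitted requests in priority order."""
        self.pending.sort(key=lambda rq: self.effective_priority(rq[0]))
        done, self.pending = self.pending, []
        return done


arb = Arbiter()
arb.set_absolute("IP0", 0)
arb.set_absolute("IO", 2)
arb.set_relative("IP1", "IP0")   # IP1 is serviced just after IP0
for name in ("IO", "IP1", "IP0"):
    arb.submit(name, f"req-from-{name}")
print([r for _, r in arb.drain()])
# -> ['req-from-IP0', 'req-from-IP1', 'req-from-IO']
```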
Abstract:
An apparatus is disclosed for efficiently detecting an error on a memory stack write pointer or a memory stack read pointer by continuously monitoring the relative position of the two pointers. Using this technique, the present invention may detect certain classes of errors that cannot be detected by other error detection methods such as redundancy. The present invention eliminates the need to provide full redundancy, thereby potentially saving considerable cost, size, and power in a typical computer system.
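The monitoring technique lends itself to a compact Python sketch: an occupancy count is maintained independently of the two pointers, and any divergence between the pointer separation and that count flags an error. The stack depth and all names below are illustrative assumptions about one way to realize the check.

```python
STACK_DEPTH = 16  # assumed stack depth, for illustration

class PointerMonitor:
    """Toy checker: track the relative position of the write and
    read pointers and flag impossible separations."""

    def __init__(self):
        self.write_ptr = 0
        self.read_ptr = 0
        self.count = 0   # independently maintained occupancy

    def push(self):
        self.write_ptr = (self.write_ptr + 1) % STACK_DEPTH
        self.count += 1
        self.check()

    def pop(self):
        self.read_ptr = (self.read_ptr + 1) % STACK_DEPTH
        self.count -= 1
        self.check()

    def check(self):
        """An error on either pointer appears as a mismatch between
        the pointer separation and the tracked occupancy, or as an
        occupancy outside the legal 0..STACK_DEPTH range."""
        separation = (self.write_ptr - self.read_ptr) % STACK_DEPTH
        if not 0 <= self.count <= STACK_DEPTH:
            raise RuntimeError("pointer error: occupancy out of range")
        if separation != self.count % STACK_DEPTH:
            raise RuntimeError("pointer error: separation mismatch")


mon = PointerMonitor()
mon.push()
mon.push()
mon.pop()                                            # normal traffic
mon.write_ptr = (mon.write_ptr + 1) % STACK_DEPTH    # inject a fault
try:
    mon.pop()
except RuntimeError as err:
    print(err)   # -> pointer error: separation mismatch
```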