Abstract:
A computer system includes an adaptive memory arbiter for prioritizing memory access requests, including a self-adjusting, programmable request-priority ranking system. The memory arbiter adapts during every arbitration cycle, reducing the priority of any request that wins memory arbitration. Thus, a memory request initially holding a low priority ranking may gradually advance in priority until that request wins memory arbitration. Such a scheme prevents lower-priority devices from becoming “memory-starved.” Because some types of memory requests (such as refresh requests and memory reads) inherently require faster memory access than other requests (such as memory writes), the adaptive memory arbiter additionally integrates into the adaptive ranking system a nonadjustable priority structure that guarantees faster service to the most urgent requests. Also, the adaptive memory arbitration scheme introduces a flexible method of adjustable priority-weighting that permits selected devices to transact a programmable number of consecutive memory accesses without losing request priority.
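For illustration, the following is a minimal C sketch of the adaptive ranking described above: the winner of each arbitration cycle is demoted, a nonadjustable “urgent” tier overrides the adaptive ranks, and a programmable burst weight lets a requester win several consecutive grants before demotion. The requester names, the rotating-rank scheme, and the burst bookkeeping are illustrative assumptions, not details taken from the abstract.

/* Illustrative sketch only; names and the rotating-rank scheme are assumptions. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_REQUESTERS 4

typedef struct {
    const char *name;
    bool urgent;        /* nonadjustable tier (e.g. refresh, memory reads)  */
    int  rank;          /* adaptive rank: lower value = higher priority     */
    int  burst_weight;  /* programmable consecutive wins before demotion    */
    int  burst_left;    /* wins remaining in the current burst              */
} requester_t;

/* Grant one request per arbitration cycle and adapt the ranking. */
static int arbitrate(requester_t *req, const bool *requesting)
{
    int winner = -1;
    for (int i = 0; i < NUM_REQUESTERS; i++) {
        if (!requesting[i])
            continue;
        if (winner < 0) {
            winner = i;
        } else {
            bool better = (req[i].urgent != req[winner].urgent)
                              ? req[i].urgent                     /* fixed tier first */
                              : (req[i].rank < req[winner].rank); /* then adaptive    */
            if (better)
                winner = i;
        }
    }

    if (winner >= 0) {
        requester_t *w = &req[winner];
        if (w->burst_left > 1) {
            w->burst_left--;                  /* weighted: hold priority for the burst */
        } else {
            w->burst_left = w->burst_weight;  /* burst spent: demote the winner and    */
            for (int i = 0; i < NUM_REQUESTERS; i++)
                if (i != winner && req[i].rank > w->rank)
                    req[i].rank--;            /* advance every lower-priority request  */
            w->rank = NUM_REQUESTERS - 1;
        }
    }
    return winner;
}

int main(void)
{
    requester_t req[NUM_REQUESTERS] = {
        { "refresh", true,  0, 1, 1 },
        { "cpu_rd",  true,  1, 1, 1 },
        { "cpu_wr",  false, 2, 2, 2 },  /* allowed two back-to-back wins */
        { "pci_wr",  false, 3, 1, 1 },
    };
    bool requesting[NUM_REQUESTERS] = { false, false, true, true };

    for (int cycle = 0; cycle < 4; cycle++) {
        int w = arbitrate(req, requesting);
        printf("cycle %d: grant -> %s\n", cycle, w >= 0 ? req[w].name : "none");
    }
    return 0;
}

Run over a few cycles, the two non-urgent writers alternate once cpu_wr's two-grant burst is spent, which is the anti-starvation behavior the abstract describes.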
Abstract:
A computer system includes a CPU and a memory device coupled by a bridge logic unit. CPU-to-memory write requests (including the data to be written) are temporarily stored in a queue in the bridge logic unit. The bridge logic unit preferably begins a write cycle to the memory device before all of the write data has been stored in the queue and made available to the memory device. By beginning the memory cycle as early as possible, the total amount of time required to store all of the write data in the queue and then de-queue the data from the queue is reduced. Consequently, many CPU-to-memory write transactions are performed more efficiently and generally with less latency than previously possible.
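For illustration, a minimal C sketch of the timing benefit described above, assuming one data beat moves per cycle and that the memory cycle may begin once a small threshold of beats has been queued; both the beat-per-cycle model and the threshold value are assumptions for illustration, not figures from the abstract.

/* Illustrative timeline model only; beat counts and threshold are assumptions. */
#include <stdio.h>

#define BURST_BEATS     8   /* beats of write data in one CPU-to-memory write */
#define START_THRESHOLD 2   /* beats queued before the memory cycle begins    */

static int cycles_to_complete(int start_threshold)
{
    int queued = 0, written = 0, cycle = 0;

    while (written < BURST_BEATS) {
        cycle++;
        if (queued < BURST_BEATS)
            queued++;                            /* CPU posts one beat per cycle     */
        if (queued >= start_threshold && written < queued)
            written++;                           /* memory drains one beat per cycle */
    }
    return cycle;
}

int main(void)
{
    printf("wait for full burst : %d cycles\n", cycles_to_complete(BURST_BEATS));
    printf("begin after 2 beats : %d cycles\n", cycles_to_complete(START_THRESHOLD));
    return 0;
}

In this model the overlapped case finishes in 9 cycles versus 15 when the memory cycle waits for the full burst, which is the latency reduction the abstract claims.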
Abstract:
A computer system includes a CPU, a memory device, two expansion buses, and a bridge logic unit coupling together the CPU, the memory device, and the expansion buses. The CPU couples to the bridge logic unit via a CPU bus, and the memory device couples to the bridge logic unit via a memory bus. The bridge logic unit generally routes bus cycle requests from one of the four buses to another of the buses while concurrently routing bus cycle requests between another pair of buses. The bridge logic unit preferably includes four interfaces, one each to the CPU, the memory device, and the two expansion buses. Each pair of interfaces is coupled by at least one queue; write requests are stored (or “posted”) in write queues, and read data are stored in read queues. Because each interface can communicate concurrently with all other interfaces via the read and write queues, the possibility exists that a first interface cannot access a second interface because the second interface is busy processing read or write requests from a third interface, thus starving the first interface for access to the second interface. To remedy this starvation problem, the bridge logic unit prevents the third interface from posting additional write requests to its write queue, thereby permitting the first interface access to the second interface. Further, read cycles may be retried from one interface to allow another interface to complete its bus transactions.
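For illustration, a minimal C sketch of the queue-blocking remedy described above: interface A keeps posting writes and, because its queue is drained first, interface B starves; once B has waited a set number of cycles, further posting from A is blocked so B can get through. The two-interface model, the fixed drain order, and the wait limit are assumptions for illustration, not details taken from the abstract.

/* Illustrative sketch only; interface names, drain order, and limit are assumptions. */
#include <stdio.h>
#include <stdbool.h>

#define WAIT_LIMIT 3   /* cycles B may starve before A's posting is blocked */

int main(void)
{
    int  a_queue = 0;        /* posted writes pending from interface A */
    int  b_queue = 1;        /* interface B has one write waiting      */
    bool a_blocked = false;
    int  b_waited = 0;

    for (int cycle = 0; cycle < 8; cycle++) {
        /* Interface A tries to post a new write every cycle. */
        if (!a_blocked)
            a_queue++;

        /* The shared memory interface drains A's queue ahead of B's. */
        if (a_queue > 0) {
            a_queue--;
            if (b_queue > 0)
                b_waited++;                 /* B is still waiting with work pending */
            printf("cycle %d: serviced A\n", cycle);
        } else if (b_queue > 0) {
            b_queue--;
            b_waited = 0;
            a_blocked = false;              /* B got through; lift the block */
            printf("cycle %d: serviced B\n", cycle);
        }

        /* Starvation monitor: stop A from posting so its queue drains. */
        if (b_waited >= WAIT_LIMIT && !a_blocked) {
            a_blocked = true;
            printf("cycle %d: blocking further posts from A\n", cycle);
        }
    }
    return 0;
}

Without the blocking step, interface B would never be serviced in this trace; the abstract's read-retry mechanism addresses the analogous problem for read cycles.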