Abstract:
A method and apparatus for performing bus transactions on the bus of a computer system. The present invention includes a method and apparatus for permitting out-of-order replies in a pipelined bus system. The out-of-order replies are supported by sending tokens between the requesting agents and the responding agents in the computer system without the use of dedicated token buses.
Abstract:
A method and apparatus for performing bus transactions on the external bus of a computer system. The present invention includes a method and apparatus for permitting out-of-order replies in a pipelined bus system. The out-of-order replies are supported by sending tokens between the requesting agents and the responding agents in the computer system without the use of dedicated token buses.
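A small sketch may help illustrate how a requesting agent can pair such out-of-order replies with its original requests. The structure names, token width, and queue depth below are hypothetical and assume only that each deferred request is tagged with a token that later returns with the reply:

    /* Sketch (hypothetical names and sizes) of a requesting agent matching
     * deferred, out-of-order replies to its outstanding requests by token,
     * with the token carried on existing bus lines rather than a dedicated
     * token bus. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_OUTSTANDING 8           /* depth of the pipelined request queue */

    typedef struct {
        bool     valid;
        uint8_t  token;                 /* token assigned to the request   */
        uint32_t address;               /* address of the original request */
    } outstanding_req_t;

    static outstanding_req_t pending[MAX_OUTSTANDING];

    /* Record a request whose reply was deferred by the responding agent. */
    static void note_deferred(uint8_t token, uint32_t address)
    {
        for (int i = 0; i < MAX_OUTSTANDING; i++) {
            if (!pending[i].valid) {
                pending[i] = (outstanding_req_t){ true, token, address };
                return;
            }
        }
    }

    /* A deferred reply arrives carrying only the token; match it against the
     * outstanding-request table to complete the original access. */
    static bool complete_deferred(uint8_t token, uint32_t *address_out)
    {
        for (int i = 0; i < MAX_OUTSTANDING; i++) {
            if (pending[i].valid && pending[i].token == token) {
                *address_out = pending[i].address;
                pending[i].valid = false;
                return true;
            }
        }
        return false;                   /* no matching request: protocol error */
    }

    int main(void)
    {
        note_deferred(0x2, 0x1000);     /* request A deferred with token 2 */
        note_deferred(0x5, 0x2000);     /* request B deferred with token 5 */

        uint32_t addr;
        /* Replies may return in either order; the token restores the pairing. */
        if (complete_deferred(0x5, &addr))
            printf("token 5 completes the request to 0x%x\n", (unsigned)addr);
        if (complete_deferred(0x2, &addr))
            printf("token 2 completes the request to 0x%x\n", (unsigned)addr);
        return 0;
    }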
Abstract:
In a computer system, an apparatus for handling lock conditions wherein a first instruction executed by a first processor operates on data that is shared with a second processor, while the second processor is locked out from simultaneously executing a second instruction that operates on the same data. A lock bit is set when the first processor begins execution of the first instruction. The second processor is thereby prevented from executing its instruction until the first processor has finished processing the shared data; instead, the second processor queues its request in a buffer. The lock bit is cleared after the first processor has completed execution of its instruction. The first processor then checks the buffer for any outstanding requests. In response to the second processor's queued request, the first processor transmits a signal to the second processor indicating that the data is no longer locked.
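The handshake described above can be sketched roughly as follows; the type and function names are hypothetical, and the single-entry buffer is a simplification:

    /* Sketch (hypothetical names, single-entry buffer) of the lock-bit
     * handshake: processor 0 sets a lock bit while it operates on the shared
     * data, processor 1's conflicting request is queued, and processor 0
     * signals processor 1 once the lock bit is cleared. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        bool lock_bit;          /* set while a locked instruction is in progress */
        int  queued_requestor;  /* -1 when no request is waiting in the buffer   */
    } shared_region_t;

    static void begin_locked_op(shared_region_t *r)
    {
        r->lock_bit = true;                 /* first processor starts its instruction */
    }

    static bool request_access(shared_region_t *r, int cpu)
    {
        if (r->lock_bit) {
            r->queued_requestor = cpu;      /* second processor queues its request */
            return false;                   /* access denied for now               */
        }
        return true;
    }

    static void end_locked_op(shared_region_t *r)
    {
        r->lock_bit = false;                /* clear the lock bit                   */
        if (r->queued_requestor >= 0) {     /* check the buffer for queued requests */
            printf("signal CPU %d: data is no longer locked\n", r->queued_requestor);
            r->queued_requestor = -1;
        }
    }

    int main(void)
    {
        shared_region_t r = { false, -1 };
        begin_locked_op(&r);                /* CPU 0 locks the shared data      */
        if (!request_access(&r, 1))         /* CPU 1 is blocked and queued      */
            printf("CPU 1 queued while the data is locked\n");
        end_locked_op(&r);                  /* CPU 0 finishes and signals CPU 1 */
        return 0;
    }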
Abstract:
A computer system comprising a plurality of caching agents with a cache hierarchy, the caching agents sharing memory across a system bus and issuing memory access requests in accordance with a protocol wherein a line of a cache has a present state comprising one of a plurality of line states. The plurality of line states includes a modified (M) state, wherein a line of a first caching agent in M state has data which is more recent than any other copy in the system; an exclusive (E) state, wherein a line in E state indicates that the first caching agent is the only agent in the system with a copy of the data in that line, allowing the first caching agent to modify the data in the cache line independently of the other agents coupled to the system bus; a shared (S) state, wherein a line in S state indicates that more than one of the agents has a copy of the data in the line; and an invalid (I) state indicating that the line does not exist in the cache. A read or a write to a line in I state results in a cache miss. The present invention associates states with lines and defines rules governing state transitions. State transitions depend both on processor-generated activities and on activities by other bus agents, including other processors. Data consistency is guaranteed in systems having multiple levels of cache and shared memory and/or multiple active agents, such that no agent ever reads stale data and actions are serialized as needed.
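A few of these transitions can be sketched as simple state functions; the function names are hypothetical and the rules shown are a reduced subset of the protocol (for example, the bus transactions needed to gain ownership before a write are not modeled):

    /* Sketch (assumed simplifications) of the M/E/S/I line states and a few of
     * the transitions described above, driven by processor reads and writes and
     * by snooped requests from other bus agents. */
    #include <stdio.h>

    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } line_state_t;

    /* Processor read: an I-state line misses and fills E if no other agent has
     * a copy, or S if the snoop shows another cached copy; M, E and S hit. */
    static line_state_t on_processor_read(line_state_t s, int other_agent_has_copy)
    {
        if (s == INVALID)
            return other_agent_has_copy ? SHARED : EXCLUSIVE;
        return s;
    }

    /* Processor write: the line ends up M (writes from S or I first require a
     * bus transaction that invalidates the other copies, not modeled here). */
    static line_state_t on_processor_write(line_state_t s)
    {
        (void)s;
        return MODIFIED;
    }

    /* Snooped read by another agent: an M line supplies its more recent data,
     * and every valid state drops to S so both agents now share the line. */
    static line_state_t on_snoop_read(line_state_t s)
    {
        return (s == INVALID) ? INVALID : SHARED;
    }

    int main(void)
    {
        line_state_t s = INVALID;
        s = on_processor_read(s, 0);   /* miss, no other copy -> EXCLUSIVE */
        s = on_processor_write(s);     /* local write         -> MODIFIED  */
        s = on_snoop_read(s);          /* another agent reads -> SHARED    */
        printf("final state: %d (1 = SHARED)\n", s);
        return 0;
    }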
Abstract:
A system and method for providing a high performance symmetric arbitration protocol that includes support for priority agents. The bus arbitration protocol supports two classes of bus agents: symmetric agents and priority agents. The symmetric agents support fair, distributed arbitration using a round-robin algorithm. Each symmetric agent has a unique Agent ID assigned at reset. The algorithm arranges the symmetric agents in a circular order of priority. Each symmetric agent also maintains a bus ownership state of busy or idle and a Rotating ID that reflects the symmetric agent with the lowest priority in the next arbitration event. On an arbitration event, the symmetric agent with the highest priority becomes the symmetric owner. However, the symmetric owner is not necessarily the overall bus owner (i.e., a priority agent may be the overall bus owner). The symmetric owner is allowed to take ownership of the bus and issue a transaction on the bus provided no other action of higher priority is preventing the use of the bus. A symmetric owner can maintain ownership without re-arbitrating if the transaction is either a bus-locked or a burst access transaction. A priority agent has higher priority than the symmetric owner. Once a priority agent arbitrates for the bus, it prevents the symmetric owner from issuing any new transactions on the bus unless the new transaction is part of an ongoing bus-locked operation.
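The round-robin selection step can be sketched as follows; the agent count and function names are assumptions, and the priority-agent override is omitted:

    /* Sketch (assumed agent count and names) of the round-robin selection:
     * agents sit in a fixed circular order, the Rotating ID names the agent
     * with the lowest priority for the next arbitration event, and the
     * requesting agent closest after it in circular order wins symmetric
     * ownership. Priority agents are not modeled. */
    #include <stdio.h>

    #define NUM_AGENTS 4

    /* Returns the winning Agent ID, or -1 when no agent is requesting. */
    static int arbitrate(const int requesting[NUM_AGENTS], int rotating_id)
    {
        for (int offset = 1; offset <= NUM_AGENTS; offset++) {
            int id = (rotating_id + offset) % NUM_AGENTS;   /* circular priority order */
            if (requesting[id])
                return id;
        }
        return -1;
    }

    int main(void)
    {
        int requesting[NUM_AGENTS] = { 1, 0, 1, 0 };    /* agents 0 and 2 request      */
        int rotating_id = 0;                            /* agent 0 has lowest priority */

        int winner = arbitrate(requesting, rotating_id);
        printf("symmetric owner: agent %d\n", winner);  /* agent 2 wins                */

        if (winner >= 0) {
            rotating_id = winner;   /* the new owner has lowest priority next time */
            printf("next Rotating ID: %d\n", rotating_id);
        }
        return 0;
    }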
Abstract:
In a multi-processor system having a first processor, a second processor, and a bus coupling the first processor to the second processor, a method for correcting an erroneous signal corresponding to the first processor while maintaining lock atomicity. When an erroneous transaction is detected, the first processor aborts that transaction and performs a retry. On the retry, an arbitration process arbitrates between the first processor and the second processor to determine which processor is granted access to the bus. If an error is detected during the arbitration process, an arbitration re-synchronization process is initiated, in which both processors de-assert their bus requests and ownership of the bus is then re-arbitrated. In the re-arbitration, the first processor asserts its request ahead of the second processor in order to maintain lock atomicity.
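One way to picture the re-synchronization step is the following sketch; the structure and signal names are hypothetical, and only the ordering of request re-assertion is modeled:

    /* Sketch (hypothetical structure and names) of the re-synchronization step:
     * on an error during arbitration, both processors de-assert their bus
     * requests, and the processor in the middle of a locked operation
     * re-asserts first so the locked sequence stays atomic. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        bool req[2];        /* bus request lines for processors 0 and 1          */
        int  lock_owner;    /* processor inside a locked operation, -1 when none */
    } bus_t;

    static void arbitration_resync(bus_t *bus, bool error_detected)
    {
        if (!error_detected)
            return;

        bus->req[0] = bus->req[1] = false;        /* de-assert all bus requests  */

        if (bus->lock_owner >= 0) {
            bus->req[bus->lock_owner] = true;     /* lock owner re-asserts first */
            printf("CPU %d re-arbitrates ahead to preserve lock atomicity\n",
                   bus->lock_owner);
        }
        /* The other processor re-asserts its request on a later clock. */
    }

    int main(void)
    {
        bus_t bus = { { true, true }, 0 };        /* both requesting, CPU 0 holds a lock */
        arbitration_resync(&bus, true);           /* error detected during arbitration   */
        return 0;
    }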
Abstract:
A data retrieval system receives a data address identifying data to be retrieved. A portion of the received data address is communicated to a data storage device during a first clock cycle. The system determines a second address portion based on the received data address, and the second address portion is communicated to the data storage device during a second clock cycle. Data is then retrieved from the data storage device based on the address portions communicated to the data storage device. The portion of the received data address communicated during the first clock cycle is a set address, and the second address portion communicated during the second clock cycle is a way address. A read cycle can be initiated after the portion of the received data address is communicated to the data storage device during the first clock cycle.
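The two-cycle addressing can be sketched as follows; the array dimensions, set-index calculation, and function names are assumptions rather than details from the abstract:

    /* Sketch (assumed dimensions and names) of the two-cycle lookup: the set
     * address is driven to the data store in the first clock, the way address
     * in the second, and the data is read from the selected set and way. */
    #include <stdio.h>
    #include <stdint.h>

    #define NUM_SETS 4
    #define NUM_WAYS 2

    static uint32_t data_store[NUM_SETS][NUM_WAYS];     /* the data storage device */

    typedef struct {
        unsigned set;       /* latched during the first clock cycle  */
        unsigned way;       /* latched during the second clock cycle */
    } lookup_t;

    /* First clock: only the set bits of the received address are needed, so the
     * read cycle can start before the way is known. */
    static void clock1_present_set(lookup_t *l, uint32_t address)
    {
        l->set = (unsigned)(address % NUM_SETS);
    }

    /* Second clock: the way (resolved, for example, by a parallel tag compare)
     * is presented, completing the address. */
    static void clock2_present_way(lookup_t *l, unsigned way)
    {
        l->way = way;
    }

    static uint32_t retrieve(const lookup_t *l)
    {
        return data_store[l->set][l->way];
    }

    int main(void)
    {
        data_store[1][0] = 0xCAFE;          /* seed some data                */
        lookup_t l;
        clock1_present_set(&l, 0x0005);     /* the address maps to set 1     */
        clock2_present_way(&l, 0);          /* the tag compare selects way 0 */
        printf("retrieved 0x%x\n", (unsigned)retrieve(&l));
        return 0;
    }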