Abstract:
Requests to memory issued by an agent on a bus are satisfied while maintaining cache consistency. The requesting agent may issue a request to another agent, or to the memory unit, by placing the request on the bus. Each agent on the bus snoops the bus to determine whether the issued request can be satisfied by accessing its cache. An agent that can satisfy the request using its cache, i.e., the snooping agent, issues a signal to the requesting agent indicating that it can do so. The snooping agent places the cache line corresponding to the request onto the bus, and the requesting agent retrieves it. In the event of a read request, the memory unit also retrieves the cache line data from the bus and stores the cache line in main memory. In the event of a write request, the requesting agent transfers write data over the bus along with the request. This write data is retrieved by both the memory unit, which temporarily stores the data, and the snooping agent. Subsequently, the snooping agent transfers the entire cache line over the bus. The memory unit retrieves this cache line, merges it with the previously stored write data, and writes the merged cache line to memory.
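The merge step can be made concrete with a short sketch. The C fragment below is not from the patent; the 32-byte line size, the byte-enable mask, and all function names are assumptions, used only to show how the memory unit might combine the temporarily stored write data with the cache line later supplied by the snooping agent before writing the merged line to memory.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CACHE_LINE_BYTES 32  /* assumed line size for illustration */

/* Merge buffered write data into the cache line supplied by the snooping
 * agent: bytes covered by the write's byte-enable mask come from the write
 * buffer, all other bytes come from the snooped line. */
static void merge_write_into_line(uint8_t merged[CACHE_LINE_BYTES],
                                  const uint8_t snooped_line[CACHE_LINE_BYTES],
                                  const uint8_t write_data[CACHE_LINE_BYTES],
                                  const uint8_t byte_enable[CACHE_LINE_BYTES])
{
    for (int i = 0; i < CACHE_LINE_BYTES; i++)
        merged[i] = byte_enable[i] ? write_data[i] : snooped_line[i];
}

int main(void)
{
    uint8_t line[CACHE_LINE_BYTES], wdata[CACHE_LINE_BYTES];
    uint8_t be[CACHE_LINE_BYTES] = {0}, merged[CACHE_LINE_BYTES];

    memset(line, 0xAA, sizeof line);   /* line data from the snooping agent */
    memset(wdata, 0x55, sizeof wdata); /* partial write data from the requester */
    be[0] = be[1] = be[2] = be[3] = 1; /* only the first 4 bytes are written */

    merge_write_into_line(merged, line, wdata, be);
    printf("merged[0]=%02x merged[4]=%02x\n", merged[0], merged[4]);
    return 0;
}
```

In a real memory controller the byte enables would arrive with the write request rather than as a separate array; the mask form here simply keeps the merge loop explicit.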
Abstract:
A self-snooping mechanism enables a processor coupled to a dedicated cache memory and a processor-system bus to snoop its own request issued on the processor-system bus. The processor-system bus enables communication between the processor and other bus agents such as a memory subsystem, an I/O subsystem and/or other processors. The self-snooping mechanism is invoked upon determining that the request is based on a boundary condition, so that the initial internal cache lookup is bypassed to improve system efficiency.
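As a rough illustration of when such a mechanism might be invoked, the sketch below skips the internal cache lookup for requests that meet a boundary condition and sends them straight to the bus, where the processor's own snoop logic checks the cache. The specific conditions tested (a line-crossing access or an uncacheable page) and every name in the code are assumptions, not taken from the abstract.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical boundary conditions that trigger the self-snoop path instead
 * of the usual internal cache lookup. */
static bool is_boundary_condition(uint64_t addr, unsigned len, bool uncacheable_page)
{
    bool splits_line = ((addr & 31) + len) > 32;  /* access crosses a 32-byte line */
    return splits_line || uncacheable_page;
}

static void issue_request(uint64_t addr, unsigned len, bool uncacheable_page)
{
    if (is_boundary_condition(addr, len, uncacheable_page)) {
        /* Skip the internal lookup; place the request on the processor-system
         * bus and let the processor's own snoop logic check its cache. */
        printf("addr %#llx: self-snoop path (internal lookup bypassed)\n",
               (unsigned long long)addr);
    } else {
        printf("addr %#llx: normal internal cache lookup first\n",
               (unsigned long long)addr);
    }
}

int main(void)
{
    issue_request(0x1000, 8, false);   /* ordinary cacheable access */
    issue_request(0x101E, 8, false);   /* crosses a line boundary */
    return 0;
}
```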
Abstract:
A computer system is disclosed having a requesting bus agent that issues a communication transaction over a bus and an addressed bus agent that defers the communication transaction to avoid high bus latency. The addressed bus agent later issues a deferred reply transaction over the bus to complete the communication transaction. Special snoop ownership and cache state transition rules maintain cache coherency and processor consistency during deferred communication transactions.
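A minimal behavioural sketch of the defer/deferred-reply exchange is given below. The enum values, the token field, and the two functions are hypothetical simplifications; real bus protocols encode the response and the deferred reply as distinct bus transactions with their own signalling.

```c
#include <stdio.h>

/* Illustrative outcomes of the response phase. */
enum response { RESP_NORMAL_DATA, RESP_DEFERRED, RESP_RETRY };

struct transaction {
    int id;         /* token identifying the deferred transaction */
    int completed;
};

/* The addressed agent defers when it cannot supply data without stalling
 * the bus; it later completes the request with a deferred-reply transaction. */
static enum response respond(struct transaction *t, int data_ready)
{
    if (data_ready) { t->completed = 1; return RESP_NORMAL_DATA; }
    return RESP_DEFERRED;              /* requester releases the bus */
}

static void deferred_reply(struct transaction *t)
{
    /* Issued later as a new bus transaction carrying the original token. */
    t->completed = 1;
    printf("deferred reply completes transaction %d\n", t->id);
}

int main(void)
{
    struct transaction t = { .id = 7, .completed = 0 };
    if (respond(&t, 0) == RESP_DEFERRED)
        deferred_reply(&t);
    printf("completed=%d\n", t.completed);
    return 0;
}
```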
Abstract:
A dynamic pipeline depth control method and apparatus are used with a bus that supports pipelined bus transactions. An agent coupled to the bus includes both a transmitter and a receiver. The transmitter transmits, to the other agents coupled to the bus, an indication that prevents those agents from issuing a transaction on the bus. The receiver receives such an indication from another agent, which prevents the agent itself from issuing a transaction on the bus.
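The following sketch models the stall indication at a purely behavioural level, assuming each agent tracks how many transactions it can still absorb and asserts a stall output when that budget is exhausted; all field and function names are invented for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* An agent that is running out of transaction-tracking resources asserts a
 * stall indication; other agents sample it and refrain from issuing. */
struct agent {
    int  in_flight;     /* transactions this agent is currently tracking */
    int  max_depth;     /* pipeline depth it can absorb */
    bool stall_out;     /* indication driven onto the bus */
};

static void update_stall(struct agent *a)
{
    a->stall_out = (a->in_flight >= a->max_depth);
}

static bool may_issue(const struct agent *others, int n)
{
    for (int i = 0; i < n; i++)
        if (others[i].stall_out)   /* any asserted stall blocks new issue */
            return false;
    return true;
}

int main(void)
{
    struct agent peers[2] = { { 2, 4, false }, { 4, 4, false } };
    update_stall(&peers[0]);
    update_stall(&peers[1]);
    printf("may issue: %s\n", may_issue(peers, 2) ? "yes" : "no");
    return 0;
}
```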
Abstract:
Atomicity of lock variables is preserved in a computer system in response to a request by a microprocessor for a bus lock access, whether the lock variable is split across two cache lines or is contained within a single cache line. A non-split lock bus access that can be satisfied by a cacheable region within the same cluster as the microprocessor issuing the access is allowed to complete, regardless of whether ownership of the next level bus is available. If the non-split lock access cannot be satisfied within the cluster, then ownership of the next level bus is obtained, if available, to satisfy the access. Similarly, a split lock access may complete if ownership of the second level bus can be obtained. However, a split lock access is aborted if ownership of the second level bus is not available, regardless of whether a cacheable region within the same cluster could satisfy the request.
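One way to picture these decision rules is the small C sketch below. The RETRY outcome for a non-split access that needs, but cannot obtain, next level bus ownership is an assumption (the summary leaves that case open), and all names are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

enum lock_result { LOCK_COMPLETE, LOCK_ABORT, LOCK_RETRY };

static enum lock_result handle_lock(bool split_across_lines,
                                    bool satisfiable_in_cluster,
                                    bool next_level_bus_available)
{
    if (split_across_lines)             /* split lock: bus ownership is mandatory */
        return next_level_bus_available ? LOCK_COMPLETE : LOCK_ABORT;
    if (satisfiable_in_cluster)         /* non-split, local hit: complete regardless */
        return LOCK_COMPLETE;
    /* Non-split but not satisfiable locally: needs the next level bus;
     * the retry-when-unavailable behaviour is an assumption. */
    return next_level_bus_available ? LOCK_COMPLETE : LOCK_RETRY;
}

int main(void)
{
    printf("split, bus unavailable:      %d (abort)\n",    handle_lock(true,  true,  false));
    printf("non-split, local, no bus:    %d (complete)\n", handle_lock(false, true,  false));
    printf("non-split, remote, with bus: %d (complete)\n", handle_lock(false, false, true));
    return 0;
}
```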
Abstract:
A system and method for providing a high performance symmetric arbitration protocol that includes support for priority agents. The bus arbitration protocol supports two classes of bus agents: symmetric agents and priority agents. The symmetric agents support fair, distributed arbitration using a round-robin algorithm. Each symmetric agent has a unique Agent ID assigned at reset. The algorithm arranges the symmetric agents in a circular order of priority. Each symmetric agent also maintains a bus ownership state of busy or idle and a Rotating ID that reflects the symmetric agent with the lowest priority in the next arbitration event. On an arbitration event, the symmetric agent with the highest priority becomes the symmetric owner. However, the symmetric owner is not necessarily the overall bus owner (i.e., a priority agent may be the overall bus owner). The symmetric owner is allowed to take ownership of the bus and issue a transaction on the bus provided no other action of higher priority is preventing the use of the bus. A symmetric owner can maintain ownership without re-arbitrating if the transaction is either a bus-locked or a burst access transaction. A priority agent has higher priority than the symmetric owner. Once a priority agent arbitrates for the bus, it prevents the symmetric owner from issuing any new transactions on the bus unless the new transaction is part of an ongoing bus-locked operation.
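As a behavioural sketch of the round-robin selection (not the signal-level protocol), the code below scans the symmetric agents in circular order starting just after the Rotating ID and lets a priority request pre-empt the symmetric owner; the agent count and all identifiers are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_SYMMETRIC 4   /* assumed number of symmetric agents */

/* Round-robin pick among the symmetric agents: the Rotating ID names the
 * lowest-priority agent, so scanning starts just after it in circular order.
 * Returns the winning Agent ID, or -1 if nobody is requesting. */
static int symmetric_arbitrate(const bool request[NUM_SYMMETRIC], int rotating_id)
{
    for (int off = 1; off <= NUM_SYMMETRIC; off++) {
        int id = (rotating_id + off) % NUM_SYMMETRIC;
        if (request[id])
            return id;
    }
    return -1;
}

int main(void)
{
    bool request[NUM_SYMMETRIC] = { true, false, true, false };
    int  rotating_id = 0;              /* agent 0 won the previous arbitration */
    bool priority_requesting = false;  /* a priority agent pre-empts the symmetric owner */

    int symmetric_owner = symmetric_arbitrate(request, rotating_id);
    if (symmetric_owner >= 0)
        rotating_id = symmetric_owner; /* winner becomes lowest priority next time */

    if (priority_requesting)
        printf("priority agent owns the bus; symmetric owner %d waits\n", symmetric_owner);
    else
        printf("symmetric owner %d owns the bus; next Rotating ID = %d\n",
               symmetric_owner, rotating_id);
    return 0;
}
```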
Abstract:
A computer system, and a method performed by it, having a mechanism for ensuring consistency of data among the various levels of caching in a multi-level hierarchical memory system. The cache consistency mechanism includes an external bus request queue and an associated mechanism, which cooperate to monitor and control the issuance of data requests, such as read requests and write requests, onto an external bus. The computer system includes one or more CPUs, each having this consistency mechanism.
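A toy version of such a request queue is sketched below, assuming a small fixed depth and a policy of holding back a request whose cache line is already outstanding; the structure, depth, and policy are illustrative guesses rather than the patented mechanism.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EBRQ_DEPTH 4   /* assumed queue depth */

/* Outstanding requests are recorded before issue so that later requests to
 * the same line can be held back until the earlier one completes. */
struct ebrq {
    uint64_t line_addr[EBRQ_DEPTH];
    bool     valid[EBRQ_DEPTH];
};

static bool ebrq_conflicts(const struct ebrq *q, uint64_t line_addr)
{
    for (int i = 0; i < EBRQ_DEPTH; i++)
        if (q->valid[i] && q->line_addr[i] == line_addr)
            return true;               /* same line already outstanding */
    return false;
}

static bool ebrq_issue(struct ebrq *q, uint64_t line_addr)
{
    if (ebrq_conflicts(q, line_addr))
        return false;                  /* hold the request back */
    for (int i = 0; i < EBRQ_DEPTH; i++)
        if (!q->valid[i]) {
            q->valid[i] = true;
            q->line_addr[i] = line_addr;
            return true;               /* issued onto the external bus */
        }
    return false;                      /* queue full */
}

int main(void)
{
    struct ebrq q = {0};
    printf("first issue:  %d\n", ebrq_issue(&q, 0x40));
    printf("second issue: %d\n", ebrq_issue(&q, 0x40));  /* blocked: conflict */
    return 0;
}
```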
Abstract:
A method and apparatus for performing bus transactions on the bus of a computer system. The present invention includes a method and apparatus for permitting out-of-order replies in a pipelined bus system. The out-of-order replies involve the exchange of tokens between the requesting agents and the responding agents in the computer system without the use of dedicated token buses.
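The token handling on the requester side might look roughly like the sketch below: a token received in place of data is recorded next to the pending request, and a later reply carrying that token, possibly out of order, is matched back to it. The table layout and every identifier are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_PENDING 4

/* Requester-side table of pending requests, keyed by the token handed back
 * in place of data. */
struct pending { uint64_t addr; int token; int valid; };

static struct pending table[MAX_PENDING];

static void record_token(uint64_t addr, int token)
{
    for (int i = 0; i < MAX_PENDING; i++)
        if (!table[i].valid) {
            table[i] = (struct pending){ addr, token, 1 };
            return;
        }
}

static void reply_arrived(int token)
{
    for (int i = 0; i < MAX_PENDING; i++)
        if (table[i].valid && table[i].token == token) {
            printf("reply token %d completes request for addr %#llx\n",
                   token, (unsigned long long)table[i].addr);
            table[i].valid = 0;
            return;
        }
}

int main(void)
{
    record_token(0x100, 3);
    record_token(0x200, 5);
    reply_arrived(5);   /* replies can return in any order */
    reply_arrived(3);
    return 0;
}
```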
Abstract:
A method and apparatus for cache memory replacement line identification provide a cache interface that serves as a communication interface between a cache memory and a controller for the cache memory. The interface includes an address bus, a data bus, and a status bus. The address bus transfers requested addresses from the controller to the cache memory. The data bus transfers data associated with requested addresses from the controller to the cache memory, and also transfers replacement line addresses from the cache memory to the controller. The status bus transfers status information associated with the requested addresses from the cache memory to the controller, indicating whether the requested addresses are contained in the cache memory. In one embodiment, the data bus also transfers cache line data associated with a requested address from the cache memory to the controller when the requested address hits the cache memory.
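The division of roles among the three buses can be mocked up as a single lookup call, as in the hypothetical sketch below: the status field stands in for the status bus, and the second field stands in for the data bus, carrying line data on a hit and the replacement (victim) line address on a miss. The toy hit function and all names are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One controller/cache exchange: the controller drives an address, the cache
 * returns hit/miss status, and on a miss it returns the replacement line's
 * address so the controller knows which line will be evicted. */
struct cache_response {
    bool     hit;              /* status bus: requested address present? */
    uint64_t data_or_victim;   /* data bus: line data on a hit, victim address on a miss */
};

static struct cache_response cache_lookup(uint64_t addr)
{
    /* Stand-in lookup: this toy model treats even-numbered lines as hits. */
    if (((addr >> 5) & 1) == 0)
        return (struct cache_response){ .hit = true,  .data_or_victim = 0xDA7A };
    /* On a miss, the victim (replacement line) address goes back over the data path. */
    return (struct cache_response){ .hit = false, .data_or_victim = addr ^ 0x1000 };
}

int main(void)
{
    struct cache_response r = cache_lookup(0x40);  /* even line: hit, line data returned */
    printf("hit=%d value=%#llx\n", r.hit, (unsigned long long)r.data_or_victim);
    r = cache_lookup(0x20);                        /* odd line: miss, victim address returned */
    printf("hit=%d victim=%#llx\n", r.hit, (unsigned long long)r.data_or_victim);
    return 0;
}
```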