Abstract:
A data processing system with a snooper that is capable of dynamically enabling and disabling its snooping capabilities (i.e., snoop detect and response). The snooper is connected to a bus controller via a plurality of interconnects, including a snooperPresent signal, a snoop response signal and a snoop detect signal. When the snooperPresent signal is asserted, subsequent snoop requests are sent to the snooper, and the snooper is polled for a snoop response. Each snooper is capable of responding at different times (i.e., each snooper operates with different snoop latencies). The bus controller individually tracks the snoop responses received from each snooper whose snooperPresent signal is asserted. Whenever the snooper wishes to deactivate its snooping operations, the snooper de-asserts the snooperPresent signal. The bus controller recognizes this as an indication that the snooper is unavailable. Thus, when the bus controller broadcasts subsequent snoop requests, the bus controller does not send the snoop request to the snooper.
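As a rough illustration of this enable/disable handshake, the following C++ sketch models a bus controller that skips any snooper whose presence indication is de-asserted. All type and member names (Snooper, BusController, snooper_present, broadcast_snoop) are invented for this sketch; the patent describes hardware signals and latencies, not software objects.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Illustrative behavioral model; names are assumptions, not from the patent.
struct Snooper {
    std::string name;
    bool snooper_present = true;   // asserted: willing to receive snoop requests
    bool snoop(unsigned addr) {    // returns a snoop response (hit/miss)
        return (addr % 2) == 0;    // placeholder response logic
    }
};

struct BusController {
    std::vector<Snooper*> snoopers;
    void broadcast_snoop(unsigned addr) {
        for (Snooper* s : snoopers) {
            if (!s->snooper_present)      // de-asserted: skip the unavailable snooper
                continue;
            bool hit = s->snoop(addr);    // poll each enabled snooper individually;
            std::cout << s->name          // real hardware also tracks per-snooper latency
                      << (hit ? " hit\n" : " miss\n");
        }
    }
};

int main() {
    Snooper a{"snooper0"}, b{"snooper1"};
    BusController bus{{&a, &b}};
    bus.broadcast_snoop(0x100);   // both snoopers are polled
    b.snooper_present = false;    // snooper1 deactivates its snooping
    bus.broadcast_snoop(0x104);   // only snooper0 receives the request
    return 0;
}
```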
Abstract:
A method for prioritizing snoop pushes in a data processing system that schedules requests within a request FIFO. Each new request that is received is placed in the last position of the request FIFO, and the request FIFO typically grants requests based solely on their order within the request FIFO. As prior requests are sequentially granted, subsequent requests move closer to the first position of the request FIFO. However, when a snoop push is received at the request FIFO, the snoop push is automatically inserted at the first position of the request FIFO, ahead of all yet-to-be-granted requests within the request FIFO.
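A minimal C++ sketch of this scheduling rule, using a deque to stand in for the request FIFO. The Request type and its fields are illustrative assumptions, not structures from the patent.

```cpp
#include <deque>
#include <iostream>
#include <string>

// Illustrative model: names are invented for this sketch.
struct Request {
    std::string id;
    bool is_snoop_push;
};

struct RequestFifo {
    std::deque<Request> q;

    void receive(const Request& r) {
        if (r.is_snoop_push)
            q.push_front(r);   // snoop push jumps to the first position,
        else                   // ahead of all not-yet-granted requests
            q.push_back(r);    // ordinary requests take the last position
    }

    void grant_next() {
        if (q.empty()) return;
        std::cout << "granting " << q.front().id << '\n';
        q.pop_front();         // requests are otherwise granted in FIFO order
    }
};

int main() {
    RequestFifo fifo;
    fifo.receive({"read A", false});
    fifo.receive({"write B", false});
    fifo.receive({"snoop push C", true});  // inserted ahead of A and B
    fifo.grant_next();   // grants "snoop push C"
    fifo.grant_next();   // grants "read A"
    fifo.grant_next();   // grants "write B"
    return 0;
}
```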
Abstract:
A method and system for ensuring orderly forward progress in granting snoop castout requests. Masters may include a tag (“request tag”) in their transfer requests to a bus macro. The request tag indicates the order of the request issued by the master. If the bus macro determines that the transfer request is snoopable, then the bus macro broadcasts a snoop request that includes the request tag. If a snoop controller determines that the address in the snoop request is a hit to a modified coherency granule in an associated cache, then the master associated with that snoop controller transmits a castout request to the bus macro that includes the request tag associated with the snoop request. The bus macro uses the request tag to determine whether the castout request is a response to the oldest in a series of pipelined snoop requests to be serviced.
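The tag comparison at the heart of this scheme can be sketched in C++ as follows: a castout request echoes the request tag of the snoop request it answers, and it may be serviced only when that tag matches the oldest outstanding pipelined snoop request. The queue-based bookkeeping and all names here are illustrative assumptions.

```cpp
#include <cstdint>
#include <deque>
#include <iostream>

// Illustrative model of the tag check; names are invented for this sketch.
struct SnoopRequest {
    std::uint32_t request_tag;   // order tag supplied by the requesting master
    std::uint64_t address;
};

struct BusMacro {
    std::deque<SnoopRequest> pipelined;   // outstanding snoop requests, oldest first

    // A castout request carries the tag of the snoop request it responds to.
    // It is serviced only when it corresponds to the oldest outstanding request.
    bool may_service_castout(std::uint32_t castout_tag) const {
        return !pipelined.empty() && pipelined.front().request_tag == castout_tag;
    }

    void retire_oldest() { if (!pipelined.empty()) pipelined.pop_front(); }
};

int main() {
    BusMacro bus;
    bus.pipelined.push_back({1, 0x1000});
    bus.pipelined.push_back({2, 0x2000});

    std::cout << bus.may_service_castout(2) << '\n';  // 0: tag 2 is not the oldest
    std::cout << bus.may_service_castout(1) << '\n';  // 1: tag 1 is the oldest
    bus.retire_oldest();
    std::cout << bus.may_service_castout(2) << '\n';  // 1: now tag 2 is the oldest
    return 0;
}
```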
Abstract:
The disclosure is directed to a weakly-ordered processing system and method for enforcing strongly-ordered memory access requests in a weakly-ordered processing system. The processing system includes a plurality of memory devices and a plurality of processors. Each of the processors is configured to generate memory access requests to one or more of the memory devices, with each of the memory access requests having an attribute that can be asserted to indicate a strongly-ordered request. The processing system further includes a bus interconnect configured to interface the processors to the memory devices, the bus interconnect being further configured to enforce ordering constraints on the memory access requests based on the attributes.
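A minimal sketch of a request carrying such an attribute, assuming a simple struct representation; the field names and the predicate are illustrative only and do not reflect the actual bus signal encoding.

```cpp
#include <cstdint>

// Illustrative request format; names are assumptions made for this sketch.
struct MemoryAccessRequest {
    std::uint64_t address;
    bool is_write;
    bool strongly_ordered;   // asserted by the processor to request ordering enforcement
};

// The bus interconnect inspects the attribute and applies ordering constraints
// only to requests that assert it; weakly-ordered requests pass through freely.
bool needs_ordering_constraint(const MemoryAccessRequest& req) {
    return req.strongly_ordered;
}

int main() {
    MemoryAccessRequest weak{0x1000, true, false};
    MemoryAccessRequest strong{0x2000, true, true};
    return (!needs_ordering_constraint(weak) && needs_ordering_constraint(strong)) ? 0 : 1;
}
```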
Abstract:
The disclosure is directed to a weakly-ordered processing system and method for enforcing strongly-ordered memory access requests in a weakly-ordered processing system. The processing system includes a plurality of memory devices and a plurality of processors. A bus interconnect is configured to interface the processors to the memory devices. The bus interconnect is further configured to enforce an ordering constraint for a strongly-ordered memory access request from an originating processor to a target memory device by sending a memory barrier to each of the other memory devices accessible by the originating processor, except for those memory devices that the bus interconnect can confirm have no unexecuted memory access requests from the originating processor.
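The barrier rule can be illustrated with the following C++ sketch: on a strongly-ordered request, a barrier is sent to every other accessible memory device unless the interconnect can confirm that device has no unexecuted requests from the originating processor. The per-processor/per-device outstanding counts and all names are assumptions made for illustration, not bookkeeping described in the patent.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <utility>

// Behavioral sketch; names and bookkeeping are illustrative assumptions.
struct BusInterconnect {
    // outstanding[{processor, device}] = unexecuted requests from that processor
    // still pending at that memory device
    std::map<std::pair<int, int>, int> outstanding;
    std::set<int> devices;   // memory devices reachable from the processors

    // Strongly-ordered request from `proc` targeting `target_dev`:
    // send a memory barrier to every *other* accessible device, unless the
    // interconnect can confirm that device holds no unexecuted requests
    // from the originating processor.
    void handle_strongly_ordered(int proc, int target_dev) {
        for (int dev : devices) {
            if (dev == target_dev) continue;
            auto it = outstanding.find({proc, dev});
            bool drained = (it == outstanding.end() || it->second == 0);
            if (!drained)
                std::cout << "barrier -> device " << dev << '\n';
        }
    }
};

int main() {
    BusInterconnect bus;
    bus.devices = {0, 1, 2};
    bus.outstanding[{7, 1}] = 3;   // processor 7 still has requests pending at device 1
    bus.outstanding[{7, 2}] = 0;   // device 2 is confirmed drained for processor 7
    bus.handle_strongly_ordered(7, 0);   // barrier sent only to device 1
    return 0;
}
```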
Abstract:
A semaphore operation manages exclusive access to a memory that is shared by a plurality of processing elements. Semaphore reservation status for exclusive access by a processing element is monitored by a memory controller. To clear an obsolete reservation status, a command signal is transmitted for a write operation to the memory while prohibiting update of the contents of the memory. The reservation status at the controller is changed from a reservation state to a non-reservation state in response to receipt of the command signal.
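A small C++ sketch of this reservation-clearing command, modeling the controller's state and an "inhibit update" indication on the write. The class, flag, and method names are invented for the sketch rather than taken from the patent.

```cpp
#include <cstdint>
#include <iostream>

// Illustrative model of the reservation-clearing command; names are assumptions.
struct MemoryController {
    bool reserved = false;         // reservation status tracked by the controller
    std::uint32_t memory_word = 0; // stand-in for the shared memory contents

    void set_reservation() { reserved = true; }

    // Write command carrying an "inhibit update" indication: the controller
    // clears the reservation status but the memory contents are left untouched.
    void write_command(std::uint32_t data, bool inhibit_update) {
        if (!inhibit_update)
            memory_word = data;    // a normal write updates memory
        reserved = false;          // either way the reservation is cleared
    }
};

int main() {
    MemoryController mc;
    mc.memory_word = 42;
    mc.set_reservation();                          // processing element holds a reservation
    mc.write_command(0, /*inhibit_update=*/true);  // clear an obsolete reservation
    std::cout << "reserved=" << mc.reserved
              << " memory=" << mc.memory_word << '\n';  // reserved=0 memory=42
    return 0;
}
```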
Abstract:
A processing system and method of communicating within the processing system are disclosed. The processing system may include a bus; a memory region coupled to the bus; and a plurality of processing components having access to the memory region over the bus, each of the processing components being configured to perform a semaphore operation to gain access to the memory region by simultaneously requesting a read operation and a write operation to a semaphore location over the bus.
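The combined read-and-write semaphore operation behaves like an atomic exchange: the old value is read and the new value written in a single indivisible transaction. The C++ sketch below uses std::atomic purely as an analogy for that behavior; it is not the bus mechanism described in the patent.

```cpp
#include <atomic>
#include <iostream>

// Software analogy for the simultaneous read+write semaphore operation.
std::atomic<int> semaphore{0};   // 0 = free, 1 = held

// The single transaction reads the old value and writes the new value
// indivisibly, which is what an atomic exchange provides in software.
bool try_acquire() {
    int old_value = semaphore.exchange(1);   // simultaneous read (old) + write (1)
    return old_value == 0;                   // acquired only if it was previously free
}

void release() { semaphore.store(0); }

int main() {
    std::cout << try_acquire() << '\n';  // 1: semaphore was free, now held
    std::cout << try_acquire() << '\n';  // 0: already held, acquisition fails
    release();
    std::cout << try_acquire() << '\n';  // 1: free again after release
    return 0;
}
```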
Abstract:
Techniques and methods are used to control allocations to a higher level cache of cache lines displaced from a lower level cache. The allocations of the displaced cache lines are prevented for displaced cache lines that are determined to be redundant in the next level cache, whereby castouts are controlled. To such ends, a line is selected to be displaced in a lower level cache. Information associated with the selected line is identified which indicates that the selected line is present in a higher level cache. An allocation of the selected line in the higher level cache is prevented based on the identified information.
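A behavioral C++ sketch of this castout filter: when the line selected for displacement is known to already be present in the next level, the allocation there is suppressed. The "present in L2" bit, the dirty-line write-back exception, and all names are assumptions made for illustration.

```cpp
#include <cstdint>
#include <iostream>

// Illustrative model; names and fields are assumptions, not from the patent.
struct L1Line {
    std::uint64_t tag;
    bool dirty;
    bool present_in_l2;   // tracked information: line already held by the next level
};

// Decide whether a line displaced from L1 should be allocated (cast out) into L2.
// If the line is known to be present in L2 and holds no modified data, the
// allocation would be redundant, so the castout is prevented.
bool should_castout_to_l2(const L1Line& victim) {
    if (victim.dirty) return true;          // modified data must be written back
    return !victim.present_in_l2;           // clean and already present: skip allocation
}

int main() {
    L1Line redundant{0x40, /*dirty=*/false, /*present_in_l2=*/true};
    L1Line useful{0x80, /*dirty=*/false, /*present_in_l2=*/false};
    std::cout << should_castout_to_l2(redundant) << '\n';  // 0: castout prevented
    std::cout << should_castout_to_l2(useful) << '\n';     // 1: castout proceeds
    return 0;
}
```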
Abstract:
Efficient techniques are described for controlling ordered accesses in a weakly ordered storage system. A stream of memory requests is split into two or more streams of memory requests and a memory access counter is incremented for each memory request. A memory request requiring ordered memory accesses is identified in one of the two or more streams of memory requests. The memory request requiring ordered memory accesses is stalled upon determining a previous memory request from a different stream of memory requests is pending. The memory access counter is decremented for each memory request guaranteed to complete. A count value in the memory access counter that is different from an initialized state of the memory access counter indicates there are pending memory requests. The memory request requiring ordered memory accesses is processed upon determining there are no further pending memory requests.
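The counter rule can be sketched in C++ as a per-stream pending count: an ordered request in one stream stalls while the other stream's counter differs from its initialized (zero) state, and proceeds once that counter returns to zero. The class and method names are invented for the sketch, and the two streams are modeled simply as integer identifiers.

```cpp
#include <iostream>

// Minimal single-threaded sketch; names are illustrative assumptions.
struct OrderingTracker {
    int pending[2] = {0, 0};   // per-stream memory access counters, initialized to 0

    void issue(int stream)    { ++pending[stream]; }   // increment on each request
    void complete(int stream) { --pending[stream]; }   // decrement when guaranteed to complete

    // An ordered request in `stream` must stall while the *other* stream's
    // counter differs from its initialized (zero) state, i.e. requests are pending.
    bool ordered_request_may_proceed(int stream) const {
        return pending[1 - stream] == 0;
    }
};

int main() {
    OrderingTracker t;
    t.issue(0);                                     // a request is pending in stream 0
    std::cout << t.ordered_request_may_proceed(1)   // 0: ordered request in stream 1 stalls
              << '\n';
    t.complete(0);                                  // stream 0 request guaranteed to complete
    std::cout << t.ordered_request_may_proceed(1)   // 1: ordered request may now proceed
              << '\n';
    return 0;
}
```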
Abstract:
Techniques and methods are used to reduce allocations to a higher level cache of cache lines displaced from a lower level cache. The allocations of the displaced cache lines are prevented for displaced cache lines that are determined to be redundant in the next level cache, whereby castouts are reduced. To such ends, a line is selected to be displaced in a lower level cache. Information associated with the selected line is identified which indicates that the selected line is present in a higher level cache. An allocation of the selected line in the higher level cache is prevented based on the identified information. Preventing an allocation of the selected line saves power that would be associated with the allocation.