Abstract:
A request is received that is to reference a first agent and to request a particular line of memory to be cached in an exclusive state. A snoop request is sent that is intended for one or more other agents. A snoop response is received that is to reference a second agent, the snoop response to include a writeback to memory of a modified cache line that is to correspond to the particular line of memory. A completion is sent addressed to the first agent, wherein the completion is to include data of the particular line of memory based on the writeback.
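To make the message flow concrete, here is a minimal C model of the sequence above. All identifiers (agent_t, home_rd_excl, and so on) are hypothetical illustrations, not names from the patent:

/* Toy model: exclusive-ownership request, snoop, implicit writeback,
 * completion carrying the written-back data. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 64

typedef struct {
    int id;
    int has_line_modified;          /* line held in M state? */
    uint8_t line[LINE_SIZE];
} agent_t;

typedef struct {
    uint8_t mem[LINE_SIZE];         /* backing memory for one line */
} home_t;

/* Home handles an exclusive-ownership read from 'req': snoop the peer;
 * if the peer holds the line modified, its snoop response carries a
 * writeback, and the completion sent to the requester is built from
 * that written-back data. */
static void home_rd_excl(home_t *home, agent_t *req, agent_t *peer)
{
    if (peer->has_line_modified) {
        /* Snoop response with implicit writeback to memory. */
        memcpy(home->mem, peer->line, LINE_SIZE);
        peer->has_line_modified = 0;
        printf("agent %d wrote back modified line\n", peer->id);
    }
    /* Completion addressed to the requester includes the line data. */
    memcpy(req->line, home->mem, LINE_SIZE);
    req->has_line_modified = 1;     /* requester now owns the line */
    printf("completion with data sent to agent %d\n", req->id);
}

int main(void)
{
    home_t home = { .mem = { 0 } };
    agent_t a = { .id = 1 }, b = { .id = 2, .has_line_modified = 1 };
    b.line[0] = 0x42;               /* dirty data only agent 2 holds */
    home_rd_excl(&home, &a, &b);
    printf("agent 1 sees 0x%02x\n", a.line[0]);
    return 0;
}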
Abstract:
In one embodiment, a processor includes a caching home agent (CHA) that is coupled to a core and a cache memory and that includes a cache controller having a cache pipeline and a home agent having a home agent pipeline. The CHA may: receive, in the home agent pipeline, information from an external agent responsive to a miss for data in the cache memory; issue a global ordering signal from the home agent pipeline to a requester of the data to inform the requester of receipt of the data; and report issuance of the global ordering signal to the cache pipeline, to prevent the cache pipeline from issuing a global ordering signal to the requester. Other embodiments are described and claimed.
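A rough C sketch of the global-ordering hand-off described above; the shared go_sent flag and the function names are hypothetical:

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int requester;
    bool go_sent;          /* set once either pipeline issues GO */
} cha_entry_t;

static void home_pipeline_data_arrival(cha_entry_t *e)
{
    /* Data arrived from the external agent after the miss: issue the
     * global ordering (GO) signal directly from the home agent
     * pipeline and report that to the cache pipeline. */
    printf("home pipeline: GO -> requester %d\n", e->requester);
    e->go_sent = true;
}

static void cache_pipeline_complete(cha_entry_t *e)
{
    if (!e->go_sent) {     /* issue GO only if nobody has yet */
        printf("cache pipeline: GO -> requester %d\n", e->requester);
        e->go_sent = true;
    }
}

int main(void)
{
    cha_entry_t e = { .requester = 7, .go_sent = false };
    home_pipeline_data_arrival(&e);  /* GO issued here... */
    cache_pipeline_complete(&e);     /* ...so suppressed here */
    return 0;
}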
Abstract:
An apparatus and method are described for performing a cache line write back operation. For example, one embodiment of a method comprises: initiating a cache line write back operation directed to a particular linear address; determining if a dirty cache line identified by the linear address exists at any cache of a cache hierarchy comprised of a plurality of cache levels; writing back the dirty cache line to memory if the dirty cache line exists in one of the caches; and responsively maintaining or placing the dirty cache line in an exclusive state in at least a first cache of the hierarchy.
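The operation described closely resembles the x86 CLWB instruction, which writes a dirty line back to memory while allowing it to remain cached. A minimal usage sketch in C, assuming a CPU and compiler with CLWB support (e.g. gcc -mclwb):

#include <immintrin.h>
#include <stdint.h>

static uint64_t record;

static void flush_record(volatile uint64_t *rec)
{
    *rec = 0xdeadbeef;          /* dirty the cache line */
    _mm_clwb((void *)rec);      /* write it back; line may stay cached */
    _mm_sfence();               /* order the write-back */
}

int main(void)
{
    flush_record(&record);
    return 0;
}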
Abstract:
An apparatus and method for reducing or eliminating writeback operations. For example, one embodiment of a method comprises: detecting a first operation associated with a cache line at a first requestor cache; detecting that the cache line exists in a first cache in a modified (M) state; forwarding the cache line from the first cache to the first requestor cache and storing the cache line in the first requestor cache in a second modified (M′) state; detecting a second operation associated with the cache line at a second requestor; responsively forwarding the cache line from the first requestor cache to the second requestor cache and storing the cache line in the second requestor cache in an owned (O) state if the cache line has not been modified in the first requestor cache; and setting the cache line to a shared (S) state in the first requestor cache.
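A toy C sketch of the described state transitions. The state names follow the abstract; the transition details, including the source line's state after the first forward, are simplified assumptions:

#include <stdio.h>

typedef enum { INV, M, M_PRIME, O, S } state_t;

static const char *name(state_t s)
{
    static const char *n[] = { "I", "M", "M'", "O", "S" };
    return n[s];
}

int main(void)
{
    state_t first_cache = M, req1 = INV, req2 = INV;

    /* First operation: the M line is forwarded and installed as M' at
     * the first requestor; no writeback to memory occurs. The abstract
     * leaves the source line's new state unspecified; S is assumed. */
    req1 = M_PRIME;
    first_cache = S;

    /* Second operation: req1's copy was never modified, so it is
     * forwarded as O to the second requestor and req1 is demoted to S,
     * again with no writeback. */
    int modified_in_req1 = 0;
    if (!modified_in_req1) {
        req2 = O;
        req1 = S;
    }
    printf("first=%s req1=%s req2=%s\n",
           name(first_cache), name(req1), name(req2));
    return 0;
}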
Abstract:
In an embodiment, a processor includes one or more cores, and a distributed caching home agent (including portions associated with each core). Each portion includes a cache controller to receive a read request for data and, responsive to the data not being present in a cache memory associated with the cache controller, to issue a memory request to a memory controller to request the data in parallel with communication of the memory request to a home agent, where the home agent is to receive the memory request from the cache controller and to reserve an entry for the memory request. Other embodiments are described and claimed.
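A hypothetical C sketch of the parallel issue: on a miss, the handler launches the memory read and the home agent reservation together rather than serializing them (all names and the tracker size are illustrative):

#include <stdio.h>

#define TRACKER_SLOTS 4

typedef struct { int reserved[TRACKER_SLOTS]; } home_agent_t;

static int ha_reserve(home_agent_t *ha, int req_id)
{
    for (int i = 0; i < TRACKER_SLOTS; i++)
        if (!ha->reserved[i]) { ha->reserved[i] = req_id; return i; }
    return -1;   /* no entry free; real hardware would stall or retry */
}

static void mc_issue_read(int req_id, unsigned long addr)
{
    printf("MC: read for req %d @ 0x%lx issued\n", req_id, addr);
}

static void cache_miss(home_agent_t *ha, int req_id, unsigned long addr)
{
    /* Both actions launched in the same step; neither waits on the
     * other, which is the latency saving the abstract describes. */
    mc_issue_read(req_id, addr);
    int slot = ha_reserve(ha, req_id);
    printf("HA: req %d reserved tracker slot %d\n", req_id, slot);
}

int main(void)
{
    home_agent_t ha = { { 0 } };
    cache_miss(&ha, 1, 0x1000);
    return 0;
}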
Abstract:
A processor includes a cache-side address monitor unit that corresponds to a first cache portion of a distributed cache and that has a total number of cache-side address monitor storage locations less than a total number of logical processors of the processor. Each cache-side address monitor storage location is to store an address to be monitored. A core-side address monitor unit corresponds to a first core and has a same number of core-side address monitor storage locations as a number of logical processors of the first core. Each core-side address monitor storage location is to store an address and a monitor state for a different corresponding logical processor of the first core. A cache-side address monitor storage overflow unit corresponds to the first cache portion, and is to enforce an address monitor storage overflow policy when no unused cache-side address monitor storage location is available to store an address to be monitored.
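A rough C model of the cache-side monitor table and its overflow handling; the slot count, names, and the conservative wake-up fallback are assumptions for illustration:

#include <stdio.h>
#include <stdbool.h>

#define CACHE_SIDE_SLOTS 2          /* fewer than logical processors */

typedef struct {
    unsigned long addr[CACHE_SIDE_SLOTS];
    bool used[CACHE_SIDE_SLOTS];
    bool overflow_mode;             /* overflow policy armed */
} cache_side_monitor_t;

static void monitor(cache_side_monitor_t *m, unsigned long addr)
{
    for (int i = 0; i < CACHE_SIDE_SLOTS; i++) {
        if (!m->used[i]) {
            m->used[i] = true;
            m->addr[i] = addr;
            printf("monitoring 0x%lx in slot %d\n", addr, i);
            return;
        }
    }
    /* No free slot: enforce the overflow policy, e.g. treat every
     * write to this cache portion as a potential monitor hit. */
    m->overflow_mode = true;
    printf("overflow: conservative wake-ups enabled\n");
}

int main(void)
{
    cache_side_monitor_t m = { 0 };
    monitor(&m, 0x1000);
    monitor(&m, 0x2000);
    monitor(&m, 0x3000);            /* triggers overflow handling */
    return 0;
}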
Abstract:
A method and apparatus to monitor architectural events are disclosed. The architectural events are linked together via a push bus mechanism, with each architectural event having a designated time slot. There is at least one branch of the push bus in each core, and each branch may monitor all of the architectural events of one core. All the data collected from the events by the push bus is then sent to a power control unit.
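A minimal C sketch of the time-slotted push, with hypothetical event names: one event value is pushed per slot, and the collected set is handed to the power control unit:

#include <stdio.h>

#define NUM_EVENTS 4

static const char *event_name[NUM_EVENTS] = {
    "instr_retired", "l2_miss", "branch_mispred", "stall_cycles"
};

int main(void)
{
    unsigned counters[NUM_EVENTS] = { 1200, 34, 7, 410 };
    unsigned pcu_buffer[NUM_EVENTS];

    /* One bus "cycle" per event: slot index == event index, so no
     * arbitration is needed on the push bus. */
    for (int slot = 0; slot < NUM_EVENTS; slot++) {
        pcu_buffer[slot] = counters[slot];      /* push in its slot */
        printf("slot %d: %s = %u\n", slot, event_name[slot],
               pcu_buffer[slot]);
    }
    printf("all %d event values delivered to PCU\n", NUM_EVENTS);
    return 0;
}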
Abstract:
A physical layer (PHY) is coupled to a serial, differential link that is to include a number of lanes. The PHY includes a transmitter and a receiver to be coupled to each lane of the number of lanes. The transmitter coupled to each lane is configured to embed a clock with data to be transmitted over the lane, and the PHY periodically issues a blocking link state (BLS) request to cause an agent to enter a BLS to hold off link layer flit transmission for a duration. The PHY utilizes the serial, differential link during the duration for a PHY-associated task selected from a group including an in-band reset, an entry into a low power state, and an entry into a partial width state.
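A hypothetical C sketch of the periodic blocking link state: at a fixed interval the PHY holds off link-layer flits for a short window and uses the link for one of its own tasks (the interval and window lengths are made up):

#include <stdio.h>

#define BLS_INTERVAL 8
#define BLS_WINDOW   2

typedef enum { TASK_INBAND_RESET, TASK_LOW_POWER, TASK_PARTIAL_WIDTH } phy_task_t;

int main(void)
{
    phy_task_t task = TASK_PARTIAL_WIDTH;

    for (int cycle = 0; cycle < 20; cycle++) {
        /* First BLS_WINDOW cycles of each interval form the BLS. */
        int in_bls = (cycle % BLS_INTERVAL) < BLS_WINDOW;
        if (in_bls)
            printf("cycle %2d: BLS, flits blocked, PHY task %d\n",
                   cycle, (int)task);
        else
            printf("cycle %2d: link layer flit transmitted\n", cycle);
    }
    return 0;
}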
Abstract:
Methods and apparatus relating to a memory-side prefetch architecture for improved memory bandwidth are described. In an embodiment, logic circuitry transmits a Direct Memory Controller Prefetch (DMCP) request to cause a prefetch of data from memory. A memory controller receives the DMCP request and issues a plurality of read operations to the memory in response to the DMCP request. Data read from the memory is stored in a storage structure in response to the plurality of read operations. Other embodiments are also disclosed and claimed.
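An illustrative C sketch of the DMCP flow, with hypothetical names and sizes: one prefetch request fans out into a burst of line reads whose results land in a memory-side buffer:

#include <stdio.h>

#define LINE  64
#define BURST 4

static unsigned long buffer[BURST];     /* memory-side storage structure */

static unsigned long mem_read(unsigned long addr)
{
    return addr ^ 0xabcd;               /* stand-in for a DRAM read */
}

/* One DMCP request -> BURST sequential line reads into the buffer. */
static void dmcp_request(unsigned long base)
{
    for (int i = 0; i < BURST; i++) {
        unsigned long addr = base + (unsigned long)i * LINE;
        buffer[i] = mem_read(addr);
        printf("prefetched line @ 0x%lx into buffer[%d]\n", addr, i);
    }
}

int main(void)
{
    dmcp_request(0x4000);
    return 0;
}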