Abstract:
In an embodiment, a processor may include an execution engine to execute a plurality of instructions, a memory to store a tagged data structure comprising a plurality of entries, and an eviction circuit. The eviction circuit may be configured to: generate a pseudo-random number responsive to an eviction request for the tagged data structure; in response to a determination that the pseudo-random number is outside of a valid eviction range for the plurality of entries, generate an alternative identifier by rotating through the valid eviction range, the valid eviction range comprising the range of numbers that validly identify victim entries of the tagged data structure; and evict a victim entry from the tagged data structure, the victim entry being associated with the alternative identifier. Other embodiments are described and claimed.
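A minimal C sketch of the victim-selection flow described above. The simple LCG generator and the modular-reduction "rotation" back into the valid range are assumptions; the abstract does not specify the hardware PRNG or the exact rotation scheme.

```c
#include <stdint.h>

static uint32_t lcg_state = 0x12345678u;

/* Simple LCG stands in for the hardware pseudo-random number generator. */
static uint32_t prng_next(void)
{
    lcg_state = lcg_state * 1664525u + 1013904223u;
    return lcg_state;
}

/* Pick a victim for a tagged structure whose valid eviction range is
 * 0 .. valid_range-1. If the raw pseudo-random number falls outside that
 * range, rotate it back in (modeled here as modular reduction, an assumed
 * rotation scheme) to form the alternative identifier. */
uint32_t select_victim(uint32_t valid_range)
{
    uint32_t n = prng_next();
    if (n >= valid_range)          /* outside the valid eviction range */
        n = n % valid_range;       /* alternative identifier via rotation */
    return n;                      /* identifier of the entry to evict */
}
```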
Abstract:
A processor including a core and a cache-coherent memory fabric coupled to the core. The fabric has a primary cache agent (PCA) configured to provide a primary access path, and a secondary cache agent (SCA) configured to provide a secondary access path that is redundant to the primary access path. The PCA has a coherency controller configured to maintain data in the secondary access path coherent with data in the primary access path.
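A toy C model of the redundant access paths, assuming the coherency controller keeps the SCA coherent by mirroring every write into the secondary path; that write-through mirroring policy and the flat line array are assumptions, not the patented mechanism.

```c
#include <stdint.h>

#define FABRIC_LINES 64

/* Toy model: each cache agent holds its own copy of the cached lines. */
typedef struct {
    uint64_t data[FABRIC_LINES];
} cache_agent_t;

typedef struct {
    cache_agent_t pca;   /* primary cache agent / primary access path */
    cache_agent_t sca;   /* secondary cache agent / redundant path    */
} coherent_fabric_t;

/* The PCA's coherency controller keeps the secondary path coherent with
 * the primary path; mirroring each write is an assumed policy. */
void fabric_write(coherent_fabric_t *f, unsigned line, uint64_t value)
{
    f->pca.data[line] = value;   /* primary access path */
    f->sca.data[line] = value;   /* coherency controller mirrors it */
}

/* Reads may be served by either path; because the SCA is redundant, a
 * requester can fall back to it if the primary path is unavailable. */
uint64_t fabric_read(const coherent_fabric_t *f, unsigned line, int use_secondary)
{
    return use_secondary ? f->sca.data[line] : f->pca.data[line];
}
```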
Abstract:
In one embodiment, an apparatus includes an interconnect to couple a plurality of processing circuits. The interconnect may include a pipe stage circuit coupled between a first processing circuit and a second processing circuit. This pipe stage circuit may include: a pipe stage component having a first input to receive a signal via the interconnect and a first output to output the signal; and a selection circuit having a first input to receive the signal from the first output of the pipe stage component and a second input to receive the signal via a bypass path, where the selection circuit is dynamically controllable to output the signal received from the first output of the pipe stage component or the signal received via the bypass path. Other embodiments are described and claimed.
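A cycle-level C sketch of the pipe stage circuit: a register models the pipe stage component and a mux models the selection circuit. The single-bit `bypass_en` control is an assumed encoding of "dynamically controllable".

```c
#include <stdint.h>

/* One pipe stage on the interconnect: the pipe stage component is a
 * register; the selection circuit chooses between its registered output
 * and the un-registered bypass path. */
typedef struct {
    uint32_t reg;   /* state held by the pipe stage component */
} pipe_stage_t;

/* Advance one clock: latch the incoming signal, then select the output.
 * With bypass enabled the signal skips the register entirely, trading a
 * cycle of latency for a longer combinational timing path. */
uint32_t pipe_stage_clock(pipe_stage_t *s, uint32_t signal_in, int bypass_en)
{
    uint32_t registered = s->reg;   /* first output of the pipe stage  */
    s->reg = signal_in;             /* first input latches the signal  */
    return bypass_en ? signal_in    /* second mux input: bypass path   */
                     : registered;  /* first mux input: registered path */
}
```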
Abstract:
In an embodiment, an apparatus comprises: a first component to perform coherent operations; and a coherent fabric logic coupled to the first component via a first component interface. The coherent fabric logic may be configured to perform full coherent fabric functionality for coherent communications between the first component and a second component coupled to the coherent fabric logic. The first component may include a packetization logic to communicate packets with the coherent fabric logic, but need not itself include coherent interconnect interface logic to perform coherent fabric functionality. Other embodiments are described and claimed.
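A C sketch of the division of labor: the first component only packetizes requests, while the coherent fabric logic interprets them and runs the coherence protocol on its behalf. The packet fields and the MESI-style line state are assumptions; the abstract says only that the fabric performs the full coherent functionality.

```c
#include <stdint.h>

/* Assumed packet format: the first component only builds and sends these;
 * it contains no coherent interconnect interface logic of its own. */
typedef enum { PKT_READ, PKT_WRITE } pkt_op_t;

typedef struct {
    pkt_op_t op;
    uint64_t addr;
    uint64_t payload;
} packet_t;

/* Packetization logic in the first component: wrap a request, nothing more. */
packet_t packetize(pkt_op_t op, uint64_t addr, uint64_t payload)
{
    packet_t p = { op, addr, payload };
    return p;
}

/* Coherent fabric logic: all coherence handling lives here. The MESI-style
 * state transitions below are an assumed illustration. */
typedef enum { LINE_I, LINE_S, LINE_M } line_state_t;

void fabric_handle(packet_t p, line_state_t *state)
{
    if (p.op == PKT_WRITE) {
        /* e.g. invalidate sharers at the second component, then go Modified */
        *state = LINE_M;
    } else if (*state == LINE_I) {
        /* fetch the line from the second component, install as Shared */
        *state = LINE_S;
    }
}
```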
Abstract:
Methods, systems, and apparatus for implementing low-latency interconnect switches between CPUs, and associated protocols. CPUs are configured to be installed on a main board including multiple CPU sockets linked in communication via CPU socket-to-socket interconnect links forming a CPU socket-to-socket ring interconnect. The CPUs are also configured to transfer data between one another by sending data via the CPU socket-to-socket interconnects. Data may be transferred using a packetized protocol, such as QPI, and the CPUs may also be configured to support coherent memory transactions across CPUs.
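A C sketch of shortest-direction routing on the socket-to-socket ring. The socket numbering, hop-count comparison, and clockwise tie-break are assumptions used only to make ring forwarding concrete.

```c
/* Choose a direction around the socket-to-socket ring. Sockets are
 * numbered 0 .. n-1 in ring order (an assumed numbering). Returns +1 for
 * clockwise, -1 for counter-clockwise, picking the fewer-hop direction;
 * ties go clockwise (an assumed tie-break). */
int ring_direction(int src, int dst, int n_sockets)
{
    int cw_hops  = (dst - src + n_sockets) % n_sockets;
    int ccw_hops = n_sockets - cw_hops;
    return (cw_hops <= ccw_hops) ? +1 : -1;
}

/* Next socket along the chosen direction; a QPI-style packet would be
 * forwarded hop by hop over the socket-to-socket interconnect links. */
int ring_next_hop(int src, int dir, int n_sockets)
{
    return (src + dir + n_sockets) % n_sockets;
}
```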
Abstract:
In accordance with embodiments disclosed herein, systems and methods are provided for a virtual shared cache mechanism. A processing device includes a plurality of clusters allocated into a virtual private shared cache. Each of the clusters includes a plurality of cores and a plurality of cache slices co-located with the plurality of cores. The processing device also includes a virtual shared cache comprising the plurality of clusters, such that the cache data in the plurality of cache slices is shared among the plurality of clusters.
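A C sketch of how slice capacity can be shared across clusters: an address hashes to a home slice in any cluster, so every cluster's slices back the virtual shared cache. The cluster/slice counts, line size, and low-bit hash are all assumptions.

```c
#include <stdint.h>

#define N_CLUSTERS         4   /* assumed topology */
#define SLICES_PER_CLUSTER 8   /* cache slices co-located with the cores */

/* Map a physical address to the (cluster, slice) that homes it in the
 * virtual shared cache. Hashing addresses across every slice of every
 * cluster is what makes the capacity shared among clusters; the specific
 * hash (low line-address bits) is an assumption. */
void vsc_home(uint64_t addr, unsigned *cluster, unsigned *slice)
{
    uint64_t line = addr >> 6;   /* 64-byte cache line, assumed */
    unsigned idx  = (unsigned)(line % (N_CLUSTERS * SLICES_PER_CLUSTER));
    *cluster = idx / SLICES_PER_CLUSTER;
    *slice   = idx % SLICES_PER_CLUSTER;
}
```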
Abstract:
An apparatus and method are described for distributed snoop filtering. For example, one embodiment of a processor comprises: a plurality of cores to execute instructions and process data; first snoop logic to track a first plurality of cache lines stored in a mid-level cache (“MLC”) accessible by one or more of the cores, the first snoop logic to allocate entries for cache lines stored in the MLC and to deallocate entries for cache lines evicted from the MLC, wherein at least some of the cache lines evicted from the MLC are retained in a level 1 (L1) cache; and second snoop logic to track a second plurality of cache lines stored in a non-inclusive last level cache (NI LLC), the second snoop logic to allocate entries in the NI LLC for cache lines evicted from the MLC and to deallocate entries for cache lines stored in the MLC, wherein the second snoop logic is to store and maintain a first set of core valid bits to identify cores containing copies of the cache lines stored in the NI LLC.
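A C sketch of an entry in the second snoop logic, with the core valid bits the abstract describes: allocate when a line is evicted from the MLC into the NI LLC (recording which cores kept L1 copies), deallocate when the line returns to the MLC, and snoop only cores whose bit is set. Field widths and the entry layout are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Entry in the second snoop logic, tracking a line held in the
 * non-inclusive LLC (NI LLC). The core-valid bitmask records which cores
 * may still hold a copy (e.g. in an L1) after the MLC eviction. */
typedef struct {
    uint64_t tag;
    uint16_t core_valid;   /* bit i set => core i may hold the line */
    bool     valid;
} nillc_sf_entry_t;

/* Allocate on MLC eviction: the line moves into the NI LLC, preserving
 * the bits for cores that retained L1 copies. */
void sf_alloc(nillc_sf_entry_t *e, uint64_t tag, uint16_t l1_holders)
{
    e->tag = tag;
    e->core_valid = l1_holders;
    e->valid = true;
}

/* Deallocate when the line is filled back into the MLC, since the first
 * snoop logic tracks it from then on. */
void sf_dealloc(nillc_sf_entry_t *e) { e->valid = false; }

/* A snoop need only be sent to cores whose valid bit is set. */
bool sf_needs_snoop(const nillc_sf_entry_t *e, unsigned core)
{
    return e->valid && (e->core_valid & (1u << core));
}
```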
Abstract:
A physical layer (PHY) is coupled to a serial, differential link that is to include a number of lanes. The PHY includes a transmitter and a receiver to be coupled to each lane of the number of lanes. The transmitter coupled to each lane is configured to embed a clock with data to be transmitted over the lane, and the PHY periodically issues a blocking link state (BLS) request to cause an agent to enter a BLS to hold off link layer flit transmission for a duration. The PHY utilizes the serial, differential link during the duration for a PHY-associated task selected from a group including an in-band reset, an entry into a low power state, and an entry into a partial width state.
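A C sketch of the periodic BLS scheduling: at the start of each period the PHY holds off link layer flits for a fixed duration and may use the link for one of the listed PHY tasks. The fixed interval/duration counters are assumptions; the abstract says only that BLS requests are issued periodically for a duration.

```c
/* PHY-associated tasks the abstract enumerates for the BLS window. */
typedef enum {
    PHY_INBAND_RESET,
    PHY_ENTER_LOW_POWER,
    PHY_ENTER_PARTIAL_WIDTH
} phy_task_t;

/* Returns nonzero if link layer flit transmission must be held off in
 * this flit slot because the PHY is inside a blocking link state (BLS)
 * window; the window occupies the first `duration` slots of each
 * `interval`-slot period (an assumed schedule). */
int bls_blocked(unsigned long slot, unsigned long interval, unsigned long duration)
{
    return (slot % interval) < duration;
}
```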
Abstract:
In one embodiment, the present invention includes a multicore processor having a plurality of cores, a shared cache memory, an integrated input/output (IIO) module to interface between the multicore processor and at least one IO device coupled to the multicore processor, and a caching agent to perform cache coherency operations for the plurality of cores and the IIO module. Other embodiments are described and claimed.
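A C sketch of the unifying idea: one caching agent performs coherency operations for cores and the IIO module alike, so device traffic can hit the shared cache coherently. The tiny direct-mapped tag array and the request fields are assumptions used only to make the entry point concrete.

```c
#include <stdint.h>
#include <stdbool.h>

/* Requesters the caching agent serves: cores and the integrated
 * input/output (IIO) module are peers from a coherency standpoint. */
typedef enum { REQ_CORE, REQ_IIO } requester_t;

#define LLC_SETS 256

static uint64_t llc_tag[LLC_SETS];
static bool     llc_valid[LLC_SETS];

/* Unified coherency entry point: an IIO read is handled exactly like a
 * core read. Snooping other owners is elided from this sketch. */
bool caching_agent_read(requester_t who, uint64_t addr)
{
    (void)who;                        /* source does not change the protocol */
    unsigned set = (unsigned)((addr >> 6) % LLC_SETS);
    uint64_t tag = addr >> 6;
    if (llc_valid[set] && llc_tag[set] == tag)
        return true;                  /* shared-cache hit */
    llc_tag[set] = tag;               /* miss: fill from memory (elided) */
    llc_valid[set] = true;
    return false;
}
```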
Abstract:
A method and apparatus to monitor architectural events are disclosed. The architectural events are linked together via a push bus mechanism, with each architectural event having a designated time slot. There is at least one branch of the push bus in each core, and each branch may monitor all the architectural events of one core. All the data collected from the events by the push bus is then sent to a power control unit.
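A C sketch of the time-slotted push bus: each architectural event owns a slot, and each cycle the per-core branch pushes the current slot's event data toward the power control unit. The event count and round-robin slot rotation are assumed; the abstract specifies only that each event has a designated time slot.

```c
#include <stdint.h>

#define N_EVENTS 8   /* assumed number of architectural events per core */

/* Per-core branch of the push bus: one counter per architectural event. */
typedef struct {
    uint64_t event_count[N_EVENTS];
} push_bus_branch_t;

/* Each architectural event has a designated time slot; in slot t the
 * branch pushes event t's data onto the bus toward the power control
 * unit (PCU). Returns the pushed value and reports which event it was. */
uint64_t push_bus_tick(const push_bus_branch_t *b, unsigned long cycle,
                       unsigned *event_id)
{
    unsigned slot = (unsigned)(cycle % N_EVENTS);   /* assumed schedule */
    *event_id = slot;
    return b->event_count[slot];   /* value sent to the PCU this slot */
}
```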