Abstract:
A shared mesh comprises a mesh station that couples to at least a first core component and a second core component and is shared by them. The mesh station includes a logic unit. A memory is coupled to the mesh station.
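As an illustration only, the following Python sketch models one way such a station might behave: two cores share one station, and the station's logic unit arbitrates their accesses to the coupled memory. All names (MeshStation, Core, arbitrate) and the round-robin policy are assumptions for illustration, not taken from the abstract.

```python
# Hypothetical sketch of a mesh station shared by two core components.
# The round-robin "logic unit" is an assumed policy, not the patent's.

class Core:
    def __init__(self, name):
        self.name = name

class MeshStation:
    """One station of a shared mesh, coupled to cores and a memory."""
    def __init__(self, memory_size=16):
        self.cores = []                   # core components sharing this station
        self.memory = [0] * memory_size   # memory coupled to the station
        self._rr = 0                      # round-robin pointer (the "logic unit")

    def couple(self, core):
        self.cores.append(core)

    def arbitrate(self, requesting):
        """Grant one pending request per cycle, round-robin across cores."""
        for i in range(len(self.cores)):
            idx = (self._rr + i) % len(self.cores)
            core = self.cores[idx]
            if core in requesting:
                self._rr = (idx + 1) % len(self.cores)
                return core
        return None

station = MeshStation()
a, b = Core("core0"), Core("core1")
station.couple(a)
station.couple(b)
winner = station.arbitrate({a, b})
print(f"{winner.name} granted access this cycle")
```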
Abstract:
Methods and apparatus relating to multiple-queue, multiple-resource entry sleep and wakeup for power savings and bandwidth conservation in a retry-based pipeline are described. In one embodiment, a bit indicates whether a corresponding queue entry is asleep or awake with respect to arbitration for resources in a retry-based pipeline. Furthermore, multiple entries from different queues may be grouped together, and multiple resources may be grouped together. Other embodiments are also disclosed.
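To make the mechanism concrete, here is a hedged Python sketch of the core idea: each entry carries a sleep bit, only awake entries consume arbitration bandwidth, an entry that would retry against a busy resource goes to sleep, and entries from any queue sleeping on a resource wake when it frees. The class names and flow are illustrative assumptions, not the patent's implementation.

```python
# Illustrative model of per-entry sleep/wakeup in a retry-based pipeline.
from dataclasses import dataclass

@dataclass
class Entry:
    payload: str
    needed_resource: str
    asleep: bool = False   # the per-entry sleep/wake bit

class RetryPipeline:
    def __init__(self, resources):
        self.busy = {r: False for r in resources}

    def arbitrate(self, queues):
        """Scan awake entries across queues; grant, or put to sleep on retry."""
        for q in queues:
            for e in q:
                if e.asleep:
                    continue            # asleep entries skip arbitration entirely
                if self.busy[e.needed_resource]:
                    e.asleep = True     # a retry would fail: sleep until it frees
                else:
                    self.busy[e.needed_resource] = True
                    q.remove(e)
                    return e
        return None

    def release(self, resource, queues):
        """Free a resource and wake entries from all queues sleeping on it."""
        self.busy[resource] = False
        for q in queues:
            for e in q:
                if e.asleep and e.needed_resource == resource:
                    e.asleep = False

queues = [[Entry("req0", "trk")], [Entry("req1", "trk")]]
pipe = RetryPipeline(["trk"])
print(pipe.arbitrate(queues))   # grants req0, marks trk busy
print(pipe.arbitrate(queues))   # req1 goes to sleep; returns None
pipe.release("trk", queues)     # wakes req1
print(pipe.arbitrate(queues))   # grants req1
```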
Abstract:
In accordance with embodiments disclosed herein, systems and methods are provided for a virtual shared cache mechanism. A processing device includes a plurality of clusters allocated into a virtual private shared cache. Each of the clusters includes a plurality of cores and a plurality of cache slices co-located with the plurality of cores. The processing device also includes a virtual shared cache comprising the plurality of clusters, such that the cache data in the plurality of cache slices is shared among the plurality of clusters.
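A minimal sketch of the sharing idea follows, assuming a simple address hash as the slice-selection policy (the abstract does not specify one): each cluster's slices sit alongside its cores, and hashing an address across the slices of every cluster makes any line reachable from any cluster.

```python
# Hypothetical model of cache slices shared across clusters via address hashing.

class CacheSlice:
    def __init__(self):
        self.lines = {}

class Cluster:
    def __init__(self, num_cores):
        # one cache slice co-located with each core
        self.slices = [CacheSlice() for _ in range(num_cores)]

class VirtualSharedCache:
    """Treats the slices of every cluster as one shared cache."""
    def __init__(self, clusters):
        self.slices = [s for c in clusters for s in c.slices]

    def slice_for(self, addr):
        return self.slices[hash(addr) % len(self.slices)]

    def write(self, addr, value):
        self.slice_for(addr).lines[addr] = value

    def read(self, addr):
        return self.slice_for(addr).lines.get(addr)

clusters = [Cluster(num_cores=4) for _ in range(2)]
vsc = VirtualSharedCache(clusters)
vsc.write(0x1000, "data")   # written via a core in one cluster...
print(vsc.read(0x1000))     # ...readable from any cluster
```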
Abstract:
Examples describe an egress port manager that uses an adaptive jitter selector to apply a jitter threshold level for a buffer. The jitter threshold level indicates when egress of a packet segment from the buffer is allowed, where a packet segment comprises a packet header, and the threshold adapts based on switch fabric load. In some examples, the jitter threshold level indicates a number of segments of the buffer's head-of-line (HOL) packet that are to be in the buffer, or a timer that starts upon issuance of the first read request for the first segment of the packet in the buffer. In some examples, the jitter threshold level is not more than a maximum transmission unit (MTU) size associated with the buffer. In some examples, a fetch scheduler adapts the amount of interface overspeed to reduce packet-fetching latency while attempting to prevent fabric saturation. The fetch scheduler controls the jitter threshold level for the buffer by forcing a threshold based on the switch fabric load level and the latency profile of the switch fabric.
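The following Python sketch illustrates the threshold check described above. The load-to-threshold mapping, the timeout value, and all names are assumptions for illustration; the abstract only specifies that the threshold is a segment count (bounded by MTU size) or a timer, and that it adapts to fabric load.

```python
# Hedged sketch of an adaptive jitter threshold for an egress buffer.
import time

MTU_SEGMENTS = 16   # assumed: segments in an MTU-sized packet, an upper bound

def jitter_threshold(fabric_load):
    """Higher fabric load -> wait for more of the HOL packet before egress."""
    thr = int(fabric_load * MTU_SEGMENTS)      # assumed mapping, e.g. 0.5 -> 8
    return max(1, min(thr, MTU_SEGMENTS))      # never above MTU-sized packet

def egress_allowed(hol_segments_buffered, first_read_ts, fabric_load,
                   timeout_s=0.001, now=None):
    """Allow egress once enough HOL segments arrived or the timer expired."""
    now = time.monotonic() if now is None else now
    if hol_segments_buffered >= jitter_threshold(fabric_load):
        return True
    # timer starts at issuance of the first read request for the first segment
    return (now - first_read_ts) >= timeout_s

t0 = time.monotonic()
print(egress_allowed(4, t0, fabric_load=0.2))  # True: 4 >= threshold of 3
print(egress_allowed(4, t0, fabric_load=0.9))  # False until the timer expires
```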
Abstract:
Examples describe scheduling packet segment fetches at a rate based on one or more of: a packet drop indication, packet drop rate, incast level, operation of queues in store-and-forward (SAF) or virtual cut-through (VCT) mode, or fabric congestion level. Headers of packets can be fetched faster than payload or body portions and processed before all body portions are queued. If a header is identified as droppable, fetching of the associated body portions can be halted, and any body portion already queued can be discarded. Fetch overspeed can be applied to packet headers, or to body portions whose headers are approved for egress.
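A brief Python sketch of the header-first flow follows, under assumed interfaces (fetch_header, fetch_body_segment, classify are hypothetical caller-supplied callbacks): the header is fetched and classified before body segments are queued, so a drop decision halts body fetching and saves fabric bandwidth.

```python
# Illustrative header-first fetch: classify the header before the body queues.

def process_packet(fetch_header, fetch_body_segment, classify, num_body_segments):
    """fetch_* and classify are assumed callbacks; returns the packet or None."""
    header = fetch_header()           # headers fetched at overspeed, ahead of body
    if classify(header) == "drop":    # decision available before body portions queue
        return None                   # halt body fetching; discard any queued body
    body = [fetch_body_segment(i) for i in range(num_body_segments)]
    return (header, body)

pkt = process_packet(
    fetch_header=lambda: {"dst": 7, "droppable": True},
    fetch_body_segment=lambda i: f"seg{i}",
    classify=lambda h: "drop" if h["droppable"] else "egress",
    num_body_segments=4,
)
print(pkt)   # None: no body segments were fetched for the dropped packet
```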
Abstract:
Methods and apparatus relating to an inter-queue anti-starvation mechanism with dynamic deadlock avoidance in a retry-based pipeline are described. In one embodiment, logic may arbitrate between two queues based on various rules. The queues may store data including local or remote requests, data responses, non-data responses, external interrupts, etc. Other embodiments are also disclosed.
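The abstract does not spell out the arbitration rules, so the Python sketch below shows one plausible rule set as an assumption: the arbiter favors the response queue (draining responses frees resources that pending requests may wait on, avoiding deadlock), while a starvation counter forces a grant from the request queue after STARVATION_LIMIT consecutive losses. All names and the limit value are hypothetical.

```python
# Hedged sketch of two-queue arbitration with an anti-starvation counter.

STARVATION_LIMIT = 8   # assumed tuning parameter

class TwoQueueArbiter:
    def __init__(self):
        self.req_losses = 0    # consecutive times the request queue lost

    def pick(self, req_q, resp_q):
        # Anti-starvation: force the request queue after too many losses.
        if req_q and self.req_losses >= STARVATION_LIMIT:
            self.req_losses = 0
            return req_q.pop(0)
        # Deadlock avoidance (assumed rule): drain responses first, since
        # completing them frees resources that pending requests wait on.
        if resp_q:
            if req_q:
                self.req_losses += 1
            return resp_q.pop(0)
        if req_q:
            self.req_losses = 0
            return req_q.pop(0)
        return None

arb = TwoQueueArbiter()
req_q = [f"req{i}" for i in range(12)]
resp_q = [f"resp{i}" for i in range(12)]
grants = [arb.pick(req_q, resp_q) for _ in range(10)]
print(grants)   # responses win until the starvation counter forces req0
```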
Abstract:
Methods, apparatus, and systems for facilitating one-way ordering of otherwise independent message classes. A one-way message ordering mechanism facilitates one-way ordering of messages of different message classes sent between interconnects that employ independent pathways for the message classes. In one aspect, messages of a second message class are not permitted to pass messages of a first message class. Moreover, when messages of the first and second classes are received in sequence, the ordering mechanism ensures that messages of the first class are forwarded to, and received at, the next hop before messages of the second class are forwarded.
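To illustrate the one-way rule, the Python sketch below models an ordering point under assumed structure (OrderingPoint, class1_delivered, and the acknowledgment flow are hypothetical): class-1 messages forward immediately on their own pathway, while a class-2 message received after an unconfirmed class-1 message is held until every earlier class-1 message has been received at the next hop. Class 1 may pass class 2, but never the reverse.

```python
# Hedged sketch of one-way ordering between two message classes.
from collections import deque

class OrderingPoint:
    def __init__(self, forward):      # forward(msg) delivers to the next hop
        self.forward = forward
        self.pending_class1 = 0       # class-1 msgs not yet received at next hop
        self.held_class2 = deque()

    def receive(self, msg_class, msg):
        if msg_class == 1:
            self.pending_class1 += 1
            self.forward(msg)         # independent pathway; sent immediately
        else:                         # class 2 may not pass earlier class-1 msgs
            if self.pending_class1 == 0:
                self.forward(msg)
            else:
                self.held_class2.append(msg)

    def class1_delivered(self, _ack):
        """Next hop confirmed receipt of one class-1 message."""
        self.pending_class1 -= 1
        while self.pending_class1 == 0 and self.held_class2:
            self.forward(self.held_class2.popleft())

log = []
op = OrderingPoint(forward=log.append)
op.receive(1, "c1-a")       # forwarded at once
op.receive(2, "c2-a")       # held: c1-a not yet confirmed at the next hop
op.class1_delivered("ack")  # now c2-a may go
print(log)                  # ['c1-a', 'c2-a']
```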