Abstract:
Methods and apparatus to manage credits in a computing system with multiple banks of shared cache are described. In one embodiment, a credit request from a processor core is translated into a physical credit that corresponds to one of the multiple banks of shared cache.
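As a rough illustration of the credit-translation idea, the C sketch below models a per-bank credit pool and a hash-based bank selector; the bank count, the address hash, and names such as credit_pool and acquire_credit are assumptions for illustration, not details taken from the patent.

/* Minimal sketch: translating a core's generic credit request into a
 * "physical" credit tied to one bank of a shared, multi-bank cache.
 * Assumes a simple address-hash bank selector and per-bank counters;
 * names and structure are illustrative, not taken from the patent. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BANKS 4

typedef struct {
    int credits[NUM_BANKS];   /* free credits remaining per cache bank */
} credit_pool;

/* Hypothetical bank selector: low-order bits above the cache-line offset. */
static unsigned bank_for_addr(uint64_t addr)
{
    return (unsigned)((addr >> 6) & (NUM_BANKS - 1));
}

/* Translate a core's credit request (by target address) into a physical
 * credit for the owning bank. Returns true and writes the bank id on
 * success, false if that bank has no credits left. */
static bool acquire_credit(credit_pool *p, uint64_t addr, unsigned *bank_out)
{
    unsigned bank = bank_for_addr(addr);
    if (p->credits[bank] == 0)
        return false;              /* request must be retried or queued */
    p->credits[bank]--;
    *bank_out = bank;
    return true;
}

static void release_credit(credit_pool *p, unsigned bank)
{
    p->credits[bank]++;            /* bank returns the credit on completion */
}

int main(void)
{
    credit_pool pool = { .credits = { 2, 2, 2, 2 } };
    unsigned bank;
    if (acquire_credit(&pool, 0x1040, &bank)) {
        printf("granted physical credit for bank %u\n", bank);
        release_credit(&pool, bank);
    }
    return 0;
}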
Abstract:
A method and apparatus to monitor architectural events are disclosed. The architectural events are linked together via a push bus mechanism, with each event having a designated time slot. There is at least one branch of the push bus in each core, and each branch may monitor one core with all of its architectural events. All of the data collected from the events by the push bus is then sent to a power control unit.
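The time-slot scheme can be pictured with a short C sketch: each event owns one slot on the push bus, and one full rotation pushes every event's data to a PCU-side collector. The slot count, event names, and the pcu_receive() sink are illustrative assumptions, not details from the patent.

/* Minimal sketch of a time-slotted "push bus" for architectural event
 * counters. Each event owns one slot; on its slot it pushes its count
 * onto the bus, and the collected data is forwarded to a power control
 * unit (PCU). */
#include <stdint.h>
#include <stdio.h>

#define NUM_SLOTS 4

typedef struct {
    const char *name;
    uint64_t count;     /* running count maintained by the core */
} arch_event;

/* Stand-in for the PCU-side collector. */
static void pcu_receive(unsigned slot, const char *name, uint64_t value)
{
    printf("PCU: slot %u (%s) = %llu\n", slot, name, (unsigned long long)value);
}

/* One branch of the push bus: walks its designated time slots and pushes
 * each event's data in turn. In hardware this is a rotating schedule;
 * here a simple loop models one full rotation. */
static void push_bus_rotate(arch_event events[NUM_SLOTS])
{
    for (unsigned slot = 0; slot < NUM_SLOTS; slot++)
        pcu_receive(slot, events[slot].name, events[slot].count);
}

int main(void)
{
    arch_event core0_events[NUM_SLOTS] = {
        { "instructions_retired", 120034 },
        { "l2_misses",            874    },
        { "branch_mispredicts",   311    },
        { "stall_cycles",         5021   },
    };
    push_bus_rotate(core0_events);   /* one branch per core in the abstract */
    return 0;
}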
Abstract:
Dynamic configuration of a prefetcher is described. For example, a thread-specific latency metric is calculated and provides dynamic feedback to software on a per-thread basis via configuration and status registers. Software can optionally use the information from these registers to dynamically configure prefetching behavior, so that it can both query prefetcher performance and configure the prefetcher.
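A minimal C sketch of this query-then-configure loop might look as follows, assuming a per-thread status register holding a latency metric and a configuration register holding a prefetch distance; the register layout, field names, and thresholds are invented for illustration.

/* Minimal sketch of the feedback loop described above: hardware exposes a
 * per-thread latency metric through a status register, and software reads
 * it and writes back a prefetch aggressiveness setting. */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-thread registers. */
typedef struct {
    uint32_t avg_load_latency;   /* status: measured by hardware */
    uint32_t prefetch_distance;  /* config: written by software  */
} prefetch_csr;

/* Software policy: if observed latency is high, prefetch further ahead;
 * if it is low, back off to reduce cache pollution. */
static void tune_prefetcher(prefetch_csr *csr)
{
    if (csr->avg_load_latency > 200)       /* cycles; illustrative threshold */
        csr->prefetch_distance = 8;
    else if (csr->avg_load_latency < 40)
        csr->prefetch_distance = 2;
    /* otherwise keep the current setting */
}

int main(void)
{
    prefetch_csr thread0 = { .avg_load_latency = 260, .prefetch_distance = 4 };
    tune_prefetcher(&thread0);   /* query performance, then configure */
    printf("thread 0: latency=%u -> prefetch_distance=%u\n",
           thread0.avg_load_latency, thread0.prefetch_distance);
    return 0;
}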
Abstract:
According to one embodiment of the invention, a method is disclosed for selecting a first subset of a plurality of cache ways in a cache for storing data of hardware threads identified as high-priority hardware threads for processing by a multi-threaded processor in communication with the cache; assigning high-priority hardware threads to the selected first subset; monitoring the cache usage of a high-priority hardware thread assigned to the selected first subset of the plurality of cache ways; and, based on the monitoring, reassigning the assigned high-priority hardware thread to any cache way of the plurality of cache ways if the cache usage of the high-priority hardware thread exceeds a predetermined inactive cache usage threshold value.
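One way to picture this policy is the C sketch below, which reserves a way mask for high-priority threads, monitors per-interval accesses, and releases an inactive thread back to the full set of ways. It reads the abstract's "inactive cache usage threshold" as a floor on observed activity; the mask encoding, threshold value, and all names are assumptions for illustration.

/* Minimal sketch of way partitioning for high-priority hardware threads:
 * a subset of ways is reserved, usage is monitored, and a thread that
 * falls below the activity floor is reassigned to any way. */
#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS            8
#define HIGH_PRIO_WAY_MASK  0x03u   /* ways 0-1 reserved for high priority */
#define ALL_WAYS_MASK       ((1u << NUM_WAYS) - 1)
#define INACTIVITY_LIMIT    1000    /* accesses-per-interval floor (assumed) */

typedef struct {
    int      high_priority;
    uint32_t way_mask;        /* ways this thread may allocate into      */
    uint32_t accesses;        /* cache accesses observed in the interval */
} hw_thread;

/* Assign a high-priority thread to the reserved subset of ways. */
static void assign_high_priority(hw_thread *t)
{
    t->high_priority = 1;
    t->way_mask = HIGH_PRIO_WAY_MASK;
}

/* Monitoring step: if a high-priority thread barely used the cache this
 * interval, stop reserving ways for it and let it use any way. */
static void monitor_and_reassign(hw_thread *t)
{
    if (t->high_priority && t->accesses < INACTIVITY_LIMIT) {
        t->way_mask = ALL_WAYS_MASK;
        t->high_priority = 0;
    }
    t->accesses = 0;          /* start a new monitoring interval */
}

int main(void)
{
    hw_thread t = { 0, ALL_WAYS_MASK, 0 };
    assign_high_priority(&t);
    t.accesses = 12;                       /* nearly idle this interval */
    monitor_and_reassign(&t);
    printf("way mask after monitoring: 0x%02x\n", (unsigned)t.way_mask);
    return 0;
}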
Abstract:
In one embodiment, the present invention includes a multicore processor having a variable frequency domain including a plurality of cores and at least a portion of non-core circuitry of the processor. This non-core portion can include a cache memory, a cache controller, and an interconnect structure. In addition to this variable frequency domain, the processor can further have a fixed frequency domain including a power control unit (PCU). This unit may be configured to cause a frequency change to the variable frequency domain without draining the non-core portion of pending transactions. Other embodiments are described and claimed.
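The two-domain split can be sketched structurally in C as below: the variable-frequency domain carries the cores plus part of the uncore, the fixed-frequency domain carries the PCU, and the frequency-change routine deliberately has no drain step. The field names and the pll_retune() stand-in are assumptions, not the patent's mechanism.

/* Minimal structural sketch of the two clock domains in the abstract. */
#include <stdio.h>

typedef struct {
    unsigned core_count;
    unsigned pending_uncore_txns;   /* in-flight cache/interconnect traffic */
    unsigned freq_mhz;
} variable_domain;

typedef struct {
    unsigned freq_mhz;              /* PCU runs here and is never re-clocked */
} fixed_domain;

/* Stand-in for whatever relocks the variable domain's clock. */
static void pll_retune(variable_domain *d, unsigned new_mhz)
{
    d->freq_mhz = new_mhz;
}

/* PCU-initiated frequency change: note there is no step that waits for
 * pending_uncore_txns to reach zero, mirroring the "without draining"
 * point in the abstract. */
static void pcu_change_frequency(variable_domain *d, unsigned new_mhz)
{
    pll_retune(d, new_mhz);
}

int main(void)
{
    variable_domain vd = { .core_count = 4, .pending_uncore_txns = 17,
                           .freq_mhz = 2400 };
    fixed_domain pcu = { .freq_mhz = 800 };
    pcu_change_frequency(&vd, 3200);
    printf("variable domain now %u MHz with %u txns still pending (PCU at %u MHz)\n",
           vd.freq_mhz, vd.pending_uncore_txns, pcu.freq_mhz);
    return 0;
}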
Abstract:
In one embodiment, link logic of a multi-chip processor (MCP) formed using multiple processors may interface with a first point-to-point (PtP) link coupled between the MCP and an off-package agent and another PtP link coupled between first and second processors of the MCP, where the on-package PtP link operates at a greater bandwidth than the first PtP link. Other embodiments are described and claimed.
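A structural C sketch of this topology follows, assuming link logic that fronts one off-package PtP link and one higher-bandwidth on-package link and routes sibling-die traffic onto the latter; the bandwidth figures and the routing rule are purely illustrative, not taken from the patent.

/* Minimal structural sketch: link logic on one die of a multi-chip
 * processor (MCP) serving both an off-package point-to-point link and a
 * higher-bandwidth on-package link to the other processor. */
#include <stdio.h>

typedef struct {
    const char *name;
    unsigned    bandwidth_gbps;
    int         on_package;
} ptp_link;

typedef struct {
    ptp_link off_package;   /* MCP <-> off-package agent                  */
    ptp_link on_package;    /* processor 0 <-> processor 1 inside the MCP */
} link_logic;

/* Illustrative routing helper: traffic whose destination is the sibling
 * die stays on the faster on-package link. */
static const ptp_link *route(const link_logic *ll, int dest_is_sibling)
{
    return dest_is_sibling ? &ll->on_package : &ll->off_package;
}

int main(void)
{
    link_logic ll = {
        .off_package = { "off-package PtP link",    16, 0 },
        .on_package  = { "on-package die-to-die",   32, 1 },
    };
    const ptp_link *l = route(&ll, 1);
    printf("routed to %s (%u Gb/s)\n", l->name, l->bandwidth_gbps);
    return 0;
}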