Abstract:
An arrangement is provided for an external agent in a computing system to initiate data prefetches from system memory to a cache associated with a target processor that needs the data to execute a program. When the external agent has data, it may create and issue a prefetch directive. The prefetch directive may be sent along with system interconnect transactions or as a separate transaction to devices in the system, including the target processor. Upon receiving and recognizing the prefetch directive, a hardware prefetcher associated with the target processor may issue a request to the system memory to prefetch the data to the cache. The target processor can access data in the cache more efficiently than it can access data in system memory. Some pre-processing may also be associated with the data prefetch.
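
As a hedged illustration of the mechanism described above, the C sketch below models a prefetch directive as a message and shows how a hardware prefetcher associated with the target processor might act on it. All names here (prefetch_directive_t, issue_memory_read, the 64-byte line-size assumption) are hypothetical and are not taken from the abstract.

    #include <stdint.h>

    /* Hypothetical prefetch directive created by the external agent. */
    typedef struct {
        uint64_t phys_addr;   /* start of the data just produced */
        uint32_t length;      /* number of bytes to prefetch */
        uint16_t target_cpu;  /* processor whose cache should receive the data */
        uint8_t  preprocess;  /* optional pre-processing hint */
    } prefetch_directive_t;

    /* Assumed platform hook: issues one cache-line read to system memory. */
    extern void issue_memory_read(uint64_t phys_addr);

    /* Hardware prefetcher: on recognizing a directive addressed to its
       processor, it pulls the data into that processor's cache. */
    void on_prefetch_directive(uint16_t my_cpu, const prefetch_directive_t *d)
    {
        if (d->target_cpu != my_cpu)
            return;  /* directive targets a different processor */
        for (uint64_t off = 0; off < d->length; off += 64)  /* assumed 64 B lines */
            issue_memory_read(d->phys_addr + off);
    }
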
Abstract:
Methods, apparatus, and systems for implementing coordinated idle power management in glueless and clustered systems. Components for coordinating the package idle power state between sockets in a glueless system, such as a server platform, include a master entity in one socket (i.e., processor) and a slave entity in each socket participating in the power management coordination. Each slave collects idle status inputs from various sources and, when the socket's cores are sufficiently idle, makes a request to the master to enter a deeper idle power state. The master coordinates global power management operations in response to the slave requests, including broadcasting a command with a target latency to all of the slaves to allow the processors to enter reduced power (i.e., idle) states in a coordinated manner. Communication between the entities is facilitated using messages transported over existing interconnects and corresponding protocols, enabling the benefits associated with the disclosed embodiments to be implemented using existing designs.
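
A minimal sketch, assuming a simple request/broadcast message pair, of how the slave and master entities might interact. The types, the idle-status hook, and the state and latency numbers below are illustrative assumptions, not details from the abstract.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { uint8_t socket_id; uint8_t requested_state; } idle_request_t;
    typedef struct { uint8_t state; uint32_t target_latency_us; } idle_command_t;

    extern bool all_cores_idle(uint8_t socket_id);        /* assumed idle-status input */
    extern void send_to_master(const idle_request_t *r);  /* message over the interconnect */
    extern void broadcast_to_slaves(const idle_command_t *c);

    /* Slave: when its socket's cores are sufficiently idle, request a
       deeper package idle state from the master. */
    void slave_poll(uint8_t socket_id)
    {
        if (all_cores_idle(socket_id)) {
            idle_request_t req = { socket_id, 6 };  /* e.g., request a deep package state */
            send_to_master(&req);
        }
    }

    /* Master: once every participating socket has asked for the deeper
       state, broadcast one command carrying a target latency so all
       processors enter the reduced-power state in a coordinated manner. */
    void master_check(uint8_t pending_requests, uint8_t participating_sockets)
    {
        if (pending_requests == participating_sockets) {
            idle_command_t cmd = { 6, 200 };  /* state, target latency in microseconds (assumed) */
            broadcast_to_slaves(&cmd);
        }
    }
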
Abstract:
A caching input/output hub includes a host interface to connect with a host. At least one input/output interface is provided to connect with an input/output device. A write cache manages memory writes initiated by the input/output device. At least one read cache, separate from the write cache, provides a low-latency copy of data that is most likely to be used. The at least one read cache is in communication with the write cache. A cache directory is also provided to track cache lines in the write cache and the at least one read cache. The cache directory is in communication with the write cache and the at least one read cache.
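
To make the directory's role concrete, here is a hedged C sketch of a directory entry that records whether a tracked line currently resides in the write cache or in one of the read caches. The field names and the lookup helper are assumptions for illustration, not structures from the abstract.

    #include <stdint.h>
    #include <stddef.h>

    enum line_location { LOC_NONE, LOC_WRITE_CACHE, LOC_READ_CACHE };

    /* One directory entry per tracked cache line. */
    typedef struct {
        uint64_t tag;              /* cache-line address tag */
        enum line_location where;  /* which structure holds the line */
        uint8_t  read_cache_id;    /* meaningful only when where == LOC_READ_CACHE */
    } directory_entry_t;

    /* Route an access to the right structure without probing both:
       returns the entry for addr, or NULL if the line is untracked. */
    directory_entry_t *directory_lookup(directory_entry_t *dir, size_t n,
                                        uint64_t addr)
    {
        uint64_t tag = addr >> 6;  /* assumed 64-byte lines */
        for (size_t i = 0; i < n; i++)
            if (dir[i].where != LOC_NONE && dir[i].tag == tag)
                return &dir[i];
        return NULL;  /* in neither the write cache nor any read cache */
    }

A real directory would likely be set-associative rather than a linear scan; the loop above just keeps the sketch self-contained.
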
Abstract:
In one embodiment, a system includes an input/output (I/O) domain and a compute domain. The I/O domain includes an I/O agent and an I/O domain caching agent. The compute domain includes a compute domain caching agent and a compute domain cache hierarchy. The I/O agent issues an ownership request to the compute domain caching agent to obtain ownership of a cache line in the compute domain cache hierarchy. In response to the ownership request, the compute domain caching agent places the cache line in the compute domain cache hierarchy in a placeholder state. The placeholder state reserves the cache line for performance of a write operation by the I/O agent. The compute domain caching agent writes data received from the I/O agent to the cache line in the compute domain cache hierarchy and transitions the state of the cache line out of the placeholder state.
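
The placeholder-state flow lends itself to a small state machine. Below is a hedged C sketch; the state names (LINE_PLACEHOLDER, etc.) and functions are hypothetical stand-ins for whatever coherence protocol states the embodiment actually uses.

    #include <stdint.h>

    typedef enum { LINE_INVALID, LINE_PLACEHOLDER, LINE_MODIFIED } line_state_t;

    typedef struct {
        uint64_t     addr;
        line_state_t state;
        uint8_t      data[64];  /* assumed 64-byte line */
    } cache_line_t;

    /* Compute-domain caching agent: on the I/O agent's ownership request,
       reserve the line by placing it in the placeholder state. */
    void on_ownership_request(cache_line_t *line)
    {
        line->state = LINE_PLACEHOLDER;  /* reserved for the pending I/O write */
    }

    /* When the I/O agent's data arrives, write it into the line and
       transition the line out of the placeholder state. */
    void on_io_write(cache_line_t *line, const uint8_t *src, uint32_t len)
    {
        if (line->state != LINE_PLACEHOLDER)
            return;  /* only a reserved line may accept the I/O write */
        for (uint32_t i = 0; i < len && i < sizeof line->data; i++)
            line->data[i] = src[i];
        line->state = LINE_MODIFIED;  /* now owned with dirty data */
    }
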
Abstract:
Embodiments of an invention for securing transmissions between processor packages are disclosed. In one embodiment, an apparatus includes an encryption unit to encrypt first content to be transmitted from the apparatus to a processor package directly through a point-to-point link.
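
As a rough illustration only: the abstract names an encryption unit that encrypts content before it crosses a point-to-point link to another processor package. The C sketch below shows that ordering; the cipher and link primitives are placeholder declarations, since the abstract specifies no algorithm or link protocol.

    #include <stdint.h>
    #include <stddef.h>

    /* Placeholder primitives (assumed, not from the abstract). */
    extern void encrypt(const uint8_t *key, const uint8_t *in,
                        uint8_t *out, size_t len);
    extern void link_send(int link_id, const uint8_t *buf, size_t len);

    /* Encrypt first content in the encryption unit, then transmit it
       directly over the point-to-point link to the other package. */
    void send_secured(int link_id, const uint8_t *key,
                      const uint8_t *content, size_t len)
    {
        uint8_t ciphertext[256];
        if (len > sizeof ciphertext)
            return;  /* illustration only: no chunking of larger payloads */
        encrypt(key, content, ciphertext, len);
        link_send(link_id, ciphertext, len);
    }
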