Abstract:
A computer system has a plurality of processor nodes and a plurality of input/output nodes. Each processor node includes a multiplicity of processor cores, an interface to a local memory subsystem, and a protocol engine implementing a predefined cache coherence protocol. Each processor core has an associated memory cache for caching memory lines of information. Each input/output node contains no processor cores; it includes an input/output interface for interfacing to an input/output bus or input/output device, a memory cache for caching memory lines of information, and an interface to a local memory subsystem. The local memory subsystem of each processor node and input/output node stores a multiplicity of memory lines of information. The protocol engine of each processor node and input/output node implements the same predefined cache coherence protocol.
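As a rough illustration only (the class and field names below are invented, not taken from the abstract), the two node types can be modeled as sharing a single protocol-engine type, which is what ensures that every node implements the same predefined coherence protocol:

class ProtocolEngine:
    """Placeholder for the predefined cache coherence protocol logic."""
    def __init__(self, node_id):
        self.node_id = node_id

class MemorySubsystem:
    """Local memory subsystem storing a multiplicity of memory lines."""
    def __init__(self, num_lines):
        self.lines = [None] * num_lines

class ProcessorNode:
    def __init__(self, node_id, num_cores, num_lines):
        self.engine = ProtocolEngine(node_id)              # same engine type as I/O nodes
        self.memory = MemorySubsystem(num_lines)
        self.core_caches = [{} for _ in range(num_cores)]  # one cache per core

class IONode:
    def __init__(self, node_id, num_lines):
        self.engine = ProtocolEngine(node_id)  # same protocol as processor nodes
        self.memory = MemorySubsystem(num_lines)
        self.cache = {}                        # a single cache; no processor cores
        self.io_interface = None               # attaches to an I/O bus or device

system = [ProcessorNode(0, num_cores=4, num_lines=1024), IONode(1, num_lines=1024)]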
Abstract:
A protocol engine is for use in each node of a computer system having a plurality of nodes. Each node includes an interface to a local memory subsystem that stores memory lines of information, a directory, and a memory cache. The directory includes an entry associated with a memory line of information stored in the local memory subsystem. The directory entry includes an identification field for identifying sharer nodes that potentially cache the memory line of information. The identification field has a plurality of bits at associated positions within the identification field. Each respective bit of the identification field is associated with one or more nodes. The protocol engine furthermore sets each bit in the identification field for which the memory line is cached in at least one of the associated nodes. In response to a request for exclusive ownership of a memory line, the protocol engine sends an initial invalidation request to no more than a first predefined number of the nodes associated with set bits in the identification field of the directory entry associated with the memory line.
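A hypothetical sketch of this directory behavior follows; the group size (NODES_PER_BIT), the initial fan-out limit (FIRST_LIMIT), and all identifiers are illustrative assumptions, not values from the abstract:

NODES_PER_BIT = 4   # each identification-field bit covers a group of nodes
FIRST_LIMIT = 2     # maximum targets of the initial invalidation round (assumed)

class DirectoryEntry:
    def __init__(self, num_bits):
        self.id_field = [False] * num_bits  # one bit per node group

    def record_sharer(self, node_id):
        # Set the bit whose group contains this sharer.
        self.id_field[node_id // NODES_PER_BIT] = True

    def candidate_sharers(self):
        # Every node covered by a set bit; some may not actually cache the line.
        return [bit * NODES_PER_BIT + i
                for bit, is_set in enumerate(self.id_field) if is_set
                for i in range(NODES_PER_BIT)]

def request_exclusive(entry, requester):
    # Send the initial invalidation request to no more than FIRST_LIMIT of the
    # nodes associated with set bits; the rest would be handled in later rounds.
    targets = [n for n in entry.candidate_sharers() if n != requester]
    return targets[:FIRST_LIMIT], targets[FIRST_LIMIT:]

entry = DirectoryEntry(num_bits=4)
for sharer in (1, 6, 9):
    entry.record_sharer(sharer)
initial, deferred = request_exclusive(entry, requester=1)
print(initial)  # the bounded first round of invalidations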
Abstract:
The present invention relates generally to multiprocessor computer systems, and particularly to a multiprocessor system designed to be highly scalable, using efficient cache coherence logic and methodologies. More specifically, the present invention provides a system and method including a plurality of processor nodes configured to execute a cache coherence protocol that avoids the use of negative acknowledgment messages (NAKs), imposes no ordering requirements on the underlying transaction-message interconnect/network, and services most 3-hop transactions with only a single visit to the home node.
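The 3-hop property can be pictured with the toy message trace below; the node names and message types are assumptions for illustration, not the protocol's actual message vocabulary:

def three_hop_read(requester, home, owner, line):
    # A read of a line that is dirty at another node: the home node is
    # visited exactly once, and the owner replies directly to the requester.
    return [
        (requester, home, "READ_REQ", line),     # hop 1: requester asks home
        (home, owner, "FWD_READ", line),         # hop 2: home forwards to owner
        (owner, requester, "DATA_REPLY", line),  # hop 3: owner replies directly
    ]  # no NAK is ever sent, and home is not revisited

for src, dst, msg, line in three_hop_read("N0", "N3", "N7", 0x80):
    print(f"{src} -> {dst}: {msg} line={line:#x}")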
Abstract:
L1 cache synonyms in a two-level cache system are detected and resolved by logic in the L2 cache. Duplicate copies of the L1 cache tags and state (“Dtags”) are maintained in the L2 cache. After a miss occurs in the L1 cache, the Dtags in the second-level cache that correspond to all possible synonym locations in the L1 cache are searched for synonyms. If a synonym is found, the L2 cache notifies the L1 cache where the requested cache line can be found in the L1 cache. The L1 cache then copies the cache line from the location where the synonym was found to the location where the miss occurred, and it invalidates the cache line at the original location. The Dtags in the second-level cache are updated to reflect the changes made in the L1 cache.
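The sketch below illustrates one way this lookup could work, assuming a virtually indexed L1 whose index uses one bit beyond the page offset, so each physical line has two possible L1 locations; all names and sizes are hypothetical:

L1_SETS = 8
SYNONYM_WAYS = 2  # possible L1 index positions per physical line (assumed)

# Dtags: the L2's duplicate copy of the L1 tags, indexed like the L1.
dtags = {}  # maps an L1 set to the physical tag cached there

def synonym_sets(l1_set):
    # All L1 sets that can alias the same physical line: they agree in the
    # physical index bits and differ only in the virtual index bit.
    base = l1_set % (L1_SETS // SYNONYM_WAYS)
    return [base + k * (L1_SETS // SYNONYM_WAYS) for k in range(SYNONYM_WAYS)]

def l1_miss(physical_tag, missed_set):
    # After an L1 miss, search the Dtags at every possible synonym location.
    for s in synonym_sets(missed_set):
        if s != missed_set and dtags.get(s) == physical_tag:
            # Synonym found: the L1 copies the line from set s to missed_set
            # and invalidates the original; the Dtags are updated to match.
            dtags[missed_set] = physical_tag
            del dtags[s]
            return s
    return None  # true miss: fetch the line from the L2 or memory

dtags[5] = 0x1234
print(l1_miss(0x1234, missed_set=1))  # finds and relocates the synonym at set 5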
Abstract:
Systems, methods and computer program products for generalizing a user-submitted query by forming one or more variants of the user-submitted query to generate one or more other queries, each of the one or more other queries being different from the user-submitted query. A generalized quality of result statistic is derived for a first document from respective data associated with each of the other queries, each respective data being indicative of user behavior relative to the first document as a search result for the associated other query. The generalized quality of result statistic is provided as the quality of result statistic input to a document ranking process for the first document and the user-submitted query.
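A minimal sketch of this flow, under assumed variant rules (stopword removal, single-term deletion) and with a simple unweighted average standing in for whatever combination the actual system applies:

STOPWORDS = {"the", "a", "of", "for"}

def query_variants(query):
    # Form variants of the user-submitted query, each differing from it.
    terms = query.lower().split()
    variants = {" ".join(t for t in terms if t not in STOPWORDS)}
    for i in range(len(terms)):  # drop one term at a time
        variants.add(" ".join(terms[:i] + terms[i + 1:]))
    variants.discard(" ".join(terms))  # each variant must differ from the original
    variants.discard("")
    return variants

# Per-(query, document) user-behavior data, e.g. a click fraction (made up).
click_data = {("cheap flights", "docA"): 0.42}

def generalized_quality(query, doc):
    # Derive the generalized statistic from the behavior data of each variant.
    stats = [click_data[(v, doc)] for v in query_variants(query)
             if (v, doc) in click_data]
    return sum(stats) / len(stats) if stats else None

# The result would be fed as the quality-of-result input to the document
# ranking process for this document and the user-submitted query.
print(generalized_quality("the cheap flights", "docA"))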
Abstract:
An information system includes mechanisms for assigning incoming access transactions to individual access subsystems based on an analysis of the incoming access transactions. The analysis and assignment of the incoming access transactions may be used to minimize loss of cached data during power reduction in the information system.
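One speculative reading of this mechanism, with invented names and a stand-in analysis rule, routes transactions judged high priority to the subsystems that would be powered down last:

def analyze(transaction):
    # Stand-in analysis: treat metadata accesses as high priority.
    return "high" if transaction.get("type") == "metadata" else "low"

def assign(transaction, subsystems):
    # subsystems are ordered from last-to-be-powered-down to first, so data
    # cached for high-priority traffic survives power reduction longest.
    pool = subsystems[:2] if analyze(transaction) == "high" else subsystems[2:]
    return min(pool, key=lambda s: s["load"])  # balance load within the pool

subsystems = [{"id": i, "load": i % 3} for i in range(4)]
print(assign({"type": "metadata"}, subsystems))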
Abstract:
In general, the subject matter described in this specification can be embodied in a method that includes: obtaining user feedback associated with quality of an electronic document; adjusting a measure of relevance for the electronic document based on a temporal element of the user feedback; and outputting the measure of relevance to a ranking engine for ranking of search results, including the electronic document, for a search for which the electronic document is returned. Obtaining the user feedback can include receiving user selections of documents presented by a document search service, the method can include evaluating the user selections in accordance with an implicit user feedback model to determine the measure of relevance, and adjusting the measure of relevance can include adjusting the measure of relevance in accordance with the implicit user feedback model.
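A hypothetical sketch of such a temporal adjustment, using an exponential decay with an assumed half-life in place of whatever temporal element the implicit user feedback model actually applies:

import math

HALF_LIFE_DAYS = 30.0  # assumed decay constant

def measure_of_relevance(selections, now_days):
    # selections: (timestamp_days, weight) pairs, one per user selection of
    # the document as a search result, per the implicit feedback model.
    score = 0.0
    for t, w in selections:
        age = now_days - t
        score += w * math.exp(-math.log(2) * age / HALF_LIFE_DAYS)
    return score  # output to the ranking engine

# A fresh click counts fully; a 90-day-old click counts about one eighth.
print(measure_of_relevance([(100.0, 1.0), (10.0, 1.0)], now_days=100.0))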
Abstract:
A data center is disclosed with power-aware adaptation that minimizes the performance impact of reducing the power consumption of individual nodes in the data center. A data center according to the present techniques includes a request redirector that obtains an access request for data stored on a set of storage devices and that distributes the access request to one of a set of access nodes in response to a priority of the access request and a rank of each access node. A data center according to the present techniques also includes a power manager that performs a power adaptation in the data center by selecting access nodes for power reduction based on the ranks of the access nodes. The judicious distribution of access requests to appropriately ranked nodes and the judicious selection of access nodes for power reduction increases the likelihood that higher-priority cached data is not lost during power adaptation.
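A minimal sketch of the two mechanisms, with an assumed matching rule in which lower numbers denote higher rank and higher priority:

def redirect(request, nodes):
    # Distribute the access request to a node whose rank matches its priority.
    powered = sorted((n for n in nodes if n["on"]), key=lambda n: n["rank"])
    return powered[min(request["priority"], len(powered) - 1)]

def reduce_power(nodes, how_many):
    # The power manager selects the lowest-ranked powered nodes for reduction,
    # so caches serving higher-priority requests are the last to go dark.
    victims = sorted((n for n in nodes if n["on"]),
                     key=lambda n: n["rank"], reverse=True)[:how_many]
    for n in victims:
        n["on"] = False
    return victims

nodes = [{"rank": r, "on": True} for r in range(4)]
reduce_power(nodes, 1)                   # the rank-3 node is powered down first
print(redirect({"priority": 0}, nodes))  # -> the rank-0 node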