Abstract:
Methods for selecting a line to evict from a data storage system are provided, as is a computer system implementing such a method. The methods include selecting an uncached-class line for eviction prior to selecting a cached-class line for eviction.
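A minimal sketch of this selection order, assuming each candidate line carries a class tag and an LRU age; the names (LineClass, pick_victim, lruAge) and the LRU tiebreak are illustrative and not taken from the abstract:

#include <cstdint>
#include <optional>
#include <vector>

// Illustrative class tag: "uncached" lines are not held by any other cache.
enum class LineClass { Uncached, Cached };

struct Line {
    uint64_t  tag;
    LineClass cls;
    uint32_t  lruAge;   // larger value = older
};

// Prefer an uncached-class victim; fall back to a cached-class line only
// when no uncached-class candidate exists. Ties broken by LRU age.
std::optional<size_t> pick_victim(const std::vector<Line>& set) {
    std::optional<size_t> best;
    // First pass: uncached-class candidates only.
    for (size_t i = 0; i < set.size(); ++i) {
        if (set[i].cls == LineClass::Uncached &&
            (!best || set[i].lruAge > set[*best].lruAge)) {
            best = i;
        }
    }
    if (best) return best;
    // Second pass: any remaining (cached-class) candidate.
    for (size_t i = 0; i < set.size(); ++i) {
        if (!best || set[i].lruAge > set[*best].lruAge) best = i;
    }
    return best;
}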
Abstract:
A processor transmits clean castout messages indicating that a cache line is not dirty and is no longer being stored by a lowest level cache of the processor. An external cache receives the clean castout messages and manages cache lines based in part on the clean castout messages.
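One way an external cache might consume such messages, sketched with hypothetical names (CleanCastout, ExternalCache::on_clean_castout); the actual protocol and the external cache's management policy are not specified by the abstract:

#include <cstdint>
#include <unordered_map>

// Notification that a line was evicted clean from the processor's lowest
// level cache: no write-back is needed, but the line is no longer held there.
struct CleanCastout {
    uint64_t address;
};

class ExternalCache {
public:
    // On a clean castout, the external cache knows the processor no longer
    // holds the line, so it can, for example, stop tracking inclusion and
    // treat the line as a cheaper eviction candidate.
    void on_clean_castout(const CleanCastout& msg) {
        auto it = lines_.find(msg.address);
        if (it != lines_.end()) {
            it->second.held_by_processor = false;   // inclusion no longer required
            it->second.eviction_priority += 1;      // cheaper to replace later
        }
    }

private:
    struct State {
        bool     held_by_processor = true;
        uint32_t eviction_priority = 0;
    };
    std::unordered_map<uint64_t, State> lines_;
};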
Abstract:
Embodiments of the present invention are directed to enhancing VPAR monitors, as well as virtual-machine monitors, to allow an active VPAR to be moved from one machine to another. Because traditional VPAR monitors lack access to many computational resources and to executing-operating-system state, VPAR movement is carried out primarily by specialized routines executing within active VPARs, unlike the movement of guest operating systems between machines, which is carried out by virtual-machine-monitor routines.
Abstract:
A switch module having shared memory that is allocated to other blade servers. A memory controller partitions the shared memory and enables requesting blade servers to access the partitions.
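A rough sketch of the bookkeeping such a memory controller might keep, using invented names (Partition, PartitionTable, allocate, may_access); granule sizes and the actual access-control mechanism are not given in the abstract:

#include <cstdint>
#include <optional>
#include <vector>

// One contiguous slice of the switch module's shared memory, granted to a blade.
struct Partition {
    uint64_t base;     // offset within shared memory
    uint64_t size;     // bytes
    int      bladeId;  // requesting blade server
};

class PartitionTable {
public:
    explicit PartitionTable(uint64_t totalBytes) : total_(totalBytes) {}

    // Carve the next free region for a requesting blade; returns the partition
    // granted, or nothing if the shared memory is exhausted.
    std::optional<Partition> allocate(int bladeId, uint64_t bytes) {
        if (next_ + bytes > total_) return std::nullopt;
        Partition p{next_, bytes, bladeId};
        next_ += bytes;
        granted_.push_back(p);
        return p;
    }

    // Access check: a blade may only touch addresses inside its own partition.
    bool may_access(int bladeId, uint64_t addr) const {
        for (const auto& p : granted_)
            if (p.bladeId == bladeId && addr >= p.base && addr < p.base + p.size)
                return true;
        return false;
    }

private:
    uint64_t total_;
    uint64_t next_ = 0;
    std::vector<Partition> granted_;
};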
Abstract:
A cache is provided for operatively coupling a processor with a main memory. The cache includes a cache memory and a cache controller operatively coupled with the cache memory. The cache controller is configured to receive memory requests to be satisfied by the cache memory or the main memory. In addition, the cache controller is configured to process cache activity information to cause at least one of the memory requests to bypass the cache memory.
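A simplified sketch of a bypass decision driven by activity statistics; the heuristic (a high recent miss rate) and the names (BypassPolicy, should_bypass) are assumptions for illustration, since the abstract does not say which activity information is used:

#include <cstdint>

// Tracks recent cache activity and decides whether a request should skip the
// cache memory and go straight to main memory.
class BypassPolicy {
public:
    void record_access(bool hit) {
        ++accesses_;
        if (!hit) ++misses_;
        // Periodically age the counters so the policy tracks recent behavior.
        if (accesses_ >= kWindow) { accesses_ /= 2; misses_ /= 2; }
    }

    // If the recent miss rate is high, caching the request is unlikely to help,
    // so route it around the cache memory.
    bool should_bypass() const {
        if (accesses_ < kMinSamples) return false;
        return misses_ * 100 > accesses_ * kMissPercentThreshold;
    }

private:
    static constexpr uint32_t kWindow = 1024;
    static constexpr uint32_t kMinSamples = 64;
    static constexpr uint32_t kMissPercentThreshold = 90;  // illustrative value
    uint32_t accesses_ = 0;
    uint32_t misses_ = 0;
};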
Abstract:
A method and apparatus pass messages between server and client applications using a fault tolerant storage system (FTSS). The interconnection fabric that couples the FTSS to the computer systems hosting the client and server applications may also be used to carry messages. A networked system capable of hosting a distributed application includes a plurality of computer systems coupled to an FTSS via an FTSS interconnection fabric. The FTSS not only processes file-related I/O transactions, but also includes several message agents that facilitate message transfer in a reliable and fault tolerant manner. The message agents include a conversational communication agent, an event-based communication agent, a queue-based communication agent, a request/reply communication agent, and an unsolicited communication agent. The highly reliable and fault tolerant nature of the FTSS allows it to guarantee delivery of a message transmitted from a sending computer system to a destination computer system. As soon as a message is received by the FTSS from a sending computer system, the message is committed to a nonvolatile fault tolerant write cache. Thereafter, the message is written to a redundant array of independent disks (RAID) of the FTSS and processed by one of the message agents.
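The delivery guarantee described above hinges on committing the message to stable storage before acknowledging the sender. A minimal sketch of that ordering for one agent, with invented types (Message, WriteCache, QueueAgent) standing in for the FTSS internals, which the abstract does not detail:

#include <deque>
#include <string>

struct Message {
    std::string destination;
    std::string payload;
};

// Stand-in for the nonvolatile fault tolerant write cache; a real FTSS would
// persist to nonvolatile memory and later destage to its RAID.
class WriteCache {
public:
    void commit(const Message& m) { persisted_.push_back(m); }
private:
    std::deque<Message> persisted_;
};

// Sketch of a queue-based communication agent: the message is committed to the
// write cache first, and only then acknowledged and queued for the receiver.
class QueueAgent {
public:
    explicit QueueAgent(WriteCache& wc) : writeCache_(wc) {}

    bool send(const Message& m) {
        writeCache_.commit(m);   // step 1: make the message durable
        queue_.push_back(m);     // step 2: hand it to the delivery queue
        return true;             // step 3: acknowledge the sender
    }

    bool receive(const std::string& dest, Message& out) {
        for (auto it = queue_.begin(); it != queue_.end(); ++it) {
            if (it->destination == dest) { out = *it; queue_.erase(it); return true; }
        }
        return false;
    }

private:
    WriteCache& writeCache_;
    std::deque<Message> queue_;
};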
Abstract:
A cache system improves performance by limiting the number of dirty entries in a cache. The cache system may further improve performance by limiting the number of dirty entries in the cache that might be subject to a cache-to-cache transfer. In a first example, a cache system counts the total number of dirty entries in the cache and preemptively evicts at least one dirty entry when the count exceeds a predetermined threshold. In a variation, a cache system counts dirty cache entries that result from a cache-to-cache transfer, and evicts at least one such dirty entry when the number exceeds a predetermined threshold. For either system, the predetermined threshold may be dynamically varied to determine a value that optimizes performance.
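A compact sketch of the first example: count the dirty entries and preemptively write one back once the count crosses the threshold. The container, the victim choice, and the name DirtyLimiter are assumptions; the abstract specifies only the count-and-threshold behavior and that the threshold may be varied dynamically:

#include <cstdint>
#include <unordered_set>

class DirtyLimiter {
public:
    explicit DirtyLimiter(size_t threshold) : threshold_(threshold) {}

    // Called when a store makes a line dirty.
    void on_mark_dirty(uint64_t addr) {
        dirty_.insert(addr);
        if (dirty_.size() > threshold_) {
            // Preemptively write back (evict) one dirty entry so a later miss
            // does not have to wait for the write-back.
            uint64_t victim = *dirty_.begin();   // victim choice is policy-dependent
            write_back(victim);
            dirty_.erase(dirty_.begin());
        }
    }

    // The abstract notes the threshold can be varied at run time to find the
    // value that performs best.
    void set_threshold(size_t t) { threshold_ = t; }

private:
    void write_back(uint64_t /*addr*/) { /* issue write-back to next level */ }

    size_t threshold_;
    std::unordered_set<uint64_t> dirty_;
};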
Abstract:
Address generating apparatus which uses narrow data paths for generating a wide logical address and which also provides for programs to access very large shared data structures outside their normally available addressing range and over an extended range of addresses. Selective indexed addressing is employed for providing index data which is also used for deriving variable dimension override data. During address generation, selected index data is added to a displacement provided by an instruction for deriving a dimension override value as well as an offset. The derived dimension override value is used to selectively access an address locating entry in a table of entries corresponding to the applicable program. The resulting accessed address locating entry is in turn used to determine the particular portion of memory against which the offset is to be applied.
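Read literally, the flow is: selected index data plus the instruction's displacement yields both an offset and a dimension override value; the override value selects an address-locating entry from the per-program table, and that entry supplies the memory region the offset is applied to. A schematic sketch under those assumptions; the field widths, the table size, and the split between override and offset are invented for illustration:

#include <array>
#include <cstdint>

// Per-program table of address-locating entries; each entry names the base of a
// memory region (e.g. a very large shared data structure).
using LocatorTable = std::array<uint64_t, 16>;

struct GeneratedAddress {
    uint64_t base;     // region selected by the dimension override value
    uint64_t offset;   // applied against that region
};

// Illustrative split: the sum of index data and displacement carries a small
// dimension override value in its high bits and an offset in its low bits,
// so narrow (32-bit) data paths can reach a wide logical address space.
GeneratedAddress generate(uint32_t indexData, uint32_t displacement,
                          const LocatorTable& table) {
    uint32_t sum         = indexData + displacement;
    uint32_t dimOverride = (sum >> 28) & 0xF;     // selects a table entry
    uint32_t offset      = sum & 0x0FFFFFFF;      // offset within that region
    return GeneratedAddress{table[dimOverride], offset};
}

The wide logical address is then the selected base plus the offset, which is how a narrow-path adder can address structures outside the program's normally available range.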
Abstract:
In at least some examples, a computing node includes a processor and a local memory coupled to the processor. The computing node also includes a reflective memory bridge coupled to the processor. The reflective memory bridge maps to an incoming region of the local memory assigned to at least one external computing node and maps to an outgoing region of the local memory assigned to at least one external computing node.
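A sketch of how the two mappings might be modeled in software, with invented names (ReflectiveBridge, on_local_write, on_external_write); the abstract does not specify the transport between nodes, so forwarding is left as a stub:

#include <cstdint>
#include <cstring>
#include <vector>

// Models the bridge's view of local memory: writes to the outgoing region are
// reflected to the assigned external node(s), and data arriving from external
// nodes lands in the incoming region. Bounds checks omitted for brevity.
class ReflectiveBridge {
public:
    ReflectiveBridge(size_t incomingBytes, size_t outgoingBytes)
        : incoming_(incomingBytes), outgoing_(outgoingBytes) {}

    // Local processor writes into the outgoing region; the bridge forwards the
    // update to the external node(s) assigned to that region.
    void on_local_write(size_t offset, const void* data, size_t len) {
        std::memcpy(outgoing_.data() + offset, data, len);
        forward_to_external(offset, len);   // transport is outside this sketch
    }

    // An external node's write arrives and lands in the incoming region, where
    // the local processor can read it as ordinary local memory.
    void on_external_write(size_t offset, const void* data, size_t len) {
        std::memcpy(incoming_.data() + offset, data, len);
    }

private:
    void forward_to_external(size_t /*offset*/, size_t /*len*/) {}

    std::vector<uint8_t> incoming_;
    std::vector<uint8_t> outgoing_;
};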