Abstract:
An information distribution system includes an interconnect and multiple data processing nodes coupled to the interconnect. Each data processing node includes mass storage and a cache. Each data processing node also includes interface logic configured to receive signals from the interconnect and to apply the signals from the interconnect to affect the content of the cache, and to receive signals from the mass storage and to apply the signals from the mass storage to affect the content of the cache. The content of the mass storage and cache of a particular node may also be provided to other nodes of the system, via the interconnect.
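A minimal Python sketch of one such node as described in this abstract: signals from the interconnect and from mass storage both affect the cache content, and the node can supply its content to peers over the interconnect. All class, method, and variable names are illustrative assumptions, not taken from the patent.

class Interconnect:
    """Illustrative interconnect that delivers content signals between nodes."""
    def __init__(self):
        self.nodes = []

    def register(self, node):
        self.nodes.append(node)

    def broadcast(self, sender_id, content_id, data):
        # Deliver the content signal to every other node on the interconnect.
        for node in self.nodes:
            if node.node_id != sender_id:
                node.on_interconnect_signal(content_id, data)

class DataProcessingNode:
    def __init__(self, node_id, interconnect):
        self.node_id = node_id
        self.mass_storage = {}   # persistent content, keyed by content id
        self.cache = {}          # smaller, faster copy of selected content
        self.interconnect = interconnect
        interconnect.register(self)

    # Interface logic: signals from the interconnect affect the cache content.
    def on_interconnect_signal(self, content_id, data):
        self.cache[content_id] = data

    # Interface logic: signals from mass storage also affect the cache content.
    def on_storage_signal(self, content_id):
        self.cache[content_id] = self.mass_storage[content_id]

    # Content of this node's cache or mass storage may be provided to other
    # nodes of the system via the interconnect.
    def serve(self, content_id):
        data = self.cache.get(content_id)
        if data is None:
            data = self.mass_storage.get(content_id)
        self.interconnect.broadcast(self.node_id, content_id, data)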
Abstract:
A system and method for maintaining storage object consistency across a distributed storage network including a migratable repository of last resort which stores a last or only remaining data replica that may not be deleted. The method includes the steps of monitoring data requests to the repository of last resort, deciding whether to move the repository of last resort, and migrating the repository of last resort.
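A minimal Python sketch of the three steps named in this abstract: monitoring data requests to the repository of last resort, deciding whether to move it, and migrating it. The class, the per-node request counter, and the majority-of-requests heuristic are illustrative assumptions, not the patent's actual decision criteria.

from collections import Counter

class RepositoryOfLastResort:
    def __init__(self, home_node, replicas):
        self.home_node = home_node
        self.replicas = replicas      # last or only remaining replicas; never deleted
        self.requests = Counter()     # per-node request counts (monitoring step)

    def record_request(self, requesting_node):
        # Monitoring step: observe which nodes ask for the data.
        self.requests[requesting_node] += 1

    def should_migrate(self):
        # Decision step (illustrative heuristic): move if some other node
        # issues the majority of the requests.
        if not self.requests:
            return None
        node, count = self.requests.most_common(1)[0]
        if node != self.home_node and count > sum(self.requests.values()) / 2:
            return node
        return None

    def migrate(self, target_node):
        # Migration step: the repository moves; the replicas are not deleted.
        self.home_node = target_node
        self.requests.clear()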
Abstract:
An agent, in response to a write to a shared block, is configured to initiate a read exclusive transaction on an interface on which the agent communicates. Additionally, the agent is configured to indicate, to a responding agent or agents on the interface, that a data transfer is not required from the responding agent or agents in response to the read exclusive transaction. In one embodiment, the agent indicates to the responding agents that a data transfer is not required in a response phase of the transaction. Specifically, the agent may respond in such a way that the agent indicates that it will provide the data (i.e. that the agent will provide the data to itself). For example, the agent may respond with an exclusive ownership indication. On the interface for such an embodiment, an exclusive ownership response may require that the agent having exclusive access respond with the data.
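A minimal Python sketch of the response-phase behavior this abstract describes: on a write to a block it holds shared, the agent initiates a read exclusive transaction and itself indicates that it will provide the data, so the responding agents invalidate their copies without transferring any data. The generic MESI-style state names and all identifiers are illustrative assumptions, not the patent's signal encoding.

SHARED, EXCLUSIVE, INVALID = "S", "E", "I"

class Bus:
    def __init__(self):
        self.agents = []

    def attach(self, agent):
        self.agents.append(agent)

    def read_exclusive(self, initiator, addr, initiator_provides_data):
        # Responding agents see the transaction together with the
        # "no data transfer required" indication.
        for agent in self.agents:
            if agent is not initiator:
                agent.snoop_read_exclusive(addr, initiator_provides_data)

class Agent:
    def __init__(self, name, bus):
        self.name = name
        self.lines = {}          # block address -> coherence state
        self.bus = bus
        bus.attach(self)

    def write(self, addr):
        if self.lines.get(addr) == SHARED:
            # Initiate the read exclusive transaction, responding with an
            # exclusive-ownership indication: this agent will provide the
            # data to itself, so no responding agent needs to transfer it.
            self.bus.read_exclusive(initiator=self, addr=addr,
                                    initiator_provides_data=True)
            self.lines[addr] = EXCLUSIVE
        # ... perform the write to the now-exclusive block ...

    def snoop_read_exclusive(self, addr, initiator_provides_data):
        # Responding agents invalidate their shared copies; because the
        # initiator indicated it will supply the data, they drive no data.
        if addr in self.lines:
            self.lines[addr] = INVALID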
Abstract:
A multiprocessor digital data processing system comprises a plurality of processing cells arranged in a hierarchy of rings. The system selectively allocates storage and moves exclusive data copies from cell to cell in response to access requests generated by the cells. Routing elements are employed to selectively broadcast data access requests, updates and transfers on the rings.
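A minimal Python sketch of the ring hierarchy this abstract outlines: processing cells hold exclusive data copies, rings group cells and subordinate rings, and a routing element forwards an access request onto a subordinate ring only when the requested datum may reside there. The membership test and all names are illustrative assumptions.

class Cell:
    def __init__(self, name):
        self.name = name
        self.data = {}                       # exclusive copies held by this cell

    def handle_request(self, addr):
        # The exclusive copy moves from this cell to the requester.
        return self.data.pop(addr, None)

class Ring:
    def __init__(self, members):
        self.members = members               # cells and/or subordinate rings

    def holds(self, addr):
        return any(m.holds(addr) if isinstance(m, Ring) else addr in m.data
                   for m in self.members)

    def request(self, addr):
        # Routing element: selectively broadcast the access request, descending
        # into a subordinate ring only if it may hold the requested data.
        for m in self.members:
            if isinstance(m, Ring):
                if m.holds(addr):
                    return m.request(addr)
            elif addr in m.data:
                return m.handle_request(addr)
        return None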
Abstract:
An apparatus for associating cache memories with processors within a multiprocessor data processing system is disclosed. The multiprocessor data processing system includes multiple processing units and multiple cache memories. Each of the cache memories includes a cache memory controller, and each cache memory controller includes a mode register. Each mode register has multiple processing unit fields, and each of the processing unit fields is associated with one of the processing units, indicating whether or not data from the associated processing unit should be cached by the cache memory associated with the corresponding cache memory controller.
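A minimal Python sketch of the mode register this abstract describes: one processing unit field per processing unit, telling the cache memory controller whether data from that processing unit should be cached by its cache memory. All names are illustrative assumptions.

class CacheMemoryController:
    def __init__(self, num_processing_units):
        # Mode register: one processing unit field per processing unit.
        self.mode_register = [False] * num_processing_units
        self.cache = {}

    def associate(self, pu_id, should_cache):
        # Set the field that associates this cache with processing unit pu_id.
        self.mode_register[pu_id] = should_cache

    def handle_access(self, pu_id, addr, data):
        # Cache the data only if the field for the requesting unit is set.
        if self.mode_register[pu_id]:
            self.cache[addr] = data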
Abstract:
A portion of the global memory of a multiprocessing computer system, called local memory space, is allocated to each node. Data from a remote node may be copied to the local memory space of a node so that accesses to the data may be performed locally rather than globally. The copied data is referred to as a shadow page. The global address of the data is translated to a local physical address for the node to which the data is copied. To reduce the size of the translation tables for converting between global addresses and local physical addresses, the pages to which shadow copies may be stored, and the global addresses that may be converted to local physical addresses, may be restricted. Multiple pages of local memory space may be allocated to one entry of a local physical address to global address (LPA2GA) table. When a page is allocated to store shadow pages, the entry in the LPA2GA table associated with that page is marked as unavailable. Accordingly, new translations may not be stored to that entry of the LPA2GA table, and other pages associated with that entry may not be allocated to store shadow pages. In a similar manner, multiple pages of the global address space are mapped to an entry in a global address to local physical address (GA2LPA) translation table. When data corresponding to a page within the global address space is stored as a shadow page, the entry associated with the global address is marked as unavailable. Accordingly, other pages associated with that entry of the GA2LPA table may not be stored as shadow pages because the entry is not available; in that case, the local copy of the data is not stored and the node must access the data globally. To decrease the probability that an entry is not available for a page, the GA2LPA table may be implemented as a set-associative table. To further increase the availability of entries in the GA2LPA table, a skewed-associative organization may be implemented, with an insertion algorithm that realigns the translations in the table to maximize utilization of the available entries.
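A minimal Python sketch of the availability rule this abstract describes: several pages alias to one translation-table entry, and once an entry holds a shadow-page translation it becomes unavailable, so other pages mapping to the same entry cannot be shadowed and must be accessed globally. The table sizes and the simple modulo indexing are illustrative assumptions; the abstract also describes set-associative and skewed-associative organizations.

class TranslationTable:
    def __init__(self, num_entries):
        self.entries = [None] * num_entries   # None means the entry is available
        self.num_entries = num_entries

    def index(self, page):
        # Multiple pages alias to the same entry.
        return page % self.num_entries

    def available(self, page):
        return self.entries[self.index(page)] is None

    def install(self, from_page, to_page):
        self.entries[self.index(from_page)] = (from_page, to_page)

    def lookup(self, from_page):
        entry = self.entries[self.index(from_page)]
        return entry[1] if entry and entry[0] == from_page else None

# GA2LPA maps global addresses to local physical addresses of shadow pages;
# LPA2GA maps local pages back to the global addresses they shadow.
ga2lpa = TranslationTable(num_entries=4)
lpa2ga = TranslationTable(num_entries=4)

def allocate_shadow_page(global_page, local_page):
    # A shadow copy is created only if both entries are available; otherwise
    # the local copy is not stored and the node accesses the data globally.
    if ga2lpa.available(global_page) and lpa2ga.available(local_page):
        ga2lpa.install(global_page, local_page)
        lpa2ga.install(local_page, global_page)
        return True
    return False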
Abstract:
Digital multiprocessor methods and apparatus comprise a plurality of processors, including a first processor for normally processing an instruction stream including instructions from a first instruction source. At least one of the processors can transmit inserted-instructions to the first processor. Inserted-instructions are executed by the first processor in the same manner as, and without affecting the sequence of, instructions from the first instruction source. The first instruction source can be a memory element, including an instruction cache element for storing digital values representative of instructions and program steps, or an execution unit (CEU) which asserts signals to the instruction cache element to cause instructions to be transmitted to the CEU. The processors include input/output (I/O) processors having direct memory access (DMA) insert elements, which respond to a peripheral device to generate DMA inserted-instructions. These DMA inserted-instructions are executable by the first processor in the same manner as, and without affecting the processing sequence of, the instructions from the first instruction source.
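A minimal Python sketch of inserted-instructions as this abstract presents them: the first processor normally executes the stream from its instruction source, while another processor (for example, an I/O processor's DMA insert element) can enqueue inserted-instructions that it executes in the same manner, without disturbing the order of the original stream. The two-queue arrangement and all names are illustrative assumptions.

from collections import deque

class FirstProcessor:
    def __init__(self, instruction_source):
        self.instruction_source = deque(instruction_source)  # first instruction source
        self.inserted = deque()                               # inserted-instructions
        self.executed = []

    def insert_instruction(self, instruction):
        # Called by another processor, e.g. a DMA insert element responding
        # to a peripheral device.
        self.inserted.append(instruction)

    def step(self):
        # Inserted-instructions are executed like ordinary ones; the original
        # stream's sequence is preserved and simply resumes afterwards.
        if self.inserted:
            self.executed.append(self.inserted.popleft())
        elif self.instruction_source:
            self.executed.append(self.instruction_source.popleft())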
Abstract:
The present invention provides a hybrid Non-Uniform Memory Architecture (NUMA) and Cache-Only Memory Architecture (COMA) caching architecture together with a cache-coherent protocol for a computer system having a plurality of sub-systems coupled to each other via a system interconnect. In one implementation, each sub-system includes at least one processor, a page-oriented COMA cache and a line-oriented hybrid NUMA/COMA cache. Such a hybrid system provides flexibility and efficiency in caching both large and small, and/or sparse and packed data structures. Each sub-system is able to independently store data in COMA mode or in NUMA mode. When caching in COMA mode, a sub-system allocates a page of memory space and then stores the data within the allocated page in its COMA cache. Depending on the implementation, while caching in COMA mode, the sub-system may also store the same data in its hybrid cache for faster access. Conversely, when caching in NUMA mode, the sub-system stores the data, typically a line of data, in its hybrid cache.
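A minimal Python sketch of the per-sub-system caching choice this abstract describes: in COMA mode a page of memory space is allocated in the page-oriented COMA cache (optionally also placing the line in the hybrid cache for faster access), while in NUMA mode only the line is placed in the line-oriented hybrid NUMA/COMA cache. Page and line sizes and all names are illustrative assumptions.

PAGE_SIZE = 4096
LINE_SIZE = 64

class SubSystem:
    def __init__(self):
        self.coma_cache = {}      # page-oriented: page number -> {offset: line}
        self.hybrid_cache = {}    # line-oriented: line address -> line data

    def cache_data(self, addr, line, mode, also_in_hybrid=False):
        line_addr = addr - addr % LINE_SIZE
        if mode == "COMA":
            page = addr // PAGE_SIZE
            self.coma_cache.setdefault(page, {})         # allocate the page space
            self.coma_cache[page][addr % PAGE_SIZE] = line
            if also_in_hybrid:                           # optional faster copy
                self.hybrid_cache[line_addr] = line
        else:                                            # NUMA mode: line only
            self.hybrid_cache[line_addr] = line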
Abstract:
A multi-core system that includes a 64-bit cache storage and a 32-bit memory storage that stores a 32-bit cache object directory. One or more cache engines execute on cores of the multi-core system to retrieve objects from the 64-bit cache, create cache directory objects, insert the created cache directory object into the cache object directory, and search for cache directory objects in the cache object directory. When an object is stored in the 64-bit cache, a cache engine can create a cache directory object that corresponds to the cached object and can insert the created cache directory object into an instance of a cache object directory. A second cache engine can receive a request to access the cached object and can identify a cache directory object in the instance of the cache object directory, using a hash key calculated based on one or more attributes of the cached object.
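A minimal Python sketch of the directory flow this abstract describes: when an object is stored in the cache, a cache engine creates a cache directory object and inserts it into the cache object directory; a second engine later locates it with a hash key calculated from attributes of the cached object. The attribute choice, the SHA-1 hash, and all names are illustrative assumptions, and the 64-bit/32-bit storage distinction is not modeled.

import hashlib

def hash_key(attributes):
    # Hash key calculated from one or more attributes of the cached object.
    joined = "|".join(str(attributes[k]) for k in sorted(attributes))
    return hashlib.sha1(joined.encode()).hexdigest()

class CacheObjectDirectory:
    def __init__(self):
        self.entries = {}                     # hash key -> cache directory object

    def insert(self, directory_object):
        self.entries[directory_object["key"]] = directory_object

    def find(self, attributes):
        return self.entries.get(hash_key(attributes))

class CacheEngine:
    def __init__(self, cache, directory):
        self.cache = cache                    # shared object cache
        self.directory = directory            # shared cache object directory

    def store(self, attributes, data):
        key = hash_key(attributes)
        self.cache[key] = data
        # The cache directory object records where the cached object lives.
        self.directory.insert({"key": key, "location": key})

    def fetch(self, attributes):
        entry = self.directory.find(attributes)
        return self.cache.get(entry["location"]) if entry else None

# Illustrative usage: two engines sharing one cache and directory instance.
cache, directory = {}, CacheObjectDirectory()
engine_a = CacheEngine(cache, directory)
engine_b = CacheEngine(cache, directory)
engine_a.store({"url": "/a", "host": "example"}, b"payload")
assert engine_b.fetch({"url": "/a", "host": "example"}) == b"payload"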