Abstract:
An opaque memory region for a bridge of an I/O adapter. The opaque memory region is inaccessible to memory transactions that traverse the bridge in either direction, from the primary bus to the secondary bus or from the secondary bus to the primary bus. As a result, memory transactions that target the opaque memory region are ignored by the bridge, allowing the same address to exist on both sides of the bridge with different data stored on each side. The opaque memory region provides a means to complete memory transactions within I/O adapter subsystem memory, thereby relieving host computer system memory resources. In addition, a number of I/O adapters can be used in a host computer system in which the host and all the I/O devices use some of the same memory addresses.
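A minimal sketch of the forwarding decision described above, written in C with hypothetical names (opaque_region_t, bridge_forwards) and simplified semantics: any transaction whose target address falls inside the opaque region is simply not forwarded across the bridge, so each side can hold different data at the same address.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical opaque-region descriptor: addresses in [base, base + size). */
typedef struct {
    uint64_t base;
    uint64_t size;
} opaque_region_t;

/* True if the transaction address falls inside the opaque region. */
static bool in_opaque_region(const opaque_region_t *r, uint64_t addr)
{
    return addr >= r->base && addr < r->base + r->size;
}

/* Forwarding decision for either direction (primary->secondary or
 * secondary->primary): opaque-region addresses are ignored by the bridge,
 * so such transactions complete in local (adapter-side) memory instead. */
static bool bridge_forwards(const opaque_region_t *r, uint64_t addr)
{
    return !in_opaque_region(r, addr);
}

int main(void)
{
    opaque_region_t region = { .base = 0x40000000, .size = 0x00100000 };

    printf("0x3FFFFFFF forwarded: %d\n", bridge_forwards(&region, 0x3FFFFFFFULL));
    printf("0x40000800 forwarded: %d\n", bridge_forwards(&region, 0x40000800ULL));
    return 0;
}
```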
Abstract:
Private devices are implemented on the secondary interface of a PCI bridge by re-routing the activation of device select signals (IDSEL) during the address phase of a Type 0 configuration operation on the secondary bus, in response to a Type 1 configuration operation on the primary bus. Under control of a mask register and a device select reroute circuit, if a configuration command on the primary interface attempts to activate the IDSEL line associated with one of the private, or reroute, devices on the secondary interface, a different IDSEL is activated to select a monitoring device on the secondary interface.
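The rerouting can be pictured as a simple lookup, sketched below with assumed names (idsel_reroute_t, reroute_idsel, monitor_idsel); the real circuit operates on IDSEL wires during the configuration address phase rather than on integer indices.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical reroute state: one mask bit per secondary-bus IDSEL line.
 * A set bit marks the device as private ("reroute") behind the bridge. */
typedef struct {
    uint32_t private_mask;   /* IDSEL lines hidden from the primary bus  */
    unsigned monitor_idsel;  /* IDSEL of the monitoring device to select */
} idsel_reroute_t;

/* Given the IDSEL that a Type 1 -> Type 0 conversion would normally
 * assert, return the IDSEL actually driven on the secondary bus.      */
static unsigned reroute_idsel(const idsel_reroute_t *rr, unsigned requested)
{
    if (rr->private_mask & (1u << requested))
        return rr->monitor_idsel;   /* redirect to the monitoring device  */
    return requested;               /* non-private devices are unaffected */
}

int main(void)
{
    idsel_reroute_t rr = { .private_mask = (1u << 3) | (1u << 5),
                           .monitor_idsel = 7 };

    printf("IDSEL 2 -> %u\n", reroute_idsel(&rr, 2)); /* unchanged      */
    printf("IDSEL 3 -> %u\n", reroute_idsel(&rr, 3)); /* rerouted to 7  */
    return 0;
}
```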
Abstract:
To prevent performance impacts when dealing with target devices that can transfer only a limited number of bytes before disconnecting, the invention implements a short-term data cache on the bridge. Using this feature, the bridge caches additional data beyond a predetermined quantity of data following a disconnect with the requesting device. As such, the bridge may continue to prefetch additional data up to an amount specified by a prefetch read byte count and return the additional data should the requesting device request additional data resuming at the point of disconnect. However, the bridge discards the additional data when at least one of the following occurs: a) the requesting device disconnects the data transfer, or b) a further READ request that resumes at the point of disconnect is not received within a predetermined time.
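A rough sketch of the cache-and-resume behavior, under assumed names (short_term_cache_t, on_disconnect, on_read) and with the timeout path omitted; the prefetch read byte count is modeled as a fixed buffer size, and a READ that does not resume at the disconnect point simply discards the cached data.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical short-term read cache kept by the bridge after a disconnect. */
#define PREFETCH_READ_BYTE_COUNT 512

typedef struct {
    uint64_t resume_addr;    /* address at which the requester disconnected  */
    uint8_t  data[PREFETCH_READ_BYTE_COUNT];
    size_t   valid;          /* bytes prefetched beyond the disconnect point */
    bool     armed;          /* cache holds data awaiting a resuming READ    */
} short_term_cache_t;

/* On disconnect: keep data prefetched up to the prefetch read byte count. */
static void on_disconnect(short_term_cache_t *c, uint64_t addr,
                          const uint8_t *prefetched, size_t n)
{
    if (n > sizeof c->data)
        n = sizeof c->data;
    memcpy(c->data, prefetched, n);
    c->resume_addr = addr;
    c->valid = n;
    c->armed = true;
}

/* A later READ resuming exactly at the disconnect point hits the cache;
 * anything else (or a timeout, handled elsewhere) discards the data.    */
static size_t on_read(short_term_cache_t *c, uint64_t addr,
                      uint8_t *out, size_t want)
{
    if (!c->armed || addr != c->resume_addr) {
        c->armed = false;        /* discard: not a resume at the same point */
        return 0;
    }
    size_t n = want < c->valid ? want : c->valid;
    memcpy(out, c->data, n);
    c->armed = false;
    return n;
}

int main(void)
{
    short_term_cache_t c = {0};
    uint8_t src[8] = {1, 2, 3, 4, 5, 6, 7, 8}, dst[8];

    on_disconnect(&c, 0x1000, src, sizeof src);
    printf("resumed bytes: %zu\n", on_read(&c, 0x1000, dst, sizeof dst));
    return 0;
}
```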
Abstract:
Embodiments relate to storing data to a system memory. An aspect includes accessing, by a stepper engine, successive entries of a cache directory having a plurality of directory entries, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the cache entry corresponding to the specific directory entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
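A simplified sketch of one low-priority stepper pass, with hypothetical names (dir_entry_t, stepper_pass) and stubbed-out store and priority checks; the engine walks successive directory entries, writes back modified lines, and marks them unmodified, yielding when other cache operations need the directory.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define ENTRIES 8

/* Hypothetical directory entry: one change-line state bit per cache line. */
typedef struct { bool modified; } dir_entry_t;

static dir_entry_t directory[ENTRIES];

/* Stub for the store path that copies a cache entry to system memory. */
static void store_line_to_system_memory(size_t index)
{
    printf("cast out line %zu to system memory\n", index);
}

/* Stub: in hardware this reflects competing, higher-priority cache ops. */
static bool higher_priority_work_pending(void) { return false; }

/* One pass of the stepper engine over successive directory entries. */
static size_t stepper_pass(size_t start)
{
    size_t i = start;
    while (i < ENTRIES) {
        if (higher_priority_work_pending())
            break;                            /* directory access is low priority */
        if (directory[i].modified) {
            store_line_to_system_memory(i);   /* send copy to system memory */
            directory[i].modified = false;    /* change line state -> unmodified */
        }
        i++;
    }
    return i;   /* index at which the next pass resumes */
}

int main(void)
{
    directory[2].modified = true;
    directory[5].modified = true;
    stepper_pass(0);
    return 0;
}
```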
Abstract:
Concurrent refresh in a cache memory includes calculating a refresh time interval at a centralized refresh controller, the centralized refresh controller being common to all cache memory banks of the cache memory, transmitting a starting time of the refresh time interval to a bank controller, the bank controller being local to, and associated with, only one cache memory bank of the cache memory, sampling a continuous refresh status indicative of a number of refreshes necessary to maintain data within the cache memory bank associated with the bank controller, requesting a gap in a processing pipeline of the cache memory to facilitate the number of refreshes necessary, receiving a refresh grant in response to the requesting, and transmitting an encoded refresh command to the bank controller, the encoded refresh command indicating a number of refresh operations granted to the cache memory bank associated with the bank controller.
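The controller/bank-controller handshake can be reduced to a few function calls, as in the sketch below; all names (bank_controller_t, request_pipeline_gap, bank_refresh_window) and the grant limit are assumptions, and the pipeline arbiter is stubbed out.

```c
#include <stdio.h>

/* Hypothetical per-bank controller state. */
typedef struct {
    unsigned bank_id;
    unsigned pending_refreshes;  /* continuous refresh status sampled locally */
} bank_controller_t;

/* Centralized controller: one refresh interval shared by all banks. */
static unsigned calc_refresh_interval_cycles(unsigned retention_cycles,
                                             unsigned rows_per_bank)
{
    return retention_cycles / rows_per_bank;   /* spread refreshes evenly */
}

/* Stub arbiter: grants a gap in the cache processing pipeline and encodes
 * how many refresh operations the bank may perform in that gap.          */
static unsigned request_pipeline_gap(unsigned refreshes_needed)
{
    return refreshes_needed > 4 ? 4 : refreshes_needed;
}

/* Bank controller behavior at the start time it received: sample its
 * refresh need, request a gap, and perform only the granted number.   */
static void bank_refresh_window(bank_controller_t *bc, unsigned start_time)
{
    unsigned granted = request_pipeline_gap(bc->pending_refreshes);
    printf("bank %u at t=%u: %u of %u refreshes granted\n",
           bc->bank_id, start_time, granted, bc->pending_refreshes);
    bc->pending_refreshes -= granted;
}

int main(void)
{
    unsigned interval = calc_refresh_interval_cycles(64000, 256);
    bank_controller_t bank0 = { .bank_id = 0, .pending_refreshes = 6 };

    bank_refresh_window(&bank0, 0);
    bank_refresh_window(&bank0, interval);
    return 0;
}
```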
Abstract:
An input/output bus bridge and command queuing system includes an external interrupt router for receiving interrupt commands from bus unit controllers (BUCs) and responding with end of interrupt (EOI), interrupt return (INR), and interrupt reissue (IRR) commands. The interrupt router includes a first command queue for ordering EOI commands and a second command queue for ordering INR and IRR commands. A first-in-first-out (FIFO) command queue orders bus memory-mapped input/output (MMIO) commands. The EOI commands are directed from the first command queue to the input of the FIFO command queue. The EOI commands and the MMIO commands are directed from the FIFO command queue to an input/output bus, and the INR and IRR commands are directed from the second command queue to the input/output bus. In this way, strict ordering of EOI commands relative to MMIO accesses is maintained while INR and IRR commands are simultaneously allowed to bypass enqueued MMIO accesses.
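A toy model of the ordering property, using assumed names (fifo_t, cmd_kind_t) and trivial queues: EOIs are fed into the same FIFO as MMIO commands, so they cannot pass earlier MMIO accesses, while INR/IRR commands issue from a separate queue and bypass the enqueued MMIO traffic.

```c
#include <stdio.h>

/* Hypothetical command kinds from the abstract. */
typedef enum { CMD_MMIO, CMD_EOI, CMD_INR, CMD_IRR } cmd_kind_t;

#define QLEN 16

typedef struct {
    cmd_kind_t buf[QLEN];
    int head, tail;
} fifo_t;

static void       push(fifo_t *q, cmd_kind_t c) { q->buf[q->tail++ % QLEN] = c; }
static int        empty(const fifo_t *q)        { return q->head == q->tail; }
static cmd_kind_t pop(fifo_t *q)                { return q->buf[q->head++ % QLEN]; }

int main(void)
{
    fifo_t mmio_fifo = {0};   /* FIFO ordering MMIO commands and EOIs */
    fifo_t inr_irr_q = {0};   /* second queue for INR / IRR commands  */

    /* EOIs are routed into the MMIO FIFO so they stay strictly ordered
     * behind earlier MMIO accesses; INR/IRR go to the separate queue. */
    push(&mmio_fifo, CMD_MMIO);
    push(&mmio_fifo, CMD_EOI);
    push(&inr_irr_q, CMD_INR);

    /* INR/IRR issue to the I/O bus immediately, bypassing queued MMIO. */
    while (!empty(&inr_irr_q)) printf("issue INR/IRR kind %d\n", pop(&inr_irr_q));
    while (!empty(&mmio_fifo)) printf("issue ordered kind %d\n", pop(&mmio_fifo));
    return 0;
}
```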
Abstract:
A technique is provided for a cache. A cache controller accesses a set in a congruence class and determines that the set contains corrupted data based on an error being found. The cache controller determines that a delete parameter for taking the set offline is met and determines that a number of currently offline sets in the congruence class is higher than an allowable offline number threshold. The cache controller determines not to take the set in which the error was found offline based on determining that the number of currently offline sets in the congruence class is higher than the allowable offline number threshold.
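The offline-limit check amounts to a single comparison, sketched below with assumed names (congruence_class_t, should_take_set_offline): a set that meets the delete parameter is still left online if the congruence class is already at or beyond its allowable offline count.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-congruence-class bookkeeping for set deletion. */
typedef struct {
    unsigned offline_sets;            /* sets already taken offline         */
    unsigned allowable_offline_max;   /* allowable offline number threshold */
} congruence_class_t;

/* Decision from the abstract: even when the delete parameter is met, do not
 * take another set offline once the class exceeds its offline threshold.   */
static bool should_take_set_offline(const congruence_class_t *cc,
                                    bool delete_parameter_met)
{
    if (!delete_parameter_met)
        return false;
    return cc->offline_sets <= cc->allowable_offline_max;
}

int main(void)
{
    congruence_class_t cc = { .offline_sets = 3, .allowable_offline_max = 2 };
    printf("take offline: %d\n", should_take_set_offline(&cc, true)); /* 0 */
    return 0;
}
```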
Abstract:
Embodiments relate to accessing a cache line in a multi-level cache system having a plurality of nodes and a system memory. Based on a request for exclusive ownership of a specific cache line at the local node, the local node concurrently sends requests for the specific cache line to the system memory and to the remote nodes of the plurality of nodes. The specific cache line is found in a specific remote node, which is one of the remote nodes. The specific cache line is removed from the specific remote node for exclusive ownership by another node. Based on the specific remote node having the specific cache line in a ghost state, any subsequent fetch request initiated for the specific cache line from the specific remote node encounters the ghost state. When the ghost state is encountered, the subsequent fetch request is directed only to nodes of the plurality of nodes.
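A minimal sketch of the ghost-state idea, with assumed names (line_state_t, fetch_scope_for): when the line is taken away for exclusive ownership, the giving node records a ghost state rather than plain invalid, and a later fetch that hits the ghost state is sent only to the other nodes instead of also going to system memory.

```c
#include <stdio.h>

/* Hypothetical line states at a node's directory, including the ghost
 * state left behind when another node takes the line exclusively.     */
typedef enum { LINE_INVALID, LINE_VALID, LINE_GHOST } line_state_t;

typedef enum { FETCH_MEMORY_AND_NODES, FETCH_NODES_ONLY } fetch_scope_t;

/* When the line is removed for exclusive ownership by another node,
 * the giving node records the ghost state instead of plain invalid.  */
static line_state_t on_exclusive_removal(void) { return LINE_GHost; }

/* A later fetch that encounters the ghost state is directed only to the
 * other nodes, skipping the concurrent system-memory request.          */
static fetch_scope_t fetch_scope_for(line_state_t s)
{
    return (s == LINE_GHOST) ? FETCH_NODES_ONLY : FETCH_MEMORY_AND_NODES;
}

int main(void)
{
    line_state_t s = on_exclusive_removal();
    printf("scope: %s\n", fetch_scope_for(s) == FETCH_NODES_ONLY
                              ? "nodes only" : "memory + nodes");
    return 0;
}
```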
Abstract:
Embodiments relate to embedded Dynamic Random Access Memory (eDRAM) refresh rates in a high performance cache architecture. An aspect includes receiving a plurality of first signals. A refresh request is transmitted, via a refresh requestor, to a cache memory at a first refresh rate, the first refresh rate comprising an interval that includes a subset of the first signals. The first refresh rate corresponds to a maximum refresh rate. A refresh counter is reset based on receiving a second signal and is incremented after receiving each of a number of refresh requests. A current count is transmitted from the refresh counter to the refresh requestor based on receiving a third signal. The refresh request is then transmitted at a second refresh rate, lower than the first refresh rate, based on receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
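The rate selection can be illustrated as counter-plus-threshold logic, sketched below with assumed names (refresh_requestor_t, current_rate) and arbitrary example rates: the requestor starts at the maximum rate and drops to the lower rate once the count read back from the refresh counter exceeds the threshold.

```c
#include <stdio.h>

/* Hypothetical refresh requestor that starts at the maximum refresh rate
 * and drops to a slower rate once enough refreshes have been counted.   */
typedef struct {
    unsigned count;       /* refresh counter                                  */
    unsigned threshold;   /* refresh threshold                                */
    unsigned fast_rate;   /* first (maximum) refresh rate, requests/interval  */
    unsigned slow_rate;   /* second, lower refresh rate                       */
} refresh_requestor_t;

static void reset_counter(refresh_requestor_t *r) { r->count = 0; }
static void count_refresh(refresh_requestor_t *r) { r->count++; }

/* Rate selection: stay at the maximum rate until the current count read
 * back from the counter exceeds the threshold, then slow down.         */
static unsigned current_rate(const refresh_requestor_t *r)
{
    return (r->count > r->threshold) ? r->slow_rate : r->fast_rate;
}

int main(void)
{
    refresh_requestor_t r = { .threshold = 3, .fast_rate = 8, .slow_rate = 2 };

    reset_counter(&r);
    for (int i = 0; i < 5; i++) {
        printf("rate = %u requests/interval\n", current_rate(&r));
        count_refresh(&r);
    }
    return 0;
}
```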
Abstract:
Serializing instructions in a multiprocessor system includes receiving a plurality of processor requests at a central point in the multiprocessor system. Each of the plurality of processor requests includes a needs register having a requestor needs switch and a resource needs switch. The method also includes establishing a tail switch indicating the presence of the plurality of processor requests at the central point, establishing a sequential order of the plurality of processor requests, and processing the plurality of processor requests at the central point in the sequential order.
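A small sketch of the serialization point, with assumed names (needs_register_t, central_point_t) and a plain array standing in for the hardware queue: each request carries its needs register, the tail switch indicates that requests are pending at the central point, and requests are processed in the order they arrived.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-request needs register from the abstract. */
typedef struct {
    bool requestor_needs;   /* requestor needs switch */
    bool resource_needs;    /* resource needs switch  */
} needs_register_t;

typedef struct {
    int              cpu;
    needs_register_t needs;
} proc_request_t;

#define MAXREQ 8

/* Central point: requests are kept in arrival order; the tail switch is
 * set while any request is present, giving a simple serialization point. */
typedef struct {
    proc_request_t q[MAXREQ];
    int            count;
    bool           tail_switch;
} central_point_t;

static void enqueue(central_point_t *cp, proc_request_t r)
{
    cp->q[cp->count++] = r;
    cp->tail_switch = true;       /* requests are pending */
}

static void process_in_order(central_point_t *cp)
{
    for (int i = 0; i < cp->count; i++)
        printf("cpu %d: requestor=%d resource=%d\n", cp->q[i].cpu,
               cp->q[i].needs.requestor_needs, cp->q[i].needs.resource_needs);
    cp->count = 0;
    cp->tail_switch = false;      /* queue drained */
}

int main(void)
{
    central_point_t cp = {0};

    enqueue(&cp, (proc_request_t){ .cpu = 0, .needs = { true, false } });
    enqueue(&cp, (proc_request_t){ .cpu = 1, .needs = { false, true } });
    process_in_order(&cp);
    return 0;
}
```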