Abstract:
The present invention provides parallel processing of write-back and reload operations in a cache system, and optimal circuit utilization, by implementing moveable buffers in cache storage. The data and associated pointers are not permanently assigned to a particular buffer; hence, the buffers can move logically around the facility. The reload pointer always points to an empty entry, so that data retrieved from main memory, or from a cache at the same hierarchy level, on a cache miss can always be accommodated. The victim pointer always points to a modified entry, the next candidate for a write-back operation. A write-back operation is required alongside the reload operation to free an entry for further cache miss handling, unless a free entry already exists. Because the reload and victim pointers are moveable and the write-back buffer is integrated into the cache, no intra-cache data movement is needed, which improves cache miss handling performance.
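As a rough illustration of the moveable-pointer idea, the following C sketch models a small cache directory in which a reload pointer always tracks an empty entry and a victim pointer always tracks a modified entry; the structure, names, and entry count are assumptions for illustration, not the patented implementation.

```c
/* Illustrative sketch (not the patented design): a small cache directory
 * where a reload pointer always tracks an empty entry and a victim pointer
 * always tracks a modified entry, so no fixed reload/write-back buffers
 * are needed and data never moves between buffers inside the cache.      */
#include <stdio.h>

#define ENTRIES 8

typedef enum { INVALID, CLEAN, MODIFIED } state_t;

typedef struct {
    state_t state[ENTRIES];
    int     reload;   /* index of an empty entry reserved for reload data  */
    int     victim;   /* index of a modified entry, next write-back choice */
} cache_dir_t;

/* Re-scan the directory so both pointers satisfy their invariants. */
static void update_pointers(cache_dir_t *c)
{
    c->reload = c->victim = -1;
    for (int i = 0; i < ENTRIES; i++) {
        if (c->reload == -1 && c->state[i] == INVALID)  c->reload = i;
        if (c->victim == -1 && c->state[i] == MODIFIED) c->victim = i;
    }
}

/* Handle a miss: reload data lands in the entry named by the reload
 * pointer while, in parallel, the victim entry is written back to
 * create the next free slot.                                             */
static void handle_miss(cache_dir_t *c)
{
    printf("reload into entry %d\n", c->reload);
    c->state[c->reload] = MODIFIED;          /* assume the new line is dirtied */
    if (c->victim >= 0) {
        printf("write back entry %d in parallel\n", c->victim);
        c->state[c->victim] = INVALID;       /* becomes the next reload slot   */
    }
    update_pointers(c);                      /* pointers "move" to new entries */
}

int main(void)
{
    cache_dir_t c = { .state = { CLEAN, MODIFIED, INVALID, CLEAN,
                                 MODIFIED, CLEAN, CLEAN, CLEAN } };
    update_pointers(&c);
    handle_miss(&c);
    handle_miss(&c);
    return 0;
}
```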
Abstract:
A cache write-back operation, which writes modified data from the cache data array back to memory to resolve the inconsistency between them, can be cancelled based on a comparison of the progress of the write-back against a snoop push or snoop kill operation. A write-back is intended to create an empty slot to accommodate reload data after a cache miss; since a snoop push or snoop kill operation already creates an invalid entry in the cache, the write-back is not needed. If a snoop push or snoop kill occurs simultaneously with a write-back operation, the write-back machine is late-cancelled. System performance improves because more cache lines are preserved in the cache data array for possible future reuse.
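The late-cancel decision can be pictured as a comparison of machine progress, as in this hedged C sketch; the phase and snoop encodings are invented for illustration.

```c
/* Hypothetical sketch of the late-cancel decision: if a snoop push or
 * snoop kill is already creating an invalid (free) entry, a concurrent
 * write-back that has not yet committed to the bus is cancelled, keeping
 * its still-valid line in the data array for possible reuse.             */
#include <stdbool.h>
#include <stdio.h>

typedef enum { WB_IDLE, WB_SELECTED, WB_BUS_COMMITTED, WB_DONE } wb_phase_t;
typedef enum { SNOOP_NONE, SNOOP_PUSH, SNOOP_KILL } snoop_op_t;

/* Compare the progress of the write-back machine with the snoop machine.
 * Returns true when the write-back should be late-cancelled.             */
static bool late_cancel_write_back(wb_phase_t wb, snoop_op_t snoop)
{
    bool snoop_frees_entry   = (snoop == SNOOP_PUSH || snoop == SNOOP_KILL);
    bool wb_still_cancellable = (wb == WB_SELECTED);  /* not yet on the bus */
    return snoop_frees_entry && wb_still_cancellable;
}

int main(void)
{
    /* Snoop kill arrives while the write-back is only selected: cancel.   */
    printf("cancel=%d\n", late_cancel_write_back(WB_SELECTED, SNOOP_KILL));
    /* Write-back already committed on the bus: it must complete.          */
    printf("cancel=%d\n", late_cancel_write_back(WB_BUS_COMMITTED, SNOOP_PUSH));
    return 0;
}
```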
Abstract:
The present invention provides for managing an atomic facility cache write-back state machine. A first write-back selection is made. A reservation pointer pointing to the reserved line in the atomic facility data array is established. A next write-back selection is made. The entry referenced by the reservation pointer is removed from the next write-back selection, whereby the valid reservation line is precluded from being selected for write-back. This prevents a modified command from being invalidated.
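A minimal C sketch of the selection rule follows, under the assumption that removing the entry for the reservation pointer means skipping the reserved line during write-back victim selection; all names are illustrative.

```c
/* Illustrative sketch: write-back victim selection for an atomic-facility
 * cache that skips the line named by the reservation pointer, so a valid
 * reservation is never invalidated by being chosen for write-back.        */
#include <stdio.h>

#define LINES 4

typedef struct {
    int modified[LINES];   /* 1 if the line holds modified data            */
    int reservation;       /* reservation pointer: reserved line, or -1    */
} atomic_cache_t;

/* Pick the next write-back candidate, excluding the reserved line. */
static int select_write_back(const atomic_cache_t *c)
{
    for (int i = 0; i < LINES; i++)
        if (c->modified[i] && i != c->reservation)
            return i;
    return -1;   /* nothing eligible */
}

int main(void)
{
    atomic_cache_t c = { .modified = { 1, 1, 0, 1 }, .reservation = -1 };

    int first = select_write_back(&c);               /* first selection    */
    printf("first write-back selection: line %d\n", first);
    c.modified[first] = 0;       /* that line has now been written back    */

    c.reservation = 1;           /* reservation pointer established, line 1 */
    printf("next write-back selection: line %d\n", select_write_back(&c));
    return 0;                    /* line 1 is skipped despite being modified */
}
```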
Abstract:
A method, an apparatus, and a computer program are provided for controlling memory access. Direct Memory Access (DMA) units have become commonplace in a number of bus architectures, but managing limited system resources has become a challenge with multiple DMA units. To manage the multitude of commands generated and to preserve dependencies, flags embedded in commands or a barrier command are used. These operations can then control the order in which commands are executed so as to preserve dependencies.
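The ordering rules might be sketched in C as follows, with invented command and flag names: a fenced command waits for all earlier commands in its queue, and a barrier command blocks everything queued behind it.

```c
/* Hedged sketch: ordering DMA commands with either an embedded fence flag
 * on a command or a standalone barrier command. Names and encodings are
 * invented for illustration, not taken from the actual command set.       */
#include <stdbool.h>
#include <stdio.h>

enum { CMD_PUT, CMD_GET, CMD_BARRIER };
enum { FLAG_NONE = 0, FLAG_FENCE = 1 };

typedef struct {
    int  op;
    int  flags;
    bool done;
} dma_cmd_t;

/* A command may issue only when no ordering constraint holds it back.     */
static bool may_issue(const dma_cmd_t *q, int idx)
{
    bool self_ordered = (q[idx].op == CMD_BARRIER) ||
                        (q[idx].flags & FLAG_FENCE);
    for (int i = 0; i < idx; i++) {
        if (q[i].done)
            continue;
        if (q[i].op == CMD_BARRIER)   /* an unfinished barrier blocks everyone */
            return false;
        if (self_ordered)             /* barrier/fenced cmd waits for earlier  */
            return false;
    }
    return true;
}

int main(void)
{
    dma_cmd_t q[] = {
        { CMD_PUT,     FLAG_NONE,  false },
        { CMD_GET,     FLAG_FENCE, false },   /* waits for the PUT above   */
        { CMD_BARRIER, FLAG_NONE,  false },
        { CMD_PUT,     FLAG_NONE,  false },   /* waits for the barrier     */
    };
    int n = sizeof q / sizeof q[0];
    for (int i = 0; i < n; i++)
        printf("cmd %d may issue now: %d\n", i, may_issue(q, i));
    return 0;
}
```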
Abstract:
A system and method are provided for setting up a direct memory access for a first processor. The system includes the first processor and a local memory. The local memory is coupled to the first processor. A first direct memory access controller (DMAC) is coupled to the first processor and the local memory. A system memory is in communication with the first DMAC. A second processor is in communication with the first DMAC such that the second processor sets up the first DMAC to handle data transfer between the local memory and the system memory. The second processor is interrupted when the first DMAC finishes handling the data transfer.
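The described arrangement could be modeled roughly as below, where a callback stands in for the completion interrupt delivered to the second processor and the controller interface is entirely invented.

```c
/* Minimal sketch (invented interface, not the actual DMAC registers): a
 * second processor programs a DMAC that moves data between a first
 * processor's local memory and system memory, then is "interrupted"
 * (modeled as a callback) when the transfer finishes.                     */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint8_t *local_addr;           /* first processor's local memory       */
    uint8_t *system_addr;          /* system memory                        */
    size_t   length;
    void   (*on_complete)(void);   /* stands in for the completion interrupt */
} dmac_t;

/* Step 1: the second processor sets up the DMAC for the transfer.         */
static void dmac_setup(dmac_t *d, uint8_t *local, uint8_t *sys, size_t len,
                       void (*handler)(void))
{
    d->local_addr  = local;
    d->system_addr = sys;
    d->length      = len;
    d->on_complete = handler;
}

/* Step 2: the DMAC handles the transfer and notifies the second processor. */
static void dmac_run(dmac_t *d)
{
    memcpy(d->system_addr, d->local_addr, d->length);
    d->on_complete();
}

static void second_processor_isr(void)
{
    printf("second processor: DMA transfer complete\n");
}

int main(void)
{
    uint8_t local_mem[16]  = "hello";
    uint8_t system_mem[16] = { 0 };
    dmac_t  dmac;

    dmac_setup(&dmac, local_mem, system_mem, sizeof local_mem,
               second_processor_isr);
    dmac_run(&dmac);
    printf("system memory now holds: %s\n", (const char *)system_mem);
    return 0;
}
```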
Abstract:
The present invention provides a method and a system for providing cache management commands in a system supporting a DMA mechanism and caches. A DMA mechanism is set up by a processor. Software running on the processor generates cache management commands, and the DMA mechanism carries them out, thereby enabling software management of the caches. The commands include commands for writing data to the cache, for loading data from the cache, and for marking data in the cache as no longer needed. The cache can be a system cache or a DMA cache.
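A hedged C sketch of the flow: software builds a queue of cache management commands and the DMA mechanism carries them out. The opcode names are stand-ins, not the actual command set.

```c
/* Illustrative command names (WRITE, LOAD, DISCARD) for write-to-cache,
 * load-from-cache, and mark-as-no-longer-needed; not the real opcodes.    */
#include <stddef.h>
#include <stdio.h>

typedef enum {
    CACHE_CMD_WRITE,     /* write data into the (system or DMA) cache      */
    CACHE_CMD_LOAD,      /* load data from the cache                       */
    CACHE_CMD_DISCARD    /* mark cached data as no longer needed           */
} cache_cmd_op_t;

typedef struct {
    cache_cmd_op_t op;
    unsigned long  addr;
    size_t         len;
} cache_cmd_t;

/* The DMA mechanism drains the software-built queue and acts on the cache. */
static void dma_execute(const cache_cmd_t *q, int n)
{
    for (int i = 0; i < n; i++) {
        switch (q[i].op) {
        case CACHE_CMD_WRITE:
            printf("write   0x%lx (%zu bytes) into cache\n", q[i].addr, q[i].len);
            break;
        case CACHE_CMD_LOAD:
            printf("load    0x%lx (%zu bytes) from cache\n", q[i].addr, q[i].len);
            break;
        case CACHE_CMD_DISCARD:
            printf("discard 0x%lx (%zu bytes), no write-back needed\n",
                   q[i].addr, q[i].len);
            break;
        }
    }
}

int main(void)
{
    /* Software running on the processor generates the commands...         */
    cache_cmd_t queue[] = {
        { CACHE_CMD_WRITE,   0x1000, 128 },
        { CACHE_CMD_LOAD,    0x2000, 128 },
        { CACHE_CMD_DISCARD, 0x1000, 128 },
    };
    /* ...and the DMA mechanism carries them out.                           */
    dma_execute(queue, sizeof queue / sizeof queue[0]);
    return 0;
}
```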
Abstract:
A system and method for communicating command parameters between a processor and a memory flow controller are provided. The system and method make use of a channel interface as the primary mechanism for communication between the processor and the memory flow controller. The channel interface provides channels for communicating with processor facilities, memory flow control facilities, machine state registers, and external processor interrupt facilities, for example. These channels may be designated as blocking or non-blocking. With blocking channels, when no data is available to be read from the corresponding registers, or there is no space available to write to the corresponding registers, the processor is placed in a low-power "stall" state. The processor is automatically awakened, via communication across the blocking channel, when data becomes available or space is freed. Thus, the channels of the present invention permit the processor to stay in a low-power state.
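The blocking-channel behavior can be approximated in software as below, using a condition variable to stand in for the hardware stall and wake-up; this is a model of the semantics, not the real MFC channel interface.

```c
/* Illustrative model: a blocking channel read "stalls" the reader in a
 * low-power wait when the channel count is zero and wakes it when the
 * other side writes data. A condition variable models the hardware stall. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  wake;      /* models the hardware wake-up              */
    uint32_t        data;
    int             count;     /* entries available to read (0 = blocking) */
} channel_t;

/* Blocking channel read: stall (sleep) until data becomes available.      */
static uint32_t channel_read(channel_t *ch)
{
    pthread_mutex_lock(&ch->lock);
    while (ch->count == 0)                     /* low-power "stall" state   */
        pthread_cond_wait(&ch->wake, &ch->lock);
    ch->count--;
    uint32_t v = ch->data;
    pthread_mutex_unlock(&ch->lock);
    return v;
}

/* The memory flow controller side writes the channel and wakes the reader. */
static void *mfc_side(void *arg)
{
    channel_t *ch = arg;
    sleep(1);                                  /* data not ready immediately */
    pthread_mutex_lock(&ch->lock);
    ch->data  = 0xCAFE;
    ch->count = 1;
    pthread_cond_signal(&ch->wake);            /* automatic wake-up          */
    pthread_mutex_unlock(&ch->lock);
    return NULL;
}

int main(void)
{
    channel_t ch = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };
    pthread_t t;
    pthread_create(&t, NULL, mfc_side, &ch);
    printf("channel read returned 0x%X\n", channel_read(&ch));
    pthread_join(t, NULL);
    return 0;
}
```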
Abstract:
The present invention provides a method and apparatus for creating memory barriers in a Direct Memory Access (DMA) device. A memory barrier command is received and a memory command is received. The memory command is executed based on the memory barrier command. A bus operation is initiated based on the memory barrier command. A bus operation acknowledgment is received based on the bus operation. The memory barrier command is executed based on the bus operation acknowledgment. In a particular aspect, memory barrier commands are direct memory access sync (dmasync) and direct memory access enforce in-order execution of input/output (dmaeieio) commands.
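One way to picture the described flow is the following C sketch, in which a dmasync or dmaeieio command sets a pending-barrier state, initiates a bus operation, and completes on the acknowledgment while later memory commands are held; the state machine is an assumption for illustration, not the actual DMA engine.

```c
/* Hedged sketch: a barrier command (dmasync/dmaeieio) initiates a bus
 * operation and is executed only when the bus acknowledgment arrives;
 * memory commands behind the barrier are held until then.                 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { CMD_DMA_PUT, CMD_DMASYNC, CMD_DMAEIEIO } cmd_t;

typedef struct {
    bool barrier_pending;   /* a dmasync/dmaeieio is waiting for its ack   */
} dma_engine_t;

/* Receive a command; barrier commands kick off a bus operation.           */
static void receive(dma_engine_t *e, cmd_t c)
{
    if (c == CMD_DMASYNC || c == CMD_DMAEIEIO) {
        e->barrier_pending = true;
        printf("barrier received: initiating ordering bus operation\n");
    } else if (e->barrier_pending) {
        printf("memory command held behind pending barrier\n");
    } else {
        printf("memory command executed\n");
    }
}

/* Bus acknowledgment arrives: the barrier completes and releases commands. */
static void bus_ack(dma_engine_t *e)
{
    e->barrier_pending = false;
    printf("bus ack received: barrier complete, later commands released\n");
}

int main(void)
{
    dma_engine_t e = { false };
    receive(&e, CMD_DMA_PUT);     /* executes immediately                   */
    receive(&e, CMD_DMASYNC);     /* starts bus operation, sets barrier     */
    receive(&e, CMD_DMA_PUT);     /* held until the acknowledgment          */
    bus_ack(&e);                  /* barrier executes on the ack            */
    receive(&e, CMD_DMA_PUT);     /* now executes                           */
    return 0;
}
```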
Abstract:
The present invention provides for a cache-accessing system employing a binary tree with decision nodes. A cache comprising a plurality of sets is provided. A locking or streaming replacement strategy is employed for individual sets of the cache. A replacement management table is also provided and is employable for managing a replacement policy of the information associated with the plurality of sets. A pseudo-least-recently-used function is employed to determine the least recently used set of the cache, for purposes such as set replacement. An override signal line is also provided; the override signal is employable to enable an overwrite of a decision node of the binary tree. A value signal is also provided; the value signal is employable to overwrite the decision node of the binary tree.
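A minimal pseudo-LRU sketch for four candidates follows; the tree encoding and the override/value signalling are assumptions chosen to illustrate how a decision node can be forcibly overwritten, for example under a locking or streaming policy from the replacement management table.

```c
/* Illustrative pseudo-LRU: a binary tree of decision nodes picks the
 * (pseudo) least recently used member among 4 candidates, and an override
 * signal with an accompanying value signal can overwrite a decision node. */
#include <stdbool.h>
#include <stdio.h>

#define NODES 3              /* node 0 = root, nodes 1 and 2 = children    */

typedef struct {
    bool node[NODES];        /* false = LRU side is left, true = right     */
} plru_tree_t;

/* Overwrite one decision node when the override signal is asserted.       */
static void override_node(plru_tree_t *t, bool override, int which, bool value)
{
    if (override)
        t->node[which] = value;
}

/* Walk the tree to find the pseudo-LRU victim among members 0..3.         */
static int plru_victim(const plru_tree_t *t)
{
    int child = t->node[0] ? 2 : 1;            /* pick subtree at the root  */
    int leaf  = t->node[child] ? 1 : 0;        /* pick leaf within subtree  */
    return (child - 1) * 2 + leaf;
}

/* On an access to member m, point the nodes on its path away from it.     */
static void plru_touch(plru_tree_t *t, int m)
{
    t->node[0]            = (m < 2);           /* accessed left? LRU is right */
    t->node[1 + (m >> 1)] = !(m & 1);
}

int main(void)
{
    plru_tree_t t = { { false, false, false } };
    plru_touch(&t, 0);
    plru_touch(&t, 2);
    printf("pseudo-LRU victim: %d\n", plru_victim(&t));
    /* Replacement management forces the root decision toward the right side. */
    override_node(&t, true, 0, true);
    printf("victim after override: %d\n", plru_victim(&t));
    return 0;
}
```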