Abstract:
A cache write-back operation, which writes modified data from the cache data array back to memory to resolve the inconsistency between them, can be cancelled based on a comparison of the relative progress of the write-back and a snoop push or snoop kill operation. A write-back is intended to free a slot to accommodate reload data after a cache miss; because a snoop push or snoop kill operation itself creates an invalid entry in the cache, the write-back is then unnecessary. If a snoop push or snoop kill coincides with a write-back operation, the write-back machine is late-cancelled. System performance improves because more cache lines are preserved in the cache data array for possible future reuse.
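As a minimal sketch of the late-cancel decision described above (all structure, field, and function names here are illustrative assumptions, not taken from the patent):

```c
#include <stdbool.h>

/* Hypothetical state for one write-back state machine. */
struct wb_machine {
    bool     active;     /* a write-back has been dispatched        */
    unsigned line;       /* cache line being written back           */
    bool     data_sent;  /* point of no return: data already on bus */
};

/* Late cancel: if a snoop push or snoop kill targets the same line
 * and the write-back has not passed the point of no return, the
 * snoop will invalidate the entry anyway, so the write-back is
 * redundant and can be cancelled. */
static bool late_cancel_writeback(struct wb_machine *wb,
                                  unsigned snoop_line,
                                  bool snoop_push_or_kill)
{
    if (wb->active && snoop_push_or_kill &&
        snoop_line == wb->line && !wb->data_sent) {
        wb->active = false;  /* cancel: the snoop frees the slot */
        return true;
    }
    return false;            /* write-back proceeds normally */
}
```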
Abstract:
The present invention provides for managing an atomic facility cache write-back state machine. A first write-back selection is made, and a reservation pointer pointing to the reserved line in the atomic facility data array is established. When the next write-back selection is made, the entry indicated by the reservation pointer is removed from the candidates, whereby the valid reserved line is precluded from being selected for write-back. This prevents a modified, reserved line from being invalidated.
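A minimal sketch of excluding the reserved entry from victim selection, assuming an illustrative array model (names and sizes are not from the patent):

```c
#include <stdbool.h>

#define NUM_ENTRIES 8

/* Hypothetical model of the atomic-facility data array. */
struct atomic_facility {
    bool modified[NUM_ENTRIES]; /* entry holds dirty data      */
    int  reservation;           /* reserved entry index, or -1 */
};

/* Pick the next write-back victim, skipping the entry the
 * reservation pointer protects, so a valid reservation is never
 * written back and lost. */
static int select_writeback(const struct atomic_facility *af)
{
    for (int i = 0; i < NUM_ENTRIES; i++) {
        if (af->modified[i] && i != af->reservation)
            return i;           /* eligible victim */
    }
    return -1;                  /* no eligible entry */
}
```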
Abstract:
The present invention provides parallel processing of write-back and reload operations in a cache system, and optimum circuit utilisation, by implementing movable buffers in cache storage. The data and associated pointers, however, are not permanently assigned to a particular buffer; hence, the buffers can move logically around the facility. A reload pointer always points to an empty entry, so that data retrieved from main memory or from an equal-hierarchy cache on a cache miss can always be accommodated. A victim pointer always points to a modified entry, the next candidate for a write-back operation. Unless a free entry already exists, a write-back operation must accompany a reload operation in order to free an entry for further cache-miss handling. Because of these movable reload and victim pointers and the write-back buffer integrated into the cache, intra-cache data movement is unnecessary, which improves cache-miss handling performance.
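The pointer movement can be sketched as follows (a toy model under assumed names; the real facility's structure is not specified here):

```c
#define NUM_BUFFERS 4

enum buf_state { BUF_EMPTY, BUF_VALID, BUF_MODIFIED };

/* Any buffer can serve as the reload or victim buffer: the
 * pointers move, the data stays put. */
struct cache_buffers {
    enum buf_state state[NUM_BUFFERS];
    int reload;  /* always an EMPTY entry, ready for reload data    */
    int victim;  /* always a MODIFIED entry, next write-back choice */
};

/* After reload data lands, retarget the pointers instead of copying
 * data between dedicated reload/write-back buffers. If no EMPTY
 * entry remains, the victim must first be written back to free one
 * (the write-back that accompanies a reload). */
static void miss_handled(struct cache_buffers *cb)
{
    cb->state[cb->reload] = BUF_VALID; /* reload slot now holds data */

    for (int i = 0; i < NUM_BUFFERS; i++)
        if (cb->state[i] == BUF_EMPTY) { cb->reload = i; break; }

    for (int i = 0; i < NUM_BUFFERS; i++)
        if (cb->state[i] == BUF_MODIFIED) { cb->victim = i; break; }
}
```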
Abstract:
The present invention provides a method and apparatus for creating memory barriers in a Direct Memory Access (DMA) device. A memory barrier command is received, and a memory command is received. The memory command is executed based on the memory barrier command. A bus operation is initiated based on the memory barrier command, and a bus operation acknowledgment is received in response. The memory barrier command is executed based on the bus operation acknowledgment. In a particular aspect, the memory barrier commands are the direct memory access sync (dmasync) and direct memory access enforce in-order execution of input/output (dmaeieio) commands.
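One way to picture the ordering rule is a queue gate: a memory command may issue only after every older barrier has received its bus acknowledgment. This is a hedged sketch; the queue layout and names are assumptions:

```c
#include <stdbool.h>

enum cmd_kind { CMD_MEM, CMD_DMASYNC, CMD_DMAEIEIO };

/* Hypothetical DMA command-queue entry. */
struct dma_cmd {
    enum cmd_kind kind;
    bool done;   /* for barriers: bus acknowledgment received */
};

/* A memory command at queue position idx may issue only if no
 * older, unfinished barrier precedes it; the barrier itself
 * completes when its bus operation is acknowledged. */
static bool may_issue(const struct dma_cmd *queue, int idx)
{
    for (int i = 0; i < idx; i++) {
        bool barrier = (queue[i].kind == CMD_DMASYNC ||
                        queue[i].kind == CMD_DMAEIEIO);
        if (barrier && !queue[i].done)
            return false;  /* wait for the barrier's bus ack */
    }
    return true;
}
```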
Abstract:
A method, an apparatus, and a computer program are provided for controlling memory access. Direct Memory Access (DMA) units have become commonplace in a number of bus architectures. However, managing limited system resources has become a challenge with multiple DMA units. In order to manage the multitude of commands generated and to preserve dependencies, embedded flags in commands or a barrier command are used. These mechanisms can then control the order in which commands are executed so as to preserve dependencies.
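As an illustration of the embedded-flag approach (the flag names, tag field, and check are assumptions for the sketch, not the patent's definitions):

```c
#include <stdbool.h>
#include <stdint.h>

#define DMA_FLAG_FENCE   (1u << 0) /* wait for older cmds with same tag */
#define DMA_FLAG_BARRIER (1u << 1) /* also block younger same-tag cmds  */

struct dma_cmd {
    uint32_t tag;    /* dependency group        */
    uint32_t flags;  /* embedded ordering flags */
    bool     done;
};

/* A fenced command waits until every older command with the same
 * tag has completed, preserving dependencies between transfers. */
static bool fence_satisfied(const struct dma_cmd *q, int idx)
{
    if (!(q[idx].flags & DMA_FLAG_FENCE))
        return true;
    for (int i = 0; i < idx; i++) {
        if (q[i].tag == q[idx].tag && !q[i].done)
            return false;
    }
    return true;
}
```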
Abstract:
Disclosed is a coherent cache system that operates in conjunction with non-homogeneous processing units. A set of processing units of a first configuration has conventional caches and directly accesses common or shared system physical- and virtual-address memory through a conventional MMU (Memory Management Unit). Additional processors of a different configuration, and/or other devices that need to access system memory, are configured to store accessed data in compatible caches. Each of the caches is compatible with a given coherent memory-management bus protocol, the bus being interposed between the caches and system memory.
Abstract:
The present invention provides a method for a processor to write data to a cache or other fast memory without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. The locking cache or other fast memory can be used as additional system memory. In an embodiment of the invention, the locking cache is one or more sets of ways, but not all of the sets of ways, of a multi-way set-associative cache.
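One plausible realization (a sketch only; the lock bitmask and victim policy are assumptions) is to exclude locked ways from replacement so their contents persist until software overwrites them:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS 8

/* Hypothetical per-set replacement control: one lock bit per way
 * of a set-associative cache. */
struct set_ctrl {
    uint8_t locked;          /* bitmask of locked ways */
    bool    valid[NUM_WAYS];
};

/* Victim selection skips locked ways, so data written into them
 * stays resident until software explicitly overwrites it. */
static int pick_victim(const struct set_ctrl *s)
{
    for (int w = 0; w < NUM_WAYS; w++) {
        if (s->locked & (1u << w))
            continue;        /* locked way: never evict */
        if (!s->valid[w])
            return w;        /* prefer an invalid way   */
    }
    for (int w = 0; w < NUM_WAYS; w++)
        if (!(s->locked & (1u << w)))
            return w;        /* any unlocked way        */
    return -1;               /* all ways locked         */
}
```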
Abstract:
The present invention provides for atomic update primitives in an asymmetric single-chip heterogeneous multiprocessor computer system having a shared memory with DMA transfers. At least one lock line command is generated from a set comprising a get lock line command with reservation, a put lock line conditional command, and a put lock line unconditional command.
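The three commands support the familiar reserve/conditional-store pattern. Below is a toy single-threaded model (the simulated memory, function names, and line size are all illustrative; the unconditional put is omitted for brevity):

```c
#include <stdbool.h>
#include <stdint.h>

static uint32_t shared_mem[32]; /* stands in for system memory   */
static bool     reserved;       /* reservation flag for the line */

/* Get lock line with reservation: copy the line to local store
 * and set a reservation on it. */
static void get_lock_line_and_reserve(uint32_t *local)
{
    for (int i = 0; i < 32; i++) local[i] = shared_mem[i];
    reserved = true;
}

/* Put lock line conditional: the store succeeds only if the
 * reservation is still held. */
static bool put_lock_line_conditional(const uint32_t *local)
{
    if (!reserved) return false;  /* reservation lost: fail */
    for (int i = 0; i < 32; i++) shared_mem[i] = local[i];
    reserved = false;
    return true;
}

/* Atomic increment built from the primitives: reserve, modify the
 * local copy, and retry if the conditional put fails. */
static void atomic_increment(void)
{
    uint32_t line[32];
    do {
        get_lock_line_and_reserve(line);
        line[0] += 1;
    } while (!put_lock_line_conditional(line));
}
```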
Abstract:
Disclosed is an apparatus for controlling or managing the transmission of data packets over a multiplexed communication path, referred to herein as a bus, on a priority basis, up to a given authorized bandwidth (BW) in a given operational time period, for presently authorized devices or applications. Non-managed (not presently authorized) bus requests are handled on a prior-art “best effort” basis.
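Budget-per-period accounting is one way such management could work; this sketch uses invented names and a simple byte budget per operational period:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-device bandwidth accounting. */
struct bus_client {
    bool     authorized; /* presently authorized for managed BW  */
    uint32_t budget;     /* bytes allowed per operational period */
    uint32_t used;       /* bytes granted so far this period     */
};

/* Grant a managed request only while the client stays within its
 * authorized budget; unauthorized clients fall back to the
 * best-effort path. */
static bool grant_request(struct bus_client *c, uint32_t bytes,
                          bool *best_effort)
{
    if (!c->authorized) {
        *best_effort = true;  /* queue with best-effort traffic */
        return false;
    }
    *best_effort = false;
    if (c->used + bytes > c->budget)
        return false;         /* over budget: defer this period */
    c->used += bytes;
    return true;
}

/* Reset at each operational-period boundary. */
static void new_period(struct bus_client *c) { c->used = 0; }
```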