Abstract:
A cache memory logically associates a cache line with at least two cache sectors of a cache array, wherein different sectors have different output latencies and, for a load hit, the cache selectively enables the sectors based on their latency to output the cache line over successive clock cycles. Larger wires having a higher transmission speed are preferably used to output the cache line corresponding to the requested memory block. In the illustrative embodiment the cache array is arranged in rows and columns of cache sectors, and a given cache line is spread across sectors in different columns, with at least one portion of the cache line located in a first column having a first latency and another portion located in a second column having a second latency greater than the first. One set of wires oriented along a horizontal direction may be used to output the cache line, while another set of wires oriented along a vertical direction may be used for maintenance of the cache sectors. A given cache line is further preferably spread across sectors in different rows or cache ways. For example, a 128-byte cache line can be spread across four sectors in four different columns, each sector containing 32 bytes of the line, and the line is output over four successive clock cycles, with one sector transmitted during each cycle.
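As an illustrative sketch only (none of these identifiers come from the patent), the following C++ fragment models the 128-byte example above: four 32-byte sectors in four columns with staggered output latencies, enabled in latency order so that one sector drives the output bus in each of four successive cycles.

    #include <algorithm>
    #include <array>
    #include <cstdio>

    // Hypothetical model: a 128-byte line split into four 32-byte sectors,
    // one per column; each column has its own output latency in cycles.
    struct Sector {
        int column;   // which column of the cache array holds this 32B chunk
        int latency;  // cycles until this column's data reaches the output wires
    };

    int main() {
        std::array<Sector, 4> line = {{{0, 1}, {1, 2}, {2, 3}, {3, 4}}};
        // Enable the sectors in latency order so one 32B chunk arrives on
        // the output bus in each of four successive clock cycles.
        std::sort(line.begin(), line.end(),
                  [](const Sector& a, const Sector& b) { return a.latency < b.latency; });
        for (const Sector& s : line)
            std::printf("cycle %d: drive 32B from column %d\n", s.latency, s.column);
        return 0;
    }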
Abstract:
A processing unit for a multiprocessor data processing system includes a processor core having a store-through upper level cache, an instruction sequencing unit that fetches instructions for execution, a data register, and at least one instruction execution unit. The instruction execution unit, responsive to receipt of a load-reserve instruction from the instruction sequencing unit, executes the load-reserve instruction to determine a load target address. The processor core, responsive to the execution of the load-reserve instruction, performs a corresponding load-reserve operation by accessing the store-through upper level cache utilizing the load target address, causing data associated with the load target address to be loaded from the store-through upper level cache into the data register, and by establishing a reservation for a reservation granule including the load target address.
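A minimal software sketch of the described load-reserve (larx) flow, assuming a 128-byte reservation granule; the granule size, the map-based cache model, and all identifiers here are illustrative, not from the patent:

    #include <cstdint>
    #include <unordered_map>

    constexpr uint64_t kGranuleMask = ~uint64_t(127);  // assumed 128B granule

    struct Core {
        uint64_t reservation_addr  = 0;
        bool     reservation_valid = false;
        std::unordered_map<uint64_t, uint64_t> l1;  // store-through upper level cache

        uint64_t load_reserve(uint64_t base, uint64_t offset) {
            uint64_t target = base + offset;            // execution computes the address
            uint64_t data   = l1[target];               // load from the store-through L1
            reservation_addr  = target & kGranuleMask;  // reserve the containing granule
            reservation_valid = true;
            return data;                                // value lands in the data register
        }
    };

    int main() {
        Core core;
        core.l1[0x1008] = 42;
        return core.load_reserve(0x1000, 8) == 42 ? 0 : 1;
    }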
Abstract:
A data processing system includes a plurality of requestors and a memory controller for a system memory. In response to receiving from a requestor a read-type request targeting a memory block in the system memory, the memory controller protects the memory block from modification, and in response to an indication that the memory controller is responsible for servicing the read-type request, the memory controller transmits the memory block to the requestor. Prior to receipt of the memory block by the requestor, the memory controller ends its protection of the memory block from modification, and the requestor begins protection of the memory block from modification. In response to receipt of the memory block, the requestor ends its protection of the memory block from modification.
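The key property is that the two protection windows overlap: the requestor begins protecting the block before the memory controller stops, so the block is never unprotected while the data is in flight. A toy sketch under assumed time steps (all values and names are this sketch's assumptions):

    #include <cassert>
    #include <cstdio>

    struct Window { int open, close; };  // protects during [open, close)

    int main() {
        Window mc  = {0, 3};  // controller: from request until after data launch
        Window req = {2, 5};  // requestor: opens before receipt, closes at receipt
        assert(req.open < mc.close);  // windows overlap: no unprotected gap

        for (int t = 0; t < 5; ++t) {
            bool protected_now = (t >= mc.open && t < mc.close) ||
                                 (t >= req.open && t < req.close);
            std::printf("t=%d block protected: %s\n", t, protected_now ? "yes" : "no");
        }
        std::printf("t=5 block received by requestor; requestor ends protection\n");
        return 0;
    }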
Abstract:
A cache coherency protocol includes a modified-invalid (Mi) state that enables execution of a DMA Claim or DClaim operation to assign sole ownership of a cache line to a device that is going to overwrite the entire cache line, without cache-to-cache data transfer. The protocol enables completion of speculatively-issued full cache line writes without requiring cache-to-cache transfer of data on the data bus during a preceding DMA Claim or DClaim operation. The modified-invalid (Mi) state assigns sole ownership of the cache line to an I/O device that has speculatively issued a DMA Write, or to a processor that has speculatively issued a DCBZ operation, to overwrite the entire cache line, and the Mi state prevents data from being sent to the cache line from another cache, since that data will most probably be overwritten.
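A sketch of how such an extra state might sit in a MESI-style state machine; the enum values and function names are assumptions of this sketch, since the abstract defines only the Mi behavior itself:

    #include <cstdio>

    // MESI-like states extended with modified-invalid (Mi).
    enum class State { I, S, E, M, Mi };

    // A speculative full-line write (DCBZ or DMA Write) issues a
    // DClaim/DMA Claim: sole ownership is granted without any
    // cache-to-cache data transfer, because the line will be overwritten.
    State dclaim(State /*current*/) {
        return State::Mi;  // ownership without fetching the stale data
    }

    // Once the full-line write completes, the line is ordinarily Modified.
    State complete_full_line_write(State s) {
        return (s == State::Mi) ? State::M : s;
    }

    int main() {
        State s = dclaim(State::I);
        s = complete_full_line_write(s);
        std::printf("final state: %s\n", s == State::M ? "M" : "other");
        return 0;
    }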
Abstract:
A method and apparatus for performing data prefetch in a multiprocessor system are disclosed. The multiprocessor system includes multiple processors, each having a cache memory. The cache memory is subdivided into multiple slices. A group of prefetch requests is initially issued by a requesting processor in the multiprocessor system, each prefetch request intended for a respective slice of the requesting processor's cache memory. In response to the prefetch requests missing in the cache memory of the requesting processor, the prefetch requests are merged into one combined prefetch request. The combined prefetch request is then sent to the cache memories of all the non-requesting processors within the multiprocessor system. In response to a combined clean response from the cache memories of all the non-requesting processors, data for the combined prefetch request are then obtained from a system memory.
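A rough sketch of the merge-and-broadcast flow, assuming four slices and four processors; the addresses, counts, and the snoop_is_clean helper are all illustrative:

    #include <cstdio>
    #include <vector>

    struct Prefetch { int slice; unsigned long addr; };

    // Snoop at one non-requesting processor; true means a "clean" response
    // (no cached copy that must supply or protect the data).
    bool snoop_is_clean(int /*cpu*/, unsigned long /*addr*/) { return true; }

    int main() {
        // Per-slice prefetches that all missed in the requesting processor's cache.
        std::vector<Prefetch> missed = {
            {0, 0x1000}, {1, 0x1040}, {2, 0x1080}, {3, 0x10c0}};

        // Merge into one combined bus request instead of four separate ones.
        unsigned long combined_addr = missed.front().addr;

        bool combined_clean = true;  // logical AND of all snoop responses
        for (int cpu = 1; cpu < 4; ++cpu)
            combined_clean = combined_clean && snoop_is_clean(cpu, combined_addr);

        if (combined_clean)          // combined clean response received:
            std::printf("fetch data for all %zu slices from system memory\n",
                        missed.size());
        return 0;
    }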
Abstract:
A processing unit for a multiprocessor data processing system includes a processor core and a lower level cache including reservation logic that records reservations of the processor core. The reservation logic passes or fails store-conditional operations received from the processor core based upon whether the processor core holds reservations for the target store addresses of those operations. The processor core includes a store-through upper level cache, a reservation register, and sequencer logic that, by reference to the reservation register, can fail a store-conditional operation without communicating with the reservation logic.
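A minimal sketch of the fast-fail path, assuming a single reservation register holding a granule address; the stub standing in for the lower level cache's reservation logic, and all names, are illustrative:

    #include <cstdint>
    #include <cstdio>

    struct ReservationRegister {
        uint64_t addr  = 0;
        bool     valid = false;
    };

    // Stub for the lower level cache's reservation logic (the slow path).
    bool lower_level_passes(uint64_t /*addr*/) { return true; }

    bool store_conditional(const ReservationRegister& rsv, uint64_t target) {
        if (!rsv.valid || rsv.addr != target)
            return false;                   // fast fail: no round trip to the L2 logic
        return lower_level_passes(target);  // otherwise the L2 logic passes or fails it
    }

    int main() {
        ReservationRegister rsv{0x1000, true};
        std::printf("stcx to 0x1000: %s\n",
                    store_conditional(rsv, 0x1000) ? "pass" : "fail");
        std::printf("stcx to 0x2000: %s\n",
                    store_conditional(rsv, 0x2000) ? "pass" : "fail (fast, local)");
        return 0;
    }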
Abstract:
A method, system, and processor chip design for reducing the latency between completing a LARX operation and receiving the associated STCX operation that completes the update to the cache line. Each entry of the issuing processor's store queue is provided with an additional tracking bit (a priority bit). The priority bit is set whenever a STCX operation is placed within the entry. During selection of an entry for dispatch, the arbitration logic scans the value of the priority bit of each eligible entry. An entry with its priority bit set is given priority in the selection process, within architectural rules, and is then selected for dispatch as early as possible under the established rules.
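A sketch of the arbitration scan, assuming an eight-entry queue; the entry count, field names, and the fallback tie-break are assumptions of this sketch:

    #include <array>
    #include <cstdio>

    struct StqEntry {
        bool valid    = false;
        bool priority = false;  // set when a STCX is placed in the entry
    };

    // Pick an entry to dispatch: among eligible entries, prefer one whose
    // priority bit is set (a pending STCX), subject to architectural rules.
    int select_entry(const std::array<StqEntry, 8>& stq) {
        int fallback = -1;
        for (int i = 0; i < 8; ++i) {
            if (!stq[i].valid) continue;
            if (stq[i].priority) return i;  // STCX entries win arbitration
            if (fallback < 0) fallback = i; // else first eligible entry
        }
        return fallback;
    }

    int main() {
        std::array<StqEntry, 8> stq{};
        stq[2] = {true, false};             // an ordinary store
        stq[5] = {true, true};              // the STCX
        std::printf("dispatch entry %d\n", select_entry(stq));  // prints 5
        return 0;
    }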
Abstract:
A data processing system includes a memory controller of a system memory that receives first and second castout operations both specifying the same address. In response to receiving said first and second castout operations, the memory controller performs a single update to the system memory.
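One way to picture the coalescing (purely illustrative; which castout's data survives the merge is an assumption of this sketch, not stated by the abstract):

    #include <cstdio>
    #include <unordered_map>

    struct Castout { unsigned long addr; int data; };

    int main() {
        // addr -> pending castout data; a second castout to the same address
        // coalesces with the first instead of producing a second write.
        std::unordered_map<unsigned long, int> pending;
        Castout first  = {0x2000, 111};
        Castout second = {0x2000, 222};  // same address as the first

        pending[first.addr]  = first.data;
        pending[second.addr] = second.data;  // overwrites the pending entry

        for (const auto& [addr, data] : pending)  // one write per address
            std::printf("memory[0x%lx] <= %d (single update)\n", addr, data);
        return 0;
    }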
Abstract:
A method and processor system that substantially eliminates data bus operations when completing updates of an entire cache line via a full store queue entry. The store queue within a processor chip is designed with a series of AND gates connecting the individual byte enable bits of a corresponding entry. The AND output is fed to the STQ controller and signals when the entry is full. When a full entry is selected for dispatch to the RC machines, the RC machine is signaled that the entry updates the entire cache line. The RC machine obtains write permission to the line and then overwrites the entire cache line. Because the entire cache line is overwritten, the data of the cache line is not retrieved when the request for the cache line misses at the cache or when the data goes stale before write permission is obtained by the RC machine.
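A software analogue of the byte-enable AND tree, assuming a 128-byte line whose enable bits are packed into two 64-bit words; the widths and names are illustrative:

    #include <cstdint>
    #include <cstdio>

    struct StqEntry {
        // one enable bit per byte of the cache line, packed into two words
        uint64_t byte_enable[2];
    };

    bool entry_is_full(const StqEntry& e) {
        // equivalent to ANDing all 128 individual bits, as the gate tree does:
        // the result is all-ones only if every enable bit in both words is set
        return (e.byte_enable[0] & e.byte_enable[1]) == ~uint64_t(0);
    }

    int main() {
        StqEntry e = {{~uint64_t(0), ~uint64_t(0)}};  // every byte enabled
        if (entry_is_full(e))
            std::printf("signal RC machine: full line; claim write permission "
                        "and overwrite without fetching data\n");
        return 0;
    }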