Abstract:
A multiprocessor data processing system requires careful management to maintain cache coherency. Conventional systems using a MESI approach sacrifice some performance through inefficient lock-acquisition and lock-retention techniques. The disclosed system provides additional cache states, indicator bits, and lock-acquisition routines to improve cache performance. The additional cache states allow cache state transition sequences to be optimized by replacing frequently occurring, inefficient MESI code sequences with improved sequences that use the additional cache states.
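As a rough illustration of the idea, the sketch below extends a MESI enumeration with a hypothetical lock-oriented state; the state name and the transition rule are assumptions for illustration, not the patent's actual encoding.

```c
#include <stdbool.h>

/* Classic MESI states plus a hypothetical lock-oriented extension. */
typedef enum {
    INVALID, SHARED, EXCLUSIVE, MODIFIED,
    MODIFIED_LOCKED  /* assumed state: line is modified and holds an active lock */
} cache_state_t;

/* One possible optimized sequence: acquiring a lock on a line already held
 * EXCLUSIVE or MODIFIED upgrades silently to MODIFIED_LOCKED, replacing a
 * longer MESI sequence that would issue a bus transaction. */
cache_state_t acquire_lock(cache_state_t s, bool *bus_transaction_needed)
{
    *bus_transaction_needed = !(s == EXCLUSIVE || s == MODIFIED);
    return MODIFIED_LOCKED;
}
```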
Abstract:
A multiprocessor data processing system requires careful management to maintain cache coherency. Conventional systems using a MESI approach sacrifice some performance through inefficient lock-acquisition and lock-retention techniques. The disclosed system provides additional cache states, indicator bits, and lock-acquisition routines to improve cache performance. In particular, as multiple processors compete for the same cache line, a significant amount of processor time is lost determining whether another processor's cache line lock has been released and attempting to reserve that cache line while it is still owned by the other processor. The preferred embodiment provides an indicator bit with the cache store command that specifically indicates whether the store also acts as a lock release.
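A minimal sketch, assuming a particular command layout, of how such a lock-release indicator bit might travel with a store command; the struct and field names are illustrative, not the disclosed format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical cache store command carrying a lock-release bit. */
typedef struct {
    uint64_t address;       /* target cache line address */
    uint64_t data;          /* store payload             */
    bool     lock_release;  /* set when this store also releases
                               the lock on the line      */
} store_cmd_t;

/* A waiting processor snooping the bus can test the bit directly,
 * rather than repeatedly polling and trying to reserve the line
 * while it is still owned by the other processor. */
bool store_releases_lock(const store_cmd_t *cmd, uint64_t watched_line)
{
    return cmd->lock_release && (cmd->address == watched_line);
}
```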
Abstract:
An information handling system includes a plurality of sequentially connected units, including a first unit, a second unit, and a third unit. Packets of information flow from the first unit directly to the second unit and then to the third unit, and each of the units provides a respective dynamic output indication indicating whether that unit has output a packet. The information handling system further includes a control unit that determines whether the first unit can output a packet directly to the second unit without packet loss, utilizing all of the dynamic output indications, the packet buffering capacities of the units, and the guaranteed packet flows between adjacent units. In response to this determination, the control unit outputs a control signal to the first unit.
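One way to model the control unit's decision is with a credit counter tracking free buffer slots in the second unit; the sketch below is an illustrative reading of the abstract, and the credit rule and field names are assumptions.

```c
#include <stdbool.h>

/* Credit-style model of the control unit. The counter and update rule
 * are illustrative assumptions. */
typedef struct {
    int credits;          /* free buffer slots known in the second unit  */
    int capacity;         /* total buffering capacity of the second unit */
    int guaranteed_flow;  /* packets sure to move from the second unit
                             to the third unit this cycle                */
} flow_ctrl_t;

/* Called each cycle with the dynamic output indications; returns the
 * control signal telling the first unit whether it may send. */
bool control_signal(flow_ctrl_t *fc, bool unit1_sent, bool unit2_sent)
{
    if (unit2_sent && fc->credits < fc->capacity)
        fc->credits++;   /* a packet drained downstream, freeing a slot */
    if (unit1_sent && fc->credits > 0)
        fc->credits--;   /* a slot in the second unit was consumed      */

    /* Sending is safe if a slot is free now, or a packet is guaranteed
     * to drain to the third unit in time to free one.                  */
    return fc->credits > 0 || fc->guaranteed_flow > 0;
}
```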
Abstract:
A novel cache coherency protocol provides a modified-unsolicited (MU) cache state to indicate that a value held in a cache line has been modified (i.e., is not currently consistent with system memory), but was modified by another processing unit, not by the processing unit associated with the cache that currently contains the value in the MU state, and that the value is held exclusive of any other horizontally adjacent caches. Because the value is exclusively held, it may be modified in that cache without the necessity of issuing a bus transaction to other horizontal caches in the memory hierarchy. The MU state may be applied as a result of a snoop response to a read request. The read request can include a flag to indicate that the requesting cache is capable of utilizing the MU state. Alternatively, a flag may be provided with intervention data to indicate that the requesting cache should utilize the modified-unsolicited state.
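A minimal sketch of the snoop-side transition, assuming a flagged read request; the enum names and the fallback behavior are illustrative, not the patent's exact protocol tables.

```c
#include <stdbool.h>

/* MESI extended with the modified-unsolicited (MU) state; the
 * transition below is an illustrative reading of the abstract. */
typedef enum { ST_I, ST_S, ST_E, ST_M, ST_MU } cstate_t;

/* Snoop handler at the cache holding the line MODIFIED. When the read
 * request carries the "MU-capable" flag, the modified value is passed
 * to the requester without a memory write-back, and the requester
 * installs it in the MU state. */
void snoop_read(cstate_t *owner, cstate_t *requester, bool mu_capable)
{
    if (*owner == ST_M && mu_capable) {
        *requester = ST_MU;  /* holds modified data it did not modify */
        *owner     = ST_I;   /* previous owner gives up the line      */
    } else {
        *requester = ST_S;   /* conventional fallback                 */
        if (*owner == ST_M || *owner == ST_E)
            *owner = ST_S;   /* M additionally writes back to memory  */
    }
}
```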
Abstract:
Dynamic migration of a cache prefetch request is performed. A prefetch candidate table maintains at least one prefetch candidate that may be executed as a prefetch request. The prefetch candidate includes one or more trigger addresses that correspond to locations in the instruction stream where the prefetch candidate is to be executed as a prefetch request. A jump history table maintains a record of the target addresses of program branches that have been executed. The trigger addresses in the prefetch candidate are defined by the target addresses of recently executed program branches maintained in the jump history table. A pending prefetch table maintains a record of executed prefetch requests. When an operation such as a cache miss, cache hit, touch instruction, or program branch is identified, the pending prefetch table is scanned to determine whether a prefetch request has been executed. If so, the prefetch candidate that was used to execute that prefetch request is updated; that is, a new trigger address is selected for the prefetch candidate in order to reduce access latency.
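The sketch below models the trigger-address migration with a small jump history table; the table sizes, field names, and selection rule are assumptions for illustration.

```c
#include <stdint.h>

#define JHT_SIZE 8  /* size is an illustrative assumption */

/* Jump history table: target addresses of recently executed branches. */
typedef struct {
    uint64_t targets[JHT_SIZE];
    int head;  /* index of the most recent entry */
} jht_t;

/* A prefetch candidate: the address to prefetch and the trigger address
 * at which the prefetch request is issued. */
typedef struct {
    uint64_t data_addr;
    uint64_t trigger_addr;
} candidate_t;

/* Migrate the trigger earlier in the instruction stream: pick an older
 * branch target from the history so the prefetch is issued sooner and
 * the access latency is better hidden. */
void migrate_trigger(candidate_t *c, const jht_t *jht, int steps_back)
{
    int idx = ((jht->head - steps_back) % JHT_SIZE + JHT_SIZE) % JHT_SIZE;
    c->trigger_addr = jht->targets[idx];
}
```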
Abstract:
A technique for triggering a system bus write command with user code includes identifying a specific store-type instruction in a user instruction sequence. The specific store-type instruction is converted into a specific request-type command, which is configured to include core permission controls (stored in core configuration registers of a processor core by a trusted kernel) and user-created data (stored in a cache memory). Slave devices are configured, through register space that is accessible only by the trusted kernel, with respective slave permission controls. The specific request-type command is then transmitted from the cache memory via a system bus. Slave devices that receive the specific request-type command process it only when the core permission controls are the same as their respective slave permission controls. The trusted kernel may be included in a hypervisor or an operating system.
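A minimal sketch of the permission match a slave might perform; the command layout and the equality test are assumptions based on the abstract.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative layout of the request-type command: a permission tag
 * from the core's configuration registers travels with the payload. */
typedef struct {
    uint32_t core_permissions;  /* written only by the trusted kernel */
    uint64_t payload;           /* user-created data from the cache   */
} bus_command_t;

/* Each slave holds its own permission controls in register space
 * accessible only by the trusted kernel. */
typedef struct {
    uint32_t slave_permissions;
} slave_t;

/* A slave processes the command only when the permissions match. */
bool slave_accepts(const slave_t *s, const bus_command_t *cmd)
{
    return s->slave_permissions == cmd->core_permissions;
}
```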
Abstract:
An apparatus and computer program product are disclosed for concurrently sharing a memory controller between a tracing process and non-tracing processes in a processor, using a programmable, variable number of shared memory write buffers. A hardware trace facility, included within the processor, captures hardware trace data. The hardware trace data is transmitted to a system memory, included within the processing node, utilizing a system bus. The system bus remains capable of being utilized by processing units included in the processing node while the hardware trace data is being transmitted to the system memory. Part of the system memory is utilized to store the trace data, and the system memory remains capable of being accessed by processing units in the processing node other than the hardware trace facility while it is being utilized to store the trace data.
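One plausible model of the programmable buffer split is a simple quota, sketched below; the structure and allocation policy are assumptions, not the disclosed design.

```c
#include <stdbool.h>

/* Programmable split of shared memory write buffers between the
 * hardware trace stream and ordinary traffic; the fields and policy
 * are illustrative assumptions. */
typedef struct {
    int total;        /* buffers implemented in the memory controller */
    int trace_quota;  /* programmable number reserved for tracing     */
    int trace_in_use;
    int other_in_use;
} wbuf_alloc_t;

/* Grant a write buffer to the trace stream or to a non-tracing
 * process; both share the controller concurrently within their
 * respective quotas. */
bool alloc_buffer(wbuf_alloc_t *a, bool is_trace)
{
    if (is_trace) {
        if (a->trace_in_use < a->trace_quota) {
            a->trace_in_use++;
            return true;
        }
    } else if (a->other_in_use < a->total - a->trace_quota) {
        a->other_in_use++;
        return true;
    }
    return false;  /* no buffer free; requester must retry */
}
```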
Abstract:
A method and system are disclosed for saving soft state information, which is non-critical for executing a process in a processor, upon receipt of a process interrupt by the processor. The soft state is transmitted to a memory associated with the processor via a memory interface. Preferably, the soft state is transmitted within the processor to the memory interface via a scan-chain pathway, which allows functional data pathways to remain unobstructed by the storage of the soft state. Thereafter, the stored soft state can be restored from memory when the process is again executed.
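A minimal software-level sketch of the save path, with the hardware scan-chain read stubbed out; the buffer size and function names are assumptions.

```c
#include <stdint.h>

#define SOFT_STATE_WORDS 1024  /* size is an illustrative assumption */

/* Stub for the hardware scan-chain read; a real part would shift one
 * word out per group of scan clocks. */
static uint64_t scan_chain_shift_out(void)
{
    return 0;
}

/* Soft state (e.g. branch-prediction or prefetch history) is saved to
 * memory through the scan chain, leaving the functional data pathways
 * unobstructed for the interrupt handler that is about to run. */
void save_soft_state(uint64_t *dst /* buffer in system memory */)
{
    for (int i = 0; i < SOFT_STATE_WORDS; i++)
        dst[i] = scan_chain_shift_out();
}
```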
Abstract:
An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides a single dispatch point into the data flow to the dual cache banks of the L2 cache memory.
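The arbitration could be modeled as a single dispatch point choosing between the two banks, with loads allowed to preempt an in-flight store; the sketch below is an illustrative assumption, not the disclosed control logic.

```c
#include <stdbool.h>

/* Illustrative single-dispatch-point arbiter for a dual-banked L2:
 * loads are latency-critical, so a pending load may interrupt a
 * store that a bank is currently servicing. */
typedef enum { IDLE, SERVICING_LOAD, SERVICING_STORE } bank_state_t;

typedef struct {
    bank_state_t bank[2];
} l2_arbiter_t;

/* Returns the bank chosen for the request, or -1 to stall. */
int dispatch(l2_arbiter_t *a, bool is_load)
{
    for (int b = 0; b < 2; b++)  /* prefer an idle bank */
        if (a->bank[b] == IDLE) {
            a->bank[b] = is_load ? SERVICING_LOAD : SERVICING_STORE;
            return b;
        }
    if (is_load)                 /* a load may interrupt a store */
        for (int b = 0; b < 2; b++)
            if (a->bank[b] == SERVICING_STORE) {
                a->bank[b] = SERVICING_LOAD;  /* store is retried later */
                return b;
            }
    return -1;                   /* no bank available this cycle */
}
```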