Abstract:
Embodiments of systems, apparatuses, and methods for performing a jump instruction in a computer processor are described. In some embodiments, the execution of a jump instruction causes a conditional jump to the address of a target instruction when all bits of a writemask are zero, wherein the address of the target instruction is calculated using the instruction pointer of the instruction and a relative offset.
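As a rough illustration of these semantics, the following C sketch (hypothetical names; the fall-through step abstracts instruction length to one slot) computes the next instruction pointer from the writemask:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative model: if every bit of the writemask is zero, the jump
 * is taken and the target is the instruction pointer of the jump
 * instruction plus a signed relative offset. Names are invented. */
uint64_t next_ip(uint64_t ip, int64_t rel_offset, uint64_t writemask)
{
    if (writemask == 0)              /* all mask bits clear: take the jump */
        return ip + rel_offset;      /* target = instruction's IP + offset */
    return ip + 1;                   /* fall through (length abstracted to 1) */
}

int main(void)
{
    printf("%llu\n", (unsigned long long)next_ip(100, 20, 0));   /* 120: taken */
    printf("%llu\n", (unsigned long long)next_ip(100, 20, 0x5)); /* 101: not taken */
    return 0;
}
```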
Abstract:
An apparatus and method are described for performing efficient gather operations in a pipelined processor. For example, a processor according to one embodiment of the invention comprises: gather setup logic to execute one or more gather setup operations in anticipation of one or more gather operations, the gather setup operations to determine one or more addresses of vector data elements to be gathered by the gather operations; and gather logic to execute the one or more gather operations to gather the vector data elements using the one or more addresses determined by the gather setup operations.
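A minimal C sketch of the two-phase idea, assuming a simple index-based gather and invented function names: the setup phase resolves element addresses ahead of time, and the gather phase loads through them.

```c
#include <stdio.h>

#define VLEN 4

/* Setup phase: determine the address of each vector element to be
 * gathered, in anticipation of the gather itself. */
static void gather_setup(const int *base, const int idx[VLEN],
                         const int *addrs[VLEN])
{
    for (int i = 0; i < VLEN; i++)
        addrs[i] = &base[idx[i]];
}

/* Gather phase: load the elements through the pre-computed addresses. */
static void gather(const int *addrs[VLEN], int out[VLEN])
{
    for (int i = 0; i < VLEN; i++)
        out[i] = *addrs[i];
}

int main(void)
{
    int table[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    int idx[VLEN] = {7, 2, 5, 0};
    const int *addrs[VLEN];
    int out[VLEN];

    gather_setup(table, idx, addrs);  /* executed ahead of the gather */
    gather(addrs, out);               /* uses the addresses from setup */
    for (int i = 0; i < VLEN; i++)
        printf("%d ", out[i]);        /* prints: 17 12 15 10 */
    printf("\n");
    return 0;
}
```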
Abstract:
Methods of operating two or more devices in lockstep by generating requests at each device, comparing the requests, and forwarding matching requests to a servicing node are described and claimed. A redundant execution system using the methods is also described and claimed.
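The comparison step might look like the following C sketch; the request layout and function names are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical fixed-size request record emitted by each lockstep device. */
struct request { unsigned addr; unsigned data; };

static bool requests_match(const struct request *a, const struct request *b)
{
    return a->addr == b->addr && a->data == b->data;
}

/* Forward the request to the servicing node only when both copies agree;
 * a mismatch signals a lockstep error in a real system. */
static bool compare_and_forward(const struct request *a,
                                const struct request *b)
{
    if (!requests_match(a, b)) {
        fprintf(stderr, "lockstep mismatch: divergent requests\n");
        return false;
    }
    printf("forwarding request addr=0x%x data=%u\n", a->addr, a->data);
    return true;
}

int main(void)
{
    struct request r1 = {0x100, 42}, r2 = {0x100, 42};
    compare_and_forward(&r1, &r2);   /* matches: forwarded */
    r2.data = 43;
    compare_and_forward(&r1, &r2);   /* diverges: error reported */
    return 0;
}
```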
Abstract:
A method for predicting early write back of owned cache blocks in a shared memory computer system. The invention enables the system to predict which written blocks are likely to be requested by another CPU, so that the owning CPU writes those blocks back to memory as soon as possible after updating the data in the block. If another processor requests the data, this reduces the latency to obtain that data, reducing synchronization overhead and increasing the throughput of parallel programs.
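A toy C sketch of such a predictor, with an invented table size and indexing scheme: remote requests train the predictor, and a subsequent local write to a trained block triggers an early write back.

```c
#include <stdbool.h>
#include <stdio.h>

#define BLOCKS 16
static bool seen_remote_request[BLOCKS]; /* trained by remote requests */

/* Another CPU asked for this block: remember that it tends to be shared. */
static void on_remote_request(unsigned block)
{
    seen_remote_request[block % BLOCKS] = true;
}

/* After a local update, write predicted-shared blocks back eagerly. */
static void on_local_write(unsigned block)
{
    if (seen_remote_request[block % BLOCKS])
        printf("block %u: predicted shared, writing back early\n", block);
    else
        printf("block %u: kept owned, write back deferred\n", block);
}

int main(void)
{
    on_local_write(3);       /* no history: deferred */
    on_remote_request(3);    /* another CPU requested block 3 */
    on_local_write(3);       /* history says shared: early write back */
    return 0;
}
```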
Abstract:
A method and apparatus for protecting a TLB's VPN from soft errors is described. On a TLB lookup, the incoming virtual address is used to CAM the TLB VPN. In parallel with this CAM operation, parity is computed on the incoming virtual address for each of the page sizes supported by the processor. If a matching VPN is found in the TLB, its payload is read out. The encoded page size is used to select which of the pre-computed virtual address parity values to compare with the stored parity bit in the TLB entry. This has the advantage of removing the computation of parity on the TLB VPN from the critical path of the TLB lookup; instead, it moves into the TLB fill path.
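The selection step can be sketched in C as below, assuming two supported page sizes and invented field names; the fill-time parity stands in for the bit stored in the TLB entry.

```c
#include <stdint.h>
#include <stdio.h>

/* XOR-fold parity of a 64-bit value. */
static unsigned parity64(uint64_t x)
{
    x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
    x ^= x >> 4;  x ^= x >> 2;  x ^= x >> 1;
    return (unsigned)(x & 1);
}

#define NUM_SIZES 2
static const unsigned page_shift[NUM_SIZES] = {12, 21}; /* e.g. 4 KiB, 2 MiB */

int main(void)
{
    uint64_t vaddr = 0x00007f12345678ULL;

    /* Fill time: parity over the entry's VPN is stored with the entry. */
    unsigned entry_page_size = 0;   /* pretend the hit entry is a 4 KiB page */
    unsigned stored_parity = parity64(vaddr >> page_shift[entry_page_size]);

    /* Lookup time, in parallel with the CAM: one parity per page size. */
    unsigned precomputed[NUM_SIZES];
    for (int i = 0; i < NUM_SIZES; i++)
        precomputed[i] = parity64(vaddr >> page_shift[i]);

    /* The hit entry's encoded page size selects which parity to check. */
    if (precomputed[entry_page_size] != stored_parity)
        printf("soft error detected in TLB VPN\n");
    else
        printf("VPN parity check passed\n");
    return 0;
}
```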
Abstract:
Disclosed herein is a generational thread scheduler. One embodiment may be used with processor multithreading logic to execute threads of executable instructions, and a shared resource to be allocated fairly among the threads of executable instructions contending for access to the shared resource. Generational thread scheduling logic may allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource, allocating a reservation for the shared resource to each other requesting thread of the executing threads, and then blocking the first thread from re-requesting the shared resource until every other thread that has been allocated a reservation has been granted access to the shared resource. Generation tracking state may be cleared when each requesting thread of the generation that was allocated a reservation has had its request satisfied.
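A toy C model of the generation mechanics, with invented state names: the first requester is granted, contenders receive reservations, and a granted thread is blocked from re-requesting until the generation drains.

```c
#include <stdbool.h>
#include <stdio.h>

#define THREADS 3
static bool reserved[THREADS];  /* holds a reservation this generation */
static bool served[THREADS];    /* already granted this generation */
static int  outstanding;        /* reservations not yet satisfied */

/* A contender that is neither served nor reserved gets a reservation. */
static void reserve(int t)
{
    if (!reserved[t] && !served[t]) { reserved[t] = true; outstanding++; }
}

static bool request(int t)
{
    if (served[t] && outstanding > 0) {      /* already served this generation */
        printf("thread %d blocked until generation drains\n", t);
        return false;
    }
    if (reserved[t]) {
        reserved[t] = false;
        if (--outstanding == 0)               /* last reservation satisfied:  */
            for (int i = 0; i < THREADS; i++) /* clear the generation tracking */
                served[i] = false;            /* state for a new generation    */
    }
    served[t] = true;
    printf("thread %d granted\n", t);
    return true;
}

int main(void)
{
    request(0);              /* first requester granted immediately */
    reserve(1); reserve(2);  /* contenders get reservations */
    request(0);              /* blocked: reservations outstanding */
    request(1); request(2);  /* reserved threads are served */
    request(0);              /* new generation: granted again */
    return 0;
}
```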
Abstract:
The present application describes a protocol for maintaining cache coherency in a CMP. The CMP design contains multiple processor cores, with each core having its own private cache. In addition, the CMP has a single on-chip shared cache. The processor cores and the shared cache may be connected together with a synchronous, unbuffered bidirectional ring interconnect. In the present protocol, a single INVALIDATEANDACKNOWLEDGE message is sent on the ring to invalidate a particular core and acknowledge a particular core.
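A toy C walk of the ring illustrates the combined message: one INVALIDATEANDACKNOWLEDGE invalidates at one core and delivers the acknowledgement at another in a single pass. The core count and message layout are assumptions, not taken from the application.

```c
#include <stdio.h>

#define CORES 4

/* Hypothetical combined message: one invalidate target, one ack target. */
struct msg { int invalidate_core; int ack_core; unsigned addr; };

static void ring_deliver(const struct msg *m)
{
    for (int hop = 0; hop < CORES; hop++) {   /* message passes every node */
        if (hop == m->invalidate_core)
            printf("core %d: invalidate line 0x%x\n", hop, m->addr);
        if (hop == m->ack_core)
            printf("core %d: acknowledgement received\n", hop);
    }
}

int main(void)
{
    struct msg m = { .invalidate_core = 2, .ack_core = 0, .addr = 0x80 };
    ring_deliver(&m);   /* one message, two effects, one ring pass */
    return 0;
}
```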
Abstract:
Apparatus, system and methods are provided for performing speculative data prefetching in a chip multiprocessor (CMP). Data is prefetched by a helper thread that runs on one core of the CMP while a main program runs concurrently on another core of the CMP. Data prefetched by the helper thread is provided to the main core. For one embodiment, the data prefetched by the helper thread is pushed to the main core. It may or may not be provided to the helper core as well. A push of prefetched data to the main core may occur during a broadcast of the data to all cores of an affinity group. For at least one other embodiment, the data prefetched by the helper thread is provided, upon request from the main core, to the main core from the helper core's local cache.
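The push model can be sketched in C as follows, with flat arrays standing in for per-core caches and all names invented for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define LINES 8
static bool main_cache[LINES], helper_cache[LINES];

/* Helper thread prefetches a line and pushes it into the main core's
 * cache; it may or may not keep a local copy as well. */
static void helper_prefetch(unsigned line, bool also_keep_local)
{
    main_cache[line % LINES] = true;        /* push to the main core */
    if (also_keep_local)
        helper_cache[line % LINES] = true;  /* optional local copy */
}

static void main_access(unsigned line)
{
    printf("line %u: %s in main core's cache\n", line,
           main_cache[line % LINES] ? "hit" : "miss");
}

int main(void)
{
    main_access(5);            /* miss: nothing prefetched yet */
    helper_prefetch(5, false); /* helper pushes the line ahead of use */
    main_access(5);            /* hit: the push removed the miss */
    return 0;
}
```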
Abstract:
A method and apparatus for preventing starvation in a slotted-ring network. Embodiments may include a ring interconnect to transmit bits, with one of the bits being a slot reservation bit, and nodes coupled to the ring interconnect, with each node comprising a starvation detection element and a slot reservation element to reserve a slot for future use. In further embodiments, each node may also comprise a slot tracking element to track the location of the slot reserved by that node.
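A toy C sketch of the reservation mechanism, with invented sizes and fields: a node that detects starvation marks a slot's reservation bit, tracks that slot, and claims it on the next pass while other nodes must skip it.

```c
#include <stdbool.h>
#include <stdio.h>

#define SLOTS 4

/* Hypothetical slot: the reservation bit plus the reserving node's id. */
struct slot { bool reserved; int reserved_by; };

int main(void)
{
    struct slot ring[SLOTS] = {{false, 0}};
    int starving_node = 1;
    int tracked_slot;           /* the node's slot tracking element */

    /* Starvation detected: reserve the next slot passing node 1. */
    ring[2].reserved = true;
    ring[2].reserved_by = starving_node;
    tracked_slot = 2;

    /* Ring rotates; only the reserving node may use its tracked slot. */
    for (int s = 0; s < SLOTS; s++) {
        if (ring[s].reserved && s == tracked_slot)
            printf("slot %d: reserved, used by node %d\n",
                   s, ring[s].reserved_by);
        else
            printf("slot %d: free for any node\n", s);
    }
    return 0;
}
```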
Abstract:
A multi-threaded processor provides for efficient flow control from a pool of un-executed stores in an instruction queue to a store queue. The processor also includes similar capabilities with respect to load instructions. The processor includes logic organized into a plurality of thread processing units (“TPUs”) and allocation logic that monitors each TPU's demand for entries in the store queue. Demand is determined by subtracting an adjustable threshold value from the most recently assigned store identifier value. If the difference between the most recently assigned store identifier for a TPU and the TPU's threshold is non-zero, the TPU is determined to have demand for at least one entry in the store queue. The allocation logic includes arbitration logic that determines which one of a plurality of TPUs with store queue demand should be allocated a free entry in the store queue.
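The demand test and a possible arbitration pass can be sketched in C as below; the round-robin policy and the field names are assumptions, not taken from the patent.

```c
#include <stdio.h>

#define TPUS 4

/* Hypothetical per-TPU state: most recently assigned store identifier
 * and the adjustable threshold used by the demand test. */
struct tpu { unsigned last_store_id; unsigned threshold; };

/* Non-zero difference between store id and threshold means the TPU
 * demands at least one store-queue entry. */
static int has_demand(const struct tpu *t)
{
    return (t->last_store_id - t->threshold) != 0;
}

/* Round-robin pick among TPUs with demand for the next free entry. */
static int arbitrate(const struct tpu tpus[TPUS], int last_winner)
{
    for (int i = 1; i <= TPUS; i++) {
        int cand = (last_winner + i) % TPUS;
        if (has_demand(&tpus[cand]))
            return cand;
    }
    return -1;   /* no TPU currently demands an entry */
}

int main(void)
{
    struct tpu tpus[TPUS] = { {10, 10}, {12, 10}, {7, 7}, {15, 10} };
    int winner = arbitrate(tpus, 0);
    printf("store-queue entry granted to TPU %d\n", winner); /* TPU 1 */
    return 0;
}
```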