Prefetch with request for ownership without data
    1.
    Granted Patent
    Prefetch with request for ownership without data (In force)

    Publication Number: US09430389B2

    Publication Date: 2016-08-30

    Application Number: US13976429

    Filing Date: 2011-12-22

    IPC Classes: G06F3/00 G06F12/08 G06F9/30

    Abstract: A method performed by a processor is described. The method includes executing an instruction. The instruction has an address as an operand. The executing of the instruction includes sending a signal to cache coherence protocol logic of the processor. In response to the signal, the cache coherence protocol logic issues a request for ownership of a cache line at the address. The cache line is not in a cache of the processor. The request for ownership also indicates that the cache line is not to be sent to the processor.

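    A minimal C++ sketch of the idea described above, assuming a simplified MESI-style private cache; the names CoherenceLogic and RfoNoData and the signal path are illustrative, not taken from the patent. It shows the executing instruction handing the coherence logic only an address, and the logic issuing an ownership request that invalidates other copies but asks that the line's data not be returned, since the core intends to overwrite the whole line.

        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        // Simplified MESI-style states for a single private cache.
        enum class State { Invalid, Shared, Exclusive, Modified };

        // Hypothetical interconnect message issued by the coherence logic.
        struct RfoNoData {
            uint64_t line_addr;  // cache-line-aligned address
            bool     send_data;  // false: grant ownership, suppress the data fill
        };

        class CoherenceLogic {
        public:
            // Signal raised by the executing instruction: an address, no data.
            void on_prefetch_ownership_signal(uint64_t addr) {
                uint64_t line = addr & ~uint64_t{63};         // 64-byte lines
                if (cache_.count(line)) return;               // line already present
                issue(RfoNoData{line, /*send_data=*/false});  // ownership only
                // Line is installed as owned but empty; the core is expected
                // to overwrite it entirely with subsequent stores.
                cache_[line] = State::Modified;
            }

        private:
            void issue(const RfoNoData& msg) {
                std::cout << "RFO without data for line 0x"
                          << std::hex << msg.line_addr << std::dec << "\n";
            }
            std::unordered_map<uint64_t, State> cache_;
        };

        int main() {
            CoherenceLogic logic;
            logic.on_prefetch_ownership_signal(0x1040);  // instruction operand
        }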

    PREFETCH WITH REQUEST FOR OWNERSHIP WITHOUT DATA
    2.
    Patent Application
    PREFETCH WITH REQUEST FOR OWNERSHIP WITHOUT DATA (In force)

    Publication Number: US20140164705A1

    Publication Date: 2014-06-12

    Application Number: US13976429

    Filing Date: 2011-12-22

    IPC Classes: G06F12/08

    Abstract: A method performed by a processor is described. The method includes executing an instruction. The instruction has an address as an operand. The executing of the instruction includes sending a signal to cache coherence protocol logic of the processor. In response to the signal, the cache coherence protocol logic issues a request for ownership of a cache line at the address. The cache line is not in a cache of the processor. The request for ownership also indicates that the cache line is not to be sent to the processor.

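    Seen from the software side, the natural use for such an instruction is code that is about to overwrite a whole cache line, where fetching the old data would waste bandwidth. A hedged sketch assuming a hypothetical intrinsic prefetch_own_no_data() (not a real compiler builtin) standing in for the instruction in the abstract:

        #include <cstddef>
        #include <cstdint>

        // Hypothetical intrinsic: take ownership of the line containing 'p'
        // without transferring its current contents. A real implementation
        // would emit the instruction described in the abstract.
        inline void prefetch_own_no_data(const void* p) { (void)p; }

        // Fill a large buffer. Every destination line is fully overwritten,
        // so there is no point reading its old contents into the cache first.
        void fill(uint64_t* dst, std::size_t n, uint64_t value) {
            constexpr std::size_t words_per_line = 64 / sizeof(uint64_t);
            for (std::size_t i = 0; i < n; ++i) {
                if (i % words_per_line == 0 && i + words_per_line <= n)
                    prefetch_own_no_data(dst + i);  // ownership only, no data fill
                dst[i] = value;
            }
        }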

    Predictive early write-back of owned cache blocks in a shared memory computer system
    6.
    Granted Patent
    Predictive early write-back of owned cache blocks in a shared memory computer system (In force)

    Publication Number: US07624236B2

    Publication Date: 2009-11-24

    Application Number: US11023882

    Filing Date: 2004-12-27

    IPC Classes: G06F12/00 G06F13/00 G06F13/28

    Abstract: A method for predicting early write-back of owned cache blocks in a shared memory computer system. This invention enables the system to predict which written blocks are likely to be requested by another CPU, so that the owning CPU writes those blocks back to memory as soon as possible after updating the data in the block. If another processor requests the data, this reduces the latency to obtain that data, lowers synchronization overhead, and increases the throughput of parallel programs.

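    A minimal sketch of one way such a prediction could be organized, assuming a small table indexed by line address that remembers which owned lines another CPU has requested before; the names SharingPredictor, on_remote_request, and predict_early_writeback are illustrative and not taken from the patent claims.

        #include <cstdint>
        #include <iostream>
        #include <unordered_set>

        // Remembers lines a remote CPU has previously requested; such lines
        // are predicted to be wanted again, so the owner writes them back
        // to memory early instead of keeping them dirty in its cache.
        class SharingPredictor {
        public:
            // A snoop or request from another CPU hit a locally owned line.
            void on_remote_request(uint64_t line) { shared_before_.insert(line); }

            // The local CPU has finished updating a line it owns: should the
            // block be written back to memory right away?
            bool predict_early_writeback(uint64_t line) const {
                return shared_before_.count(line) != 0;
            }

        private:
            std::unordered_set<uint64_t> shared_before_;
        };

        int main() {
            SharingPredictor pred;
            pred.on_remote_request(0x2000);   // another CPU asked for this line earlier
            // After updating the line, the owner pushes it back to memory now,
            // so the other CPU's next read is served with lower latency.
            if (pred.predict_early_writeback(0x2000))
                std::cout << "early write-back of line 0x2000\n";
        }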

    Method and apparatus for protecting TLB's VPN from soft errors
    7.
    Granted Patent
    Method and apparatus for protecting TLB's VPN from soft errors (Expired)

    Publication Number: US07607048B2

    Publication Date: 2009-10-20

    Application Number: US11026633

    Filing Date: 2004-12-30

    IPC Classes: G06F11/00

    Abstract: A method and apparatus for protecting a TLB's VPN from soft errors is described. On a TLB lookup, the incoming virtual address is used to CAM the TLB VPN. In parallel with this CAM operation, parity is computed on the incoming virtual address for each of the page sizes supported by the processor. If a matching VPN is found in the TLB, its payload is read out. The encoded page size is used to select which of the pre-computed virtual address parities to compare with the stored parity bit in the TLB entry. This has the advantage of removing the computation of parity on the TLB VPN from the critical path of the TLB lookup; it is instead on the TLB fill path.

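    A sketch of the parity-selection step, assuming just two supported page sizes (4 KB and 2 MB) and one-bit even parity; the entry layout and helper names are illustrative. Parity over the incoming virtual address is computed for every supported page size in parallel with the CAM lookup, and the hit entry's encoded page size only selects which precomputed bit to compare with the stored one, keeping the computation off the lookup critical path.

        #include <bitset>
        #include <cstdint>
        #include <iostream>

        // Even parity over the VPN bits of a virtual address for a page size.
        static int vpn_parity(uint64_t va, unsigned page_shift) {
            return std::bitset<64>(va >> page_shift).count() & 1;
        }

        struct TlbEntry {
            uint64_t vpn;         // stored virtual page number (CAM side)
            unsigned page_shift;  // encoded page size: 12 = 4 KB, 21 = 2 MB
            int      parity;      // parity bit written at TLB fill time
        };

        // True if the stored parity matches the parity recomputed from the
        // incoming address, i.e. no soft error detected in the VPN.
        bool check_vpn_parity(const TlbEntry& hit, uint64_t incoming_va) {
            // Computed in parallel with the CAM, one value per page size.
            int parity_4k = vpn_parity(incoming_va, 12);
            int parity_2m = vpn_parity(incoming_va, 21);
            // The payload's page size selects which precomputed value to use.
            int selected = (hit.page_shift == 12) ? parity_4k : parity_2m;
            return selected == hit.parity;
        }

        int main() {
            uint64_t va = 0x00007f3a12345678ULL;
            TlbEntry e{va >> 12, 12, vpn_parity(va, 12)};  // filled correctly
            std::cout << (check_vpn_parity(e, va) ? "VPN parity OK\n"
                                                  : "soft error detected\n");
        }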

    Generational thread scheduler using reservations for fair scheduling
    8.
    Granted Patent
    Generational thread scheduler using reservations for fair scheduling (In force)

    Publication Number: US09465670B2

    Publication Date: 2016-10-11

    Application Number: US13328365

    Filing Date: 2011-12-16

    IPC Classes: G06F9/52 G06F9/50 G06F9/46

    CPC Classes: G06F9/52 G06F2209/5014

    Abstract: Disclosed herein is a generational thread scheduler. One embodiment may be used with processor multithreading logic to execute threads of executable instructions, and a shared resource to be allocated fairly among the threads of executable instructions contending for access to the shared resource. Generational thread scheduling logic may allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource, allocating a reservation for the shared resource to each other requesting thread of the executing threads, and then blocking the first thread from re-requesting the shared resource until every other thread that has been allocated a reservation has been granted access to the shared resource. Generation tracking state may be cleared when each requesting thread of the generation that was allocated a reservation has had its request satisfied.

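    A minimal sketch of the reservation/generation idea using standard C++ threading primitives rather than hardware scheduling logic; the class name and fields are illustrative, and it assumes every thread requests the resource in every generation. A thread that has already been granted the resource in the current generation cannot acquire it again until all other threads of that generation have had their turn, after which the tracking state is cleared.

        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <set>
        #include <thread>
        #include <vector>

        // Generational arbiter: once a thread has been served, it may not
        // re-acquire the shared resource until every other thread of the
        // current generation has been served; then a new generation begins.
        class GenerationalArbiter {
        public:
            explicit GenerationalArbiter(std::size_t n) : num_threads_(n) {}

            void acquire(int tid) {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [&] { return !busy_ && !served_this_gen_.count(tid); });
                busy_ = true;
            }

            void release(int tid) {
                std::lock_guard<std::mutex> lk(m_);
                busy_ = false;
                served_this_gen_.insert(tid);
                if (served_this_gen_.size() >= num_threads_)  // generation complete
                    served_this_gen_.clear();                 // clear tracking state
                cv_.notify_all();
            }

        private:
            std::mutex m_;
            std::condition_variable cv_;
            bool busy_ = false;
            std::set<int> served_this_gen_;
            std::size_t num_threads_;
        };

        int main() {
            constexpr int kThreads = 4;
            GenerationalArbiter arb(kThreads);
            std::vector<std::thread> ts;
            for (int t = 0; t < kThreads; ++t)
                ts.emplace_back([&arb, t] {
                    for (int i = 0; i < 3; ++i) {
                        arb.acquire(t);
                        std::cout << "thread " << t << " uses the resource\n";
                        arb.release(t);
                    }
                });
            for (auto& th : ts) th.join();
        }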

    Protocol for maintaining cache coherency in a CMP
    9.
    Granted Patent
    Protocol for maintaining cache coherency in a CMP (In force)

    Publication Number: US08209490B2

    Publication Date: 2012-06-26

    Application Number: US10749752

    Filing Date: 2003-12-30

    Abstract: The present application describes a protocol for maintaining cache coherency in a CMP. The CMP design contains multiple processor cores, each with its own private cache. In addition, the CMP has a single on-chip shared cache. The processor cores and the shared cache may be connected together with a synchronous, unbuffered bidirectional ring interconnect. In the present protocol, a single INVALIDATEANDACKNOWLEDGE message is sent on the ring to invalidate a particular core and acknowledge a particular core.

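    A sketch of the combined message making one pass around the ring, modeled in software with toy caches; only the INVALIDATEANDACKNOWLEDGE name comes from the abstract, while the core count, cache layout, and single direction of travel are illustrative. One message both invalidates the line at the cores it passes and delivers the acknowledgment to the requesting core when it comes back around, instead of separate invalidate and acknowledge messages.

        #include <array>
        #include <cstdint>
        #include <iostream>

        constexpr int kCores = 4;

        // Single combined ring message: invalidate the line at each core it
        // passes, then acknowledge the requester when it returns to ack_core.
        struct InvalidateAndAcknowledge {
            uint64_t line;
            int      ack_core;  // core waiting for the acknowledgment
        };

        struct Core {
            std::array<bool, 16> valid{};  // toy private cache: valid bits only
            bool ack_received = false;
        };

        // Walk the message once around the ring, starting after the requester.
        void send_on_ring(std::array<Core, kCores>& cores,
                          const InvalidateAndAcknowledge& msg) {
            for (int hop = 1; hop <= kCores; ++hop) {
                int c = (msg.ack_core + hop) % kCores;
                if (c == msg.ack_core) {                 // message came back around
                    cores[c].ack_received = true;        // acknowledge the requester
                    break;
                }
                cores[c].valid[msg.line % 16] = false;   // invalidate en route
            }
        }

        int main() {
            std::array<Core, kCores> cores;
            cores[2].valid[5] = true;                    // core 2 holds line 5
            send_on_ring(cores, {5, /*ack_core=*/0});    // core 0 wants ownership
            std::cout << "core 2 valid: " << cores[2].valid[5]
                      << ", core 0 acked: " << cores[0].ack_received << "\n";
        }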

    METHOD AND APPARATUS FOR AFFINITY-GUIDED SPECULATIVE HELPER THREADS IN CHIP MULTIPROCESSORS
    10.
    Patent Application
    METHOD AND APPARATUS FOR AFFINITY-GUIDED SPECULATIVE HELPER THREADS IN CHIP MULTIPROCESSORS (In force)

    Publication Number: US20110035555A1

    Publication Date: 2011-02-10

    Application Number: US12909774

    Filing Date: 2010-10-21

    Abstract: Apparatus, system and methods are provided for performing speculative data prefetching in a chip multiprocessor (CMP). Data is prefetched by a helper thread that runs on one core of the CMP while a main program runs concurrently on another core of the CMP. Data prefetched by the helper thread is provided to the main core. For one embodiment, the data prefetched by the helper thread is pushed to the main core. It may or may not be provided to the helper core as well. A push of prefetched data to the main core may occur during a broadcast of the data to all cores of an affinity group. For at least one other embodiment, the data prefetched by a helper thread is provided, upon request from the main core, to the main core from the helper core's local cache.

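    A minimal sketch of the helper-thread pattern using standard C++ threads and the GCC/Clang __builtin_prefetch builtin as a stand-in: the helper runs the same index stream a fixed distance ahead of the main thread so the lines are warm (in a shared cache level, on a typical CMP) by the time the main thread reaches them. Pushing the prefetched lines directly into the main core's private cache, or broadcasting them to an affinity group as in the abstract, would need hardware support and is only indicated in the comments; the run-ahead distance and data sizes here are arbitrary.

        #include <atomic>
        #include <cstdint>
        #include <iostream>
        #include <numeric>
        #include <thread>
        #include <vector>

        int main() {
            std::vector<uint64_t> data(1 << 22);             // 32 MB of input
            std::iota(data.begin(), data.end(), uint64_t{0});

            std::atomic<std::size_t> main_pos{0};
            constexpr std::size_t kAhead = 256;              // run-ahead distance

            // Helper thread: speculatively prefetches data for the main thread.
            // On the hardware in the abstract, these lines could instead be
            // pushed to the main core or to all cores of an affinity group.
            std::thread helper([&] {
                for (std::size_t i = 0; i < data.size(); ++i) {
                    while (i > main_pos.load(std::memory_order_relaxed) + kAhead)
                        std::this_thread::yield();           // stay only kAhead ahead
                    __builtin_prefetch(&data[i]);
                }
            });

            // Main program: consumes the data the helper has been warming.
            uint64_t sum = 0;
            for (std::size_t i = 0; i < data.size(); ++i) {
                sum += data[i];
                main_pos.store(i, std::memory_order_relaxed);
            }
            helper.join();
            std::cout << "sum = " << sum << "\n";
        }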