Cache spill management techniques using cache spill prediction
    1.
    Granted patent
    Cache spill management techniques using cache spill prediction (Expired)

    Publication No.: US08407421B2

    Publication date: 2013-03-26

    Application No.: US12639214

    Filing date: 2009-12-16

    IPC class: G06F12/00

    CPC class: G06F12/0806 G06F12/12

    Abstract: An apparatus and method are described herein for intelligently spilling cache lines. The usefulness of cache lines previously spilled from a source cache is learned, such that later evictions of useful cache lines from the source cache are intelligently selected for spill. Furthermore, another learning mechanism, cache spill prediction, may be implemented separately or in conjunction with usefulness prediction. Cache spill prediction is capable of learning the effectiveness of remote caches at holding spilled cache lines for the source cache. As a result, cache lines can be intelligently selected for spill and intelligently distributed among remote caches based on the effectiveness of each remote cache in holding spilled cache lines for the source cache.

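    As a rough illustration of the spill-prediction idea in the abstract, the sketch below uses a saturating usefulness counter per remote cache: spills that are later re-used reward a remote, wasted spills penalize it, and evicted lines are only spilled to a remote that has proven effective. All class, method, and threshold names here are invented for illustration and are not taken from the patent.

    ```python
    class SpillPredictor:
        """Tracks, per remote cache, how useful past spills have been."""

        def __init__(self, remote_ids, max_count=7, threshold=4):
            self.max_count = max_count
            self.threshold = threshold
            # Saturating usefulness counter per remote cache, starting mid-range.
            self.counters = {rid: max_count // 2 for rid in remote_ids}

        def record_spill_hit(self, remote_id):
            # A spilled line was later re-used from this remote: reward it.
            c = self.counters[remote_id]
            self.counters[remote_id] = min(c + 1, self.max_count)

        def record_spill_waste(self, remote_id):
            # A spilled line was evicted from the remote unused: penalize it.
            c = self.counters[remote_id]
            self.counters[remote_id] = max(c - 1, 0)

        def choose_target(self):
            # Spill to the remote judged most effective, or nowhere if
            # no remote clears the usefulness threshold.
            rid, c = max(self.counters.items(), key=lambda kv: kv[1])
            return rid if c >= self.threshold else None
    ```

    In this toy model, a victim line on eviction would be spilled to `choose_target()` when it returns a remote id, and silently dropped when it returns `None`.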

    CACHE SPILL MANAGEMENT TECHNIQUES
    2.
    Patent application
    CACHE SPILL MANAGEMENT TECHNIQUES (Expired)

    Publication No.: US20110145501A1

    Publication date: 2011-06-16

    Application No.: US12639214

    Filing date: 2009-12-16

    IPC class: G06F12/08 G06F12/00

    CPC class: G06F12/0806 G06F12/12

    Abstract: An apparatus and method are described herein for intelligently spilling cache lines. The usefulness of cache lines previously spilled from a source cache is learned, such that later evictions of useful cache lines from the source cache are intelligently selected for spill. Furthermore, another learning mechanism, cache spill prediction, may be implemented separately or in conjunction with usefulness prediction. Cache spill prediction is capable of learning the effectiveness of remote caches at holding spilled cache lines for the source cache. As a result, cache lines can be intelligently selected for spill and intelligently distributed among remote caches based on the effectiveness of each remote cache in holding spilled cache lines for the source cache.


    Predictive early write-back of owned cache blocks in a shared memory computer system
    5.
    Granted patent
    Predictive early write-back of owned cache blocks in a shared memory computer system (In force)

    Publication No.: US07624236B2

    Publication date: 2009-11-24

    Application No.: US11023882

    Filing date: 2004-12-27

    IPC class: G06F12/00 G06F13/00 G06F13/28

    Abstract: A method for predicting early write-back of owned cache blocks in a shared memory computer system. The invention enables the system to predict which written blocks are likely to be requested by another CPU, so that the owning CPU writes those blocks back to memory as soon as possible after updating the data in the block. If another processor requests the data, this reduces the latency of obtaining it, lowers synchronization overhead, and increases the throughput of parallel programs.

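    The prediction described above can be sketched as a simple history table: blocks that were requested by another CPU after being written are remembered, and the next write to such a block triggers an early write-back. This is a minimal illustration with invented names, not the patent's mechanism.

    ```python
    class EarlyWritebackPredictor:
        """Remembers which written blocks later attracted cross-CPU requests."""

        def __init__(self):
            self.shared_history = set()  # block addresses seen in remote requests

        def on_remote_request(self, addr):
            # Another CPU asked for a block this CPU owned: remember it.
            self.shared_history.add(addr)

        def should_writeback_early(self, addr):
            # On a write, predict sharing from past cross-CPU requests;
            # a True result would schedule an early write-back to memory.
            return addr in self.shared_history
    ```

    A real implementation would bound the table and age out stale entries; an unbounded set is used here only to keep the idea visible.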

    Method and apparatus for protecting TLB's VPN from soft errors
    6.
    Granted patent
    Method and apparatus for protecting TLB's VPN from soft errors (Expired)

    Publication No.: US07607048B2

    Publication date: 2009-10-20

    Application No.: US11026633

    Filing date: 2004-12-30

    IPC class: G06F11/00

    Abstract: A method and apparatus for protecting a TLB's VPN from soft errors is described. On a TLB lookup, the incoming virtual address is used to CAM the TLB VPN. In parallel with this CAM operation, parity is computed on the incoming virtual address for each of the page sizes supported by the processor. If a matching VPN is found in the TLB, its payload is read out, and the encoded page size is used to select which of the pre-computed virtual address parities to compare with the stored parity bit in the TLB entry. This has the advantage of removing the computation of parity on the TLB VPN from the critical path of the TLB lookup; it is instead moved to the TLB fill path.

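    The select-by-page-size step can be modeled in a few lines: parity is precomputed over the VPN field for every candidate page size (in hardware, concurrently with the CAM search), and the hit entry's page size picks which parity to check. The page sizes and shift amounts below are illustrative assumptions, not taken from the patent.

    ```python
    # Candidate page sizes and their page-offset widths (illustrative).
    PAGE_SHIFTS = {'4K': 12, '2M': 21, '1G': 30}

    def parity(x):
        # Parity of the set bits in x (0 = even, 1 = odd).
        p = 0
        while x:
            p ^= x & 1
            x >>= 1
        return p

    def precompute_vpn_parities(vaddr):
        # One parity per candidate page size; in hardware these are
        # computed in parallel with the TLB CAM lookup.
        return {sz: parity(vaddr >> shift) for sz, shift in PAGE_SHIFTS.items()}

    def check_entry(vaddr, entry_page_size, stored_parity):
        # After a CAM hit, the entry's encoded page size selects which
        # precomputed parity to compare with the stored parity bit.
        return precompute_vpn_parities(vaddr)[entry_page_size] == stored_parity
    ```

    A mismatch in `check_entry` would signal a soft error in the stored VPN, since the stored parity was computed once on the TLB fill path.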

    Generational thread scheduler using reservations for fair scheduling
    7.
    Granted patent
    Generational thread scheduler using reservations for fair scheduling (In force)

    Publication No.: US09465670B2

    Publication date: 2016-10-11

    Application No.: US13328365

    Filing date: 2011-12-16

    IPC class: G06F9/52 G06F9/50 G06F9/46

    CPC class: G06F9/52 G06F2209/5014

    Abstract: Disclosed herein is a generational thread scheduler. One embodiment may be used with processor multithreading logic to execute threads of executable instructions, and a shared resource to be allocated fairly among the threads contending for access to it. Generational thread scheduling logic may allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource, allocating a reservation for the shared resource to each other requesting thread among the executing threads, and then blocking the first thread from re-requesting the shared resource until every other thread that has been allocated a reservation has been granted access. Generation tracking state may be cleared when each requesting thread of the generation that was allocated a reservation has had its request satisfied.

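    The generational grant/reserve/block cycle described above can be sketched as two sets: one holding this generation's outstanding reservations and one holding already-served threads that must wait the generation out. The API is invented for illustration; the patent's hardware logic is not call-based like this.

    ```python
    class GenerationalArbiter:
        """Toy model of generational arbitration for one shared resource."""

        def __init__(self):
            self.reserved = set()   # reservations in the current generation
            self.blocked = set()    # served threads waiting out the generation

        def request(self, tid, contenders=()):
            """Grant `tid` the resource, or refuse until the current
            generation's reserved threads have all been served."""
            if tid in self.blocked:
                return False
            # Losing contenders receive reservations for this generation.
            self.reserved.update(c for c in contenders if c not in self.blocked)
            self.reserved.discard(tid)   # tid's own reservation is consumed
            self.blocked.add(tid)        # no re-request until generation drains
            if not self.reserved:
                self.blocked.clear()     # generation complete: start a new one
            return True
    ```

    The key fairness property is visible in the model: a granted thread cannot be granted again until every thread holding a reservation from the same generation has been served.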

    Protocol for maintaining cache coherency in a CMP
    8.
    Granted patent
    Protocol for maintaining cache coherency in a CMP (In force)

    Publication No.: US08209490B2

    Publication date: 2012-06-26

    Application No.: US10749752

    Filing date: 2003-12-30

    Abstract: The present application describes a protocol for maintaining cache coherency in a CMP. The CMP design contains multiple processor cores, each with its own private cache. In addition, the CMP has a single on-chip shared cache. The processor cores and the shared cache may be connected together with a synchronous, unbuffered bidirectional ring interconnect. In the present protocol, a single INVALIDATEANDACKNOWLEDGE message is sent on the ring to invalidate a particular core and acknowledge a particular core.

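    A toy model of the combined message: one packet circulates the ring carrying both an invalidate target and an acknowledge target, so a single ring traversal does the work of two separate messages. The data layout and function name are invented; this is not the patent's protocol specification.

    ```python
    def deliver_on_ring(nodes, start, msg):
        """Walk the ring once from `start`, applying the combined
        INVALIDATEANDACKNOWLEDGE-style message at each stop."""
        n = len(nodes)
        for i in range(1, n + 1):
            node = nodes[(start + i) % n]
            if node['id'] == msg['invalidate']:
                node['line_valid'] = False   # invalidate the private copy
            if node['id'] == msg['acknowledge']:
                node['acked'] = True         # complete the requester's ack
    ```

    Because both actions ride in one message, only one ring slot is consumed per coherence transaction in this model.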

    METHOD AND APPARATUS FOR AFFINITY-GUIDED SPECULATIVE HELPER THREADS IN CHIP MULTIPROCESSORS
    9.
    Patent application
    METHOD AND APPARATUS FOR AFFINITY-GUIDED SPECULATIVE HELPER THREADS IN CHIP MULTIPROCESSORS (In force)

    Publication No.: US20110035555A1

    Publication date: 2011-02-10

    Application No.: US12909774

    Filing date: 2010-10-21

    Abstract: Apparatus, systems, and methods are provided for performing speculative data prefetching in a chip multiprocessor (CMP). Data is prefetched by a helper thread that runs on one core of the CMP while a main program runs concurrently on another core of the CMP. Data prefetched by the helper thread is provided to the main core. For one embodiment, the data prefetched by the helper thread is pushed to the main core; it may or may not be provided to the helper core as well. A push of prefetched data to the main core may occur during a broadcast of the data to all cores of an affinity group. For at least one other embodiment, the data prefetched by a helper thread is provided, upon request from the main core, to the main core from the helper core's local cache.

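    The push embodiment can be illustrated with a sequential toy model: the helper runs ahead over the same address stream and pushes fetched data into the main core's cache, so the main program's later accesses hit locally. Function names, the `lookahead` parameter, and the dict-based "cache" are all illustrative assumptions.

    ```python
    def helper_prefetch(memory, addresses, main_cache, lookahead=4):
        # Helper runs ahead: push data for the next `lookahead`
        # addresses directly into the main core's cache.
        for addr in addresses[:lookahead]:
            main_cache[addr] = memory[addr]

    def main_access(memory, main_cache, addr):
        # Main program access: hits locally when the helper pushed the
        # line in time, otherwise pays a (modeled) miss to memory.
        if addr in main_cache:
            return main_cache[addr], 'hit'
        return memory[addr], 'miss'
    ```

    The pull embodiment from the abstract would instead leave the data in the helper's cache and serve it on demand; only the push variant is modeled here.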

    Method and apparatus for preventing starvation in a slotted-ring network
    10.
    Granted patent
    Method and apparatus for preventing starvation in a slotted-ring network (Expired)

    Publication No.: US07733898B2

    Publication date: 2010-06-08

    Application No.: US10924819

    Filing date: 2004-08-25

    IPC class: H04L12/43 H04L1/00

    Abstract: A method and apparatus for preventing starvation in a slotted-ring network. Embodiments may include a ring interconnect to transmit bits, one of which is a slot reservation bit, and nodes coupled to the ring interconnect, each node comprising a starvation detection element and a slot reservation element to reserve a slot for future use. In further embodiments, each node may also comprise a slot tracking element to track the location of the slot reserved by that node.

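    The three elements named in the abstract (starvation detection, slot reservation, slot tracking) can be sketched together in one toy node: repeated failed sends trip the starvation detector, which sets the reservation bit on a passing slot and records its position; once that slot frees, only the reserving node may claim it. Field names and the starvation threshold are invented for illustration.

    ```python
    class Node:
        """One ring stop: counts failed sends and reserves a slot when starved."""

        def __init__(self, nid, starvation_limit=3):
            self.nid = nid
            self.limit = starvation_limit
            self.misses = 0            # starvation detection: failed attempts
            self.reserved_slot = None  # slot tracking: index we reserved

        def try_send(self, slot_index, slot):
            # A free slot may be claimed if it is unreserved or reserved by us.
            if not slot['busy'] and slot.get('reserved_by') in (None, self.nid):
                slot['busy'] = True
                slot['reserved_by'] = None
                self.misses = 0
                self.reserved_slot = None
                return True
            # Otherwise count a miss; once starved, set the reservation bit
            # on this slot and track its position as it circulates.
            self.misses += 1
            if (self.misses >= self.limit and self.reserved_slot is None
                    and slot.get('reserved_by') is None):
                slot['reserved_by'] = self.nid
                self.reserved_slot = slot_index
            return False
    ```

    Non-starved nodes skip a slot reserved for someone else, which is what guarantees the starving node eventually transmits.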