43. CACHE COHERENCY APPARATUS AND METHOD MINIMIZING MEMORY WRITEBACK OPERATIONS
    Invention Application (In Force)

    Publication No.: US20150178206A1

    Publication Date: 2015-06-25

    Application No.: US14136131

    Filing Date: 2013-12-20

    CPC classification number: G06F12/0817 G06F12/0815

    Abstract: An apparatus and method for reducing or eliminating writeback operations. For example, one embodiment of a method comprises: detecting a first operation associated with a cache line at a first requestor cache; detecting that the cache line exists in a first cache in a modified (M) state; forwarding the cache line from the first cache to the first requestor cache and storing the cache line in the first requestor cache in a second modified (M′) state; detecting a second operation associated with the cache line at a second requestor; responsively forwarding the cache line from the first requestor cache to the second requestor cache and storing the cache line in the second requestor cache in an owned (O) state if the cache line has not been modified in the first requestor cache; and setting the cache line to a shared (S) state in the first requestor cache.

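    A minimal sketch in C of the transitions the abstract describes; the state names (M, M', O, S) follow the abstract, while the types, function names, and what happens to the original holder after forwarding are illustrative assumptions rather than the claimed implementation.

```c
/* Hypothetical sketch of the coherence transitions in the abstract: a line
 * held Modified (M) is forwarded to a requestor as M' without a memory
 * writeback; a later request moves it to Owned (O) in the new requestor and
 * Shared (S) in the previous one, provided the M' copy was never written. */
#include <stdbool.h>

typedef enum { INVALID, SHARED, OWNED, MODIFIED, MODIFIED_PRIME } line_state_t;

typedef struct {
    line_state_t state;
    bool dirty_since_forward;   /* was the M' copy written locally? */
} cache_line_t;

/* First operation: forward the M line to the first requestor, no writeback.
 * The abstract leaves the old holder's next state unspecified; invalidation
 * is assumed here. */
static void forward_from_modified(cache_line_t *holder, cache_line_t *requestor)
{
    if (holder->state == MODIFIED) {
        requestor->state = MODIFIED_PRIME;      /* second modified state M' */
        requestor->dirty_since_forward = false;
        holder->state = INVALID;
    }
}

/* Second operation: if the M' copy is still clean, pass ownership on. */
static void forward_from_modified_prime(cache_line_t *holder, cache_line_t *requestor)
{
    if (holder->state == MODIFIED_PRIME && !holder->dirty_since_forward) {
        requestor->state = OWNED;    /* O copy now supplies the data */
        holder->state = SHARED;      /* first requestor keeps a shared copy */
    }
}
```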

45. Synchronizing Multiple Threads Efficiently
    Invention Application (In Force)

    Publication No.: US20130275995A1

    Publication Date: 2013-10-17

    Application No.: US13912777

    Filing Date: 2013-06-07

    Abstract: In one embodiment, the present invention includes a method of assigning a location within a shared variable for each of multiple threads and writing a value to a corresponding location to indicate that the corresponding thread has reached a barrier. In such manner, when all the threads have reached the barrier, synchronization is established. In some embodiments, the shared variable may be stored in a cache accessible by the multiple threads. Other embodiments are described and claimed.

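    A minimal C11 sketch of the barrier idea described above, assuming one byte per thread inside a single shared variable; the sizes, the spin-wait, and the use of C atomics are illustrative assumptions, not the claimed implementation.

```c
/* Illustrative sketch: each of N threads owns one byte of a shared variable
 * and writes it to signal arrival at the barrier; every thread spins until
 * all bytes are set. Packing the slots into one variable lets the threads
 * share it through a single cache line. */
#include <stdatomic.h>
#include <stdint.h>

#define NUM_THREADS 4

/* One location per thread, all packed into a single shared variable. */
static _Atomic uint8_t barrier_slots[NUM_THREADS];

void barrier_wait(int thread_id)
{
    /* Mark this thread's location to show it reached the barrier. */
    atomic_store(&barrier_slots[thread_id], 1);

    /* Wait until every thread has marked its own location. */
    for (;;) {
        int arrived = 0;
        for (int i = 0; i < NUM_THREADS; i++)
            arrived += atomic_load(&barrier_slots[i]);
        if (arrived == NUM_THREADS)
            break;
    }
}
```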

46. Dynamically routing data responses directly to requesting processor core
    Granted Patent (In Force)

    Publication No.: US08495091B2

    Publication Date: 2013-07-23

    Application No.: US13175772

    Filing Date: 2011-07-01

    CPC classification number: G06F13/4022

    Abstract: Methods and apparatus relating to dynamically routing data responses directly to a requesting processor core are described. In one embodiment, data returned in response to a data request is to be directly transmitted to a requesting agent based on information stored in a route back table. Other embodiments are also disclosed.

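    A minimal C sketch of what a route-back table could look like, assuming responses are matched to requests by a transaction id; the table layout, field names, and sizes are assumptions made only to illustrate returning data directly to the requesting core.

```c
/* Hypothetical route-back table: when a request is forwarded, the identity
 * of the requesting core is recorded against the transaction id, so the
 * returning data can be sent straight to that core. */
#include <stdint.h>

#define RBT_ENTRIES 64

typedef struct {
    uint8_t valid;
    uint8_t requesting_core;   /* core to which the response is routed back */
} rbt_entry_t;

static rbt_entry_t route_back_table[RBT_ENTRIES];

/* Record the requester when its request is forwarded. */
void rbt_record(uint8_t txn_id, uint8_t core_id)
{
    route_back_table[txn_id % RBT_ENTRIES].valid = 1;
    route_back_table[txn_id % RBT_ENTRIES].requesting_core = core_id;
}

/* On a data response, return the core it should be delivered to,
 * or -1 if no route-back entry exists. */
int rbt_lookup(uint8_t txn_id)
{
    rbt_entry_t *e = &route_back_table[txn_id % RBT_ENTRIES];
    if (!e->valid)
        return -1;
    e->valid = 0;              /* entry is consumed by the response */
    return e->requesting_core;
}
```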

47. Synchronizing multiple threads efficiently
    Granted Patent (In Force)

    Publication No.: US08473963B2

    Publication Date: 2013-06-25

    Application No.: US13069684

    Filing Date: 2011-03-23

    Abstract: In one embodiment, the present invention includes a method of assigning a location within a shared variable for each of multiple threads and writing a value to a corresponding location to indicate that the corresponding thread has reached a barrier. In such manner, when all the threads have reached the barrier, synchronization is established. In some embodiments, the shared variable may be stored in a cache accessible by the multiple threads. Other embodiments are described and claimed.

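    The abstract is identical to that of the application above; as a complement to the barrier sketch given there, the following hypothetical pthreads driver shows several threads meeting at such a barrier (barrier_wait and the thread count are carried over from that sketch).

```c
/* Hypothetical driver for the barrier sketched earlier: each pthread does
 * some work, then calls barrier_wait() with its own slot index, so no thread
 * proceeds past this point until all of them have arrived. */
#include <pthread.h>
#include <stdio.h>

extern void barrier_wait(int thread_id);   /* from the earlier sketch */

static void *worker(void *arg)
{
    int id = (int)(long)arg;
    printf("thread %d reached the barrier\n", id);
    barrier_wait(id);
    printf("thread %d passed the barrier\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```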

48. Fair sharing of a cache in a multi-core/multi-threaded processor by dynamically partitioning of the cache
    Granted Patent (In Force)

    Publication No.: US07996644B2

    Publication Date: 2011-08-09

    Application No.: US11026316

    Filing Date: 2004-12-29

    CPC classification number: G06F12/084 G06F12/0864 G06F12/126

    Abstract: An apparatus and method for fairly accessing a cache shared by multiple resources, such as multiple cores, multiple threads, or both, are described herein. A resource within a microprocessor sharing access to a cache is assigned a static portion of the cache and a dynamic portion. The resource is blocked from victimizing static portions assigned to other resources, yet it is allowed to victimize the static portion assigned to itself and the dynamically shared portion. If the resource does not access the cache enough times over a period of time, the static portion assigned to it is reassigned to the dynamically shared portion.

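    A minimal C sketch of the partitioning policy described in the abstract, assuming way-granular ownership within each set; the way count, counters, threshold, and function names are illustrative assumptions, not the claimed mechanism.

```c
/* Illustrative sketch: each way of a set is either statically assigned to
 * one resource (core or thread) or part of the dynamically shared pool.
 * A resource may only evict from its own static ways or from shared ways;
 * if it uses the cache too rarely, its static ways are released into the
 * shared pool. */
#include <stdint.h>
#include <stdbool.h>

#define WAYS          8
#define RESOURCES     4
#define SHARED        (-1)    /* owner id for dynamically shared ways */
#define MIN_ACCESSES  100     /* accesses per interval to keep static ways */

typedef struct {
    int owner[WAYS];              /* SHARED or the owning resource id */
    uint32_t accesses[RESOURCES]; /* per-resource access counters */
} cache_set_t;

/* A resource may victimize only its own static ways and shared ways. */
bool may_evict(const cache_set_t *set, int resource, int way)
{
    return set->owner[way] == resource || set->owner[way] == SHARED;
}

/* Periodically: a resource that barely used the cache loses its static
 * portion, which is folded back into the dynamically shared portion. */
void rebalance(cache_set_t *set, int resource)
{
    if (set->accesses[resource] < MIN_ACCESSES) {
        for (int w = 0; w < WAYS; w++)
            if (set->owner[w] == resource)
                set->owner[w] = SHARED;
    }
    set->accesses[resource] = 0;   /* start a new observation interval */
}
```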

50. Method for page sharing in a processor with multiple threads and pre-validated caches
    Granted Patent (In Force)

    Publication No.: US07181590B2

    Publication Date: 2007-02-20

    Application No.: US10650335

    Filing Date: 2003-08-28

    CPC classification number: G06F12/1054 G06F12/1036

    Abstract: A method and system for allowing a multi-threaded processor to share pages across different threads in a pre-validated cache using a translation look-aside buffer are disclosed. The multi-threaded processor searches a translation look-aside buffer in an attempt to match a virtual memory address. If no matching valid virtual memory address is found, a new translation is retrieved and the translation look-aside buffer is searched for a matching physical memory address. If a matching physical memory address is found, the old translation is overwritten with the new translation. The multi-threaded processor may execute switch-on-event multithreading or simultaneous multithreading. If simultaneous multithreading is executed, the access rights for each thread are associated with the translation.

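    A minimal C sketch of the lookup flow described in the abstract, assuming a small fully associative TLB and a purely hypothetical page-walker hook (walk_page_table); structures, sizes, and the replacement step are illustrative assumptions.

```c
/* Hypothetical sketch: look up by virtual address; on a miss, fetch a new
 * translation and search the TLB again by physical address, overwriting a
 * matching old entry so two threads mapping the same physical page share one
 * entry. With simultaneous multithreading, per-thread access rights travel
 * with the translation. */
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16
#define THREADS     2

typedef struct {
    bool     valid;
    uint64_t vpn;                   /* virtual page number */
    uint64_t ppn;                   /* physical page number */
    uint8_t  rights[THREADS];       /* per-thread access rights (SMT) */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Assumed page-walker hook that returns the translation for a vpn. */
extern uint64_t walk_page_table(uint64_t vpn, int thread, uint8_t *rights);

int tlb_lookup(uint64_t vpn, int thread)
{
    /* 1. Search by virtual address. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return i;

    /* 2. Miss: retrieve a new translation. */
    uint8_t rights;
    uint64_t ppn = walk_page_table(vpn, thread, &rights);

    /* 3. Search by physical address; overwrite a matching old entry so the
     *    threads share one pre-validated entry for the same physical page. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].ppn == ppn) {
            tlb[i].vpn = vpn;
            tlb[i].rights[thread] = rights;
            return i;
        }

    /* 4. Otherwise fill a fresh entry (replacement policy omitted). */
    tlb[0] = (tlb_entry_t){ .valid = true, .vpn = vpn, .ppn = ppn };
    tlb[0].rights[thread] = rights;
    return 0;
}
```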
