Method and apparatus for cache line write back operation
    13.
    Invention Grant (Granted)

    Publication Number: US09471494B2

    Publication Date: 2016-10-18

    Application Number: US14137432

    Application Date: 2013-12-20

    Abstract: An apparatus and method are described for performing a cache line write back operation. For example, one embodiment of a method comprises: initiating a cache line write back operation directed to a particular linear address; determining if a dirty cache line identified by the linear address exists at any cache of a cache hierarchy comprised of a plurality of cache levels; writing back the dirty cache line to memory if the dirty cache line exists in one of the caches; and responsively maintaining or placing the dirty cache line in an exclusive state in at least a first cache of the hierarchy.
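
    The operation described here flushes a dirty line to memory while leaving a clean copy resident in the hierarchy, rather than invalidating it. Below is a minimal C++ sketch of that walk as a software model; it is not the hardware implementation, and all names (LineState, CacheLevel, clwb_model) are illustrative assumptions.

        #include <cstdint>
        #include <vector>

        enum class LineState { Invalid, Shared, Exclusive, Modified };

        struct CacheLine {
            uint64_t  linear_address;
            LineState state;
        };

        struct CacheLevel {
            std::vector<CacheLine> lines;

            // Return the valid copy of the line holding this linear address, if any.
            CacheLine* lookup(uint64_t linear_address) {
                for (auto& line : lines)
                    if (line.linear_address == linear_address &&
                        line.state != LineState::Invalid)
                        return &line;
                return nullptr;
            }
        };

        struct Memory {
            void write_back(const CacheLine& line) { /* commit the line's data to DRAM */ }
        };

        // Walk every level of the hierarchy; when a dirty (Modified) copy is found,
        // write it back to memory and keep it in the Exclusive state instead of
        // invalidating it, so a later access can still hit in the cache.
        void clwb_model(std::vector<CacheLevel>& hierarchy, Memory& mem,
                        uint64_t linear_address) {
            for (auto& level : hierarchy) {
                if (CacheLine* line = level.lookup(linear_address)) {
                    if (line->state == LineState::Modified) {
                        mem.write_back(*line);
                        line->state = LineState::Exclusive;  // retained, now clean
                    }
                }
            }
        }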


    Cache coherency apparatus and method minimizing memory writeback operations
    14.
    Invention Grant (Granted)

    Publication Number: US09436605B2

    Publication Date: 2016-09-06

    Application Number: US14136131

    Application Date: 2013-12-20

    CPC classification number: G06F12/0817 G06F12/0815

    Abstract: An apparatus and method for reducing or eliminating writeback operations. For example, one embodiment of a method comprises: detecting a first operation associated with a cache line at a first requestor cache; detecting that the cache line exists in a first cache in a modified (M) state; forwarding the cache line from the first cache to the first requestor cache and storing the cache line in the first requestor cache in a second modified (M′) state; detecting a second operation associated with the cache line at a second requestor; responsively forwarding the cache line from the first requestor cache to the second requestor cache and storing the cache line in the second requestor cache in an owned (O) state if the cache line has not been modified in the first requestor cache; and setting the cache line to a shared (S) state in the first requestor cache.
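
    The transitions in this abstract avoid a memory writeback by forwarding dirty data cache-to-cache: the first requestor receives it in an M' state, and a later requestor receives it in the Owned state while the first drops to Shared. The C++ sketch below is an assumed software analogue of those transitions; in particular, the abstract does not say what state the original holder is left in after the first forward, so Invalid is assumed here.

        #include <cassert>

        enum class State { I, S, O, M, M_PRIME };

        struct Cache {
            State state = State::I;
            bool  locally_modified = false;  // written since the line arrived?
        };

        // First operation: a requestor pulls a line held Modified in another cache.
        // The dirty data is forwarded cache-to-cache and tagged M' in the requestor,
        // avoiding an immediate writeback to memory.
        void handle_first_request(Cache& holder, Cache& requestor) {
            assert(holder.state == State::M);
            requestor.state = State::M_PRIME;
            requestor.locally_modified = false;
            holder.state = State::I;  // assumption: the abstract leaves this unspecified
        }

        // Second operation: another requestor asks for the same line. If the first
        // requestor never modified its copy, forward again and mark the new copy
        // Owned while the first requestor drops to Shared; still no memory writeback.
        void handle_second_request(Cache& first_requestor, Cache& second_requestor) {
            if (first_requestor.state == State::M_PRIME &&
                !first_requestor.locally_modified) {
                second_requestor.state = State::O;
                first_requestor.state  = State::S;
            }
        }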


    Method, apparatus and system for handling cache misses in a processor
    15.
    Invention Grant (Granted)

    Publication Number: US09405687B2

    Publication Date: 2016-08-02

    Application Number: US14070864

    Application Date: 2013-11-04

    Abstract: In an embodiment, a processor includes one or more cores, and a distributed caching home agent (including portions associated with each core). Each portion includes a cache controller to receive a read request for data and, responsive to the data not being present in a cache memory associated with the cache controller, to issue a memory request to a memory controller to request the data in parallel with communication of the memory request to a home agent, where the home agent is to receive the memory request from the cache controller and to reserve an entry for the memory request. Other embodiments are described and claimed.
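
    The point of this flow is that the miss is handled by two messages issued in parallel: one to the memory controller to fetch the data, and one to the home agent so it can reserve a tracking entry. A small C++ analogue of that overlap is sketched below; MemoryController, HomeAgent, and handle_miss are assumed names, and std::async merely stands in for the two hardware messages being in flight at once.

        #include <cstdint>
        #include <future>

        struct MemRequest { uint64_t address; };

        struct MemoryController {
            uint64_t fetch(const MemRequest& req) {
                // Placeholder for reading the requested data from memory.
                return 0;
            }
        };

        struct HomeAgent {
            void reserve_entry(const MemRequest& req) {
                // Reserve a tracker entry so the returning data and any coherence
                // traffic for this address can be matched up later.
            }
        };

        // On a cache miss, launch the memory fetch and notify the home agent
        // concurrently rather than serializing the two steps.
        uint64_t handle_miss(MemoryController& mc, HomeAgent& ha, uint64_t address) {
            MemRequest req{address};
            auto data = std::async(std::launch::async,
                                   [&] { return mc.fetch(req); });  // to memory controller
            ha.reserve_entry(req);                                   // in parallel, to home agent
            return data.get();
        }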


    SCALABLY MECHANISM TO IMPLEMENT AN INSTRUCTION THAT MONITORS FOR WRITES TO AN ADDRESS
    16.
    Invention Application (Pending, Published)

    Publication Number: US20150095580A1

    Publication Date: 2015-04-02

    Application Number: US14040375

    Application Date: 2013-09-27

    Abstract: A processor includes a cache-side address monitor unit corresponding to a first cache portion of a distributed cache that has a total number of cache-side address monitor storage locations less than a total number of logical processors of the processor. Each cache-side address monitor storage location is to store an address to be monitored. A core-side address monitor unit corresponds to a first core and has a same number of core-side address monitor storage locations as a number of logical processors of the first core. Each core-side address monitor storage location is to store an address, and a monitor state for a different corresponding logical processor of the first core. A cache-side address monitor storage overflow unit corresponds to the first cache portion, and is to enforce an address monitor storage overflow policy when no unused cache-side address monitor storage location is available to store an address to be monitored.
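
    The abstract describes two monitor tables: a core-side table with exactly one slot per logical processor of the core, and a smaller cache-side table per cache portion whose exhaustion triggers an overflow policy. The C++ sketch below models only that bookkeeping; the slot counts, names, and what the overflow policy actually does are assumptions, not details from the patent.

        #include <cstdint>
        #include <optional>
        #include <vector>

        struct CoreSideMonitor {
            // One slot per logical processor of the core: an address plus a monitor state.
            struct Slot { std::optional<uint64_t> address; bool armed = false; };
            std::vector<Slot> slots;
            explicit CoreSideMonitor(std::size_t logical_processors)
                : slots(logical_processors) {}
        };

        struct CacheSideMonitor {
            // Fewer slots than the processor's total number of logical processors.
            std::vector<std::optional<uint64_t>> slots;
            explicit CacheSideMonitor(std::size_t slot_count) : slots(slot_count) {}

            // Record an address to be monitored for this cache portion; report
            // overflow when every slot is already in use, so the overflow policy
            // (for example, conservatively treating any write as a possible hit)
            // can take over.
            bool try_monitor(uint64_t address) {
                for (auto& slot : slots) {
                    if (!slot) { slot = address; return true; }
                }
                return false;  // no unused cache-side slot: enforce the overflow policy
            }
        };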

