Transaction based shared data operations in a multiprocessor environment
    33.
    Patent Application
    Transaction based shared data operations in a multiprocessor environment (In Force)

    Publication Number: US20060161740A1

    Publication Date: 2006-07-20

    Application Number: US11027623

    Filing Date: 2004-12-29

    IPC Classification: G06F13/00

    Abstract: The apparatus and method described herein handle shared memory accesses between multiple processors using lock-free synchronization through transactional execution. A transaction demarcated in software is speculatively executed. During execution, invalidating remote accesses/requests to addresses loaded from, and to be written to, shared memory are tracked by a transaction buffer. If an invalidating access is encountered, the transaction is re-executed. After the transaction has been re-executed a predetermined number of times, it may be re-executed non-speculatively with locks/semaphores.
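    A minimal C++ sketch of the bounded-retry pattern this abstract describes, not the patented hardware mechanism: the transaction buffer's conflict detection is approximated by a compare-and-swap on a single shared word, and the retry budget, fallback mutex, and counter are illustrative assumptions.

        #include <atomic>
        #include <iostream>
        #include <mutex>
        #include <thread>
        #include <vector>

        constexpr int MAX_RETRIES = 4;          // pre-determined retry budget (assumed value)
        std::atomic<long> shared_counter{0};    // example shared datum
        std::mutex fallback_lock;               // lock/semaphore fallback path

        void transactional_increment() {
            for (int attempt = 0; attempt < MAX_RETRIES; ++attempt) {
                // Speculative pass: read the shared datum and compute on a private copy.
                long observed = shared_counter.load(std::memory_order_acquire);
                long updated  = observed + 1;
                // Commit only if no remote write invalidated 'observed'
                // (standing in here for the transaction buffer's conflict check).
                if (shared_counter.compare_exchange_strong(observed, updated,
                                                           std::memory_order_acq_rel))
                    return;                     // transaction committed
                // Invalidating access detected: loop around and re-execute.
            }
            // Retry budget exhausted: re-execute non-speculatively under the lock.
            // (For a multi-word critical section the lock would cover the whole
            // region; a single atomic add is enough for this toy datum.)
            std::lock_guard<std::mutex> guard(fallback_lock);
            shared_counter.fetch_add(1, std::memory_order_acq_rel);
        }

        int main() {
            std::vector<std::thread> workers;
            for (int i = 0; i < 4; ++i)
                workers.emplace_back([] {
                    for (int j = 0; j < 1000; ++j) transactional_increment();
                });
            for (auto& t : workers) t.join();
            std::cout << shared_counter.load() << '\n';   // expect 4000
        }

    Only the control flow is carried over from the abstract: the lock is taken only after the speculative path has failed a fixed number of times.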


    Pipelined look-up in a content addressable memory
    34.
    Patent Application
    Pipelined look-up in a content addressable memory (Pending, Published)

    Publication Number: US20060143374A1

    Publication Date: 2006-06-29

    Application Number: US11027636

    Filing Date: 2004-12-29

    IPC Classification: G06F12/00

    Abstract: A pipelined look-up in a content addressable memory is disclosed. In one embodiment, a content addressable memory includes a first cell and a second cell. The first cell is to compare a first bit of look-up data to a first bit of stored data. The second cell is to compare a second bit of look-up data to a second bit of stored data, and to generate a signal to disable the first cell if the second bit of look-up data does not match the second bit of stored data.
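    A toy C++ model of the two-cell pipelined match described above, under the assumption that the second cell evaluates first and a mismatch raises a disable signal that keeps the first cell from being exercised; the two-bit entries and the names are illustrative, not the patented circuit.

        #include <array>
        #include <cstdint>
        #include <iostream>

        struct CamEntry { uint8_t bit0; uint8_t bit1; };   // stored data, one bit per cell

        // Returns true if the key matches the entry. 'cell0_evaluated' reports whether
        // the first cell was exercised; it is skipped when the second cell disables it.
        bool pipelined_match(const CamEntry& entry, uint8_t key_bit0, uint8_t key_bit1,
                             bool& cell0_evaluated) {
            // Stage 1: the second cell compares the second bit.
            bool disable_cell0 = (entry.bit1 != key_bit1);  // mismatch -> disable signal
            if (disable_cell0) { cell0_evaluated = false; return false; }
            // Stage 2: the first cell runs only when it has not been disabled.
            cell0_evaluated = true;
            return entry.bit0 == key_bit0;
        }

        int main() {
            std::array<CamEntry, 3> cam = {{ {1, 0}, {0, 1}, {1, 1} }};
            for (const auto& e : cam) {
                bool evaluated = false;
                bool hit = pipelined_match(e, /*key_bit0=*/1, /*key_bit1=*/1, evaluated);
                std::cout << "hit=" << hit << " cell0_evaluated=" << evaluated << '\n';
            }
        }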


    CACHE COHERENCY APPARATUS AND METHOD MINIMIZING MEMORY WRITEBACK OPERATIONS
    37.
    Patent Application
    CACHE COHERENCY APPARATUS AND METHOD MINIMIZING MEMORY WRITEBACK OPERATIONS (In Force)

    Publication Number: US20150178206A1

    Publication Date: 2015-06-25

    Application Number: US14136131

    Filing Date: 2013-12-20

    IPC Classification: G06F12/08

    CPC Classification: G06F12/0817 G06F12/0815

    Abstract: An apparatus and method for reducing or eliminating writeback operations are described. For example, one embodiment of a method comprises: detecting a first operation associated with a cache line at a first requestor cache; detecting that the cache line exists in a first cache in a modified (M) state; forwarding the cache line from the first cache to the first requestor cache and storing the cache line in the first requestor cache in a second modified (M′) state; detecting a second operation associated with the cache line at a second requestor; responsively forwarding the cache line from the first requestor cache to the second requestor cache and storing the cache line in the second requestor cache in an owned (O) state if the cache line has not been modified in the first requestor cache; and setting the cache line to a shared (S) state in the first requestor cache.
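    A minimal C++ sketch of the state transitions spelled out in this abstract (M to M′ on the first forward, then M′ to O with the previous holder dropping to S, and no memory writeback on either transfer), assuming a three-cache toy model; the names and the writeback counter are illustrative, and the first cache's resulting state, which the abstract does not specify here, is left unchanged.

        #include <cassert>

        enum class State { I, S, O, M, Mprime };   // I = invalid, Mprime = the second modified state M'

        struct CacheCopy { State state = State::I; };

        int writebacks = 0;                        // would count writes back to memory

        // First operation: requestor1 asks for a line held Modified (M) in cache0.
        // The line is forwarded cache-to-cache and installed as M'; no writeback is issued.
        // (The abstract does not state cache0's resulting state here, so it is left unchanged.)
        void first_request(CacheCopy& cache0, CacheCopy& requestor1) {
            assert(cache0.state == State::M);
            requestor1.state = State::Mprime;
        }

        // Second operation: requestor2 asks for the same line, which requestor1 holds
        // in M' and has not modified locally. The line is forwarded again: the new
        // holder becomes Owned (O), the previous holder drops to Shared (S), and
        // 'writebacks' still stays at zero.
        void second_request(CacheCopy& requestor1, CacheCopy& requestor2) {
            assert(requestor1.state == State::Mprime);
            requestor2.state = State::O;
            requestor1.state = State::S;
        }

        int main() {
            CacheCopy c0{State::M}, r1, r2;
            first_request(c0, r1);
            second_request(r1, r2);
            assert(r1.state == State::S && r2.state == State::O && writebacks == 0);
        }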


    Synchronizing Multiple Threads Efficiently
    39.
    Patent Application
    Synchronizing Multiple Threads Efficiently (In Force)

    Publication Number: US20130275995A1

    Publication Date: 2013-10-17

    Application Number: US13912777

    Filing Date: 2013-06-07

    IPC Classification: G06F9/52

    Abstract: In one embodiment, the present invention includes a method of assigning a location within a shared variable to each of multiple threads and writing a value to the corresponding location to indicate that the corresponding thread has reached a barrier. In this manner, when all the threads have reached the barrier, synchronization is established. In some embodiments, the shared variable may be stored in a cache accessible by the multiple threads. Other embodiments are described and claimed.
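    A minimal C++ sketch of the barrier idea in this abstract, assuming four threads that each own one byte of a single shared 32-bit word and spin until every byte has been written; the byte-per-thread layout and the spin loop are illustrative assumptions, not the claimed implementation.

        #include <atomic>
        #include <cstdint>
        #include <thread>
        #include <vector>

        constexpr int kThreads = 4;              // assumed thread count (one byte each)
        std::atomic<uint32_t> barrier_word{0};   // the shared variable; fits in one cache line

        void arrive_and_wait(int thread_id) {
            // Mark this thread's own location within the shared variable.
            barrier_word.fetch_or(0xFFu << (8 * thread_id), std::memory_order_acq_rel);
            // Synchronization is established once every location has been written.
            while (barrier_word.load(std::memory_order_acquire) != 0xFFFFFFFFu)
                std::this_thread::yield();
        }

        int main() {
            std::vector<std::thread> workers;
            for (int i = 0; i < kThreads; ++i)
                workers.emplace_back([i] { /* ... per-thread work ... */ arrive_and_wait(i); });
            for (auto& t : workers) t.join();
        }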


    Dynamically routing data responses directly to requesting processor core
    40.
    Granted Patent
    Dynamically routing data responses directly to requesting processor core (In Force)

    Publication Number: US08495091B2

    Publication Date: 2013-07-23

    Application Number: US13175772

    Filing Date: 2011-07-01

    IPC Classification: G06F17/30

    CPC Classification: G06F13/4022

    Abstract: Methods and apparatus relating to dynamically routing data responses directly to a requesting processor core are described. In one embodiment, data returned in response to a data request is to be directly transmitted to a requesting agent based on information stored in a route back table. Other embodiments are also disclosed.
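    A minimal C++ sketch of the route-back-table idea, assuming the requesting core is recorded against a request ID when the request is issued and the returning data is then delivered straight to that core by a table lookup; the map-based table and all names are illustrative.

        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        struct RouteBackTable {
            std::unordered_map<uint32_t, int> requestor_by_id;   // request id -> requesting core

            // Remember which core issued the request when it is sent out.
            void record_request(uint32_t request_id, int core_id) {
                requestor_by_id[request_id] = core_id;
            }

            // When the data comes back, look up the requestor, retire the entry,
            // and return the core the response should be routed directly to.
            int route_response(uint32_t request_id) {
                int core = requestor_by_id.at(request_id);
                requestor_by_id.erase(request_id);
                return core;
            }
        };

        int main() {
            RouteBackTable table;
            table.record_request(/*request_id=*/42, /*core_id=*/3);
            std::cout << "deliver response to core " << table.route_response(42) << '\n';
        }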
