Method and apparatus for results speculation under run-ahead execution
    11.
    Invention grant (in force)

    Publication number: US07496732B2

    Publication date: 2009-02-24

    Application number: US10739686

    Application date: 2003-12-17

    Abstract: A method and apparatus for using result-speculative data under run-ahead speculative execution is disclosed. In one embodiment, the uncommitted target data from instructions being run-ahead executed may be saved into an advance data table. This advance data table may be indexed by the lines in the instruction buffer containing the instructions for run-ahead execution. When the instructions are re-executed subsequent to the run-ahead execution, valid target data may be retrieved from the advance data table and supplied as part of a zero-clock bypass to support parallel re-execution. This may achieve parallel execution of dependent instructions. In other embodiments, the advance data table may be content-addressable-memory searchable on target registers and supply target data to general speculative execution.

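    A small software model may help make the mechanism concrete. The Python sketch below is only an illustrative approximation of the flow the abstract describes, assuming a simple in-order instruction buffer; the names AdvanceDataTable, run_ahead, and reexecute are hypothetical and do not come from the patent, and the zero-clock timing aspect is not modeled.

```python
# Illustrative model: run-ahead execution records uncommitted target data in an
# advance data table indexed by instruction-buffer slot, and re-execution pulls
# valid entries from the table instead of recomputing them.

class AdvanceDataTable:
    def __init__(self, size):
        self.valid = [False] * size
        self.data = [None] * size          # uncommitted target data, one entry per buffer line

    def record(self, slot, value):
        self.valid[slot] = True
        self.data[slot] = value

    def lookup(self, slot):
        return self.data[slot] if self.valid[slot] else None


def run_ahead(instr_buffer, regs, adt):
    """First pass: execute speculatively and save target data into the table."""
    for slot, (dest, fn) in enumerate(instr_buffer):
        result = fn(regs)                  # speculative, uncommitted result
        adt.record(slot, result)
        regs[dest] = result                # visible only to this run-ahead pass


def reexecute(instr_buffer, regs, adt):
    """Second pass: bypass valid results from the table (a zero-clock-bypass stand-in)."""
    for slot, (dest, fn) in enumerate(instr_buffer):
        bypassed = adt.lookup(slot)
        regs[dest] = bypassed if bypassed is not None else fn(regs)


buf = [("r1", lambda r: r["r0"] + 1),      # r1 = r0 + 1
       ("r2", lambda r: r["r1"] * 2)]      # r2 = r1 * 2, depends on r1
adt = AdvanceDataTable(len(buf))
run_ahead(buf, {"r0": 5, "r1": 0, "r2": 0}, adt)   # run-ahead pass on a scratch register copy
committed = {"r0": 5, "r1": 0, "r2": 0}
reexecute(buf, committed, adt)
print(committed)                            # {'r0': 5, 'r1': 6, 'r2': 12}
```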

High instruction fetch bandwidth in multithread processor using temporary instruction cache to deliver portion of cache line in subsequent clock cycle
    13.
    Invention grant (in force)

    Publication number: US06898694B2

    Publication date: 2005-05-24

    Application number: US09896346

    Application date: 2001-06-28

    CPC classification number: G06F9/3806 G06F9/3802 G06F9/3851

    Abstract: The present invention provides a mechanism for supporting high-bandwidth instruction fetching in a multi-threaded processor. A multi-threaded processor includes an instruction cache (I-cache) and a temporary instruction cache (TIC). In response to an instruction pointer (IP) of a first thread hitting in the I-cache, a first block of instructions for the thread is provided to an instruction buffer and a second block of instructions for the thread is provided to the TIC. On a subsequent clock interval, the second block of instructions is provided to the instruction buffer, and first and second blocks of instructions from a second thread are loaded into a second instruction buffer and the TIC, respectively.

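    The fetch sequencing can be sketched cycle by cycle. The Python model below is an illustrative approximation assuming two threads and two-block cache lines; the names icache, tic, and fetch_cycle are hypothetical and not taken from the patent.

```python
# Illustrative model: on an I-cache hit, block0 of the line goes straight to the
# thread's instruction buffer while block1 is parked in a temporary instruction
# cache (TIC); the next fetch cycle drains the TIC while the other thread's line
# is being fetched, keeping fetch bandwidth high.

from collections import deque

icache = {
    ("T0", 0x100): (["T0-i0", "T0-i1"], ["T0-i2", "T0-i3"]),   # (block0, block1)
    ("T1", 0x200): (["T1-i0", "T1-i1"], ["T1-i2", "T1-i3"]),
}

buffers = {"T0": deque(), "T1": deque()}   # per-thread instruction buffers
tic = None                                  # block parked from the previous cycle


def fetch_cycle(thread, ip):
    """One fetch cycle: drain last cycle's TIC entry, deliver block0 of the new
    line to the thread's buffer, and park block1 in the TIC."""
    global tic
    if tic is not None:
        parked_thread, parked_block = tic
        buffers[parked_thread].extend(parked_block)
    block0, block1 = icache[(thread, ip)]   # assumes an I-cache hit
    buffers[thread].extend(block0)
    tic = (thread, block1)


fetch_cycle("T0", 0x100)   # cycle 1: T0 block0 -> buffer, T0 block1 -> TIC
fetch_cycle("T1", 0x200)   # cycle 2: T0 block1 drained, T1 block0 -> buffer, T1 block1 -> TIC
fetch_cycle("T0", 0x100)   # cycle 3: T1 block1 drained, and so on
print(dict(buffers))
```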

Virtual device sparing
    15.
    Invention grant (in force)

    Publication number: US09201748B2

    Publication date: 2015-12-01

    Application number: US13996717

    Application date: 2012-03-30

    Abstract: Systems and techniques for virtual device sparing. A failure of one of a plurality of memory devices corresponding to a first rank in a memory system is detected. The memory system has a plurality of ranks, each rank having a plurality of memory devices used to store a cache line. The portion of the cache line corresponding to the failed memory device is stored in a memory device in a second rank of the memory system, and the remaining portion of the cache line is stored in the first rank of the memory system.

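    The sparing scheme can be sketched as a remapping of one device's slice of each cache line. The Python model below is an illustrative approximation assuming four devices per rank and contiguous per-device slices; all class and method names are hypothetical, not from the patent.

```python
# Illustrative model of virtual device sparing: a cache line is split across the
# devices of a rank; when one device fails, its slice of each line is redirected
# to a spare device in a second rank, while the remaining slices stay in place.

DEVICES_PER_RANK = 4

class MemorySystem:
    def __init__(self, ranks):
        # rank -> device -> {line_address: slice_bytes}
        self.store = {r: {d: {} for d in range(DEVICES_PER_RANK)} for r in range(ranks)}
        self.failed = {}                 # (rank, device) -> (spare_rank, spare_device)

    def mark_failed(self, rank, device, spare_rank, spare_device):
        self.failed[(rank, device)] = (spare_rank, spare_device)

    def _location(self, rank, device):
        # Redirect accesses to a failed device to its spare location.
        return self.failed.get((rank, device), (rank, device))

    def write_line(self, rank, addr, line):
        chunk = len(line) // DEVICES_PER_RANK
        for dev in range(DEVICES_PER_RANK):
            r, d = self._location(rank, dev)
            self.store[r][d][addr] = line[dev * chunk:(dev + 1) * chunk]

    def read_line(self, rank, addr):
        parts = []
        for dev in range(DEVICES_PER_RANK):
            r, d = self._location(rank, dev)
            parts.append(self.store[r][d][addr])
        return b"".join(parts)


mem = MemorySystem(ranks=2)
mem.mark_failed(0, 2, spare_rank=1, spare_device=0)    # device 2 of rank 0 has failed
mem.write_line(0, 0x40, b"0123456789abcdef")           # slice 2 lands in rank 1 instead
assert mem.read_line(0, 0x40) == b"0123456789abcdef"
print(mem.store[1][0])                                  # {64: b'89ab'}: the spared slice
```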

Transaction based shared data operations in a multiprocessor environment
    16.
    Invention grant (in force)

    Publication number: US08458412B2

    Publication date: 2013-06-04

    Application number: US13168171

    Application date: 2011-06-24

    CPC classification number: G06F9/528 G06F9/3834 G06F9/544

    Abstract: The apparatus and method described herein handle shared memory accesses between multiple processors using lock-free synchronization through transactional execution. A transaction demarcated in software is speculatively executed. During execution, invalidating remote accesses/requests to addresses that are loaded from, or are to be written to, shared memory are tracked by a transaction buffer. If an invalidating access is encountered, the transaction is re-executed. After a predetermined number of re-execution attempts, the transaction may be re-executed non-speculatively with locks/semaphores.

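    Because the patent describes a hardware transaction buffer, a faithful code example is not possible in portable software; the Python sketch below is only a software analogue of the same retry-then-fall-back policy, using a version counter to stand in for conflict detection and falling back to a lock after a fixed number of retries. All names are hypothetical.

```python
# Software analogue of the policy in the abstract: attempt the critical section
# optimistically up to MAX_RETRIES times, detect interference with a version
# counter (standing in for the hardware transaction buffer), then fall back to
# a conventional lock for a non-speculative retry.

import threading

MAX_RETRIES = 3

class VersionedCell:
    def __init__(self, value=0):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()          # fallback lock / semaphore

    def transactional_update(self, fn):
        for _ in range(MAX_RETRIES):
            start_version = self.version       # "begin transaction"
            new_value = fn(self.value)         # speculative computation
            with self.lock:                     # validate and commit atomically
                if self.version == start_version:   # no invalidating access observed
                    self.value = new_value
                    self.version += 1
                    return
            # conflict detected: re-execute the transaction
        with self.lock:                          # non-speculative fallback path
            self.value = fn(self.value)
            self.version += 1


cell = VersionedCell()
workers = [threading.Thread(target=cell.transactional_update, args=(lambda v: v + 1,))
           for _ in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(cell.value)    # 8: every update committed exactly once
```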

Transaction based shared data operations in a multiprocessor environment
    17.
    Invention grant (in force)

    Publication number: US08176266B2

    Publication date: 2012-05-08

    Application number: US12943314

    Application date: 2010-11-10

    CPC classification number: G06F9/528 G06F9/3834 G06F9/544

    Abstract: The apparatus and method described herein handle shared memory accesses between multiple processors using lock-free synchronization through transactional execution. A transaction demarcated in software is speculatively executed. During execution, invalidating remote accesses/requests to addresses that are loaded from, or are to be written to, shared memory are tracked by a transaction buffer. If an invalidating access is encountered, the transaction is re-executed. After a predetermined number of re-execution attempts, the transaction may be re-executed non-speculatively with locks/semaphores.


Method for converting pipeline stalls caused by instructions with long latency memory accesses to pipeline flushes in a multithreaded processor
    18.
    Invention grant (in force)

    Publication number: US07401211B2

    Publication date: 2008-07-15

    Application number: US09751762

    Application date: 2000-12-29

    CPC classification number: G06F9/3851 G06F9/3867

    Abstract: In a multi-threaded processor, a thread that depends on a long-latency data access is flushed from the execution pipelines. Once the stalled thread is flushed, the non-stalling threads in the pipeline can continue their execution. Several resources are used to reduce the unwanted impact of stalls on the non-stalling threads. These resources also ensure that the earlier stalled thread, now flushed, is re-executed once the data dependency is resolved.

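    The stall-to-flush conversion can be sketched with a toy single-issue scheduler. The Python model below is an illustrative approximation; the op encoding, the fixed miss latency, and the names Thread and run are hypothetical, not from the patent.

```python
# Illustrative model: when a thread's next instruction is a long-latency load
# miss, the thread is flushed and parked (instead of stalling the pipeline),
# other threads keep issuing, and the flushed thread replays once its data is
# back.

from collections import deque

MISS_LATENCY = 5

class Thread:
    def __init__(self, name, program):
        # each entry: ["alu"] or ["load", "miss"]; "miss" is cleared once serviced
        self.name = name
        self.program = deque(program)
        self.ready_at = 0                     # cycle at which the thread may issue again

def run(threads, cycles=20):
    for cycle in range(cycles):
        for t in threads:
            if not t.program or cycle < t.ready_at:
                continue                      # thread is done, or parked after a flush
            op = t.program[0]
            if op == ["load", "miss"]:
                # Stall converted to a flush: drop the in-flight work, remember the
                # replay point, and free the pipeline for the other threads.
                t.ready_at = cycle + MISS_LATENCY
                op[1] = "hit"                 # the miss will be serviced by ready_at
                print(f"cycle {cycle:2}: flush {t.name}; replay at cycle {t.ready_at}")
            else:
                t.program.popleft()
                print(f"cycle {cycle:2}: {t.name} issues {op[0]}")
            break                             # model a single issue slot per cycle

run([Thread("T0", [["alu"], ["load", "miss"], ["alu"]]),
     Thread("T1", [["alu"], ["alu"], ["alu"], ["alu"]])])
```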

Method and apparatus for instruction pointer storage element configuration in a simultaneous multithreaded processor
    19.
    Invention grant (in force)

    Publication number: US07149880B2

    Publication date: 2006-12-12

    Application number: US09753764

    Application date: 2000-12-29

    CPC classification number: G06F9/3851 G06F9/3867

    Abstract: A system and method for a simultaneous multithreaded processor that reduces the number of hardware components necessary as well as the complexity of design over current systems is disclosed. As opposed to requiring individual storage elements for saving instruction pointer information for each re-steer logic component within a processor pipeline, the present invention allows for instruction pointer information of an inactive thread to be stored in a single, ‘inactive thread’ storage element until the thread becomes active again.

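    The single inactive-thread storage element can be sketched for a two-thread front end. The Python model below is an illustrative approximation; the names FrontEnd, switch_to, and advance are hypothetical, not from the patent.

```python
# Illustrative model: rather than keeping per-stage IP copies alive for a
# suspended thread, the front end saves that thread's instruction pointer into
# a single inactive-thread storage element and restores it when the thread is
# switched back in.

class FrontEnd:
    """Two-thread front end: one thread is active, the other thread's IP lives
    in the single inactive-thread storage element."""

    def __init__(self, entry_ips):
        self.entry_ips = dict(entry_ips)
        self.active_thread = None
        self.active_ip = None
        self.inactive_slot = None            # (thread, saved_ip) or None

    def switch_to(self, thread):
        if self.active_thread == thread:
            return
        saved = None
        if self.inactive_slot and self.inactive_slot[0] == thread:
            saved = self.inactive_slot[1]    # restore the parked IP
        if self.active_thread is not None:
            # Park the outgoing thread's IP in the single storage element.
            self.inactive_slot = (self.active_thread, self.active_ip)
        self.active_thread = thread
        self.active_ip = saved if saved is not None else self.entry_ips[thread]

    def advance(self, delta=4):
        self.active_ip += delta              # stand-in for normal fetch / re-steer


fe = FrontEnd({"T0": 0x1000, "T1": 0x2000})
fe.switch_to("T0"); fe.advance(); fe.advance()
fe.switch_to("T1"); fe.advance()             # T0's IP is parked in the element
fe.switch_to("T0")
print(hex(fe.active_ip))                     # 0x1008: restored when T0 became active again
```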

Fair sharing of a cache in a multi-core/multi-threaded processor by dynamically partitioning of the cache
    20.
    Invention application (in force)

    Publication number: US20060143390A1

    Publication date: 2006-06-29

    Application number: US11026316

    Application date: 2004-12-29

    CPC classification number: G06F12/084 G06F12/0864 G06F12/126

    Abstract: An apparatus and method for fairly accessing a shared cache among multiple resources, such as multiple cores, multiple threads, or both, are described herein. A resource within a microprocessor sharing access to a cache is assigned a static portion of the cache and a dynamic portion. The resource is blocked from victimizing static portions assigned to other resources, yet is allowed to victimize the static portion assigned to itself and the dynamically shared portion. If the resource does not access the cache enough times over a period of time, the static portion assigned to it is reassigned to the dynamically shared portion.

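    The static/dynamic partitioning policy can be sketched at the level of one cache set. The Python model below is an illustrative approximation using random replacement among eligible ways; the names PartitionedSet, access, and rebalance are hypothetical, not from the patent.

```python
# Illustrative model: each core owns some "static" ways that only it may
# victimize, the remaining ways are dynamically shared, and a core that
# underuses the cache has its static ways reassigned to the shared pool.

import random

NUM_WAYS = 8

class PartitionedSet:
    def __init__(self, static_owner_by_way):
        # way index -> owning core, or None for the dynamically shared pool
        self.owner = dict(static_owner_by_way)
        self.tags = [None] * NUM_WAYS
        self.accesses = {c: 0 for c in set(static_owner_by_way.values()) if c is not None}

    def _eligible_ways(self, core):
        # A core may victimize only its own static ways and the shared ways.
        return [w for w in range(NUM_WAYS) if self.owner[w] in (core, None)]

    def access(self, core, tag):
        self.accesses[core] += 1
        if tag in self.tags:
            return "hit"
        victim = random.choice(self._eligible_ways(core))
        self.tags[victim] = tag
        return "miss"

    def rebalance(self, threshold):
        # Fold an underused core's static ways back into the shared pool.
        for core, count in self.accesses.items():
            if count < threshold:
                for w, owner in self.owner.items():
                    if owner == core:
                        self.owner[w] = None
            self.accesses[core] = 0


s = PartitionedSet({0: "core0", 1: "core0", 2: "core1", 3: "core1",
                    4: None, 5: None, 6: None, 7: None})
for i in range(20):
    s.access("core0", f"A{i}")       # core0 is busy while core1 stays idle
s.rebalance(threshold=5)              # core1's static ways join the shared pool
print(s._eligible_ways("core0"))      # now includes ways 2 and 3 as well
```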
