Cache line duplication in response to a way prediction conflict
    1.
    Granted Patent
    Cache line duplication in response to a way prediction conflict (In Force)

    Publication No.: US07979640B2

    Publication Date: 2011-07-12

    Application No.: US12181266

    Filing Date: 2008-07-28

    IPC Class: G06F12/08

    Abstract: Embodiments of the present invention provide a system that handles way mispredictions in a multi-way cache. The system starts by receiving requests to access cache lines in the multi-way cache. For each request, the system makes a prediction of a way in which the cache line resides based on a corresponding entry in the way prediction table. The system then checks for the presence of the cache line in the predicted way. Upon determining that the cache line is not present in the predicted way, but is present in a different way, and hence the way was mispredicted, the system increments a corresponding record in a conflict detection table. Upon detecting that a record in the conflict detection table indicates that a number of mispredictions equals a predetermined value, the system copies the corresponding cache line from the way where the cache line actually resides into the predicted way.
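The count-and-duplicate scheme the abstract describes can be sketched in software. This is a minimal illustration, not the patented hardware: the class, the table layout, and the threshold value of 4 are all assumptions.

```python
# Illustrative sketch: on each way misprediction, a per-set counter in a
# conflict detection table is incremented; once it reaches a threshold
# (the "predetermined value"), the line is copied into the predicted way.
MISPREDICT_THRESHOLD = 4  # assumed value, not from the patent

class WayPredictedCache:
    def __init__(self, num_sets, num_ways):
        # ways[set][way] holds a cache-line tag, or None if the way is empty
        self.ways = [[None] * num_ways for _ in range(num_sets)]
        self.prediction = [0] * num_sets   # way prediction table
        self.conflicts = [0] * num_sets    # conflict detection table

    def access(self, set_idx, tag):
        predicted = self.prediction[set_idx]
        if self.ways[set_idx][predicted] == tag:
            return "hit"                   # prediction was correct
        for way, stored in enumerate(self.ways[set_idx]):
            if stored == tag:
                # Misprediction: line present, but in a different way
                self.conflicts[set_idx] += 1
                if self.conflicts[set_idx] >= MISPREDICT_THRESHOLD:
                    # Duplicate the line into the predicted way
                    self.ways[set_idx][predicted] = tag
                    self.conflicts[set_idx] = 0
                return "mispredict"
        return "miss"                      # line not in the cache at all
```

With a line sitting in the unpredicted way, repeated accesses mispredict until the counter hits the threshold, after which the duplicated copy turns further accesses into hits.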


    CACHE LINE DUPLICATION IN RESPONSE TO A WAY PREDICTION CONFLICT
    2.
    Patent Application
    CACHE LINE DUPLICATION IN RESPONSE TO A WAY PREDICTION CONFLICT (In Force)

    Publication No.: US20100023701A1

    Publication Date: 2010-01-28

    Application No.: US12181266

    Filing Date: 2008-07-28

    IPC Class: G06F12/08

    Abstract: Embodiments of the present invention provide a system that handles way mispredictions in a multi-way cache. The system starts by receiving requests to access cache lines in the multi-way cache. For each request, the system makes a prediction of a way in which the cache line resides based on a corresponding entry in the way prediction table. The system then checks for the presence of the cache line in the predicted way. Upon determining that the cache line is not present in the predicted way, but is present in a different way, and hence the way was mispredicted, the system increments a corresponding record in a conflict detection table. Upon detecting that a record in the conflict detection table indicates that a number of mispredictions equals a predetermined value, the system copies the corresponding cache line from the way where the cache line actually resides into the predicted way.


    Using address and non-address information for improved index generation for cache memories
    3.
    Granted Patent
    Using address and non-address information for improved index generation for cache memories (In Force)

    Publication No.: US08151084B2

    Publication Date: 2012-04-03

    Application No.: US12018407

    Filing Date: 2008-01-23

    IPC Class: G06F12/10

    CPC Class: G06F12/0864 G06F2212/6082

    Abstract: Embodiments of the present invention provide a system that generates an index for a cache memory. The system starts by receiving a request to access the cache memory, wherein the request includes address information. The system then obtains non-address information associated with the request. Next, the system generates the index using the address information and the non-address information. The system then uses the index to access the cache memory.
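One way to picture mixing address and non-address information into an index is sketched below. The XOR-based mixing, the choice of thread ID as the non-address input, and the cache geometry are illustrative assumptions, not the patented scheme.

```python
# Illustrative sketch: derive a cache-set index from both the address and
# non-address information (here, assumed to be a thread/context ID), so that
# the same address used by different threads can map to different sets.
NUM_SETS = 64       # assumed cache geometry: 64 sets
LINE_BYTES = 64     # assumed 64-byte cache lines

def cache_index(address, non_address_info):
    # Conventional index: low-order bits of the block address
    base_index = (address // LINE_BYTES) % NUM_SETS
    # Mix in the non-address information to spread out conflicting addresses
    return (base_index ^ non_address_info) % NUM_SETS
```

Under this sketch, two threads that hammer the same address no longer compete for one set, which is the kind of conflict spreading the abstract is after.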


    Reducing pipeline restart penalty
    4.
    Granted Patent
    Reducing pipeline restart penalty (In Force)

    Publication No.: US09086889B2

    Publication Date: 2015-07-21

    Application No.: US12768641

    Filing Date: 2010-04-27

    IPC Class: G06F9/38 G06F12/08

    Abstract: Techniques are disclosed relating to reducing the latency of restarting a pipeline in a processor that implements scouting. In one embodiment, the processor may reduce pipeline restart latency using two instruction fetch units that are configured to fetch and re-fetch instructions in parallel with one another. In some embodiments, the processor may reduce pipeline restart latency by initiating re-fetching instructions in response to determining that a commit operation is to be attempted with respect to one or more deferred instructions. In other embodiments, the processor may reduce pipeline restart latency by initiating re-fetching instructions in response to receiving an indication that a request for a set of data has been received by a cache, where the indication is sent by the cache before determining whether the data is present in the cache or not.


    INDEX GENERATION FOR CACHE MEMORIES
    5.
    Patent Application
    INDEX GENERATION FOR CACHE MEMORIES (In Force)

    Publication No.: US20120166756A1

    Publication Date: 2012-06-28

    Application No.: US13402796

    Filing Date: 2012-02-22

    IPC Class: G06F12/08

    CPC Class: G06F12/0864 G06F2212/6082

    Abstract: Embodiments of the present invention provide a system that generates an index for a cache memory. The system starts by receiving a request to access the cache memory, wherein the request includes address information. The system then obtains non-address information associated with the request. Next, the system generates the index using the address information and the non-address information. The system then uses the index to access the cache memory.


    AGGRESSIVE STORE MERGING IN A PROCESSOR THAT SUPPORTS CHECKPOINTING
    6.
    Patent Application
    AGGRESSIVE STORE MERGING IN A PROCESSOR THAT SUPPORTS CHECKPOINTING (In Force)

    Publication No.: US20090300338A1

    Publication Date: 2009-12-03

    Application No.: US12128332

    Filing Date: 2008-05-28

    IPC Class: G06F9/30

    Abstract: Embodiments of the present invention provide a processor that merges stores in an N-entry first-in-first-out (FIFO) store queue. In these embodiments, the processor starts by executing instructions before a checkpoint is generated. When executing instructions before the checkpoint is generated, the processor is configured to perform limited or no merging of stores into existing entries in the store queue. Then, upon detecting a predetermined condition, the processor is configured to generate a checkpoint. After generating the checkpoint, the processor is configured to continue to execute instructions. When executing instructions after the checkpoint is generated, the processor is configured to freely merge subsequent stores into post-checkpoint entries in the store queue.
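The checkpoint-gated merging policy can be sketched as a small state machine. The class, the list-based queue, and the "no merging at all before a checkpoint" simplification are assumptions for illustration; the patent allows limited pre-checkpoint merging.

```python
# Illustrative sketch: a FIFO store queue that refuses to merge stores before
# a checkpoint exists, then freely merges stores into matching
# post-checkpoint entries once a checkpoint has been generated.
from collections import deque

class StoreQueue:
    def __init__(self, capacity):
        self.entries = deque()        # FIFO of [address, data, post_checkpoint]
        self.capacity = capacity
        self.checkpoint_active = False

    def generate_checkpoint(self):
        # Entries already queued remain pre-checkpoint entries
        self.checkpoint_active = True

    def store(self, address, data):
        if self.checkpoint_active:
            # Freely merge into an existing post-checkpoint entry
            for entry in self.entries:
                if entry[0] == address and entry[2]:
                    entry[1] = data
                    return
        if len(self.entries) == self.capacity:
            raise RuntimeError("store queue full")
        self.entries.append([address, data, self.checkpoint_active])
```

The point of the gate is recoverability: pre-checkpoint entries must survive intact so the checkpoint can be restored, while post-checkpoint stores can be collapsed freely since they would be discarded together on a rollback anyway.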


    SEMI-ORDERED TRANSACTIONS
    7.
    Patent Application
    SEMI-ORDERED TRANSACTIONS (Pending, Published)

    Publication No.: US20090187906A1

    Publication Date: 2009-07-23

    Application No.: US12018417

    Filing Date: 2008-01-23

    IPC Class: G06F9/46

    Abstract: Embodiments of the present invention provide a system that facilitates transactional execution in a processor. The system starts by executing program code for a thread in a processor. Upon detecting a predetermined indicator, the system starts a transaction for a section of the program code for the thread. When starting the transaction, the system executes a checkpoint instruction. If the checkpoint instruction is a WEAK_CHECKPOINT instruction, the system executes a semi-ordered transaction. During the semi-ordered transaction, the system preserves code atomicity but not memory atomicity. Otherwise, the system executes a regular transaction. During the regular transaction, the system preserves both code atomicity and memory atomicity.


    Aggressive store merging in a processor that supports checkpointing
    8.
    Granted Patent
    Aggressive store merging in a processor that supports checkpointing (In Force)

    Publication No.: US07934080B2

    Publication Date: 2011-04-26

    Application No.: US12128332

    Filing Date: 2008-05-28

    IPC Class: G06F9/312

    Abstract: Embodiments of the present invention provide a processor that merges stores in an N-entry first-in-first-out (FIFO) store queue. In these embodiments, the processor starts by executing instructions before a checkpoint is generated. When executing instructions before the checkpoint is generated, the processor is configured to perform limited or no merging of stores into existing entries in the store queue. Then, upon detecting a predetermined condition, the processor is configured to generate a checkpoint. After generating the checkpoint, the processor is configured to continue to execute instructions. When executing instructions after the checkpoint is generated, the processor is configured to freely merge subsequent stores into post-checkpoint entries in the store queue.


    INDEX GENERATION FOR CACHE MEMORIES
    9.
    Patent Application
    INDEX GENERATION FOR CACHE MEMORIES (In Force)

    Publication No.: US20090187727A1

    Publication Date: 2009-07-23

    Application No.: US12018407

    Filing Date: 2008-01-23

    IPC Class: G06F12/08 G06F12/10

    CPC Class: G06F12/0864 G06F2212/6082

    Abstract: Embodiments of the present invention provide a system that generates an index for a cache memory. The system starts by receiving a request to access the cache memory, wherein the request includes address information. The system then obtains non-address information associated with the request. Next, the system generates the index using the address information and the non-address information. The system then uses the index to access the cache memory.


    Store queue having restricted and unrestricted entries
    10.
    Granted Patent
    Store queue having restricted and unrestricted entries (In Force)

    Publication No.: US09146744B2

    Publication Date: 2015-09-29

    Application No.: US12116009

    Filing Date: 2008-05-06

    Abstract: Embodiments of the present invention provide a system which executes a load instruction or a store instruction. During operation the system receives a load instruction. The system then determines if an unrestricted entry or a restricted entry in a store queue contains data that satisfies the load instruction. If not, the system retrieves data for the load instruction from a cache. If so, the system conditionally forwards data from the unrestricted entry or the restricted entry by: (1) forwarding data from the unrestricted entry that contains the youngest store that satisfies the load instruction when any number of unrestricted or restricted entries contain data that satisfies the load instruction; (2) forwarding data from the restricted entry when only one restricted entry and no unrestricted entries contain data that satisfies the load instruction; and (3) deferring the load instruction by placing the load instruction in a deferred queue when two or more restricted entries and no unrestricted entries contain data that satisfies the load instruction.
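The three forwarding rules form a simple decision procedure, sketched below. The function name, the (restricted, data) pair representation, and the oldest-first ordering of matches are illustrative assumptions; rule (1) is read here as forwarding the youngest matching unrestricted store.

```python
# Illustrative sketch of the load-handling policy: given the store-queue
# entries whose address satisfies the load (oldest first, as
# (restricted, data) pairs), decide where the load's data comes from.
def handle_load(matching_entries, cache_data):
    if not matching_entries:
        # No store-queue entry satisfies the load: read from the cache
        return ("cache", cache_data)
    unrestricted = [data for restricted, data in matching_entries
                    if not restricted]
    if unrestricted:
        # (1) Forward the youngest unrestricted store that satisfies the load
        return ("forward", unrestricted[-1])
    if len(matching_entries) == 1:
        # (2) A single restricted entry and no unrestricted entries:
        #     forward from that restricted entry
        return ("forward", matching_entries[0][1])
    # (3) Two or more restricted entries, no unrestricted entries:
    #     defer the load (place it in the deferred queue)
    return ("defer", None)
```

Deferring in case (3) avoids having to disambiguate among multiple restricted entries at forwarding time; the load is retried later from the deferred queue once the ambiguity resolves.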
