Method and apparatus for updating global branch history information
    1.
    Invention application
    Method and apparatus for updating global branch history information (Expired)

    Publication number: US20060149951A1

    Publication date: 2006-07-06

    Application number: US11013148

    Filing date: 2004-12-15

    IPC class: G06F9/44

    CPC class: G06F9/3806 G06F9/3848

    Abstract: A method and apparatus for updating global branch history information are disclosed. A dynamic branch predictor within a data processing system includes a global branch history (GBH) buffer and a branch history table. The GBH buffer contains GBH information for a group of the most recent branch instructions. The branch history table includes multiple entries, each of which is associated with one or more branch instructions. The GBH information from the GBH buffer can be used to index into the branch history table to obtain a branch prediction signal. In response to a fetch group of instructions, a fixed number of GBH bits is shifted into the GBH buffer. The number of GBH bits is the same regardless of the number of branch instructions within the fetch group. In addition, a unique bit pattern is associated with the case of no taken branch in the fetch group, regardless of the number of not-taken branches or whether the fetch group contains any branches at all.
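
    A minimal C sketch of the fixed-width history update described in the abstract may help make the mechanism concrete. It is an illustration, not the patented implementation: the 16-bit history width, the two bits shifted per fetch group, the all-zero "no taken branch" pattern, and the 2-bit counters in the branch history table are all assumptions the abstract leaves open.

    #include <stdint.h>
    #include <stdio.h>

    #define GBH_BITS         16     /* width of the global history register (assumed) */
    #define BITS_PER_FETCH   2      /* fixed number of bits shifted per fetch group (assumed) */
    #define BHT_ENTRIES      (1u << GBH_BITS)
    #define NO_TAKEN_PATTERN 0x0u   /* assumed encoding reserved for "no taken branch" */

    static uint32_t gbh;                   /* global branch history buffer */
    static uint8_t  bht[BHT_ENTRIES];      /* 2-bit saturating counters    */

    /* Shift a fixed-width pattern into the history, regardless of how many
     * branches (taken or not) the fetch group actually contained. */
    void gbh_update(uint32_t taken_slot_pattern)
    {
        gbh = ((gbh << BITS_PER_FETCH) |
               (taken_slot_pattern & ((1u << BITS_PER_FETCH) - 1))) & (BHT_ENTRIES - 1);
    }

    /* Index the branch history table with the current global history. */
    int predict_taken(void)
    {
        return bht[gbh] >= 2;              /* counter in weakly/strongly taken */
    }

    int main(void)
    {
        gbh_update(NO_TAKEN_PATTERN);      /* fetch group with no taken branch     */
        gbh_update(0x1);                   /* taken branch in (assumed) slot 1     */
        printf("prediction: %d\n", predict_taken());
        return 0;
    }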


    MULTIPLE PAGE SIZE ADDRESS TRANSLATION INCORPORATING PAGE SIZE PREDICTION
    2.
    Invention application
    MULTIPLE PAGE SIZE ADDRESS TRANSLATION INCORPORATING PAGE SIZE PREDICTION (Granted)

    Publication number: US20070186074A1

    Publication date: 2007-08-09

    Application number: US11733520

    Filing date: 2007-04-10

    IPC class: G06F12/00

    CPC class: G06F12/1036 G06F2212/652

    Abstract: Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.
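
    A rough C sketch of how page size prediction might gate an address-translation lookup, for illustration only: the two page sizes (4 KiB and 16 MiB), the predictor indexed by instruction address, the direct-mapped TLB, and the retry-with-the-other-size fallback are assumptions, not details taken from the filing.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed geometry; the abstract does not fix page sizes or predictor shape. */
    #define SMALL_PAGE_SHIFT 12            /* 4 KiB pages  */
    #define LARGE_PAGE_SHIFT 24            /* 16 MiB pages */
    #define PRED_ENTRIES     256
    #define TLB_ENTRIES      1024

    typedef struct { uint64_t vpn, pfn; int page_shift; bool valid; } tlb_entry_t;

    static uint8_t     size_pred[PRED_ENTRIES];   /* 1 = predict the large page size */
    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Predict the page size from (a hash of) the memory-access instruction's address. */
    int predict_page_shift(uint64_t inst_addr)
    {
        return size_pred[(inst_addr >> 2) % PRED_ENTRIES] ? LARGE_PAGE_SHIFT
                                                          : SMALL_PAGE_SHIFT;
    }

    /* Use the predicted size to form the TLB index and tag; if that lookup misses,
     * retry with the other size and train the predictor toward the size that hit. */
    bool translate(uint64_t inst_addr, uint64_t vaddr, uint64_t *paddr)
    {
        int shift = predict_page_shift(inst_addr);
        for (int attempt = 0; attempt < 2; attempt++) {
            uint64_t vpn = vaddr >> shift;
            tlb_entry_t *e = &tlb[vpn % TLB_ENTRIES];
            if (e->valid && e->vpn == vpn && e->page_shift == shift) {
                *paddr = (e->pfn << shift) | (vaddr & ((1ULL << shift) - 1));
                size_pred[(inst_addr >> 2) % PRED_ENTRIES] = (shift == LARGE_PAGE_SHIFT);
                return true;
            }
            shift = (shift == SMALL_PAGE_SHIFT) ? LARGE_PAGE_SHIFT : SMALL_PAGE_SHIFT;
        }
        return false;   /* miss under both sizes: fall back to a page-table walk (not shown) */
    }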


    Multiple page size address translation incorporating page size prediction
    3.
    Invention application
    Multiple page size address translation incorporating page size prediction (Expired)

    Publication number: US20060161758A1

    Publication date: 2006-07-20

    Application number: US11035556

    Filing date: 2005-01-14

    IPC class: G06F12/10

    CPC class: G06F12/1036 G06F2212/652

    Abstract: Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.


    Reconfiguring caches to support metadata for polymorphism
    5.
    Invention application
    Reconfiguring caches to support metadata for polymorphism (Pending, published)

    Publication number: US20070083711A1

    Publication date: 2007-04-12

    Application number: US11246818

    Filing date: 2005-10-07

    IPC class: G06F12/00

    CPC class: G06F12/0893

    Abstract: In a method of using a cache in a computer, the computer is monitored to detect an event that indicates that the cache is to be reconfigured into a metadata state. When the event is detected, the cache is reconfigured so that a predetermined portion of the cache stores metadata. A computational circuit employed in association with a computer includes a cache, a cache event detector circuit, and a cache reconfiguration circuit. The cache event detector circuit detects an event relative to the cache. The cache reconfiguration circuit reconfigures the cache so that a predetermined portion of the cache stores metadata when the cache event detector circuit detects the event.
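
    For illustration, a small C sketch of the detect-then-reconfigure flow under stated assumptions: an 8-way cache, a miss-count threshold standing in for the unspecified triggering event, and the last way of every set as the "predetermined portion" repurposed for metadata.

    #include <stdbool.h>

    /* Assumed geometry: an 8-way set-associative cache in which one way per set
     * can be repurposed to hold metadata once a triggering event is observed. */
    #define NUM_WAYS 8

    typedef enum { WAY_DATA, WAY_METADATA } way_role_t;

    typedef struct {
        way_role_t role[NUM_WAYS];   /* role of each way after any reconfiguration */
        unsigned   miss_count;       /* input to the (hypothetical) event detector */
    } cache_config_t;

    /* Hypothetical event detector: a miss-count threshold stands in for whatever
     * condition actually signals that the cache should enter the metadata state. */
    bool metadata_event_detected(const cache_config_t *cfg)
    {
        return cfg->miss_count > 10000;
    }

    /* Reconfigure a predetermined portion of the cache (here, the last way of
     * every set) so that it stores metadata instead of ordinary data. */
    void reconfigure_for_metadata(cache_config_t *cfg)
    {
        cfg->role[NUM_WAYS - 1] = WAY_METADATA;
    }

    /* Event detector plus reconfiguration circuit, modeled as a periodic check. */
    void cache_tick(cache_config_t *cfg)
    {
        if (metadata_event_detected(cfg))
            reconfigure_for_metadata(cfg);
    }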


    Apparatus and method for handling data cache misses out-of-order for asynchronous pipelines
    6.
    Invention application
    Apparatus and method for handling data cache misses out-of-order for asynchronous pipelines (Expired)

    Publication number: US20070180221A1

    Publication date: 2007-08-02

    Application number: US11345922

    Filing date: 2006-02-02

    IPC class: G06F9/44

    Abstract: An apparatus and method for handling data cache misses out-of-order for asynchronous pipelines are provided. The apparatus and method associate load tag (LTAG) identifiers with the load instructions and use them as an index into a load table data structure of a load target buffer to track each load instruction across multiple pipelines. The load table is used to manage cache "hits" and "misses" and to aid in the recycling of data from the L2 cache. On cache misses, the LTAG-indexed load table permits load data to recycle from the L2 cache in any order. When the load instruction issues and sees its corresponding entry in the load table marked as a "miss," the effects of issuing the load instruction are canceled and the load instruction is stored in the load table for future reissue to the instruction pipeline once the required data is recycled.
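
    A compact C sketch of an LTAG-indexed load table, assuming 32 LTAGs and a simple per-entry state machine; the entry states, the cancel-and-park behavior at issue, and the recycle handler are illustrative guesses at the mechanism the abstract outlines, not the actual design.

    #include <stdbool.h>
    #include <stdint.h>

    #define LOAD_TABLE_ENTRIES 32   /* assumed number of in-flight loads / LTAG values */

    typedef enum { LT_INVALID, LT_HIT, LT_MISS, LT_DATA_READY } lt_state_t;

    typedef struct {
        lt_state_t state;
        uint32_t   load_inst;   /* encoded load instruction, parked for reissue */
        uint64_t   data;        /* load data once the L2 recycles it            */
    } load_table_entry_t;

    static load_table_entry_t load_table[LOAD_TABLE_ENTRIES];

    /* At issue time the load's LTAG indexes the load table.  If the entry is
     * marked "miss", the effects of issue are canceled (signaled by returning
     * false) and the instruction is stored for a later reissue. */
    bool issue_load(unsigned ltag, uint32_t load_inst)
    {
        load_table_entry_t *e = &load_table[ltag % LOAD_TABLE_ENTRIES];
        if (e->state == LT_MISS) {
            e->load_inst = load_inst;
            return false;
        }
        return true;
    }

    /* L2 data may recycle in any order; the LTAG alone locates the right entry,
     * after which the parked load can be reissued to the instruction pipeline. */
    void l2_recycle(unsigned ltag, uint64_t data)
    {
        load_table_entry_t *e = &load_table[ltag % LOAD_TABLE_ENTRIES];
        e->data  = data;
        e->state = LT_DATA_READY;
    }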


    Compressed cache lines incorporating embedded prefetch history data
    8.
    Invention application
    Compressed cache lines incorporating embedded prefetch history data (Granted)

    Publication number: US20050268046A1

    Publication date: 2005-12-01

    Application number: US10857745

    Filing date: 2004-05-28

    Applicant: Timothy Heil

    Inventor: Timothy Heil

    IPC class: G06F12/00 G06F12/08

    Abstract: An apparatus and method utilize compressed cache lines that incorporate embedded prefetch history data associated with such cache lines. In particular, by compressing at least a portion of the data in a cache line, additional space may be freed up in the cache line to embed prefetch history data associated with the data in the cache line. By doing so, long-lived prefetch history data may essentially be embedded in a cache line and retrieved in association with that cache line to initiate the prefetching of additional data that is likely to be accessed based upon historical data generated for that cache line, often with little or no additional storage overhead.
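
    As an illustrative sketch in C (not the patented design), a cache line could embed prefetch history only when compression frees enough room; the 128-byte line, the 8-byte history field, and the toy trailing-zero compressor below are placeholders for whatever sizes and compression scheme an implementation would actually use.

    #include <stdint.h>
    #include <string.h>

    #define LINE_BYTES    128   /* assumed cache-line size                     */
    #define HISTORY_BYTES 8     /* assumed size of the embedded prefetch hints */

    typedef struct {
        uint8_t bytes[LINE_BYTES];
        int     compressed_len;   /* -1 if the line is stored uncompressed */
    } cache_line_t;

    /* Toy compressor: drops trailing zero bytes and returns the shortened length,
     * or -1 if nothing was saved.  It stands in for any real compression scheme. */
    static int compress_line(const uint8_t *in, size_t n, uint8_t *out)
    {
        size_t len = n;
        while (len > 0 && in[len - 1] == 0)
            len--;
        if (len == n)
            return -1;
        memcpy(out, in, len);
        return (int)len;
    }

    /* Store a line; if compression frees at least HISTORY_BYTES, embed the prefetch
     * history (e.g. identifiers of lines historically touched next) in the tail. */
    void store_line(cache_line_t *line, const uint8_t *data,
                    const uint8_t history[HISTORY_BYTES])
    {
        uint8_t tmp[LINE_BYTES];
        int len = compress_line(data, LINE_BYTES, tmp);
        if (len >= 0 && len + HISTORY_BYTES <= LINE_BYTES) {
            memcpy(line->bytes, tmp, (size_t)len);
            memcpy(line->bytes + LINE_BYTES - HISTORY_BYTES, history, HISTORY_BYTES);
            line->compressed_len = len;
        } else {
            memcpy(line->bytes, data, LINE_BYTES);   /* no room: keep the data only */
            line->compressed_len = -1;
        }
    }

    /* On a fill, the embedded history travels with the line and can seed prefetches. */
    const uint8_t *embedded_history(const cache_line_t *line)
    {
        return (line->compressed_len >= 0)
             ? line->bytes + LINE_BYTES - HISTORY_BYTES
             : (const uint8_t *)0;
    }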
