STORE-TO-LOAD FORWARDING MECHANISM FOR PROCESSOR RUNAHEAD MODE OPERATION
    1.
    Invention application (Expired)

    Publication No.: US20100199045A1

    Publication Date: 2010-08-05

    Application No.: US12364984

    Filing Date: 2009-02-03

    IPC Classes: G06F12/08 G06F9/312

    Abstract: A system and method to optimize runahead operation for a processor without use of a separate explicit runahead cache structure. Rather than simply dropping store instructions in processor runahead mode, store instructions write their results into the existing processor store queue, although they are not allowed to update processor caches or system memory. Using the store queue during runahead mode to hold store instruction results allows more recent runahead load instructions to search retired store queue entries in the store queue for matching addresses and to use data from the retired, but still searchable, store instructions. Retired store instructions may be either store instructions retired during runahead mode or store instructions that retired before runahead mode was entered.

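The mechanism in the abstract can be illustrated with a minimal sketch: a store queue whose retired entries stay searchable so younger runahead loads can forward from them. The class and field names (StoreEntry, forward_to_load, etc.) are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch: store results stay in the queue during runahead mode and
# remain searchable after retirement, instead of draining to the caches.
from dataclasses import dataclass

@dataclass
class StoreEntry:
    addr: int        # memory address written by the store
    data: int        # store result held in the queue
    retired: bool    # True once the store has retired

class StoreQueue:
    def __init__(self):
        self.entries = []

    def execute_store(self, addr, data, runahead):
        # In runahead mode the store still writes its result into the
        # queue, but nothing drains to the caches or system memory.
        self.entries.append(StoreEntry(addr, data, retired=False))

    def retire_oldest(self, runahead):
        # Normal retirement would also update the cache; in runahead mode
        # the entry is only marked retired and stays present/searchable.
        for e in self.entries:
            if not e.retired:
                e.retired = True
                return e
        return None

    def forward_to_load(self, addr):
        # A younger runahead load searches youngest-first and may match
        # both in-flight and retired (but still present) store entries.
        for e in reversed(self.entries):
            if e.addr == addr:
                return e.data
        return None  # miss: the load must go to the memory hierarchy
```

In this sketch a load to an address written by an already-retired runahead store still obtains the stored data, which is the forwarding case the patent targets.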

Store-to-load forwarding mechanism for processor runahead mode operation
    2.
    Invention grant (Expired)

    Publication No.: US08639886B2

    Publication Date: 2014-01-28

    Application No.: US12364984

    Filing Date: 2009-02-03

    IPC Classes: G06F12/08

    Abstract: A system and method to optimize runahead operation for a processor without use of a separate explicit runahead cache structure. Rather than simply dropping store instructions in processor runahead mode, store instructions write their results into the existing processor store queue, although they are not allowed to update processor caches or system memory. Using the store queue during runahead mode to hold store instruction results allows more recent runahead load instructions to search retired store queue entries in the store queue for matching addresses and to use data from the retired, but still searchable, store instructions. Retired store instructions may be either store instructions retired during runahead mode or store instructions that retired before runahead mode was entered.


Data reorganization in non-uniform cache access caches
    3.
    Invention grant (In force)

    Publication No.: US08140758B2

    Publication Date: 2012-03-20

    Application No.: US12429754

    Filing Date: 2009-04-24

    IPC Classes: G06F15/163

    CPC Classes: G06F12/0846 G06F12/0811

    Abstract: Embodiments that dynamically reorganize the data of cache lines in non-uniform cache access (NUCA) caches are contemplated. Various embodiments comprise a computing device having one or more processors coupled with one or more NUCA cache elements. The NUCA cache elements may comprise one or more banks of cache memory, wherein the ways of the cache are horizontally distributed across multiple banks. To improve the processors' access latency for the data, the computing devices may dynamically propagate cache lines into banks closer to the processors that use those cache lines. To accomplish such dynamic reorganization, embodiments may maintain "direction" bits for cache lines. The direction bits may indicate the processor toward which the data should be moved. Further, embodiments may use the direction bits to make cache line movement decisions.

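The direction-bit idea above can be sketched with a toy policy: record which processor last touched a line and nudge the line one bank toward that processor. The linear bank layout, the one-bank-per-access migration rule, and all names are assumptions for illustration only.

```python
# Hedged sketch of per-line "direction" bits in a NUCA cache. Assumption:
# NUM_BANKS banks laid out in a row, with processor p closest to bank p.
NUM_BANKS = 4

class NucaLine:
    def __init__(self, tag, bank):
        self.tag = tag
        self.bank = bank       # current bank index (0 .. NUM_BANKS-1)
        self.direction = None  # processor id the line should move toward

def access(line, proc_id):
    # Record the direction (requesting processor) for this line...
    line.direction = proc_id
    # ...and use it to migrate the line one bank toward that processor,
    # reducing that processor's access latency on future hits.
    if line.bank < proc_id:
        line.bank += 1
    elif line.bank > proc_id:
        line.bank -= 1
    return line.bank
```

Repeated accesses from one processor gradually pull the line into that processor's nearest bank, which is the latency effect the abstract describes.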

Effective prefetching with multiple processors and threads
    4.
    Invention grant (Expired)

    Publication No.: US08200905B2

    Publication Date: 2012-06-12

    Application No.: US12192072

    Filing Date: 2008-08-14

    IPC Classes: G06F13/00

    Abstract: A processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array, and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of prefetch addresses to the memory to obtain prefetched data, and the prefetched data is provided to the first core when requested.

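The two-core split in the abstract can be sketched as two small functions: the capture mechanism logs miss addresses into a storage array, and the second core runs a software algorithm over the log to produce prefetch addresses. The stride-detection heuristic shown here is an assumption; the patent leaves the algorithm open.

```python
# Hedged sketch: first core's misses feed a log; a helper core turns the
# log into prefetch addresses with a simple (assumed) stride heuristic.

def record_miss(miss_log, addr):
    # Capture mechanism: append each first-cache miss address to the
    # storage array shared with the second core.
    miss_log.append(addr)

def generate_prefetches(miss_log, depth=4):
    # Second core's software algorithm: infer a stride from the last two
    # misses and project it forward to build the prefetch sequence.
    if len(miss_log) < 2:
        return []
    stride = miss_log[-1] - miss_log[-2]
    if stride == 0:
        return []
    base = miss_log[-1]
    return [base + stride * i for i in range(1, depth + 1)]
```

Because the algorithm is ordinary software on the second core, it can be replaced or tuned per workload, which is the flexibility the abstract implies.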

DATA REORGANIZATION IN NON-UNIFORM CACHE ACCESS CACHES
    5.
    Invention application (In force)

    Publication No.: US20100274973A1

    Publication Date: 2010-10-28

    Application No.: US12429754

    Filing Date: 2009-04-24

    IPC Classes: G06F12/08 G06F12/00

    CPC Classes: G06F12/0846 G06F12/0811

    Abstract: Embodiments that dynamically reorganize the data of cache lines in non-uniform cache access (NUCA) caches are contemplated. Various embodiments comprise a computing device having one or more processors coupled with one or more NUCA cache elements. The NUCA cache elements may comprise one or more banks of cache memory, wherein the ways of the cache are horizontally distributed across multiple banks. To improve the processors' access latency for the data, the computing devices may dynamically propagate cache lines into banks closer to the processors that use those cache lines. To accomplish such dynamic reorganization, embodiments may maintain "direction" bits for cache lines. The direction bits may indicate the processor toward which the data should be moved. Further, embodiments may use the direction bits to make cache line movement decisions.


Prefetching with multiple processors and threads via a coherency bus
    6.
    Invention grant (Expired)

    Publication No.: US08543767B2

    Publication Date: 2013-09-24

    Application No.: US13488215

    Filing Date: 2012-06-04

    IPC Classes: G06F13/00

    Abstract: A processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array, and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of prefetch addresses to the memory to obtain prefetched data, and the prefetched data is provided to the first core when requested.


EFFECTIVE PREFETCHING WITH MULTIPLE PROCESSORS AND THREADS
    7.
    Invention application (Expired)

    Publication No.: US20120246406A1

    Publication Date: 2012-09-27

    Application No.: US13488215

    Filing Date: 2012-06-04

    IPC Classes: G06F12/08

    Abstract: A processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array, and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of prefetch addresses to the memory to obtain prefetched data, and the prefetched data is provided to the first core when requested.


Write bandwidth management for flash devices
    8.
    Invention grant (In force)

    Publication No.: US09081504B2

    Publication Date: 2015-07-14

    Application No.: US13339685

    Filing Date: 2011-12-29

    IPC Classes: G06F13/37 G06F3/06 G06F9/50

    Abstract: Embodiments of the present invention provide write-access management of a flash memory device amongst different virtual machines (VMs) in a virtualized computing environment. In one embodiment, a virtualized computing data processing system can include a host computer with at least one processor and memory, and different VMs executing in the host computer. The system also can include a flash memory device coupled to the host computer and accessible by the VMs. Finally, a flash memory controller can manage access to the flash memory device. The controller can include program code enabled to compute a contemporaneous bandwidth of requests for write operations on the flash memory device, to allocate a corresponding number of tokens to the VMs, to accept write requests to the flash memory device from the VMs only when accompanied by a token, and to repeat the computing, allocating, and accepting after a pre-determined time period has elapsed.

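The compute/allocate/accept cycle in the abstract resembles a token scheme, which can be sketched as follows. The even split of tokens between VMs and all names are assumptions; the patent only requires that tokens track the computed write bandwidth and gate write acceptance.

```python
# Hedged sketch of token-gated flash writes: each period the controller
# converts the sustainable write bandwidth into tokens, hands them to the
# VMs, and accepts a VM's write only while that VM still holds a token.

class FlashWriteController:
    def __init__(self, vm_ids):
        self.tokens = {vm: 0 for vm in vm_ids}

    def allocate(self, tokens_this_period):
        # Start of a pre-determined period: divide the token budget
        # (derived from the computed write bandwidth) among the VMs.
        share = tokens_this_period // len(self.tokens)
        for vm in self.tokens:
            self.tokens[vm] = share

    def accept_write(self, vm):
        # A write request must be accompanied by a token; without one it
        # is refused until the next allocation period.
        if self.tokens[vm] > 0:
            self.tokens[vm] -= 1
            return True
        return False
```

Calling allocate() on a timer reproduces the "repeat after a lapse of a pre-determined time period" behavior of the claims.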

Predictors with adaptive prediction threshold
    9.
    Invention grant (Expired)

    Publication No.: US08078852B2

    Publication Date: 2011-12-13

    Application No.: US12473764

    Filing Date: 2009-05-28

    CPC Classes: G06F9/3848

    Abstract: An adaptive prediction threshold scheme for dynamically adjusting the prediction thresholds of entries in a Pattern History Table (PHT) by observing global tendencies of the branch or branches that index into the PHT entries. A count value of a prediction state counter, representing the prediction state of the prediction state machine for a PHT entry, is obtained. Count values in a set of counters allocated to the entry in the PHT are changed based on the count value of the entry's prediction state counter. The prediction threshold of the prediction state machine for the entry may then be adjusted based on the changed count values in the set of counters, wherein the prediction threshold is adjusted by changing a count value in a prediction threshold counter in the entry, and wherein adjusting the prediction threshold redefines the predictions provided by the prediction state machine.

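A simplified sketch of such a PHT entry: a saturating state counter supplies the prediction, a pair of tendency counters (standing in for the abstract's "set of counters") tracks outcomes, and a threshold counter is moved when mispredictions dominate. The specific update rule (shift the threshold away from the mispredicted direction after four net mispredictions) is an assumption for illustration, not the patented policy.

```python
# Hedged sketch: a PHT entry whose taken/not-taken boundary (threshold)
# adapts to the entry's observed misprediction tendency.

class PhtEntry:
    def __init__(self, max_state=7, threshold=4):
        self.state = max_state // 2   # saturating prediction state counter
        self.max_state = max_state
        self.threshold = threshold    # prediction threshold counter
        self.wrong = 0                # tendency counters for this entry
        self.right = 0

    def predict(self):
        # Predict taken iff the state counter has reached the threshold;
        # moving the threshold redefines which states predict taken.
        return self.state >= self.threshold

    def update(self, taken):
        if self.predict() == taken:
            self.right += 1
        else:
            self.wrong += 1
        # Ordinary saturating-counter training.
        if taken:
            self.state = min(self.state + 1, self.max_state)
        else:
            self.state = max(self.state - 1, 0)
        # Assumed adaptation rule: when mispredictions dominate, bias the
        # threshold toward the observed outcome and reset the counters.
        if self.wrong >= 4 and self.wrong > self.right:
            self.threshold += 1 if not taken else -1
            self.threshold = max(1, min(self.threshold, self.max_state))
            self.wrong = self.right = 0
```

A branch that keeps flipping near the boundary drags the threshold until the entry's predictions stabilize, which is the effect the abstract attributes to the threshold counter.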

CACHE MANAGEMENT FOR A NUMBER OF THREADS
    10.
    Invention application (Expired)

    Publication No.: US20110138129A1

    Publication Date: 2011-06-09

    Application No.: US12633976

    Filing Date: 2009-12-09

    IPC Classes: G06F12/08 G06F12/00

    CPC Classes: G06F12/0842

    Abstract: The illustrative embodiments provide a method, a computer program product, and an apparatus for managing a cache. For each of a number of threads, a probability is identified that the thread will make a future request for data to be stored in a portion of the cache, forming a number of probabilities. Responsive to receiving the future request for the data from a thread in the number of threads, the data is stored in the portion of the cache with a rank from a number of ranks, the rank being selected using that thread's probability from the number of probabilities.

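The probability-to-rank selection can be sketched in two steps: estimate a per-thread reuse probability, then map it to an insertion rank so that data from low-probability threads is positioned for earlier eviction. The hit-ratio estimator and the linear probability-to-rank mapping are illustrative assumptions.

```python
# Hedged sketch of ranked cache insertion driven by per-thread reuse
# probability. Assumption: NUM_RANKS replacement positions, where rank 0
# is evicted first and rank NUM_RANKS-1 is evicted last.
NUM_RANKS = 8

def estimate_probability(hits, accesses):
    # Per-thread probability of a future request, estimated here from
    # observed hits in this cache portion (illustrative estimator).
    return hits / accesses if accesses else 0.0

def rank_for(probability):
    # Select a rank from the number of ranks using the probability:
    # higher probability -> higher rank -> retained longer.
    return min(int(probability * NUM_RANKS), NUM_RANKS - 1)
```

With this mapping, a thread whose data rarely sees reuse inserts at rank 0 and is displaced quickly, while a high-reuse thread's data enters near the protected end of the rank order.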