    21. RESOURCE MANAGEMENT FOR NORTHBRIDGE USING TOKENS (invention application, granted)

    Publication No.: US20140189700A1

    Publication Date: 2014-07-03

    Application No.: US13727808

    Filing Date: 2012-12-27

    CPC classification number: G06F13/1642 G06F13/4027

    Abstract: A processor uses a token scheme to govern the maximum number of memory access requests that each of a set of processor cores can have pending at a northbridge of the processor. To implement the scheme, the northbridge issues a minimum number of tokens to each of the processor cores and keeps a number of tokens in reserve. In response to determining that a given processor core is generating a high level of memory access activity, the northbridge issues some of the reserve tokens to that core. The processor core returns the reserve tokens to the northbridge in response to determining that it is unlikely to continue generating a high number of memory access requests, so that the reserve tokens become available to issue to another processor core.
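
    The token flow described in the abstract can be sketched in a few lines. This is a hedged illustration only, not the patented implementation; the class and parameter names (Northbridge, base_tokens, reserve) are invented for the example.

    ```python
    class Northbridge:
        """Caps each core's in-flight memory access requests with tokens."""

        def __init__(self, num_cores, base_tokens=4, reserve=8):
            # Every core starts with the guaranteed minimum grant.
            self.tokens = {core: base_tokens for core in range(num_cores)}
            self.reserve = reserve  # shared pool held back by the northbridge

        def request_extra(self, core, wanted):
            """Grant up to `wanted` reserve tokens to a core showing high
            memory access activity; returns how many were actually granted."""
            granted = min(wanted, self.reserve)
            self.reserve -= granted
            self.tokens[core] += granted
            return granted

        def return_extra(self, core, count):
            """Core hands back reserve tokens it no longer expects to need,
            making them available to issue to other cores."""
            returned = min(count, self.tokens[core])
            self.tokens[core] -= returned
            self.reserve += returned
            return returned
    ```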


    Context partitioning of branch prediction structures

    Publication No.: US11734011B1

    Publication Date: 2023-08-22

    Application No.: US15968389

    Filing Date: 2018-05-01

    CPC classification number: G06F9/3806 G06F9/30058 G06F9/45558 G06F2009/45591

    Abstract: A processor core executes a first process. The first process is associated with a first context tag that is generated based on context information controlled by an operating system or hypervisor of the processing system. A branch prediction structure selectively provides the processor core with access to an entry in the branch prediction structure based on the first context tag and a second context tag associated with the entry. The branch prediction structure selectively provides the processor core with access to the entry in response to the first process executing a branch instruction. Tagging entries in the branch prediction structure reduces, or eliminates, aliasing between information used to predict branches taken by different processes at a branch instruction.
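
    The tag-matching behavior can be modeled with a small sketch: an entry is visible to a process only when the process's context tag matches the tag stored with the entry. All names here (TaggedBTB, install, lookup) are illustrative assumptions, not from the patent.

    ```python
    class TaggedBTB:
        """Branch prediction structure whose entries carry a context tag."""

        def __init__(self):
            self.entries = {}  # pc -> (context_tag, predicted_target)

        def install(self, pc, context_tag, target):
            self.entries[pc] = (context_tag, target)

        def lookup(self, pc, context_tag):
            """Return a predicted target only for a matching context tag,
            so predictions never alias across processes."""
            entry = self.entries.get(pc)
            if entry is None or entry[0] != context_tag:
                return None  # miss: no prediction leaks across contexts
            return entry[1]
    ```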

    Scheduler queue assignment burst mode

    Publication No.: US11334384B2

    Publication Date: 2022-05-17

    Application No.: US16709527

    Filing Date: 2019-12-10

    Abstract: Systems, apparatuses, and methods for implementing scheduler queue assignment burst mode are disclosed. A scheduler queue assignment unit receives a dispatch packet with a plurality of operations from a decode unit in each clock cycle. The scheduler queue assignment unit determines if the number of operations in the dispatch packet for any class of operations is greater than a corresponding threshold for dispatching to the scheduler queues in a single cycle. If the number of operations for a given class is greater than the corresponding threshold, and if a burst mode counter is less than a burst mode window threshold, the scheduler queue assignment unit dispatches the extra number of operations for the given class in a single cycle. By operating in burst mode for a given operation class during a small number of cycles, processor throughput can be increased without starving the processor of other operation classes.
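
    The burst-mode decision reduces to a small function: dispatch beyond the per-cycle threshold only while a burst counter stays under the window limit. This is a hedged sketch under assumed names; how the real unit resets its counter is not specified in the abstract and is left out.

    ```python
    def dispatch_count(ops_in_class, per_cycle_threshold,
                       burst_counter, burst_window_threshold):
        """Return (operations to dispatch this cycle, updated burst counter)."""
        over_threshold = ops_in_class > per_cycle_threshold
        if over_threshold and burst_counter < burst_window_threshold:
            # Burst mode: dispatch the extra operations in a single cycle.
            return ops_in_class, burst_counter + 1
        # Otherwise cap dispatch at the normal per-cycle threshold.
        return min(ops_in_class, per_cycle_threshold), burst_counter
    ```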

    Branch target buffer with early return prediction

    Publication No.: US11055098B2

    Publication Date: 2021-07-06

    Application No.: US16043293

    Filing Date: 2018-07-24

    Abstract: A processor includes a branch target buffer (BTB) having a plurality of entries whereby each entry corresponds to an associated instruction pointer value that is predicted to be a branch instruction. Each BTB entry stores a predicted branch target address for the branch instruction, and further stores information indicating whether the next branch in the block of instructions associated with the predicted branch target address is predicted to be a return instruction. In response to the BTB indicating that the next branch is predicted to be a return instruction, the processor initiates an access to a return stack that stores the return address for the predicted return instruction. By initiating access to the return stack responsive to the return prediction stored at the BTB, the processor reduces the delay in identifying the return address, thereby improving processing efficiency.
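
    The early-return flow can be illustrated as follows: each BTB entry carries a flag saying whether the next branch in the target block is predicted to be a return, and a set flag triggers an immediate return-stack access. The data layout and function names are assumptions for the sketch.

    ```python
    def predict(btb, return_stack, pc):
        """Look up `pc` in the BTB; if the entry flags the next branch in
        the target block as a return, consult the return stack now."""
        entry = btb.get(pc)
        if entry is None:
            return None  # BTB miss: no prediction
        target, next_is_return = entry
        if next_is_return and return_stack:
            # Early return-stack access: the predicted return address is
            # available without waiting for the return to be fetched.
            return return_stack[-1]
        return target
    ```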

    Using return address predictor to speed up control stack return address verification

    Publication No.: US10768937B2

    Publication Date: 2020-09-08

    Application No.: US16046949

    Filing Date: 2018-07-26

    Abstract: Overhead associated with verifying function return addresses to protect against security exploits is reduced by taking advantage of branch prediction mechanisms for predicting return addresses. More specifically, returning from a function includes popping a return address from a data stack. Well-known security exploits overwrite the return address on the data stack to hijack control flow. In some processors, a separate data structure referred to as a control stack is used to verify the data stack. When a return instruction is executed, the processor issues an exception if the return addresses on the control stack and the data stack are not identical. This overhead can be avoided by taking advantage of the return address stack, which is a data structure used by the branch predictor to predict return addresses. In most situations, if this prediction is correct, the above check does not need to occur, thus reducing the associated overhead.
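
    A minimal model of the optimization, under the assumption (from the abstract) that a correct return-address-stack prediction makes the explicit comparison redundant; the function and structure names are invented for illustration.

    ```python
    def execute_return(data_stack, control_stack, ras_prediction):
        """Pop a return address; run the control-stack comparison only when
        the RAS prediction did not already confirm the popped address."""
        addr = data_stack.pop()
        expected = control_stack.pop()  # shadow copy stays in sync either way
        if ras_prediction == addr:
            return addr  # correct prediction: skip the explicit comparison
        # Slow path: full control-stack verification.
        if expected != addr:
            raise RuntimeError("control-flow violation: return address mismatch")
        return addr
    ```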

    Bandwidth increase in branch prediction unit and level 1 instruction cache

    Publication No.: US10127044B2

    Publication Date: 2018-11-13

    Application No.: US14522831

    Filing Date: 2014-10-24

    Abstract: A processor, a device, and a non-transitory computer readable medium for performing branch prediction in a processor are presented. The processor includes a front end unit. The front end unit includes a level 1 branch target buffer (BTB), a BTB index predictor (BIP), and a level 1 hash perceptron (HP). The BTB is configured to predict a target address. The BIP is configured to generate a prediction based on a program counter and a global history, wherein the prediction includes a speculative partial target address, a global history value, a global history shift value, and a way prediction. The HP is configured to predict whether a branch instruction is taken or not taken.
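
    One way to picture a BTB index predictor is as a hash of the program counter with global branch history that yields a speculative index and a way prediction one cycle early. The XOR hash and the field widths below are assumptions for the sketch, not details from the patent.

    ```python
    def bip_predict(pc, global_history, index_bits=10, ways=4):
        """Hash pc with global history into a speculative BTB index and a
        predicted way (illustrative hash, not the patented one)."""
        h = (pc >> 2) ^ global_history          # drop byte-offset bits, mix history
        index = h & ((1 << index_bits) - 1)     # speculative partial index
        way = (h >> index_bits) % ways          # way prediction
        return index, way
    ```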

    Resource management for northbridge using tokens

    Publication No.: US09697146B2

    Publication Date: 2017-07-04

    Application No.: US13727808

    Filing Date: 2012-12-27

    CPC classification number: G06F13/1642 G06F13/4027

    Abstract: A processor uses a token scheme to govern the maximum number of memory access requests that each of a set of processor cores can have pending at a northbridge of the processor. To implement the scheme, the northbridge issues a minimum number of tokens to each of the processor cores and keeps a number of tokens in reserve. In response to determining that a given processor core is generating a high level of memory access activity, the northbridge issues some of the reserve tokens to that core. The processor core returns the reserve tokens to the northbridge in response to determining that it is unlikely to continue generating a high number of memory access requests, so that the reserve tokens become available to issue to another processor core.

    29. BANDWIDTH INCREASE IN BRANCH PREDICTION UNIT AND LEVEL 1 INSTRUCTION CACHE (invention application, under examination and published)

    Publication No.: US20150121050A1

    Publication Date: 2015-04-30

    Application No.: US14522831

    Filing Date: 2014-10-24

    CPC classification number: G06F9/3806 G06F9/30058 G06F9/3848

    Abstract: A processor, a device, and a non-transitory computer readable medium for performing branch prediction in a processor are presented. The processor includes a front end unit. The front end unit includes a level 1 branch target buffer (BTB), a BTB index predictor (BIP), and a level 1 hash perceptron (HP). The BTB is configured to predict a target address. The BIP is configured to generate a prediction based on a program counter and a global history, wherein the prediction includes a speculative partial target address, a global history value, a global history shift value, and a way prediction. The HP is configured to predict whether a branch instruction is taken or not taken.


    30. DYNAMIC EVALUATION AND RECONFIGURATION OF A DATA PREFETCHER (invention application, granted)

    Publication No.: US20140129780A1

    Publication Date: 2014-05-08

    Application No.: US13671801

    Filing Date: 2012-11-08

    Abstract: Methods and systems for prefetching data for a processor are provided. A system is configured for, and a method includes, selecting one of a first prefetching control logic and a second prefetching control logic of the processor as a candidate feature; capturing a performance metric of the processor over an inactive sample period, when the candidate feature is inactive, and over an active sample period, when the candidate feature is active; comparing the performance metrics for the active and inactive sample periods; and setting the status of the candidate feature to enabled when the active-period metric indicates improvement over the inactive-period metric, or to disabled when the inactive-period metric indicates improvement over the active-period metric.
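
    The evaluation loop amounts to a simple A/B sample: measure a metric with the candidate feature off, then on, and keep whichever setting measured better. In this hedged sketch, measure_metric stands in for a hardware performance-counter read and a higher value is assumed to be better; both callables are invented for the example.

    ```python
    def evaluate_feature(measure_metric, set_feature_active):
        """Sample the metric over an inactive then an active period and
        leave the candidate feature in the better-performing state."""
        set_feature_active(False)
        inactive_score = measure_metric()   # inactive sample period
        set_feature_active(True)
        active_score = measure_metric()     # active sample period
        enabled = active_score > inactive_score
        set_feature_active(enabled)
        return enabled
    ```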

