Establishing a branch target instruction cache (BTIC) entry for subroutine returns to reduce execution pipeline bubbles, and related systems, methods, and computer-readable media
    11.
    Granted invention patent (in force)

    Publication number: US09317293B2

    Publication date: 2016-04-19

    Application number: US13792335

    Filing date: 2013-03-11

    CPC classification number: G06F9/3808 G06F9/30054

    Abstract: Establishing a branch target instruction cache (BTIC) entry for subroutine returns to reduce pipeline bubbles, and related systems, methods, and computer-readable media are disclosed. In one embodiment, a method of establishing a BTIC entry includes detecting a subroutine call in an execution pipeline. In response, at least one instruction fetched sequential to the subroutine call is written as a branch target instruction in a BTIC entry for a subroutine return. A next instruction fetch address is calculated, and is written into a next instruction fetch address field in the BTIC entry. In this manner, the BTIC may provide correct branch target instruction and next instruction fetch address data for the subroutine return, even if the subroutine return is encountered for the first time or the subroutine is called from different calling locations.
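
    The abstract above describes filling a BTIC entry at call time rather than waiting for the first return. The sketch below is a purely illustrative software model of that idea, assuming a fixed 4-byte instruction size; the names BTICEntry and on_subroutine_call are hypothetical and not taken from the patent.

```python
# Minimal software model of establishing a BTIC entry when a subroutine call
# is detected, so the entry is already correct for the eventual return.
from dataclasses import dataclass

INSTR_SIZE = 4  # assumption: fixed 4-byte instructions


@dataclass
class BTICEntry:
    branch_target_instruction: str  # instruction fetched sequential to the call
    next_fetch_address: int         # where to fetch after the cached instruction


def on_subroutine_call(btic, call_address, fetched_instructions):
    """On detecting a subroutine call, write the instruction fetched sequential
    to the call into a BTIC entry for the subroutine return, and record the
    calculated next instruction fetch address."""
    return_address = call_address + INSTR_SIZE
    btic[return_address] = BTICEntry(
        branch_target_instruction=fetched_instructions[return_address],
        next_fetch_address=return_address + INSTR_SIZE,
    )


# Example: a call at 0x100; the instruction at 0x104 is cached for the return.
program = {0x100: "bl subroutine", 0x104: "add r0, r0, #1", 0x108: "str r0, [r1]"}
btic = {}
on_subroutine_call(btic, 0x100, program)
print(btic[0x104])  # correct target data even before the return is ever executed
```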

    Methods and apparatus for improving performance of semaphore management sequences across a coherent bus
    12.
    Granted invention patent (in force)

    Publication number: US09292442B2

    Publication date: 2016-03-22

    Application number: US13933337

    Filing date: 2013-07-02

    CPC classification number: G06F12/0808 G06F12/0811 G06F12/0831 G06F15/173

    Abstract: Techniques are described for a multi-processor having two or more processors that increase the opportunity for a load-exclusive command to take a cache line in the Exclusive state, which results in increased performance when a store-exclusive is executed. A new bus operation, read prefer exclusive, is used as a hint to other caches that a requesting master is likely to store to the cache line and that, if possible, the other cache should give the line up. In most cases, this results in the other master giving the line up and the requesting master taking the line Exclusive. In most cases, two or more processors are not performing a semaphore management sequence to the same address at the same time. Thus, a requesting master's load-exclusive is able to take a cache line in the Exclusive state an increased number of times.
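
    As a rough illustration of the read-prefer-exclusive hint described above, the sketch below models two caches with simplified MESI-style states. The class and method names, and the policy of always giving the line up on a snoop, are assumptions made for this example rather than the patent's protocol.

```python
# Simplified model: a snooped "read prefer exclusive" makes another cache
# relinquish the line so the requester can take it in the Exclusive state.
class Cache:
    def __init__(self, name):
        self.name = name
        self.lines = {}  # address -> MESI state: 'M', 'E', 'S', or 'I'

    def snoop_read_prefer_exclusive(self, address):
        """Another master hints it is likely to store to this line; in this
        simplified model we always give the line up."""
        state = self.lines.get(address, 'I')
        if state in ('M', 'E', 'S'):
            self.lines[address] = 'I'   # relinquish the line
        return state == 'M'             # a dirty line would need a writeback (not modeled)


def load_exclusive(requester, others, address):
    """Load-exclusive using the hint: once the other caches give the line up,
    the requester takes it Exclusive, so a later store-exclusive can complete
    without another bus transaction."""
    for cache in others:
        cache.snoop_read_prefer_exclusive(address)
    requester.lines[address] = 'E'


cpu0, cpu1 = Cache("cpu0"), Cache("cpu1")
cpu1.lines[0x40] = 'S'                  # cpu1 holds the line Shared
load_exclusive(cpu0, [cpu1], 0x40)
print(cpu0.lines[0x40], cpu1.lines.get(0x40))  # E I
```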

    Fusing conditional write instructions having opposite conditions in instruction processing circuits, and related processor systems, methods, and computer-readable media
    13.
    Granted invention patent (in force)

    Publication number: US09195466B2

    Publication date: 2015-11-24

    Application number: US13676146

    Filing date: 2012-11-14

    CPC classification number: G06F9/3867 G06F9/30043 G06F9/30072 G06F9/3017

    Abstract: Fusing conditional write instructions having opposite conditions in instruction processing circuits and related processor systems, methods, and computer-readable media are disclosed. In one embodiment, a first conditional write instruction writing a first value to a target register based on evaluating a first condition is detected by an instruction processing circuit. The circuit also detects a second conditional write instruction writing a second value to the target register based on evaluating a second condition that is a logical opposite of the first condition. Either the first condition or the second condition is selected as a fused instruction condition, and corresponding values are selected as if-true and if-false values. A fused instruction is generated for selectively writing the if-true value to the target register if the fused instruction condition evaluates to true, and selectively writing the if-false value to the target register if the fused instruction condition evaluates to false.
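
    The sketch below illustrates the kind of fusion described above, using a hypothetical tuple encoding for decoded instructions; it is a software analogy under those assumptions, not the patent's mechanism.

```python
# Fuse two conditional writes to the same register under opposite conditions
# into one select-style instruction carrying if-true and if-false values.
OPPOSITE = {"eq": "ne", "ne": "eq", "lt": "ge", "ge": "lt"}


def try_fuse(instr1, instr2):
    """Instruction format (assumed): ('write_if', condition, target_reg, value).
    If both write the same register under logically opposite conditions, return
    a fused ('select', condition, target_reg, if_true_value, if_false_value)."""
    op1, cond1, reg1, val1 = instr1
    op2, cond2, reg2, val2 = instr2
    if op1 == op2 == "write_if" and reg1 == reg2 and OPPOSITE.get(cond1) == cond2:
        # The first condition becomes the fused condition; its value is the
        # if-true value and the other instruction's value is the if-false value.
        return ("select", cond1, reg1, val1, val2)
    return None


print(try_fuse(("write_if", "eq", "r0", 1), ("write_if", "ne", "r0", 0)))
# ('select', 'eq', 'r0', 1, 0)
```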

    PREDICTING MEMORY INSTRUCTION PUNTS IN A COMPUTER PROCESSOR USING A PUNT AVOIDANCE TABLE (PAT)
    14.
    Invention patent application (pending, published)

    Publication number: US20170046167A1

    Publication date: 2017-02-16

    Application number: US14863612

    Filing date: 2015-09-24

    Abstract: Predicting memory instruction punts in a computer processor using a punt avoidance table (PAT) are disclosed. In one aspect, an instruction processing circuit accesses a PAT containing entries each comprising an address of a memory instruction. Upon detecting a memory instruction in an instruction stream, the instruction processing circuit determines whether the PAT contains an entry having an address of the memory instruction. If so, the instruction processing circuit prevents the detected memory instruction from taking effect before at least one pending memory instruction older than the detected memory instruction, to preempt a memory instruction punt. In some aspects, the instruction processing circuit may determine, upon execution of a pending memory instruction, whether a hazard associated with the detected memory instruction has occurred. If so, an entry for the detected memory instruction is generated in the PAT.
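
    A minimal software analogy of the punt avoidance table described above is sketched below; the PuntAvoidanceTable class, its set-based storage, and the hold/issue policy are assumptions made for illustration.

```python
# Predict memory-instruction punts: if an instruction's address is in the PAT,
# hold it behind older pending memory instructions to preempt a punt.
class PuntAvoidanceTable:
    def __init__(self):
        self.addresses = set()   # addresses of memory instructions that punted before

    def predicts_punt(self, instruction_address):
        return instruction_address in self.addresses

    def record_hazard(self, instruction_address):
        # A hazard occurred when this instruction took effect ahead of an older
        # memory instruction, so remember it to preempt future punts.
        self.addresses.add(instruction_address)


def issue_memory_instruction(pat, instruction_address, older_pending):
    """If the PAT contains an entry for this instruction and older memory
    instructions are still pending, hold it instead of letting it take effect."""
    if pat.predicts_punt(instruction_address) and older_pending:
        return "hold"      # wait for the older memory instructions first
    return "issue"


pat = PuntAvoidanceTable()
pat.record_hazard(0x2000)                                        # punted once before
print(issue_memory_instruction(pat, 0x2000, older_pending=True))  # hold
print(issue_memory_instruction(pat, 0x2004, older_pending=True))  # issue
```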

    Fusing immediate value, write-based instructions in instruction processing circuits, and related processor systems, methods, and computer-readable media
    15.
    Granted invention patent (in force)

    Publication number: US09477476B2

    Publication date: 2016-10-25

    Application number: US13686229

    Filing date: 2012-11-27

    CPC classification number: G06F9/3017 G06F9/30167

    Abstract: Fusing immediate value, write-based instructions in instruction processing circuits, and related processor systems, methods, and computer-readable media are disclosed. In one embodiment, a first instruction indicating an operation writing an immediate value to a register is detected by an instruction processing circuit. The circuit also detects at least one subsequent instruction indicating an operation that overwrites at least one first portion of the register while maintaining a value of a second portion of the register. The at least one subsequent instruction is converted into (or replaced with) one or more fused instructions, which indicate an operation writing the at least one first portion and the second portion of the register. In this manner, conversion of multiple instructions for generating a constant into the fused instruction(s) removes the potential for a read-after-write hazard and associated consequences caused by dependencies between certain instructions, while reducing the number of clock cycles required to process the instructions.
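
    The sketch below shows the flavor of this fusion for a hypothetical MOVW/MOVT-style pair that builds a 32-bit constant; the instruction encodings and names are illustrative assumptions, not the patent's.

```python
# Fuse an immediate write with a later instruction that overwrites part of the
# same register into a single full-width constant write.
def try_fuse_immediate(first, second):
    """Assumed formats:
       first:  ('mov_low16',  reg, imm16)  -- writes the low half, clears the high half
       second: ('mov_high16', reg, imm16)  -- writes the high half, keeps the low half
       If both target the same register, emit one full-width constant write and
       remove the read-after-write dependency between the two instructions."""
    if first[0] == "mov_low16" and second[0] == "mov_high16" and first[1] == second[1]:
        full_constant = (second[2] << 16) | (first[2] & 0xFFFF)
        return ("mov_imm32", first[1], full_constant)
    return None


fused = try_fuse_immediate(("mov_low16", "r3", 0x5678), ("mov_high16", "r3", 0x1234))
print(fused[0], fused[1], hex(fused[2]))   # mov_imm32 r3 0x12345678
```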

    METHODS AND APPARATUS FOR IMPROVING PERFORMANCE OF SEMAPHORE MANAGEMENT SEQUENCES ACROSS A COHERENT BUS
    16.
    Invention patent application (in force)

    Publication number: US20140310468A1

    Publication date: 2014-10-16

    Application number: US13933337

    Filing date: 2013-07-02

    CPC classification number: G06F12/0808 G06F12/0811 G06F12/0831 G06F15/173

    Abstract: Techniques are described for a multi-processor having two or more processors that increase the opportunity for a load-exclusive command to take a cache line in the Exclusive state, which results in increased performance when a store-exclusive is executed. A new bus operation, read prefer exclusive, is used as a hint to other caches that a requesting master is likely to store to the cache line and that, if possible, the other cache should give the line up. In most cases, this results in the other master giving the line up and the requesting master taking the line Exclusive. In most cases, two or more processors are not performing a semaphore management sequence to the same address at the same time. Thus, a requesting master's load-exclusive is able to take a cache line in the Exclusive state an increased number of times.

    OPTIMIZING PERFORMANCE FOR CONTEXT-DEPENDENT INSTRUCTIONS
    17.
    Invention patent application (in force)

    Publication number: US20140281405A1

    Publication date: 2014-09-18

    Application number: US13841576

    Filing date: 2013-03-15

    CPC classification number: G06F9/30098 G06F9/30189 G06F9/3842 G06F9/3863

    Abstract: A processor includes a queue for storing instructions processed within the context of a current value of a register field, where, for some embodiments, an instruction is defined or undefined depending upon the value of the register field at the time of processing. After a write instruction (an instruction that writes to the register field) executes, the queue is searched for any entries containing instructions that depend upon the executed write instruction. Each such entry stores the value of the register field at the time the instruction in the entry was processed. If such an entry is found in the queue and its stored value of the register field does not match the value that the write instruction wrote to the register field, then the processor flushes the pipeline and restarts at a state that correctly executes the instruction.
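
    As a rough software model of the queue check described above, the sketch below compares each queued entry's recorded register-field value against the newly written value; the names and the flat-list queue are assumptions made for illustration.

```python
# Each queued context-dependent instruction records the register-field value
# under which it was processed; a mismatch after a write forces a flush.
from collections import namedtuple

QueueEntry = namedtuple("QueueEntry", ["instruction", "register_value_at_decode"])


def on_register_field_write(queue, new_value):
    """After an instruction writes the context register field, check every
    queued entry: if it was processed under a different value than the one just
    written, flush the pipeline and restart so it executes under the correct
    context."""
    for entry in queue:
        if entry.register_value_at_decode != new_value:
            return "flush_and_restart"
    return "continue"


queue = [QueueEntry("ctx_dependent_op", register_value_at_decode=0)]
print(on_register_field_write(queue, new_value=0))  # continue
print(on_register_field_write(queue, new_value=1))  # flush_and_restart
```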

    METHOD AND APPARATUS FOR FORWARDING LITERAL GENERATED DATA TO DEPENDENT INSTRUCTIONS MORE EFFICIENTLY USING A CONSTANT CACHE
    18.
    Invention patent application (pending, published)

    Publication number: US20140281391A1

    Publication date: 2014-09-18

    Application number: US13827867

    Filing date: 2013-03-14

    Abstract: A processor stores a constant value (immediate or literal) in a cache upon decoding a move-immediate instruction in which the immediate is to be moved (copied or written) to an architected register. The constant value is stored in an entry in the cache. Each entry in the cache includes a field to indicate whether its stored constant value is valid, and a field to associate the entry with an architected register. Once a constant value is stored in the cache, it is immediately available for forwarding to a processor pipeline where a decoded instruction may need the constant value as an operand.
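
    The sketch below is an illustrative model of such a constant cache, with a valid indication and an architected-register association per entry; the ConstantCache class and its methods are hypothetical names, not the patent's.

```python
# Capture the immediate of a move-immediate instruction at decode time and make
# it available for forwarding to dependent instructions in the pipeline.
class ConstantCache:
    def __init__(self):
        self.entries = {}   # architected register name -> (valid, constant value)

    def on_move_immediate(self, register, immediate):
        # Decoding "mov register, #immediate": store the constant and associate
        # the entry with the architected register being written.
        self.entries[register] = (True, immediate)

    def forward_operand(self, register):
        """If a valid constant is cached for this register, it can be forwarded
        to a dependent instruction without waiting for the register write."""
        valid, value = self.entries.get(register, (False, None))
        return value if valid else None


cc = ConstantCache()
cc.on_move_immediate("r2", 42)       # mov r2, #42
print(cc.forward_operand("r2"))      # 42, immediately available as an operand
```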

    METHODS AND APPARATUS FOR MANAGING PAGE CROSSING INSTRUCTIONS WITH DIFFERENT CACHEABILITY
    19.
    Invention patent application (in force)

    Publication number: US20140089598A1

    Publication date: 2014-03-27

    Application number: US13626916

    Filing date: 2012-09-26

    Abstract: An instruction that crosses a cache line, having a first portion that is cacheable and a second portion that is from a page that is non-cacheable, is prevented from executing from the instruction cache. An attribute associated with the non-cacheable second portion is tracked separately from the attributes of the rest of the instructions in the cache line. If the page crossing instruction is reached for execution, the page crossing instruction and the instructions following it are flushed, and a non-cacheable request is made to memory for at least the second portion. Once the second portion is received, the whole page crossing instruction is reconstructed from the first portion saved in the previous fetch group. The page crossing instruction or portion thereof is returned with the proper attribute for a non-cached fetched instruction, and the reconstructed instruction can be executed without being cached.
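
    The sketch below is a simplified software analogy of the flow described above: when the second portion of a page-crossing instruction is non-cacheable, it is re-fetched with a non-cacheable request and the whole instruction is reconstructed from the saved first portion. The function signature and byte-level representation are assumptions made for illustration.

```python
# Handle an instruction whose first portion is cacheable but whose second
# portion crosses into a non-cacheable page.
def handle_page_crossing(first_portion, second_portion_addr, second_is_cacheable,
                         fetch_noncacheable):
    """If the second portion comes from a non-cacheable page, do not execute
    the instruction from the instruction cache: re-fetch at least the second
    portion with a non-cacheable request, then reconstruct the whole
    instruction from the first portion saved from the previous fetch group."""
    if second_is_cacheable:
        return first_portion                                    # normal cached path
    second_portion = fetch_noncacheable(second_portion_addr)    # flush + re-fetch
    return first_portion + second_portion                       # rebuild, do not cache


memory = {0x1000: b"\x34\x12"}          # second half lives in a non-cacheable page
result = handle_page_crossing(b"\x78\x56", 0x1000, second_is_cacheable=False,
                              fetch_noncacheable=lambda addr: memory[addr])
print(result.hex())                     # 78563412 -- reconstructed instruction bytes
```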

    FUSING FLAG-PRODUCING AND FLAG-CONSUMING INSTRUCTIONS IN INSTRUCTION PROCESSING CIRCUITS, AND RELATED PROCESSOR SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA
    20.
    Invention patent application (pending, published)

    Publication number: US20140047221A1

    Publication date: 2014-02-13

    Application number: US13788008

    Filing date: 2013-03-07

    CPC classification number: G06F9/30181 G06F9/30072

    Abstract: Fusing flag-producing and flag-consuming instructions in instruction processing circuits and related processor systems, methods, and computer-readable media are disclosed. In one embodiment, a flag-producing instruction indicating a first operation generating a first flag result is detected in an instruction stream by an instruction processing circuit. The instruction processing circuit also detects a flag-consuming instruction in the instruction stream indicating a second operation consuming the first flag result as an input. The instruction processing circuit generates a fused instruction indicating the first operation generating the first flag result and indicating the second operation consuming the first flag result as the input. In this manner, as a non-limiting example, the fused instruction eliminates a potential for a read-after-write hazard between the flag-producing instruction and the flag-consuming instruction.
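
    As an illustration of this kind of fusion, the sketch below pairs a hypothetical compare (flag-producing) instruction with the conditional branch (flag-consuming) instruction that follows it into a single compare-and-branch; the tuple encodings are assumptions, not the patent's.

```python
# Fuse a flag producer with the flag consumer that reads its result, so the
# flag value never has to be written and then re-read.
def try_fuse_flags(producer, consumer):
    """Assumed formats:
       producer: ('cmp', reg, operand)        -- produces condition flags
       consumer: ('branch_if', cond, target)  -- consumes those flags
       Fusing them removes the read-after-write hazard on the flag result."""
    if producer[0] == "cmp" and consumer[0] == "branch_if":
        _, reg, operand = producer
        _, cond, target = consumer
        return ("cmp_branch", reg, operand, cond, target)
    return None


print(try_fuse_flags(("cmp", "r1", 0), ("branch_if", "eq", "label_done")))
# ('cmp_branch', 'r1', 0, 'eq', 'label_done')
```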
