Method and apparatus for repairing a link stack
    31.
    Invention Application
    Method and apparatus for repairing a link stack (Pending, Published)

    Publication No.: US20070204142A1

    Publication Date: 2007-08-30

    Application No.: US11363072

    Filing Date: 2006-02-27

    IPC Classification: G06F15/00

    Abstract: A link stack in a processor is repaired in response to a procedure return address misprediction error. In one example, a link stack for use in a processor is repaired by detecting an error in a procedure return address value retrieved from the link stack and skipping a procedure return address value currently queued for retrieval from the link stack responsive to detecting the error. In one or more embodiments, a link stack circuit comprises a link stack and a link stack pointer. The link stack is configured to store a plurality of procedure return address values. The link stack pointer is configured to skip a procedure return address value currently queued for retrieval from the link stack responsive to an error detected in a procedure return address value previously retrieved from the link stack.

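    A minimal Python sketch of the skip-on-error behavior described in the abstract; the class and method names are illustrative and not taken from the patent. When a previously popped return address proves wrong, the pointer steps past the entry currently queued for retrieval rather than handing it out.

    class LinkStack:
        """Toy circular link stack with a repair action for mispredictions."""

        def __init__(self, depth=8):
            self.entries = [0] * depth   # circular buffer of return addresses
            self.top = 0                 # link stack pointer

        def push(self, return_address):
            self.entries[self.top] = return_address
            self.top = (self.top + 1) % len(self.entries)

        def pop(self):
            self.top = (self.top - 1) % len(self.entries)
            return self.entries[self.top]

        def repair_on_mispredict(self):
            # A previously popped value was wrong, so the entry now queued at
            # the top is assumed stale as well: skip it instead of returning it.
            self.top = (self.top - 1) % len(self.entries)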

    Caching instructions for a multiple-state processor
    32.
    Invention Application
    Caching instructions for a multiple-state processor (Granted)

    Publication No.: US20060265573A1

    Publication Date: 2006-11-23

    Application No.: US11132748

    Filing Date: 2005-05-18

    IPC Classification: G06F9/30

    Abstract: A method and apparatus for caching instructions for a processor having multiple operating states. At least two of the operating states of the processor support different instruction sets. A block of instructions may be retrieved from memory while the processor is operating in one of the states. The instructions may be pre-decoded in accordance with said one of the states and loaded into cache. When one of the pre-decoded instructions in the cache is needed by the processor, the processor, or another entity, may determine whether the current state of the processor is the same as said one of the states used to pre-decode the instructions.

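    One way to model the state-tagged pre-decode cache the abstract describes is sketched below in Python, assuming two instruction-set states; the names and the dictionary-based cache are illustrative only. Each line remembers the state it was pre-decoded under, and a lookup from a different state is treated as a miss so the block can be re-fetched and re-decoded.

    class PreDecodedICache:
        def __init__(self):
            self.lines = {}   # block address -> (decode_state, pre-decoded block)

        def fill(self, addr, raw_block, current_state, predecode):
            # Pre-decode according to the state the processor was in when the
            # block was fetched, and remember that state with the cached line.
            self.lines[addr] = (current_state, predecode(raw_block, current_state))

        def lookup(self, addr, current_state):
            entry = self.lines.get(addr)
            if entry is None:
                return None                      # ordinary miss
            decode_state, block = entry
            if decode_state != current_state:
                # Pre-decoded under the other instruction set: unusable here,
                # so drop it and force a re-fetch in the current state.
                del self.lines[addr]
                return None
            return block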

    Handling cache miss in an instruction crossing a cache line boundary
    33.
    Invention Application
    Handling cache miss in an instruction crossing a cache line boundary (Granted)

    Publication No.: US20060265572A1

    Publication Date: 2006-11-23

    Application No.: US11132749

    Filing Date: 2005-05-18

    IPC Classification: G06F9/30

    Abstract: A fetch section of a processor comprises an instruction cache and a pipeline of several stages for obtaining instructions. Instructions may cross cache line boundaries. The pipeline stages process two addresses to recover a complete boundary-crossing instruction. During such processing, if the second piece of the instruction is not in the cache, the fetch with regard to the first line is invalidated and recycled. On this first pass, processing of the address for the second part of the instruction is treated as a pre-fetch request to load instruction data to the cache from higher level memory, without passing any of that data to the later stages of the processor. When the first line address passes through the fetch stages again, the second line address follows in the normal order, and both pieces of the instruction can be fetched from the cache and combined in the normal manner.

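    The two-pass handling sketched in the abstract can be illustrated with a toy Python model, names illustrative only: if the second cache line of a boundary-crossing instruction misses, no partial data is forwarded, the second-line access is demoted to a prefetch, and the fetch is retried once the missing line has been filled.

    def fetch_crossing_instruction(icache, line1_addr, line2_addr, prefetch):
        """icache: dict of line address -> line bytes; prefetch: fill callback."""
        lo = icache.get(line1_addr)
        hi = icache.get(line2_addr)
        if hi is None:
            # The second piece is not in the cache: invalidate this pass and
            # treat the second-line address as a prefetch, loading the line
            # from higher-level memory without forwarding data down the pipe.
            prefetch(line2_addr)
            return None              # fetch is recycled and retried later
        if lo is None:
            # First line missed: handle as an ordinary miss/fill and retry.
            prefetch(line1_addr)
            return None
        # Both pieces available: combine them in the normal manner.
        return lo + hi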

    Method and apparatus for managing a return stack
    34.
    Invention Application
    Method and apparatus for managing a return stack (Granted)

    Publication No.: US20060190711A1

    Publication Date: 2006-08-24

    Application No.: US11061975

    Filing Date: 2005-02-18

    IPC Classification: G06F9/00

    Abstract: A processor includes a return stack circuit used for predicting procedure return addresses for instruction pre-fetching, wherein a return stack controller determines the number of return levels associated with a given return instruction, and pops that number of return addresses from the return stack. Popping multiple return addresses from the return stack permits the processor to pre-fetch the return address of the original calling procedure in a chain of successive procedure calls. In one embodiment, the return stack controller reads the number of return levels from a value embedded in the return instruction. A complementary compiler calculates the return level values for given return instructions and embeds those values in them at compile-time. In another embodiment, the return stack circuit dynamically tracks the number of return levels by counting the procedure calls (branches) in a chain of successive procedure calls.

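    A short Python sketch of the multi-level pop described in the abstract; the names and the list-based stack are illustrative. The level count would come either from a value a cooperating compiler embeds in the return instruction or from dynamic counting of the calls in the chain.

    class ReturnStack:
        def __init__(self):
            self.addresses = []

        def call(self, return_address):
            self.addresses.append(return_address)

        def predict_return(self, levels=1):
            # Pop 'levels' return addresses and predict the outermost one, so a
            # chain of nested procedure calls returns straight to the original
            # caller without mispredicting each intermediate return.
            target = None
            for _ in range(min(levels, len(self.addresses))):
                target = self.addresses.pop()
            return target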

    Translation lookaside buffer (TLB) suppression for intra-page program counter relative or absolute address branch instructions
    35.
    Invention Application
    Translation lookaside buffer (TLB) suppression for intra-page program counter relative or absolute address branch instructions (Granted)

    Publication No.: US20060149981A1

    Publication Date: 2006-07-06

    Application No.: US11003772

    Filing Date: 2004-12-02

    IPC Classification: G06F1/32

    Abstract: In a pipelined processor, a pre-decoder in advance of an instruction cache calculates the branch target address (BTA) of PC-relative and absolute address branch instructions. The pre-decoder compares the BTA with the branch instruction address (BIA) to determine whether the target and instruction are in the same memory page. A branch target same page (BTSP) bit indicating this is written to the cache and associated with the instruction. When the branch is executed and evaluated as taken, a TLB access to check permission attributes for the BTA is suppressed if the BTA is in the same page as the BIA, as indicated by the BTSP bit. This reduces power consumption as the TLB access is suppressed and the BTA/BIA comparison is only performed once, when the branch instruction is first fetched. Additionally, the pre-decoder removes the BTA/BIA comparison from the BTA generation and selection critical path.

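    An illustrative Python sketch of the same-page check, assuming 4 KiB pages; the function names and the cached-attributes argument are hypothetical. The comparison is performed once, at pre-decode time, and the stored BTSP bit later decides whether a taken branch needs a TLB permission lookup at all.

    PAGE_MASK = ~0xFFF   # 4 KiB pages assumed for illustration

    def predecode_branch(branch_instr_addr, branch_target_addr):
        # One comparison at cache-fill time: does the branch target share the
        # page of the branch instruction itself? The result (BTSP bit) is
        # stored in the cache alongside the instruction.
        return (branch_instr_addr & PAGE_MASK) == (branch_target_addr & PAGE_MASK)

    def execute_taken_branch(btsp, target, tlb_lookup, cached_attributes):
        if btsp:
            # Same page as the branch itself: the permission attributes are
            # already known, so the TLB access is suppressed.
            return target, cached_attributes
        # Different page: perform the normal TLB access for permission checks.
        return target, tlb_lookup(target)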

    DEVICE AND PROCESS FOR DRYING MOVING WEBS OF MATERIAL
    39.
    Invention Application
    DEVICE AND PROCESS FOR DRYING MOVING WEBS OF MATERIAL (Pending, Published)

    Publication No.: US20110035958A1

    Publication Date: 2011-02-17

    Application No.: US12736260

    Filing Date: 2009-03-13

    IPC Classification: F26B9/00

    Abstract: A drying device for moving webs of material has at least two serial drying groups, each having a plurality of heatable drying cylinders. The drying cylinders of at least one first drying group are steam-heated in a first heat circuit. The drying cylinders of at least one second drying group have supply piping for a combustible energy source and discharge piping for waste heat originating from the combustion. A first heat exchanger is provided that couples the first heat circuit for the steam-heated drying cylinders to the discharge piping for the waste heat from the drying cylinders of the second drying group.
