Local stall control method and structure in a microprocessor
    1.
    Invention grant
    Local stall control method and structure in a microprocessor (In force)

    Publication No.: US06279100B1

    Publication date: 2001-08-21

    Application No.: US09204535

    Filing date: 1998-12-03

    IPC class: G06F 9/38

    Abstract: A processor implements a local stall functionality in which small, independent circuit units are stalled locally: the condition causing a stall is first detected locally, then propagated to other small independent circuit units. Stall conditions for a functional unit are detected locally with reduced logic circuitry, and without waiting for condition information transmitted over long wires from other functional units. Local stall logic circuits are distributed over diverse areas of an integrated circuit so that stall conditions are detected locally. A local stall is expanded into a global stall by propagation to logic circuits beyond the local region in subsequent cycles. Local detection of stall conditions and local stalling eliminates many critical paths in the processor.

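    The local-then-global stall propagation described in the abstract can be illustrated with a small cycle-level sketch. The unit names and topology below are hypothetical, not the patented circuit:

```python
# Cycle-level sketch of local stall detection that widens into a global
# stall over subsequent cycles. Unit names and wiring are illustrative.

class Unit:
    def __init__(self, name, neighbors=()):
        self.name = name
        self.neighbors = list(neighbors)  # units the stall propagates to
        self.stalled = False

def propagate_stall(origin, cycles):
    """Stall `origin` locally, then widen the stall one hop per cycle."""
    origin.stalled = True            # cycle 0: local detection, local stall
    frontier = {origin}
    history = [{origin.name}]
    for _ in range(cycles):
        frontier = {n for u in frontier for n in u.neighbors}
        for u in frontier:
            u.stalled = True         # later cycles: stall reaches farther units
        history.append({u.name for u in frontier})
    return history

# Example: decode detects a stall condition; fetch and execute learn of it
# one cycle later, and the load/store unit the cycle after that.
lsu = Unit("lsu")
fetch = Unit("fetch")
exe = Unit("exe", [lsu])
decode = Unit("decode", [fetch, exe])
history = propagate_stall(decode, cycles=2)
```

    The point the sketch captures is that no unit waits on a long cross-chip wire in cycle 0; remote units only see the stall in later cycles.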

    Sending both a load instruction and retrieved data from a load buffer to an annex prior to forwarding the load data to register file
    2.
    Invention grant
    Sending both a load instruction and retrieved data from a load buffer to an annex prior to forwarding the load data to register file (In force)

    Publication No.: US06542988B1

    Publication date: 2003-04-01

    Application No.: US09410842

    Filing date: 1999-10-01

    IPC class: G06F 9/38

    Abstract: A processor performs precise trap handling for out-of-order and speculative load instructions. It keeps track of the age of load instructions in a shared scheme that includes a load buffer and a load annex. All precise exceptions are detected in a T phase of a load pipeline. Data and control information concerning load operations that hit in the data cache are staged in a load annex during the A1, A2, A3, and T pipeline stages until all exceptions in the same or an earlier instruction packet are detected. Data and control information from all other load instructions is staged in the load annex after the load data is retrieved; before the load data is retrieved, the load instruction is kept in a load buffer. If an exception occurs, any load in the same instruction packet as the instruction causing the exception is canceled. Any load instructions that are "younger" than the instruction that caused the exception are also canceled. The age of load instructions is determined by tracking the pipe stages of the instruction. When a trap occurs, any load instruction with a non-zero age indicator is canceled.

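    The age-based cancellation rule ("any load with a non-zero age indicator is canceled") can be sketched as follows. The stage names A1/A2/A3/T come from the abstract; the data structures and starting age are illustrative assumptions:

```python
# Sketch of age-based cancellation of in-flight loads on a trap: loads
# staged in an "annex" carry an age indicator that counts down as the
# pipeline advances; a trap cancels every load whose age is non-zero
# (i.e. still younger than the committed point).

STAGES = ["A1", "A2", "A3", "T"]

class AnnexEntry:
    def __init__(self, dest_reg):
        self.dest_reg = dest_reg
        self.age = len(STAGES)       # decremented as the load moves down

def advance(annex):
    """Advance the pipeline one stage: every staged load gets older."""
    for e in annex:
        e.age -= 1

def trap(annex):
    """On a trap, keep only loads whose age indicator reached zero."""
    return [e for e in annex if e.age == 0]

annex = [AnnexEntry("r1"), AnnexEntry("r2")]
advance(annex); advance(annex); advance(annex)
annex.append(AnnexEntry("r3"))       # a younger load enters the annex
advance(annex)
survivors = trap(annex)              # r3 is still in flight and is canceled
```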

    Logical power throttling of instruction decode rate for successive time periods
    3.
    Invention grant
    Logical power throttling of instruction decode rate for successive time periods (In force)

    Publication No.: US08745419B2

    Publication date: 2014-06-03

    Application No.: US13529761

    Filing date: 2012-06-21

    IPC class: G06F 1/32; G06F 9/30

    Abstract: A processor includes a device providing a throttling power output signal, which is used to determine when to logically throttle the power consumed by the processor. At least one core in the processor includes a pipeline having a decode pipe, and a logical power throttling unit coupled to the device to receive the output signal and coupled to the decode pipe. When the logical power throttling unit receives a throttling power output signal satisfying a predetermined criterion, it causes the decode pipe to reduce the average number of instructions decoded per processor cycle, without physically changing the processor cycle or any processor supply voltage.

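    The throttling idea — lower the decode rate for a time period when the power signal satisfies a criterion, without touching clock or voltage — can be sketched in a few lines. The widths and threshold are made-up values, not figures from the patent:

```python
# Sketch of logical power throttling: when the throttle signal meets the
# criterion, the decode pipe accepts fewer instructions per cycle for the
# next time period, lowering the average decode rate while the clock
# frequency and supply voltages stay unchanged.

FULL_WIDTH = 4       # instructions decoded per cycle, unthrottled
THROTTLED_WIDTH = 1  # reduced decode rate while throttling

def decode_width(power_signal, threshold):
    """Decode width for one time period, given that period's signal."""
    return THROTTLED_WIDTH if power_signal >= threshold else FULL_WIDTH

def run(period_signals, threshold):
    """Total instructions decoded over successive time periods."""
    return sum(decode_width(s, threshold) for s in period_signals)

# Two "hot" periods out of four get throttled to the reduced width.
decoded = run([0.2, 0.9, 0.95, 0.3], threshold=0.8)
```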

    METHOD AND SYSTEM FOR EFFICIENT AND EXHAUSTIVE URL CATEGORIZATION
    4.
    Invention application
    METHOD AND SYSTEM FOR EFFICIENT AND EXHAUSTIVE URL CATEGORIZATION (In force)

    Publication No.: US20120271941A1

    Publication date: 2012-10-25

    Application No.: US13515079

    Filing date: 2010-12-08

    IPC class: G06F 15/173

    CPC class: H04L 67/22; G06F 17/30876

    Abstract: The present method and system relate to categorizing URLs (Uniform Resource Locators) of web pages accessed by multiple users over an IP (Internet Protocol) based data network. The method and system collect real-time data from IP data traffic occurring on the network and extract parameters from the collected data, the parameters including the URL of a web page. The URL is processed by a rule-based categorization engine to associate a matching category with the URL. When no matching category is found, the URL is transferred to a semantic-based categorization engine, which associates a matching category with the transferred URL based on a semantic analysis of the textual content extracted from the web page associated with the URL.

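    The two-stage pipeline — a fast rule engine first, a semantic engine only as a fallback — can be sketched like this. The rules, keyword lists, and URLs are made-up examples, and the keyword matcher stands in for whatever semantic analysis the system actually uses:

```python
# Two-stage URL categorization sketch: try cheap host-based rules first,
# and only fetch and analyze page text for URLs the rules cannot classify.

RULES = {"news.example.com": "news", "shop.example.com": "shopping"}
KEYWORDS = {"sport": ["match", "league"], "finance": ["stock", "bond"]}

def rule_categorize(url):
    """Stage 1: look the URL's host up in a rule table."""
    host = url.split("/")[2]
    return RULES.get(host)

def semantic_categorize(page_text):
    """Stage 2 (fallback): categorize from the page's textual content."""
    words = page_text.lower().split()
    for category, kws in KEYWORDS.items():
        if any(k in words for k in kws):
            return category
    return "unknown"

def categorize(url, fetch_text):
    cat = rule_categorize(url)
    return cat if cat else semantic_categorize(fetch_text(url))

# Unknown host falls through to the semantic stage.
cat = categorize("http://blog.example.com/x", lambda u: "stock prices rose")
```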

    Clotheslines
    5.
    Invention grant
    Clotheslines (In force)

    Publication No.: US07878342B1

    Publication date: 2011-02-01

    Application No.: US12218641

    Filing date: 2008-07-17

    IPC class: D06F 53/00

    CPC class: D06F 53/00; D06F 53/04

    Abstract: A clothesline system comprises at least two separate cables that are independently tensionable through separate cable tensioning devices. The tensioning devices are attached together to provide common, parallel movement of the separate cables, though the cables pass around separate pulleys at both ends of the system. The two separate cables add strength to the system. The separate cables are preferably wound with left and right windings to prevent unraveling of the cable braid. Because the two loops of cable are separate, assembly of the system is less complex.


    METHOD AND STRUCTURE FOR SOLVING THE EVIL-TWIN PROBLEM
    6.
    Invention application
    METHOD AND STRUCTURE FOR SOLVING THE EVIL-TWIN PROBLEM (In force)

    Publication No.: US20100268919A1

    Publication date: 2010-10-21

    Application No.: US12426550

    Filing date: 2009-04-20

    IPC class: G06F 9/30

    Abstract: A register file in a processor includes a first plurality of registers of a first size, n bits. A decoder uses a mapping that divides the register file into a second plurality M of registers having a second size. Each register of the second size is assigned a different name in a continuous name space. Each register of the second size includes a plurality N of registers of the first size, n bits, and each register in that plurality N is assigned the same name as the second-size register that contains it. State information is maintained in the register file for each n-bit register. The dependence of an instruction on other instructions is detected through the continuous name space, and the state information allows the processor to determine when the information in any portion, or all, of a register is valid.

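    The naming and validity scheme in the abstract — one file viewed both as wide registers and as smaller registers in a continuous name space, with per-slice state bits — can be sketched as follows. The sizes and method names are illustrative assumptions:

```python
# Sketch of the continuous name space over a register file that is viewed
# at two granularities: each wide register contains N_SLICES narrow ones,
# and per-narrow-register valid bits let the processor tell when any part
# of a wide register (or all of it) holds valid data.

N_SLICES = 2   # narrow registers per wide register

class RegisterFile:
    def __init__(self, wide_count):
        self.wide_count = wide_count
        self.valid = {}                      # narrow name -> written yet?

    def narrow_name(self, wide, slice_):
        """Continuous name space: names 0..wide_count*N_SLICES-1."""
        return wide * N_SLICES + slice_

    def write_narrow(self, name):
        self.valid[name] = True

    def wide_is_valid(self, wide):
        """A wide read is valid only when every slice has been written."""
        return all(self.valid.get(self.narrow_name(wide, s), False)
                   for s in range(N_SLICES))

rf = RegisterFile(wide_count=16)
rf.write_narrow(rf.narrow_name(3, 0))        # write half of wide register 3
half_valid = rf.wide_is_valid(3)             # other half still undefined
rf.write_narrow(rf.narrow_name(3, 1))
full_valid = rf.wide_is_valid(3)
```

    Tracking validity per slice is what lets dependence checking through the single name space catch a wide read that follows narrow writes — the core of the evil-twin hazard.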

    CHECKPOINTING IN A PROCESSOR THAT SUPPORTS SIMULTANEOUS SPECULATIVE THREADING
    7.
    Invention application
    CHECKPOINTING IN A PROCESSOR THAT SUPPORTS SIMULTANEOUS SPECULATIVE THREADING (Pending, published)

    Publication No.: US20100031084A1

    Publication date: 2010-02-04

    Application No.: US12185683

    Filing date: 2008-08-04

    IPC class: G06F 11/00

    Abstract: Embodiments of the present invention provide a system for executing program code on a processor. In these embodiments, the processor starts by using a primary strand to execute the program code. Upon detecting a predetermined condition, the processor instantaneously checkpoints the architectural state of the primary strand and then uses a subordinate strand to copy the checkpointed state to memory, while the primary strand continues executing the program code without interruption.

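    The split between an instantaneous checkpoint and a background copy can be sketched with ordinary threads. This is only an analogy: a real subordinate strand is a hardware context, and the "memory log" and state dict below are invented for the example:

```python
# Sketch of checkpoint-and-continue: the primary strand snapshots its
# architectural state instantly (here, a shallow copy of a register dict)
# and keeps executing, while a subordinate strand drains the snapshot to
# "memory" in the background.

import threading

def checkpoint_and_continue(arch_state, memory_log):
    snapshot = dict(arch_state)          # instantaneous checkpoint
    def subordinate():
        memory_log.append(snapshot)      # copy checkpointed state away
    t = threading.Thread(target=subordinate)
    t.start()
    arch_state["r1"] = 99                # primary continues uninterrupted
    t.join()
    return memory_log

log = checkpoint_and_continue({"r1": 1, "r2": 2}, [])
```

    Because the snapshot is taken before the primary strand's next write, the saved state reflects the moment of the checkpoint even though execution never paused.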

    Preventing register data flow hazards in an SST processor
    8.
    Invention grant
    Preventing register data flow hazards in an SST processor (In force)

    Publication No.: US07610470B2

    Publication date: 2009-10-27

    Application No.: US11703462

    Filing date: 2007-02-06

    IPC class: G06F 9/38

    Abstract: One embodiment of the present invention provides a system that prevents data hazards during simultaneous speculative threading. The system starts by executing instructions in an execute-ahead mode using a first thread. While executing in execute-ahead mode, the system maintains dependency information for each register indicating whether the register is subject to an unresolved data dependency. Upon resolution of a data dependency during execute-ahead mode, the system copies the dependency information to a speculative copy of the dependency information, then commences execution of the deferred instructions in a deferred mode using a second thread. While executing in deferred mode, if the speculative copy of the dependency information for a destination register indicates that a write-after-write (WAW) hazard exists with a subsequent non-deferred instruction executed by the first thread in execute-ahead mode, the system uses the second thread to execute the deferred instruction to produce a result and forwards that result for use by subsequent deferred instructions without committing it to the architectural state of the destination register. Hence, the system makes the result available to subsequent deferred instructions without overwriting the result produced by a following non-deferred instruction.

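    The forward-without-commit rule at the heart of the abstract can be sketched with a bypass table. The structures and the hazard predicate below are illustrative stand-ins for the speculative dependency information:

```python
# Sketch of WAW-hazard avoidance for deferred instructions: when the
# speculative dependency info flags a WAW hazard on a destination
# register, the deferred result is forwarded through a bypass table to
# later deferred instructions instead of being committed, so it cannot
# clobber the value a younger non-deferred instruction already wrote.

def run_deferred(deferred, arch_regs, waw_hazard):
    """deferred: list of (dest_register, result_value) pairs."""
    bypass = {}
    for dest, value in deferred:
        if waw_hazard(dest):
            bypass[dest] = value         # forward only; do not commit
        else:
            arch_regs[dest] = value      # no hazard: commit normally
    return bypass

arch = {"r1": 42}   # 42 was written by a younger non-deferred instruction
bypass = run_deferred([("r1", 7), ("r2", 8)], arch, lambda d: d == "r1")
```

    Subsequent deferred instructions read r1 from the bypass table (getting 7), while the architectural r1 keeps the younger instruction's 42.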

    Generation of multiple checkpoints in a processor that supports speculative execution
    9.
    Invention grant
    Generation of multiple checkpoints in a processor that supports speculative execution (In force)

    Publication No.: US07571304B2

    Publication date: 2009-08-04

    Application No.: US11084655

    Filing date: 2005-03-18

    Abstract: One embodiment of the present invention provides a system that creates multiple checkpoints in a processor that supports speculative execution. The system starts by issuing instructions for execution in program order during execution of a program in a normal-execution mode. Upon encountering a launch condition during an instruction that causes the processor to enter execute-ahead mode, the system takes an initial checkpoint and commences execution of instructions in execute-ahead mode. Upon encountering a predefined condition during execute-ahead mode, the system generates an additional checkpoint and continues to execute instructions in execute-ahead mode. Generating the additional checkpoint allows the processor to return to the additional checkpoint, instead of the previous checkpoint, if the processor subsequently encounters a condition that requires it to return to a checkpoint.

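    The payoff of multiple checkpoints — a rollback returns to the most recent one rather than all the way to the initial one — fits naturally in a stack sketch. The checkpointed state here is a toy dict, not the real architectural state:

```python
# Sketch of multi-checkpoint recovery: an initial checkpoint is pushed at
# the launch condition, additional checkpoints at predefined conditions
# during execute-ahead mode, and a rollback pops only the most recent
# checkpoint instead of discarding all execute-ahead progress.

class CheckpointStack:
    def __init__(self):
        self.stack = []

    def take(self, state):
        self.stack.append(dict(state))   # initial or additional checkpoint

    def rollback(self):
        return self.stack.pop()          # most recent checkpoint wins

cps = CheckpointStack()
cps.take({"pc": 100})    # initial checkpoint at the launch condition
cps.take({"pc": 200})    # additional checkpoint during execute-ahead mode
restored = cps.rollback()
```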

    Method and apparatus for enforcing memory reference ordering requirements at the L1 cache level
    10.
    Invention grant
    Method and apparatus for enforcing memory reference ordering requirements at the L1 cache level (In force)

    Publication No.: US07523266B2

    Publication date: 2009-04-21

    Application No.: US11592836

    Filing date: 2006-11-03

    IPC class: G06F 12/00

    Abstract: One embodiment of the present invention provides a system that enforces memory reference ordering requirements, such as Total Store Ordering (TSO), at a Level 1 (L1) cache in a multiprocessor. During operation, while executing instructions in a speculative-execution mode, the system receives an invalidation signal for a cache line at the L1 cache, the invalidation signal coming from a cache-coherence system within the multiprocessor. In response, if the cache line exists in the L1 cache, the system examines a load-mark in the cache line; a set load-mark indicates that the cache line has been loaded from during speculative execution. If the load-mark is set, the system fails the speculative-execution mode and resumes a normal-execution mode from a checkpoint. By failing the speculative-execution mode, the system ensures that a potential update to the cache line indicated by the invalidation signal cannot cause the memory reference ordering requirements to be violated during speculative execution.

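    The load-mark check can be sketched as follows. The class names and the single boolean mark per line are illustrative simplifications of the patented mechanism:

```python
# Sketch of TSO enforcement at the L1: a speculative load sets a
# load-mark on its cache line; a coherence invalidation that hits a
# marked line fails speculation, forcing recovery from a checkpoint
# before the remote update can be observed out of order.

class L1Line:
    def __init__(self, addr):
        self.addr = addr
        self.load_mark = False

class Core:
    def __init__(self):
        self.lines = {}
        self.speculative = True          # assume speculation in progress

    def spec_load(self, addr):
        line = self.lines.setdefault(addr, L1Line(addr))
        line.load_mark = True            # loaded from during speculation

    def invalidate(self, addr):
        """Invalidation signal arriving from the coherence system."""
        line = self.lines.get(addr)
        if line and line.load_mark and self.speculative:
            self.speculative = False     # fail speculation; use checkpoint
        self.lines.pop(addr, None)       # line leaves the L1 either way

core = Core()
core.spec_load(0x40)
core.invalidate(0x40)                    # remote write hits a marked line
failed = not core.speculative
```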