Methods of creating a dictionary for data compression
    1. Granted Patent (in force)

    Publication No.: US08037034B2

    Publication Date: 2011-10-11

    Application No.: US11781833

    Filing Date: 2007-07-23

    IPC Class: G06F7/00 G06F17/00

    CPC Class: H03M7/3088

    Abstract: Some aspects of the invention provide methods, systems, and computer program products for creating a static dictionary in which longer byte-strings are preferred. To that end, in accordance with aspects of the present invention, a new heuristic is defined to replace the aforementioned frequency count metric used to record the number of times a particular node in a data tree is visited. The new heuristic is based on counting the number of times an end-node of a particular byte-string is visited, while not incrementing a count for nodes storing characters in the middle of the byte-string as often as each time such nodes are visited. The result is an occurrence count metric that favors longer byte-strings, by being biased towards not incrementing the respective occurrence count values for nodes storing characters in the middle of a byte-string.
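
    To make the end-node heuristic concrete, here is a minimal Python sketch of a byte-string trie in which only the node where each walk ends is credited, so interior nodes do not accumulate counts merely for being passed through. The class and function names, the fixed walk length, and the tie-breaking rule are illustrative assumptions, not the patented procedure.

```python
class TrieNode:
    """Trie node holding child bytes and an occurrence count."""
    def __init__(self):
        self.children = {}   # byte value -> TrieNode
        self.count = 0       # incremented only when this node ends a walk

def insert_strings(root, data, max_len=8):
    """Walk every byte-string of up to max_len bytes starting at each
    position of data into the trie, incrementing the count only at the
    node where the walk ends, not at interior nodes passed through."""
    for start in range(len(data)):
        node = root
        for b in data[start:start + max_len]:
            node = node.children.setdefault(b, TrieNode())
        node.count += 1          # end-node of this byte-string

def candidate_entries(root, top_k=16):
    """Return the byte-strings with the highest occurrence counts,
    breaking ties in favour of the longer string."""
    results, stack = [], [(root, b"")]
    while stack:
        node, prefix = stack.pop()
        if node.count:
            results.append((prefix, node.count))
        for byte, child in node.children.items():
            stack.append((child, prefix + bytes([byte])))
    results.sort(key=lambda e: (e[1], len(e[0])), reverse=True)
    return results[:top_k]

root = TrieNode()
insert_strings(root, b"the cat and the hat and the bat")
print(candidate_entries(root, top_k=5))
```

    Because an interior node is credited only when it is itself the end of some walk, the resulting counts lean toward the longer byte-strings that the static dictionary is meant to prefer.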


Methods of creating a dictionary for data compression
    2. Granted Patent (expired)

    Publication No.: US07283072B1

    Publication Date: 2007-10-16

    Application No.: US11278118

    Filing Date: 2006-03-30

    IPC Class: H03M7/00

    CPC Class: H03M7/3088

    Abstract: Some aspects of the invention provide methods, systems, and computer program products for creating a static dictionary in which longer byte-strings are preferred. To that end, in accordance with aspects of the present invention, a new heuristic is defined to replace the aforementioned frequency count metric used to record the number of times a particular node in a data tree is visited. The new heuristic is based on counting the number of times an end-node of a particular byte-string is visited, while not incrementing a count for nodes storing characters in the middle of the byte-string as often as each time such nodes are visited. The result is an occurrence count metric that favours longer byte-strings, by being biased towards not incrementing the respective occurrence count values for nodes storing characters in the middle of a byte-string.


Method and apparatus for improved recovery of processor state using history buffer
    3. Granted Patent (expired)

    Publication No.: US5860014A

    Publication Date: 1999-01-12

    Application No.: US729307

    Filing Date: 1996-10-15

    IPC Class: G06F9/38 G06F9/46

    CPC Class: G06F9/3861

    Abstract: A method and apparatus for maintaining content of registers of a processor which uses the registers for processing instructions. Entries are stored in a buffer for restoring register content in response to an interruption by an interruptible instruction. Entries include information for reducing the number of entries selected for the restoring. A set of the buffer entries is selected, in response to the interruption and the information, for restoring register content. The set includes only entries which are necessary for restoring the content in response to the interruption so that the content of the processor registers may be restored in a single processor cycle, even if multiple entries are stored for a first one of the registers and multiple entries are stored for a second one of the registers.
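
    A minimal software sketch of the selection idea: each history-buffer entry records a register's prior value and the tag of the in-flight instruction that overwrote it, and recovery picks only the oldest flushed entry per register, so each register is written back at most once. The entry layout and tag scheme are illustrative assumptions, not the patented hardware.

```python
from collections import namedtuple

# One history-buffer entry: the value a register held before an
# in-flight instruction overwrote it, tagged with that instruction.
HBEntry = namedtuple("HBEntry", "reg old_value instr_tag")

def recover(registers, history_buffer, flushed_tags):
    """Restore register contents after an interruption by selecting only
    the entries needed: the oldest flushed entry for each register, so
    every register is written at most once even when several entries
    exist for it."""
    restore = {}
    for entry in history_buffer:          # scanned oldest to youngest
        if entry.instr_tag in flushed_tags and entry.reg not in restore:
            restore[entry.reg] = entry.old_value
    for reg, value in restore.items():
        registers[reg] = value
    # Entries belonging to flushed instructions are no longer needed.
    history_buffer[:] = [e for e in history_buffer
                         if e.instr_tag not in flushed_tags]
    return registers

regs = {"r1": 7, "r2": 9}
hb = [HBEntry("r1", 3, "i5"), HBEntry("r1", 5, "i6"), HBEntry("r2", 9, "i4")]
print(recover(regs, hb, flushed_tags={"i5", "i6"}))  # r1 restored to 3
```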


Method and system for reduced run-time delay during conditional branch execution in pipelined processor systems utilizing selectively delayed sequential instruction purging
    4. Granted Patent (expired)

    Publication No.: US5784604A

    Publication Date: 1998-07-21

    Application No.: US959183

    Filing Date: 1992-10-09

    IPC Class: G06F9/38 G06F9/00

    CPC Class: G06F9/3804

    Abstract: A method and system are disclosed for reducing run-time delay during conditional branch instruction execution in a pipelined processor system. A series of queued sequential instructions and conditional branch instructions are processed wherein each conditional branch instruction specifies an associated conditional branch to be taken in response to a selected outcome of processing one or more sequential instructions. Upon detection of a conditional branch instruction within the queue, a group of target instructions are fetched based upon a prediction that an associated conditional branch will be taken. Sequential instructions within the queue following the conditional branch instruction are then purged and the target instructions loaded into the queue only in response to a successful retrieval of the target instructions, such that the sequential instructions may be processed without delay if the prediction that the conditional branch is taken proves invalid prior to retrieval of the target instructions. Alternately, the purged sequential instructions may be refetched after loading the target instructions such that the sequential instructions may be executed with minimal delay if the prediction that the conditional branch is taken proves invalid after loading the target instructions. In yet another embodiment, the sequential instructions within the queue following the conditional branch instruction are purged only in response to a successful retrieval of the target instructions and an imminent execution of the conditional branch instruction.
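
    The sketch below models the selectively delayed purge in Python: sequential instructions that follow a predicted-taken branch stay in the queue until the target instructions have actually been fetched, so a misprediction discovered early costs nothing. The class and method names and the string return values are illustrative assumptions.

```python
class PendingBranch:
    """A predicted-taken conditional branch whose sequential successors
    have deliberately not been purged from the instruction queue yet."""

    def __init__(self, queue):
        self.queue = queue        # queue[0] is the conditional branch itself
        self.purged = False

    def on_target_fetched(self, target_instructions):
        """Purge the sequential instructions and load the target path only
        once the target fetch has actually succeeded."""
        self.queue[1:] = list(target_instructions)
        self.purged = True

    def on_branch_resolved(self, taken):
        """Report what the front end should do once the outcome is known."""
        if taken:
            return "run target path"
        if not self.purged:
            # Prediction proved wrong before the targets arrived: the
            # sequential instructions were never purged, so they run
            # with no refetch delay.
            return "run sequential path (still queued)"
        # Prediction proved wrong after purging: a refetch is required.
        return "refetch sequential path"

pb = PendingBranch(["bc target", "add", "sub"])
print(pb.on_branch_resolved(taken=False))   # sequential path still queued
```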


DEMAND BASED PARTITIONING OF MICROPROCESSOR CACHES
    5. Patent Application (expired)

    Publication No.: US20100287339A1

    Publication Date: 2010-11-11

    Application No.: US12437624

    Filing Date: 2009-05-08

    IPC Class: G06F12/08 G06F12/00

    Abstract: Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein said shared cache memory is effectively shared on a line-by-line basis among said plurality of logical processing partitions of said multi-core processor.
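
    As a rough software model of the line-by-line sharing described above, the sketch below tags each directory entry of one cache set with the installing partition's identifier on a miss. The victim-selection order shown (invalid way first, then a way the same partition already owns, then any way) is an assumption, not the claimed replacement policy.

```python
import random

class SharedCacheSet:
    """One set of a multi-way set-associative shared cache whose
    directory records, for every line, the logical-partition ID that
    installed it, so the cache is shared line by line."""

    def __init__(self, ways=8):
        self.directory = [None] * ways   # each way: (tag, partition_id) or None

    def lookup(self, tag):
        for way, entry in enumerate(self.directory):
            if entry is not None and entry[0] == tag:
                return way
        return None                      # shared cache miss

    def install(self, tag, partition_id):
        """On a miss, pick a position in the directory and associate the
        new line with the data's tag and the registered partition ID."""
        candidates = [w for w, e in enumerate(self.directory) if e is None]
        if not candidates:
            # Prefer reclaiming a line the requesting partition already owns.
            candidates = [w for w, e in enumerate(self.directory)
                          if e[1] == partition_id]
        if not candidates:
            candidates = list(range(len(self.directory)))
        victim = random.choice(candidates)
        self.directory[victim] = (tag, partition_id)
        return victim

cache_set = SharedCacheSet(ways=4)
if cache_set.lookup(0x2A) is None:                # shared cache miss
    cache_set.install(0x2A, partition_id=3)       # line now tagged with LPAR 3
```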


Dynamic expansion of execution pipeline stages
    6. Granted Patent (expired)

    Publication No.: US6079002A

    Publication Date: 2000-06-20

    Application No.: US935573

    Filing Date: 1997-09-23

    IPC Class: G06F9/38 G06F12/00

    CPC Class: G06F9/3867 G06F9/3824

    Abstract: A method and system in a data processing system for accessing information using an instruction specifying a memory address is disclosed. The method and system comprises issuing the instruction to an execution unit and storing an address derived from the specified address. The method and system also includes accessing a cache to obtain the information, using the derived address and determining, in response to a signal indicating that there has been a cache miss, if there is a location available to store the specified address in a queue. According to the system and method disclosed herein, the present invention allows for dynamic pipeline expansion of a processor without splitting this function between components depending upon the reason expansion was required, thereby increasing overall system performance.
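
    A minimal Python model of the queue-backed expansion, under the assumption of a small fixed-depth miss queue: when the access with the derived address misses the cache, the address is parked in the queue if a slot is free, and only a full queue forces a stall. The depth and the return conventions are illustrative.

```python
from collections import deque

class MissQueue:
    """Fixed-size queue that parks the address of a cache access that
    missed, letting the execution pipeline keep flowing instead of
    stalling immediately."""

    def __init__(self, depth=4):
        self.depth = depth
        self.entries = deque()

    def access(self, cache, derived_address):
        """Access the cache with the derived address; on a miss, queue the
        address if a location is available, otherwise report a stall."""
        if derived_address in cache:
            return "hit", cache[derived_address]
        if len(self.entries) < self.depth:
            self.entries.append(derived_address)
            return "miss queued", None        # pipeline effectively expands
        return "stall", None                  # no queue location available

mq = MissQueue(depth=2)
print(mq.access({0x100: "data"}, 0x200))      # ('miss queued', None)
```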


Demand based partitioning of microprocessor caches
    7. Granted Patent (expired)

    Publication No.: US08458401B2

    Publication Date: 2013-06-04

    Application No.: US13398443

    Filing Date: 2012-02-16

    IPC Class: G06F12/08

    Abstract: Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein the shared cache memory is effectively shared on a line-by-line basis among the plurality of logical processing partitions of the multi-core processor.


Demand based partitioning of microprocessor caches
    8. Granted Patent (expired)

    Publication No.: US08447929B2

    Publication Date: 2013-05-21

    Application No.: US13398292

    Filing Date: 2012-02-16

    IPC Class: G06F15/16

    Abstract: Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein said shared cache memory is effectively shared on a line-by-line basis among said plurality of logical processing partitions of said multi-core processor.


Use of software hint for branch prediction in the absence of hint bit in the branch instruction
    9. Granted Patent (expired)

    Publication No.: US06971000B1

    Publication Date: 2005-11-29

    Application No.: US09548469

    Filing Date: 2000-04-13

    IPC Class: G06F9/38 G06F9/44 G06F9/45

    CPC Class: G06F9/3846 G06F8/445

    Abstract: In a processor, when a conditional branch instruction is encountered, a software prediction for the conditional branch is made as a function of the specific condition register field used to store the branch condition for the conditional branch instruction. If a specified condition register field is not used, the software prediction may be made dependent upon the specific address at which the branch instruction is located.
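
    A sketch of how such a hint-free prediction might be computed, with heavily assumed details: the particular condition-register fields treated as likely taken or likely not taken, and the address-based fallback rule, are invented for illustration and are not taken from the patent.

```python
# Hypothetical convention (an assumption, not an actual encoding): the
# compiler places likely-taken branch conditions in one group of
# condition-register fields and likely-not-taken ones in another.
LIKELY_TAKEN_CR_FIELDS = {4, 5}
LIKELY_NOT_TAKEN_CR_FIELDS = {6, 7}

def predict_branch(cr_field, branch_address):
    """Static prediction derived without any hint bit in the instruction:
    first from the condition-register field the branch tests, otherwise
    from the address at which the branch instruction is located."""
    if cr_field in LIKELY_TAKEN_CR_FIELDS:
        return True
    if cr_field in LIKELY_NOT_TAKEN_CR_FIELDS:
        return False
    # No designated field is used: fall back on the branch address
    # (this particular rule is illustrative only).
    return (branch_address >> 5) & 1 == 0

print(predict_branch(cr_field=4, branch_address=0x1F40))   # True
```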


Mechanism to reduce instruction cache miss penalties and methods therefor
    10. Granted Patent (expired)

    Publication No.: US06658534B1

    Publication Date: 2003-12-02

    Application No.: US09052247

    Filing Date: 1998-03-31

    IPC Class: G06F12/00

    Abstract: The mechanism to reduce instruction cache miss penalties by initiating an early cache line prefetch is implemented. The mechanism provides for an early prefetch of a next succeeding cache line before an instruction cache miss is detected during a fetch which causes an instruction cache miss. The prefetch is initiated when it is guaranteed that instructions in the subsequent cache line will be referenced. This occurs when the current instruction is either a non-branch instruction, so instructions will execute sequentially, or if the current instruction is a branch instruction, but the branch forward is sufficiently short. If the current instruction is a branch, but the branch forward is to the next sequential cache line, a prefetch of the next sequential cache line may be performed. In this way, cache miss latencies may be reduced without generating cache pollution due to the prefetch of cache lines which are subsequently unreferenced.
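
    The decision rule can be sketched as follows, assuming a 64-byte instruction-cache line; the function name and the exact short-forward-branch test are illustrative.

```python
LINE_SIZE = 64   # assumed instruction-cache line size in bytes

def should_prefetch_next_line(pc, is_branch, branch_target=None):
    """Decide whether the next sequential cache line is guaranteed to be
    referenced, so that it can be prefetched early without risking
    cache pollution from lines that go unreferenced."""
    current_line = pc // LINE_SIZE
    if not is_branch:
        # Non-branch instruction: execution continues sequentially and
        # will run into the next line.
        return True
    if branch_target is not None and branch_target // LINE_SIZE == current_line + 1:
        # Short forward branch whose target lies in the next sequential
        # line: that line is referenced either way.
        return True
    return False

print(should_prefetch_next_line(pc=0x103C, is_branch=True, branch_target=0x1044))  # True
```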
