Method and Cache Control Circuit for Replacing Cache Lines Using Alternate PLRU Algorithm and Victim Cache Coherency State
    1.
    Invention Application
    Method and Cache Control Circuit for Replacing Cache Lines Using Alternate PLRU Algorithm and Victim Cache Coherency State (Expired)

    Publication No.: US20090113134A1

    Publication Date: 2009-04-30

    Application No.: US11924163

    Filing Date: 2007-10-25

    IPC Class: G06F12/00

    CPC Class: G06F12/122 G06F12/128

    Abstract: A method and a cache control circuit for replacing a cache line using an alternate pseudo least-recently-used (PLRU) algorithm with a victim cache coherency state, and a design structure on which the subject cache control circuit resides are provided. When a requirement for replacement in a congruence class is identified, a first PLRU cache line for replacement and an alternate PLRU cache line for replacement in the congruence class are calculated. When the first PLRU cache line for replacement is in the victim cache coherency state, the alternate PLRU cache line is picked for use.


    Method and cache control circuit for replacing cache lines using alternate PLRU algorithm and victim cache coherency state
    2.
    Invention Grant
    Method and cache control circuit for replacing cache lines using alternate PLRU algorithm and victim cache coherency state (Expired)

    Publication No.: US07917700B2

    Publication Date: 2011-03-29

    Application No.: US11924163

    Filing Date: 2007-10-25

    IPC Class: G06F12/12

    CPC Class: G06F12/122 G06F12/128

    Abstract: A method and a cache control circuit for replacing a cache line using an alternate pseudo least-recently-used (PLRU) algorithm with a victim cache coherency state, and a design structure on which the subject cache control circuit resides are provided. When a requirement for replacement in a congruence class is identified, a first PLRU cache line for replacement and an alternate PLRU cache line for replacement in the congruence class are calculated. When the first PLRU cache line for replacement is in the victim cache coherency state, the alternate PLRU cache line is picked for use.


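    The two records above cover the same mechanism (application and grant). The abstract does not disclose the associativity, the tree encoding, or how the alternate candidate is derived, so the following is a minimal C++ sketch under assumed conditions: an 8-way congruence class, a binary-tree PLRU whose node bits point toward the least recently used way, an is_victim flag standing in for the victim cache coherency state, and the alternate way taken as the first way's neighbour under the same tree node. All names are illustrative, not taken from the patent.

        #include <array>
        #include <bitset>
        #include <iostream>

        // Minimal sketch: 8-way congruence class, binary-tree PLRU (7 node bits).
        // Convention assumed here: a node bit of 0 means "go left", 1 means "go right"
        // toward the pseudo-LRU leaf.
        struct CongruenceClass {
            std::bitset<7> plru;               // tree bits for this class
            std::array<bool, 8> is_victim{};   // true = line held in the victim coherency state (assumed encoding)

            // Walk the tree to the pseudo-LRU way.
            int firstPlruWay() const {
                int node = 0;
                for (int level = 0; level < 3; ++level)
                    node = 2 * node + 1 + (plru[node] ? 1 : 0);
                return node - 7;               // leaves are nodes 7..14 -> ways 0..7
            }

            // Alternate candidate: the neighbouring way under the same parent node
            // (an assumption about what "alternate PLRU" means; the patent may differ).
            int alternatePlruWay() const {
                return firstPlruWay() ^ 1;
            }

            // Policy from the abstract: prefer the first PLRU way unless it is
            // currently in the victim cache coherency state.
            int selectWayForReplacement() const {
                int first = firstPlruWay();
                return is_victim[first] ? alternatePlruWay() : first;
            }
        };

        int main() {
            CongruenceClass cc;
            cc.plru.reset();                   // all zeros -> first PLRU way is way 0
            cc.is_victim[0] = true;            // way 0 holds a victim-state line
            std::cout << "replace way " << cc.selectWayForReplacement() << "\n";  // prints 1
            return 0;
        }
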
    Method and apparatus for tracking command order dependencies
    3.
    Invention Grant
    Method and apparatus for tracking command order dependencies (Expired)

    Publication No.: US07634591B2

    Publication Date: 2009-12-15

    Application No.: US11340736

    Filing Date: 2006-01-26

    IPC Class: G06F3/00 G06F9/30

    CPC Class: G06F9/3838

    Abstract: Methods and apparatus for tracking dependencies of commands to be executed by a command processor are provided. By determining the dependency of incoming commands against all commands awaiting execution, dependency information can be stored in a dependency scoreboard. Such a dependency scoreboard may be used to determine whether a command is ready to be issued by the command processor. The dependency scoreboard can also be updated with information relating to the issuance of commands, for example, as commands complete.


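    As a rough illustration of the dependency scoreboard described above, the sketch below keeps one bit row per queued command, sets a bit wherever a new command collides with an older outstanding command, and clears the corresponding column when a command completes. The slot count, the equality-based address comparison, and all identifiers are assumptions for illustration only.

        #include <array>
        #include <bitset>
        #include <cstdint>
        #include <iostream>

        // Dependency scoreboard sketch: bit j of row i is set when command i
        // must wait for command j.
        constexpr std::size_t kSlots = 16;

        struct Scoreboard {
            std::array<std::bitset<kSlots>, kSlots> waitsOn{};  // dependency matrix
            std::array<uint64_t, kSlots> addr{};
            std::bitset<kSlots> valid;

            // Record a new command: compare it against every command still awaiting
            // execution and set a dependency bit wherever the addresses collide.
            void enqueue(std::size_t slot, uint64_t address) {
                addr[slot] = address;
                waitsOn[slot].reset();
                for (std::size_t j = 0; j < kSlots; ++j)
                    if (valid[j] && addr[j] == address) waitsOn[slot].set(j);
                valid.set(slot);
            }

            // A command may issue once every command it waited on has completed.
            bool readyToIssue(std::size_t slot) const {
                return valid[slot] && waitsOn[slot].none();
            }

            // On completion, clear that command's column so dependents become ready.
            void complete(std::size_t slot) {
                valid.reset(slot);
                for (auto& row : waitsOn) row.reset(slot);
            }
        };

        int main() {
            Scoreboard sb;
            sb.enqueue(0, 0x1000);
            sb.enqueue(1, 0x1000);                    // same address -> depends on slot 0
            std::cout << sb.readyToIssue(1) << "\n";  // 0 (blocked)
            sb.complete(0);
            std::cout << sb.readyToIssue(1) << "\n";  // 1 (ready)
            return 0;
        }
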
    Handling concurrent address translation cache misses and hits under those misses while maintaining command order
    4.
    Invention Grant
    Handling concurrent address translation cache misses and hits under those misses while maintaining command order (Expired)

    Publication No.: US07539840B2

    Publication Date: 2009-05-26

    Application No.: US11420884

    Filing Date: 2006-05-30

    IPC Class: G06F12/00

    CPC Class: G06F12/1027

    Abstract: A method handles concurrent address translation cache misses and hits under those misses while maintaining command order based upon virtual channel. Commands are stored in a command processing unit that maintains ordering of the commands. A command buffer index (CBI) is assigned to each address being sent from the command processing unit to an address translation unit. When an address translation cache miss occurs, a memory fetch request is sent. The CBI is passed back to the command processing unit with a signal to indicate that the fetch request has completed. The command processing unit uses the CBI to locate the command and address to be reissued to the address translation unit.


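    A minimal sketch of the command buffer index (CBI) handshake described above, assuming a software model in which the command processing unit keeps commands in an ordered buffer, the translation unit is a small page-granular lookup table, and a pending-fetch list stands in for the memory fetch machinery. The structures and names are illustrative, not the patent's.

        #include <cstdint>
        #include <iostream>
        #include <optional>
        #include <unordered_map>
        #include <vector>

        struct Command { uint64_t virtAddr; int virtualChannel; };

        struct CommandProcessingUnit {
            std::vector<Command> buffer;                 // indexed by CBI, preserves order

            int enqueue(const Command& c) {              // returns the CBI for this command
                buffer.push_back(c);
                return static_cast<int>(buffer.size()) - 1;
            }
            // A completed page-table fetch hands the CBI back; the unit uses it to
            // find the original command and reissue its address for translation.
            const Command& lookup(int cbi) const { return buffer.at(cbi); }
        };

        struct AddressTranslationUnit {
            std::unordered_map<uint64_t, uint64_t> tlb;  // tiny translation cache

            // Returns the translation on a hit; on a miss, records a memory fetch
            // request that carries the CBI and reports the miss.
            std::optional<uint64_t> translate(uint64_t va, int cbi, std::vector<int>& pendingFetches) {
                auto it = tlb.find(va >> 12);
                if (it != tlb.end()) return (it->second << 12) | (va & 0xFFF);
                pendingFetches.push_back(cbi);           // fetch carries the CBI
                return std::nullopt;
            }
        };

        int main() {
            CommandProcessingUnit cpu;
            AddressTranslationUnit atu;
            std::vector<int> pendingFetches;

            int cbi = cpu.enqueue({0x12345678, /*vc=*/0});
            if (!atu.translate(0x12345678, cbi, pendingFetches)) {
                // ... later, the fetch completes and installs the entry ...
                atu.tlb[0x12345678 >> 12] = 0xABCDE;
                int returnedCbi = pendingFetches.back();   // CBI passed back with completion
                const Command& c = cpu.lookup(returnedCbi);  // locate command to reissue
                std::cout << std::hex
                          << *atu.translate(c.virtAddr, returnedCbi, pendingFetches) << "\n";
            }
            return 0;
        }
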
    Methods and Apparatus for Issuing Commands on a Bus
    5.
    Invention Application
    Methods and Apparatus for Issuing Commands on a Bus (Pending, Published)

    Publication No.: US20080189501A1

    Publication Date: 2008-08-07

    Application No.: US11671117

    Filing Date: 2007-02-05

    IPC Class: G06F12/00

    CPC Class: G06F13/1631

    Abstract: In a first aspect, a first method of issuing a command on a bus of a system is provided. The first method includes the steps of (1) receiving a first functional memory command in the system; (2) receiving a command to force the system to execute functional memory commands in order; (3) receiving a second functional memory command in the system; and (4) employing a dependency matrix to indicate that the second functional memory command requires access to the same address as the first functional memory command, whether or not the second functional memory command actually has an ordering dependency on the first functional memory command. The dependency matrix is adapted to store data indicating whether a functional memory command received by the system has an ordering dependency on one or more functional memory commands previously received by the system. Numerous other aspects are provided.


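    The sketch below illustrates the forced-ordering idea in the abstract above: once the force command is seen, each new functional memory command is recorded in the dependency matrix as if it addressed the same location as every outstanding command, so it cannot issue until they complete. The matrix size, the way the force state is held, and the names are assumptions.

        #include <array>
        #include <bitset>
        #include <cstdint>
        #include <iostream>

        constexpr std::size_t kSlots = 8;

        struct IssueQueue {
            std::array<std::bitset<kSlots>, kSlots> dependsOn{};  // dependency matrix
            std::array<uint64_t, kSlots> addr{};
            std::bitset<kSlots> outstanding;
            bool forceOrdering = false;

            void receiveForceCommand() { forceOrdering = true; }

            void receiveFunctional(std::size_t slot, uint64_t address) {
                dependsOn[slot].reset();
                for (std::size_t j = 0; j < kSlots; ++j) {
                    bool sameAddr = outstanding[j] && addr[j] == address;
                    // Forced dependency is recorded whether or not a real one exists.
                    if (outstanding[j] && (sameAddr || forceOrdering))
                        dependsOn[slot].set(j);
                }
                addr[slot] = address;
                outstanding.set(slot);
            }

            bool canIssue(std::size_t slot) const {
                return outstanding[slot] && dependsOn[slot].none();
            }
            void complete(std::size_t slot) {
                outstanding.reset(slot);
                for (auto& row : dependsOn) row.reset(slot);
            }
        };

        int main() {
            IssueQueue q;
            q.receiveFunctional(0, 0x100);      // first functional command
            q.receiveForceCommand();            // force in-order execution from here on
            q.receiveFunctional(1, 0x900);      // different address, but still made dependent
            std::cout << q.canIssue(1) << "\n"; // 0: blocked until slot 0 completes
            q.complete(0);
            std::cout << q.canIssue(1) << "\n"; // 1
            return 0;
        }
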
    Methods and Apparatus for Combining Commands Prior to Issuing the Commands on a Bus
    6.
    Invention Application
    Methods and Apparatus for Combining Commands Prior to Issuing the Commands on a Bus (Pending, Published)

    Publication No.: US20080126641A1

    Publication Date: 2008-05-29

    Application No.: US11468889

    Filing Date: 2006-08-31

    IPC Class: G06F13/00

    CPC Class: G06F13/1631

    Abstract: In a first aspect, a first method of issuing a command on a bus is provided. The first method includes the steps of (1) receiving a first command associated with a first address; (2) delaying the issue of the first command on the bus for a time period; (3) if a second command associated with a second address contiguous with the first address is not received before the time period elapses, issuing the first command on the bus after the time period elapses; and (4) if the second command associated with the second address contiguous with the first address is received before the first command is issued on the bus, combining the first and second commands into a combined command associated with the first address. Numerous other aspects are provided.


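    A minimal sketch of the delay-and-combine behaviour described above, assuming a cycle-counted delay window and a simple contiguity test on byte addresses; the window length, command fields, and names are illustrative.

        #include <cstdint>
        #include <iostream>
        #include <optional>

        struct MemCommand { uint64_t addr; uint32_t bytes; };

        struct CombiningQueue {
            std::optional<MemCommand> pending;   // command being held back
            int cyclesLeft = 0;
            static constexpr int kWindow = 4;    // assumed delay window, in cycles

            // Called once per cycle; returns a command when one is issued on the bus.
            std::optional<MemCommand> receive(std::optional<MemCommand> in) {
                std::optional<MemCommand> issued;
                if (in) {
                    if (pending && in->addr == pending->addr + pending->bytes) {
                        pending->bytes += in->bytes;   // contiguous: combine, keep first address
                    } else {
                        issued = pending;              // flush whatever was waiting
                        pending = in;
                        cyclesLeft = kWindow;
                    }
                } else if (pending && --cyclesLeft == 0) {
                    issued = pending;                  // window expired without a partner
                    pending.reset();
                }
                return issued;
            }
        };

        int main() {
            CombiningQueue q;
            q.receive(MemCommand{0x1000, 64});             // held back for the window
            auto out = q.receive(MemCommand{0x1040, 64});  // contiguous -> combined, nothing issued yet
            for (int i = 0; i < 8 && !out; ++i) out = q.receive(std::nullopt);
            std::cout << std::hex << out->addr << " len " << std::dec << out->bytes << "\n";  // 1000 len 128
            return 0;
        }
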
    Methods and systems with delayed execution of multiple processors
    7.
    Invention Grant
    Methods and systems with delayed execution of multiple processors (In Force)

    Publication No.: US09146835B2

    Publication Date: 2015-09-29

    Application No.: US13343809

    Filing Date: 2012-01-05

    IPC Class: G06F11/36 G06F11/16

    Abstract: A first first-in-first-out (FIFO) memory may receive first processor input from a first processor group that includes a first processor. The first processor group is configured to execute program code based on the first processor input that includes a set of input signals, a clock signal, and corresponding data. The first FIFO may store the first processor input and may output the first processor input to a second FIFO memory and to a second processor according to a first delay. The second FIFO memory may store the first processor input and may output the first processor input to a third processor according to a second delay. The second processor may execute at least a first portion of the program code and the third processor may execute at least a second portion of the program code responsive to the first processor input.


    MULTIPLE PROCESSOR DELAYED EXECUTION
    8.
    Invention Application
    MULTIPLE PROCESSOR DELAYED EXECUTION (In Force)

    Publication No.: US20130179720A1

    Publication Date: 2013-07-11

    Application No.: US13343809

    Filing Date: 2012-01-05

    IPC Class: G06F1/12

    Abstract: A first first-in-first-out (FIFO) memory may receive first processor input from a first processor group that includes a first processor. The first processor group is configured to execute program code based on the first processor input that includes a set of input signals, a clock signal, and corresponding data. The first FIFO may store the first processor input and may output the first processor input to a second FIFO memory and to a second processor according to a first delay. The second FIFO memory may store the first processor input and may output the first processor input to a third processor according to a second delay. The second processor may execute at least a first portion of the program code and the third processor may execute at least a second portion of the program code responsive to the first processor input.


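    The two records above describe the same arrangement (grant and application). The sketch below models it with two software FIFOs: the first processor's input is captured each cycle, replayed to a second processor after a first delay, and passed through a second FIFO to a third processor after a further delay. The delay values and all names are assumptions for illustration.

        #include <cstdint>
        #include <deque>
        #include <iostream>

        // Captured per-cycle input: signals, clock, and corresponding data.
        struct ProcessorInput { uint32_t signals; bool clock; uint64_t data; };

        class DelayFifo {
        public:
            explicit DelayFifo(std::size_t delay) : delay_(delay) {}
            // Push this cycle's input; once the FIFO has filled, return the input
            // from `delay_` cycles ago so the downstream processor runs behind.
            ProcessorInput push(const ProcessorInput& in) {
                q_.push_back(in);
                if (q_.size() <= delay_) return ProcessorInput{};  // still filling: idle input
                ProcessorInput out = q_.front();
                q_.pop_front();
                return out;
            }
        private:
            std::size_t delay_;
            std::deque<ProcessorInput> q_;
        };

        int main() {
            DelayFifo fifo1(2), fifo2(3);                    // assumed delays D1=2, D2=3
            for (uint64_t cycle = 0; cycle < 8; ++cycle) {
                ProcessorInput in{/*signals=*/1u, /*clock=*/true, /*data=*/cycle};
                // processor 1 executes on `in` here ...
                ProcessorInput toP2 = fifo1.push(in);        // processor 2 sees the cycle-2-old input
                ProcessorInput toP3 = fifo2.push(toP2);      // processor 3 sees the cycle-5-old input
                std::cout << cycle << ": p2=" << toP2.data << " p3=" << toP3.data << "\n";
            }
            return 0;
        }
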
    Pipelined hardware implementation of a neural network circuit
    9.
    Invention Grant
    Pipelined hardware implementation of a neural network circuit (Expired)

    Publication No.: US06836767B2

    Publication Date: 2004-12-28

    Application No.: US09970002

    Filing Date: 2001-10-03

    Applicant: Chad B. McBride

    Inventor: Chad B. McBride

    IPC Class: G06F15/18

    CPC Class: G06N3/063

    Abstract: In a first aspect, a pipelined hardware implementation of a neural network circuit includes an input stage, two or more processing stages and an output stage. Each processing stage includes one or more processing units. Each processing unit includes storage for weighted values, a plurality of multipliers for multiplying input values by weighted values, an adder for adding the products output by the multipliers, a function circuit for applying a non-linear function to the sum output by the adder, and a register for storing the output of the function circuit.


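    As a rough software analogue of one processing unit from the abstract above: stored weights feed a bank of multipliers, their products are summed by an adder, a non-linear function is applied, and the result is latched in an output register; a second stage consumes the first stage's registered outputs, which is what lets the structure pipeline. The width, the choice of tanh as the non-linearity, and the names are assumptions.

        #include <array>
        #include <cmath>
        #include <cstddef>
        #include <iostream>

        constexpr std::size_t kInputs = 4;

        struct ProcessingUnit {
            std::array<double, kInputs> weights{};   // weight storage
            double outputReg = 0.0;                  // register holding the latched result

            // One pipeline beat: multiply, accumulate, apply non-linearity, latch.
            void clock(const std::array<double, kInputs>& inputs) {
                std::array<double, kInputs> products;
                for (std::size_t i = 0; i < kInputs; ++i)
                    products[i] = inputs[i] * weights[i];   // multipliers
                double sum = 0.0;
                for (double p : products) sum += p;         // adder
                outputReg = std::tanh(sum);                 // function circuit -> register
            }
        };

        int main() {
            // Two processing stages wired back to back; stage 2 consumes the
            // registered outputs of stage 1.
            std::array<ProcessingUnit, kInputs> stage1;
            ProcessingUnit stage2;
            for (auto& pu : stage1) pu.weights = {0.5, -0.25, 0.1, 0.0};
            stage2.weights = {1.0, 1.0, 1.0, 1.0};

            std::array<double, kInputs> in{1.0, 2.0, 3.0, 4.0};
            for (auto& pu : stage1) pu.clock(in);
            std::array<double, kInputs> mid;
            for (std::size_t i = 0; i < kInputs; ++i) mid[i] = stage1[i].outputReg;
            stage2.clock(mid);
            std::cout << "stage-2 output: " << stage2.outputReg << "\n";
            return 0;
        }
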
    Heuristic clustering of circuit elements in a circuit design
    10.
    Invention Grant
    Heuristic clustering of circuit elements in a circuit design (In Force)

    Publication No.: US08196074B2

    Publication Date: 2012-06-05

    Application No.: US12406439

    Filing Date: 2009-03-18

    IPC Class: G06F17/50

    CPC Class: G06F17/5072

    Abstract: An apparatus, program product and method utilize heuristic clustering to generate assignments of circuit elements to clusters or groups to optimize a desired spatial locality metric. For example, circuit elements such as scan-enabled latches may be assigned to individual scan chains using heuristic clustering to optimize the layout of the scan chains in a scan architecture for a circuit design.

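    The abstract above does not state which heuristic is used, so the sketch below stands in with a k-means-style refinement over latch (x, y) coordinates: each latch is assigned to the scan chain with the nearest centroid, then centroids are recomputed, which optimizes a simple spatial locality metric (distance to the chain centroid). Cluster count, iteration count, and names are illustrative, and the patent's heuristic may differ.

        #include <cmath>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        struct Latch { double x, y; };   // placed scan-enabled latch

        // Index of the closest chain centroid to a latch.
        int nearest(const Latch& l, const std::vector<Latch>& centers) {
            int best = 0;
            double bestD = 1e300;
            for (std::size_t c = 0; c < centers.size(); ++c) {
                double d = std::hypot(l.x - centers[c].x, l.y - centers[c].y);
                if (d < bestD) { bestD = d; best = static_cast<int>(c); }
            }
            return best;
        }

        int main() {
            std::vector<Latch> latches = {{0, 0}, {1, 1}, {0, 2}, {9, 9}, {10, 8}, {9, 10}};
            std::vector<Latch> centers = {latches[0], latches[3]};   // 2 scan chains, seeded
            std::vector<int> chain(latches.size(), 0);

            for (int iter = 0; iter < 10; ++iter) {
                // Assignment step: each latch joins the chain with the closest centroid.
                for (std::size_t i = 0; i < latches.size(); ++i)
                    chain[i] = nearest(latches[i], centers);
                // Update step: move each chain's centroid to the mean of its latches.
                for (std::size_t c = 0; c < centers.size(); ++c) {
                    double sx = 0, sy = 0; int n = 0;
                    for (std::size_t i = 0; i < latches.size(); ++i)
                        if (chain[i] == static_cast<int>(c)) { sx += latches[i].x; sy += latches[i].y; ++n; }
                    if (n) centers[c] = {sx / n, sy / n};
                }
            }
            for (std::size_t i = 0; i < latches.size(); ++i)
                std::cout << "latch " << i << " -> scan chain " << chain[i] << "\n";
            return 0;
        }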