    1. Gateword acquisition in a multiprocessor write-into-cache environment
    Invention grant (In force)

    Publication Number: US06760811B2

    Publication Date: 2004-07-06

    Application Number: US10219644

    Filing Date: 2002-08-15

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0811 G06F12/084

    Abstract: In a multiprocessor data processing system including: a memory; first and second shared caches; a system bus coupling the memory and the shared caches; and first, second, third and fourth processors having, respectively, first, second, third and fourth private caches, with the first and second private caches coupled to the first shared cache and the third and fourth private caches coupled to the second shared cache, gateword hogging is prevented by providing a gate control flag in each processor. Priority for a processor to next acquire ownership of the gate control word is established by broadcasting a “set gate control flag” command to all processors, such that setting the gate control flags establishes delays, of predetermined periods established in each processor, during which ownership of the gate control word will not be requested by another processor. Optionally, the processor so acquiring ownership broadcasts a “reset gate control flag” command to all processors when it has acquired ownership of the gate control word.

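    To make the priority mechanism concrete, here is a minimal software sketch in C of the flag-broadcast idea described in the abstract. It models the hardware broadcast with a shared flag array and the per-processor predetermined periods with per-thread sleeps; the names (NCPU, gate_flag, broadcast_set_flags) and the delay values are illustrative assumptions, not the patented implementation.

        /* Minimal software analogy of the flag-broadcast priority scheme (not the
         * patented hardware): a CPU that misses the gateword asks every other CPU
         * to back off for its own predetermined period before retrying.          */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>
        #include <unistd.h>

        #define NCPU 4
        enum { GW_OPEN = 0, GW_CLOSED = 1 };

        static atomic_int gateword = GW_OPEN;   /* guards the shared code/data    */
        static atomic_int gate_flag[NCPU];      /* one gate control flag per CPU  */

        static void broadcast_set_flags(int requester)
        {
            for (int i = 0; i < NCPU; i++)
                if (i != requester)             /* requester keeps its flag clear */
                    atomic_store(&gate_flag[i], 1);
        }

        static void broadcast_reset_flags(void)
        {
            for (int i = 0; i < NCPU; i++)
                atomic_store(&gate_flag[i], 0);
        }

        static void *cpu(void *arg)
        {
            int id = (int)(long)arg;
            for (int work = 0; work < 3; work++) {
                for (;;) {
                    if (atomic_load(&gate_flag[id]))
                        usleep(1000 * (id + 1));     /* predetermined per-CPU delay */
                    int expected = GW_OPEN;
                    if (atomic_compare_exchange_strong(&gateword, &expected, GW_CLOSED)) {
                        broadcast_reset_flags();     /* optional "reset" broadcast  */
                        break;
                    }
                    broadcast_set_flags(id);         /* missed: claim priority      */
                }
                printf("cpu %d in the guarded section\n", id);
                atomic_store(&gateword, GW_OPEN);    /* release the gateword        */
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t[NCPU];
            for (long i = 0; i < NCPU; i++)
                pthread_create(&t[i], NULL, cpu, (void *)i);
            for (int i = 0; i < NCPU; i++)
                pthread_join(t[i], NULL);
            return 0;
        }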

    2. Balanced access to prevent gateword dominance in a multiprocessor write-into-cache environment
    Invention grant (Expired)

    Publication Number: US06868483B2

    Publication Date: 2005-03-15

    Application Number: US10256289

    Filing Date: 2002-09-26

    IPC Classification: G06F9/46 G06F12/00 G06F12/08

    CPC Classification: G06F9/52 G06F12/0815

    Abstract: In a multiprocessor data processing system including: a main memory; at least first and second shared caches; a system bus coupling the main memory and the first and second shared caches; and at least four processors having respective private caches, with the first and second private caches coupled to the first shared cache and to one another via a first internal bus, and the third and fourth private caches coupled to the second shared cache and to one another via a second internal bus; a method and apparatus prevent hogging of ownership of a gateword that is stored in the main memory and that governs access to common code/data shared by processes running in at least three of the processors. Each processor includes a gate control flag. A gateword CLOSE command establishes ownership of the gateword in one processor and prevents other processors from accessing the guarded code/data until the one processor has completed its use. A gateword OPEN command then broadcasts a gateword interrupt to set the flag in each processor, delays long enough to ensure that the flags have all been set, writes an OPEN value into the gateword and flushes the gateword to main memory. A gateword access command executed by a requesting processor checks its gate control flag and, if it is set, starts a fixed time delay after which normal execution continues.

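    The CLOSE / OPEN / access sequence can be sketched as three small routines. This is a hedged software model, not the patented firmware: the function names, the flag array and the microsecond delay constants are assumptions chosen only to show the ordering of the steps (broadcast the interrupt, wait for the flags, write OPEN and flush; on access, honor the local flag with a fixed back-off before retrying).

        /* Sketch of the gateword CLOSE / OPEN / access commands described above. */
        #include <stdatomic.h>
        #include <stdio.h>
        #include <unistd.h>

        #define NCPU 4
        enum { GW_OPEN = 0, GW_CLOSED = 1 };

        static atomic_int gateword = GW_OPEN;   /* modeled as living in main memory */
        static atomic_int gate_flag[NCPU];      /* per-processor gate control flag  */

        /* CLOSE: try to take ownership; returns nonzero on success.               */
        static int gw_close(void)
        {
            int expected = GW_OPEN;
            return atomic_compare_exchange_strong(&gateword, &expected, GW_CLOSED);
        }

        /* OPEN: broadcast a "gateword interrupt" that sets every CPU's flag, wait
         * long enough for the flags to be set, then write OPEN and flush to memory. */
        static void gw_open(void)
        {
            for (int i = 0; i < NCPU; i++)
                atomic_store(&gate_flag[i], 1);  /* broadcast interrupt (modeled)   */
            usleep(100);                         /* delay until all flags are set   */
            atomic_store(&gateword, GW_OPEN);    /* write OPEN; "flush" to memory   */
        }

        /* Access: a requester whose flag was set by a recent OPEN first serves a
         * fixed delay, which keeps one cache from grabbing the gateword instantly. */
        static int gw_access(int cpu)
        {
            if (atomic_exchange(&gate_flag[cpu], 0))
                usleep(50);                      /* fixed time delay, then continue */
            return gw_close();
        }

        int main(void)
        {
            if (gw_access(0)) { puts("cpu 0 owns the gateword"); gw_open(); }
            if (gw_access(2)) { puts("cpu 2 owns the gateword"); gw_open(); }
            return 0;
        }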

    3. Multiprocessor write-into-cache system incorporating efficient access to a plurality of gatewords
    Invention grant (In force)

    Publication Number: US06973539B2

    Publication Date: 2005-12-06

    Application Number: US10426409

    Filing Date: 2003-04-30

    IPC Classification: G06F12/08 G06F12/00

    CPC Classification: G06F12/0811

    Abstract: A multiprocessor write-into-cache data processing system includes a feature for preventing hogging of ownership of a first gateword stored in the memory, which governs access to a first common code/data set shared by processes running in the processors, by imposing first delays on all other processors in the system while, at the same time, mitigating any adverse effect on the performance of processors attempting to access a gateword other than the first gateword. This is achieved by starting a second delay in any processor which is seeking ownership of a gateword other than the first gateword and truncating the first delay in all such processors by subtracting the elapsed time indicated by the second delay from the elapsed time indicated by the first delay.

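    The subtraction in the last sentence can be read as crediting time already spent in the second delay against what remains of the first, hog-prevention delay. The small C sketch below shows that one possible reading in arithmetic form; the function name, its arguments and the example numbers are invented for illustration and are not taken from the patent.

        /* One reading of the delay truncation: a CPU held off gateword A by the
         * hog-prevention ("first") delay, but now seeking a different gateword B,
         * has its first delay shortened by the time its "second" delay has run.  */
        #include <stdio.h>

        static long truncated_remaining(long first_delay,    /* full first delay     */
                                        long first_elapsed,  /* portion already run  */
                                        long second_elapsed) /* time in second delay */
        {
            long remaining = first_delay - first_elapsed - second_elapsed;
            return remaining > 0 ? remaining : 0;
        }

        int main(void)
        {
            /* 100-cycle first delay, 30 cycles served, 50 cycles spent waiting on
             * the other gateword: only 20 cycles of the first delay are left.     */
            printf("%ld cycles remain\n", truncated_remaining(100, 30, 50));
            return 0;
        }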

    4. Equal access to prevent gateword dominance in a multiprocessor write-into-cache environment
    Invention grant (In force)

    Publication Number: US06970977B2

    Publication Date: 2005-11-29

    Application Number: US10403703

    Filing Date: 2003-03-31

    IPC Classification: G06F12/08 G06F12/00

    CPC Classification: G06F12/084

    Abstract: In a multiprocessor write-into-cache data processing system including: a memory; at least first and second shared caches; a system bus coupling the memory and the shared caches; and at least one processor having a private cache coupled, respectively, to each shared cache; a method and apparatus prevent hogging of ownership of a gateword stored in the memory which governs access to common code/data shared by processes running in the processors. A read copy of the gateword is obtained by a given processor by performing successive swap operations between the memory and the given processor's shared cache, and between the given processor's shared cache and private cache. If the gateword is found to be OPEN, it is CLOSEd by the given processor, and successive swap operations are performed between the given processor's private cache and shared cache, and between the shared cache and memory, to write the gateword CLOSEd in memory such that the given processor obtains exclusive access to the governed common code/data. When the given processor completes use of the common code/data, it writes the gateword OPEN in its private cache, and successive swap operations are performed between the given processor's private cache and shared cache, and between the shared cache and memory, to write the gateword OPEN in memory.

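    The swap discipline can be pictured with a toy three-level model: every CLOSE or OPEN is swapped all the way back out to main memory, so no private or shared cache ever keeps the only live copy of the gateword. The structure and function names below are invented for the sketch and stand in for the cache swap operations named in the abstract.

        /* Minimal software model of the swap-to-memory discipline described above. */
        #include <stdio.h>

        enum { GW_OPEN = 0, GW_CLOSED = 1 };

        typedef struct {                 /* one copy of the gateword at each level  */
            int memory;
            int shared_cache;            /* shared cache above the requesting CPU   */
            int private_cache;           /* requesting CPU's private cache          */
        } gw_hierarchy;

        /* "Swap" the gateword down to the private cache to obtain a read copy.     */
        static void swap_down(gw_hierarchy *h)
        {
            h->shared_cache  = h->memory;        /* memory        -> shared cache   */
            h->private_cache = h->shared_cache;  /* shared cache  -> private cache  */
        }

        /* Write a value in the private cache and swap it back out to memory.       */
        static void swap_up(gw_hierarchy *h, int value)
        {
            h->private_cache = value;
            h->shared_cache  = h->private_cache; /* private cache -> shared cache   */
            h->memory        = h->shared_cache;  /* shared cache  -> memory         */
        }

        int main(void)
        {
            gw_hierarchy h = { GW_OPEN, GW_OPEN, GW_OPEN };

            swap_down(&h);                                   /* read a copy         */
            if (h.private_cache == GW_OPEN) {
                swap_up(&h, GW_CLOSED);                      /* CLOSE it in memory  */
                puts("gateword closed in memory; CPU owns the guarded code/data");
                /* ... use the guarded common code/data ... */
                swap_up(&h, GW_OPEN);                        /* reOPEN it in memory */
                puts("gateword reopened and written back to memory");
            }
            return 0;
        }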

    5. Method and system for cache miss prediction based on previous cache access requests
    Invention grant (Expired)

    Publication Number: US5495591A

    Publication Date: 1996-02-27

    Application Number: US906618

    Filing Date: 1992-06-30

    Applicant: Charles P. Ryan

    Inventor: Charles P. Ryan

    IPC Classification: G06F12/08

    Abstract: For a data processing system which employs a cache memory, the disclosure includes both a method for lowering the cache miss ratio for requested operands and an example of special purpose apparatus for practicing the method. Recent cache misses are stored in a first-in, first-out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same, indicating a pattern which yields information that can be combined with an address in the stack to develop a predictive address. The efficiency of the apparatus is improved by placing a series of "select pattern" values representing the search order for trying patterns into a register stack and providing logic circuitry by which the most recently found "select pattern" value is placed at the top of the stack, with the remaining "select pattern" values pushed down accordingly.

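    The miss-stack search lends itself to a compact software model. The sketch below keeps recent miss addresses in a small array, checks a few displacement patterns in a configurable order, and moves the pattern that hits to the top of the "select pattern" list, as the abstract describes. The stack depth, the particular patterns and the move-to-front code are assumptions chosen for illustration, not the hard-wired circuit values.

        /* Illustrative software model of the miss-stack pattern search.           */
        #include <stdint.h>
        #include <stdio.h>

        #define MISS_DEPTH 8
        #define NPATTERNS  4

        static uint32_t miss_stack[MISS_DEPTH] = {   /* [0] = most recent miss     */
            0x1080, 0x1040, 0x1000, 0x0fc0, 0x2000, 0x3000, 0x4000, 0x5000
        };

        /* A "pattern": compare (s[a]-s[b]) with (s[b]-s[c]); equal displacements
         * suggest a strided miss sequence, predicted next = s[0] + displacement.  */
        typedef struct { int a, b, c; } pattern;

        static pattern select_pattern[NPATTERNS] = {     /* search order, MRU first */
            {0, 1, 2}, {0, 2, 4}, {0, 1, 3}, {1, 2, 3}
        };

        /* Move the pattern that hit to the top of the select-pattern stack.       */
        static void move_to_front(int hit)
        {
            pattern p = select_pattern[hit];
            for (int i = hit; i > 0; i--)
                select_pattern[i] = select_pattern[i - 1];
            select_pattern[0] = p;
        }

        /* Returns 1 and writes a predictive address if any pattern matches.       */
        static int predict(uint32_t *out)
        {
            for (int i = 0; i < NPATTERNS; i++) {
                pattern p = select_pattern[i];
                int32_t d1 = (int32_t)(miss_stack[p.a] - miss_stack[p.b]);
                int32_t d2 = (int32_t)(miss_stack[p.b] - miss_stack[p.c]);
                if (d1 == d2 && d1 != 0) {           /* two equal displacements    */
                    *out = miss_stack[0] + (uint32_t)d1;
                    move_to_front(i);
                    return 1;
                }
            }
            return 0;
        }

        int main(void)
        {
            uint32_t pred;
            if (predict(&pred))     /* misses strided by 0x40 predict 0x10c0 next  */
                printf("prefetch predicted address 0x%x\n", (unsigned)pred);
            return 0;
        }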

    6. Cache unit information replacement apparatus
    Invention grant (Expired)

    Publication Number: US4314331A

    Publication Date: 1982-02-02

    Application Number: US968048

    Filing Date: 1978-12-11

    IPC Classification: G06F12/08 G06F12/12 G06F9/30

    CPC Classification: G06F12/126

    Abstract: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words. Directory circuits, associated with the cache store, contain address information identifying those instructions and data words stored in the cache store. The cache unit has at least one instruction register for storing address and level signals specifying the location of the next instruction to be fetched and transferred to the processing unit. Replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information. The cache unit further includes detection apparatus for detecting a conflict condition that would result in an improper assignment. The detection apparatus, upon detecting such a condition, advances the replacement circuits forward to assign the next sequential group of locations or level, inhibiting them from making the normal location assignment. It also inhibits the directory circuits from writing the information required for making the location assignment and prevents the information which produced the conflict from being written into the cache store when received from memory.

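    A rough software rendering of the sequential (round-robin) replacement with the conflict skip is given below. The conflict test used here, namely that the level about to be reassigned still holds a block a pending instruction fetch depends on, is only a stand-in for the hardware condition in the patent; the structure and names are assumptions for the sketch.

        /* Simplified sketch of round-robin replacement with a conflict skip.      */
        #include <stdio.h>

        #define NLEVELS 4

        typedef struct {
            unsigned next_level;              /* sequential replacement pointer    */
            unsigned directory[NLEVELS];      /* address tag stored at each level  */
        } cache_set;

        /* Returns the level to replace, or -1 if the assignment is inhibited.     */
        static int assign_level(cache_set *set, unsigned pending_fetch_tag,
                                unsigned new_tag)
        {
            unsigned level = set->next_level;

            if (set->directory[level] == pending_fetch_tag) {
                /* Conflict: replacing this level would clobber a block a pending  */
                /* fetch still needs. Advance the pointer and inhibit both the     */
                /* directory write and the data write for this request.            */
                set->next_level = (level + 1) % NLEVELS;
                return -1;
            }

            set->directory[level] = new_tag;          /* normal directory write    */
            set->next_level = (level + 1) % NLEVELS;  /* advance sequentially      */
            return (int)level;
        }

        int main(void)
        {
            cache_set set = { 0, { 0x10, 0x20, 0x30, 0x40 } };
            printf("%d\n", assign_level(&set, 0x10, 0x50)); /* -1: conflict, skip  */
            printf("%d\n", assign_level(&set, 0x10, 0x50)); /*  1: placed, level 1 */
            return 0;
        }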

    7. Method and apparatus for exhaustively testing interactions among multiple processors
    Invention grant (In force)

    Publication Number: US06249880B1

    Publication Date: 2001-06-19

    Application Number: US09156378

    Filing Date: 1998-09-17

    IPC Classification: H02H3/05

    CPC Classification: G06F11/24 G06F11/2242

    Abstract: Interactions among multiple processors (92) are exhaustively tested. A master processor (92) retrieves test information for a set of tests from a test table (148). It then enters a series of embedded loops, with one loop for each of the tested processors (92). A cycle delay count for each of the tested processors (92) is incremented (152, 162, 172) through a range specified in the test table entry. For each combination of cycle delay count loop indices, a single test is executed (176). In each such test (176), the master processor (92) sets up (182) each of the other processors (92) being tested. This setup (182) specifies the delay count and the code for that processor (92) to execute. When each processor (92) has been set up (182), it waits (192) for a synchronize interrupt (278). When all processors (92) have been set up (182), the master processor (92) issues (191) the synchronize interrupt signal (276). Each processor (92) then starts traces (193) and delays (194) the specified number of cycles. After the delay, the processor (92) executes its test code (195).

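    The embedded-loop structure is easy to show schematically. The sketch below drives three tested processors through every combination of cycle delay counts in their ranges, performing one setup-synchronize-run sequence per combination; the setup and synchronize calls are stubs standing in for the interprocessor commands and interrupt named in the abstract, and the range values are invented.

        /* Schematic of the nested delay-count loops: one loop per tested processor,
         * one test execution per combination of loop indices.                      */
        #include <stdio.h>

        typedef struct { int lo, hi; } delay_range;   /* taken from the test table  */

        static void setup_processor(int cpu, int delay)
        {
            printf("  setup cpu %d: delay %d cycles, load test code\n", cpu, delay);
        }

        static void synchronize_and_run(void)
        {
            printf("  sync interrupt: CPUs start traces, delay, then run test code\n");
        }

        static void run_tests(delay_range r1, delay_range r2, delay_range r3)
        {
            for (int d1 = r1.lo; d1 <= r1.hi; d1++)
                for (int d2 = r2.lo; d2 <= r2.hi; d2++)
                    for (int d3 = r3.lo; d3 <= r3.hi; d3++) {
                        printf("test (%d,%d,%d)\n", d1, d2, d3);
                        setup_processor(1, d1);
                        setup_processor(2, d2);
                        setup_processor(3, d3);
                        synchronize_and_run();
                    }
        }

        int main(void)
        {
            delay_range r = { 0, 2 };   /* three delay values per tested CPU        */
            run_tests(r, r, r);         /* 27 relative timings, tried exhaustively  */
            return 0;
        }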

    8. Data processing system processor delay instruction
    Invention grant (In force)

    Publication Number: US06230263B1

    Publication Date: 2001-05-08

    Application Number: US09156376

    Filing Date: 1998-09-17

    IPC Classification: G06F9/30

    CPC Classification: G06F9/30079

    Abstract: A processor (92) in a data processing system (80) provides a DELAY instruction. Executing the DELAY instruction causes the processor (92) to delay a specified integral number of clock (98) cycles before continuing. Delays are guaranteed to have a linear, constant-slope relationship with the specified number of clock cycles. Incrementing the specified delay through a range allows exhaustive testing of interactions among multiple processors.

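    A user-level stand-in for the DELAY instruction is shown below: a spin loop whose iteration count is the specified delay, so the wait grows linearly in the argument with an approximately constant slope. On the patented processor the delay is counted in hardware clock cycles; the loop here is only an assumption that preserves the linearity property the abstract relies on.

        /* delay(n): spin for n ticks so the wait is a linear, constant-slope
         * function of n (a software stand-in for the hardware DELAY instruction). */
        #include <stdio.h>

        static void delay(long ticks)
        {
            for (volatile long i = 0; i < ticks; i++)
                ;                           /* each iteration stands in for a cycle */
        }

        int main(void)
        {
            /* Sweeping the delay through a range shifts one CPU's test code       */
            /* relative to another's by an exact, repeatable amount.               */
            for (long n = 0; n <= 3; n++) {
                delay(n * 1000000);
                printf("continued after delay(%ld)\n", n * 1000000);
            }
            return 0;
        }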

    9. Controllably operable method and apparatus for predicting addresses of future operand requests by examination of addresses of prior cache misses
    Invention grant (Expired)

    Publication Number: US5694572A

    Publication Date: 1997-12-02

    Application Number: US841687

    Filing Date: 1992-02-26

    Applicant: Charles P. Ryan

    Inventor: Charles P. Ryan

    Abstract: In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first-in, first-out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same, indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address. The cache miss prediction mechanism is only selectively enabled during the cache "in-rush" following a process change, to increase the recovery rate; thereafter, it is disabled, based upon a timer timing out or a hit ratio threshold being reached, so that normal procedures allow the hit ratio to stabilize at a higher percentage than if the cache miss prediction mechanism were operated continuously.

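    The selective-enable policy reduces to a small piece of control logic: turn the predictor on at a process change, and turn it off again when a timer runs out or the hit ratio climbs past a threshold. The sketch below shows that policy; the timer length, the 95% threshold and the field names are assumptions for illustration only.

        /* Sketch of the enable/disable control around the cache "in-rush".        */
        #include <stdbool.h>
        #include <stdio.h>

        typedef struct {
            bool predictor_on;
            long timer;              /* cycles remaining before forced disable     */
            long hits, accesses;
        } predictor_ctl;

        static void on_process_change(predictor_ctl *c)
        {
            c->predictor_on = true;                /* enable during the in-rush    */
            c->timer = 100000;                     /* assumed time-out             */
            c->hits = c->accesses = 0;
        }

        static void on_cache_access(predictor_ctl *c, bool hit)
        {
            c->accesses++;
            if (hit) c->hits++;
            if (!c->predictor_on)
                return;
            bool timed_out = --c->timer <= 0;
            bool warmed_up = c->accesses >= 1000 &&
                             (double)c->hits / c->accesses >= 0.95;  /* assumed    */
            if (timed_out || warmed_up)
                c->predictor_on = false;           /* let the hit ratio stabilize  */
        }

        int main(void)
        {
            predictor_ctl c = {0};
            on_process_change(&c);
            for (int i = 0; i < 2000; i++)
                on_cache_access(&c, i % 100 != 0); /* ~99% hits after warm-up      */
            printf("predictor %s\n", c.predictor_on ? "enabled" : "disabled");
            return 0;
        }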

    10. Cache miss prediction method and apparatus for use with a paged main memory in a data processing system
    Invention grant (Expired)

    Publication Number: US5450561A

    Publication Date: 1995-09-12

    Application Number: US921825

    Filing Date: 1992-07-29

    Applicant: Charles P. Ryan

    Inventor: Charles P. Ryan

    IPC Classification: G06F12/08 G06F12/02

    CPC Classification: G06F12/0862 G06F2212/6026

    Abstract: In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first-in, first-out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same, indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address. According to the invention, the efficiency of the apparatus operating in an environment incorporating a paged main memory is improved by the addition of logic circuitry which inhibits the prefetch if a page boundary would be encountered.
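
    The page-boundary guard amounts to comparing page numbers before issuing the prefetch. Below is a minimal sketch, assuming 4 KiB pages and 32-bit addresses (both assumptions for illustration, not values taken from the patent).

        /* Sketch of the page-boundary check that inhibits the prefetch.           */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SHIFT 12                       /* assumed 4 KiB pages         */

        static bool same_page(uint32_t a, uint32_t b)
        {
            return (a >> PAGE_SHIFT) == (b >> PAGE_SHIFT);
        }

        /* Returns true if the prefetch of 'predicted' may be issued.              */
        static bool may_prefetch(uint32_t last_miss, uint32_t predicted)
        {
            return same_page(last_miss, predicted); /* inhibit across a boundary   */
        }

        int main(void)
        {
            printf("%d\n", may_prefetch(0x1f80, 0x1fc0)); /* 1: same page          */
            printf("%d\n", may_prefetch(0x1fc0, 0x2000)); /* 0: crosses a boundary */
            return 0;
        }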

    摘要翻译: 在采用高速缓冲存储器特征的数据处理系统中,公开了一种用于实施该方法的方法和示例性专用设备,用于降低被叫操作数的高速缓存未命中率。 最近的高速缓存未命中被存储在先入先出的堆栈中,并且存储的地址被搜索到位移模式。 然后,通过从主存储器预取由预测地址识别的信号,随后采用任何检测到的模式来预测随后的高速缓存未命中。 用于执行该任务的装置优选地用于速度目的是硬连线的,并且包括用于评估未命令堆栈中的各种移位的地址的减法电路和用于确定来自至少两个减法电路的输出是否相同的指示可以 与堆栈中的地址组合以开发预测地址。 根据本发明,在包含分页主存储器的环境中操作的设备的效率得到改善,通过添加用于在遇到页边界时用于禁止预取的逻辑电路。