Data processing system processor dynamic selection of internal signal tracing
    1.
    Invention Grant
    Data processing system processor dynamic selection of internal signal tracing (In Force)

    Publication Number: US06530076B1

    Publication Date: 2003-03-04

    Application Number: US09472114

    Filing Date: 1999-12-23

    IPC Classification: G06F9/44

    Abstract: A processor (92) contains a Trace RAM (210) for tracing internal processor signals and operands. A first trace mode separately traces microcode instruction execution and cache controller execution. Selectable groups of signals are traced from both the cache controller (256) and the arithmetic (AX) processor (260). A second trace mode selectively traces full operand words that result from microcode instruction (242) execution. Each microcode instruction word (242) has a trace enable bit (244) that when enabled causes the results of that microcode instruction (242) to be recorded in the Trace RAM (210).


    Apparatus for synchronizing multiple processors in a data processing system
    2.
    Invention Grant
    Apparatus for synchronizing multiple processors in a data processing system (In Force)

    Publication Number: US06223228B1

    Publication Date: 2001-04-24

    Application Number: US09156377

    Filing Date: 1998-09-17

    IPC Classification: G06F1/12

    Abstract: Two instructions are provided to synchronize multiple processors (92) in a data processing system (80). A Transmit Sync instruction (TSYNC) transmits a synchronize processor interrupt (276) to all of the active processors (92) in the system (80). Processors (92) wait for receipt of the synchronize signal (278) by executing a Wait for Sync (WSYNC) instruction. Each of the processors waiting for such a signal (278) is activated at the next clock cycle after receipt of the interrupt signal (278). An optional timeout value is provided to protect against hanging a waiting processor (92) that misses the interrupt (278). Whenever the WSYNC instruction is activated by receipt of the interrupt (278), a trace is started to trace a fixed number of events to an internal Trace Cache (58).


    Cache unit with transit block buffer apparatus
    3.
    Invention Grant
    Cache unit with transit block buffer apparatus (Expired)

    Publication Number: US4217640A

    Publication Date: 1980-08-12

    Application Number: US968522

    Filing Date: 1978-12-11

    IPC Classification: G06F12/08 G06F13/00

    CPC Classification: G06F12/0855

    Abstract: A data processing system comprises a data processing unit coupled to a cache unit which couples to a main store. The cache unit includes a cache store organized into a plurality of levels, each for storing a number of blocks of information in the form of data and instructions. Directories associated with the cache store contain addresses and level control information for indicating which blocks of information reside in the cache store. The cache unit further includes control apparatus and a transit block buffer comprising a number of sections each having a plurality of locations for storing read commands and transit block addresses associated therewith. A corresponding number of valid bit storage elements are included, each of which is set to a binary ONE state when a read command and the associated transit block address are loaded into a corresponding one of the buffer locations. Comparison circuits, coupled to the transit block buffer, compare the transit block address of each outstanding read command stored in the transit block buffer section with the address of each read command or write command received from the processing unit. When there is a conflict, the comparison circuits generate an output signal which conditions the control apparatus to hold or stop further processing of the command by the cache unit and the operation of the processing unit. Holding lasts until the valid bit storage element of the location storing the outstanding read command is reset to a binary ZERO, indicating that execution of the read command is completed.


    Equal access to prevent gateword dominance in a multiprocessor write-into-cache environment
    4.
    Invention Grant
    Equal access to prevent gateword dominance in a multiprocessor write-into-cache environment (In Force)

    Publication Number: US06970977B2

    Publication Date: 2005-11-29

    Application Number: US10403703

    Filing Date: 2003-03-31

    IPC Classification: G06F12/08 G06F12/00

    CPC Classification: G06F12/084

    Abstract: In a multiprocessor write-into-cache data processing system that includes a memory; at least first and second shared caches; a system bus coupling the memory and the shared caches; and at least one processor having a private cache coupled, respectively, to each shared cache, a method and apparatus prevent hogging of ownership of a gateword stored in the memory, which governs access to common code/data shared by processes running in the processors. A given processor obtains a read copy of the gateword by performing successive swap operations between the memory and its shared cache, and between its shared cache and its private cache. If the gateword is found to be OPEN, the given processor CLOSEs it, and successive swap operations are performed between the given processor's private cache and shared cache, and between the shared cache and memory, to write the gateword CLOSEd in memory, such that the given processor obtains exclusive access to the governed common code/data. When the given processor completes use of the common code/data, it writes the gateword OPEN in its private cache, and successive swap operations are performed between its private cache and shared cache, and between the shared cache and memory, to write the gateword OPEN in memory.


    High integrity cache directory
    5.
    Invention Grant
    High integrity cache directory (In Force)

    Publication Number: US06898738B2

    Publication Date: 2005-05-24

    Application Number: US09907302

    Filing Date: 2001-07-17

    Abstract: Cache memory, and thus computer system, reliability is increased by duplicating cache tag entries. Each cache tag has a primary entry and a duplicate entry. Then, when cache tags are associatively searched, both the primary and the duplicate entry are compared to the search value. At the same time, they are also parity checked and compared against each other. If a match is made on either the primary entry or the duplicate entry, and that entry does not have a parity error, a cache “hit” is indicated. All single bit cache tag parity errors are detected and compensated for. Almost all multiple bit cache tag parity errors are detected.


    Method and apparatus for exhaustively testing interactions among multiple processors
    6.
    Invention Grant
    Method and apparatus for exhaustively testing interactions among multiple processors (In Force)

    Publication Number: US06249880B1

    Publication Date: 2001-06-19

    Application Number: US09156378

    Filing Date: 1998-09-17

    IPC Classification: H02H3/05

    CPC Classification: G06F11/24 G06F11/2242

    Abstract: Interactions among multiple processors (92) are exhaustively tested. A master processor (92) retrieves test information for a set of tests from a test table (148). It then enters a series of embedded loops, with one loop for each of the tested processors (92). A cycle delay count for each of the tested processors (92) is incremented (152, 162, 172) through a range specified in the test table entry. For each combination of cycle delay count loop indices, a single test is executed (176). In each such test (176), the master processor (92) sets up (182) each of the other processors (92) being tested. This setup (182) specifies the delay count and the code for that processor (92) to execute. When each processor (92) is setup (182), it waits (192) for a synchronize interrupt (278). When all processors (92) have been setup (182), the master processor (92) issues (191) the synchronize interrupt signal (276). Each processor (92) then starts traces (193) and delays (194) the specified number of cycles. After the delay, the processor (92) executes its test code (195).


    Data processing system processor delay instruction
    7.
    Invention Grant
    Data processing system processor delay instruction (In Force)

    Publication Number: US06230263B1

    Publication Date: 2001-05-08

    Application Number: US09156376

    Filing Date: 1998-09-17

    IPC Classification: G06F9/30

    CPC Classification: G06F9/30079

    Abstract: A processor (92) in a data processing system (80) provides a DELAY instruction. Executing the DELAY instruction causes the processor (92) to wait a specified integral number of clock (98) cycles before continuing. The delay is guaranteed to be a linear function, with constant slope, of the specified number of clock cycles. Incrementing the specified delay through a range allows exhaustive testing of interactions among multiple processors.


    Instruction buffer associated with a cache memory unit
    8.
    Invention Grant
    Instruction buffer associated with a cache memory unit (Expired)

    Publication Number: US4521850A

    Publication Date: 1985-06-04

    Application Number: US433569

    Filing Date: 1982-10-04

    IPC Classification: G06F9/38 G06F9/12

    CPC Classification: G06F9/3804

    Abstract: Apparatus and method for providing an improved instruction buffer associated with a cache memory unit. The instruction buffer is utilized to transmit to the control unit of the central processing unit a requested sequence of data groups. In the present invention, the instruction buffer can store two sequences of data groups: the sequence for the procedure currently in execution by the data processing unit and, simultaneously, the data groups of a transfer target, either conditional or unconditional, identified in the sequence currently being executed. In addition, the instruction buffer provides signals, for use by the central processing unit, that define the status of the instruction buffer.


    Data processing system programmable pre-read capability
    9.
    Invention Grant
    Data processing system programmable pre-read capability (Expired)

    Publication Number: US4371927A

    Publication Date: 1983-02-01

    Application Number: US131739

    Filing Date: 1980-03-20

    Abstract: A data processing system includes a cache store to provide an interface with a main storage unit for a central processing unit. The central processing unit includes a microprogram control unit in addition to control circuits for establishing the sequencing of the processing unit during the execution of program instructions. Both the microprogram control unit and control circuits include means for generating pre-read commands to the cache store in conjunction with normal processing operations during the processing of certain types of instructions. In response to pre-read commands, the cache store, during predetermined points of the processing of each such instruction, fetches information which is required by such instruction at a later point in the processing thereof.


    Gate close failure notification for fair gating in a nonuniform memory architecture data processing system
    10.
    Invention Grant
    Gate close failure notification for fair gating in a nonuniform memory architecture data processing system (In Force)

    Publication Number: US06480973B1

    Publication Date: 2002-11-12

    Application Number: US09409456

    Filing Date: 1999-09-30

    IPC Classification: G06F11/00

    CPC Classification: G06F9/526

    Abstract: In a NUMA architecture, processors in the same CPU module as a processor opening a spin gate tend to have preferential access to the spin gate in memory when attempting to close it. This “unfair” memory access to the desired spin gate can starve processors in other CPU modules. The problem is solved by “balking”, or delaying a specified period of time, before attempting to close a spin gate whenever a processor in the same CPU module has just opened the desired spin gate, or a processor in another CPU module is spinning while trying to close it. Each processor detects when it is spinning on a spin gate. It then transmits that information to the processors in other CPU modules, allowing them to balk when opening spin gates.
