System utilizing mastering and snooping circuitry that operate in response to clock signals having different frequencies generated by the communication controller
    61.
    Granted invention patent (expired)

    Publication No.: US5958011A

    Publication Date: 1999-09-28

    Application No.: US829579

    Filing Date: 1997-03-31

    IPC Classes: G06F11/30 G06F13/14

    CPC Classes: G06F13/423

    Abstract: A data processing system and method of communicating data in a data processing system are described. The data processing system includes a communication network to which a plurality of devices are coupled. At least one device among the plurality of devices coupled to the communication network includes mastering circuitry and snooping circuitry. According to the method, a first timing signal having a first frequency and a second timing signal having a second frequency different from the first frequency are generated. Communication transactions on the communication network are initiated utilizing the mastering circuitry, which operates in response to the first timing signal, and are monitored utilizing the snooping circuitry, which operates in response to the second timing signal.

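    Illustrative sketch (not from the patent): the abstract describes a device whose bus-mastering logic and snooping logic run off separately generated timing signals with different frequencies. A minimal Python model of that idea, with hypothetical names (BusDevice, master_hz, snoop_hz) and frequencies chosen only for the demonstration, treats the two clock domains as independent tick periods:

        # Hypothetical sketch: a device whose mastering logic and snooping logic
        # fire on independent clocks of different frequencies, as the abstract
        # describes. All names and frequencies here are illustrative assumptions.

        class BusDevice:
            def __init__(self, master_hz, snoop_hz):
                self.master_period = 1.0 / master_hz   # first timing signal
                self.snoop_period = 1.0 / snoop_hz     # second, different frequency
                self.next_master_tick = 0.0
                self.next_snoop_tick = 0.0

            def run(self, duration):
                """Advance simulated time, firing the mastering and snooping
                logic whenever their respective clocks tick."""
                while True:
                    t = min(self.next_master_tick, self.next_snoop_tick)
                    if t >= duration:
                        break
                    if t == self.next_master_tick:
                        self.initiate_transaction(t)
                        self.next_master_tick += self.master_period
                    if t == self.next_snoop_tick:
                        self.snoop_bus(t)
                        self.next_snoop_tick += self.snoop_period

            def initiate_transaction(self, t):
                print(f"{t:.3f}s: mastering circuitry initiates a transaction")

            def snoop_bus(self, t):
                print(f"{t:.3f}s: snooping circuitry samples the network")

        if __name__ == "__main__":
            BusDevice(master_hz=4, snoop_hz=3).run(duration=1.0)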

Method and system for front-end gathering of store instructions within a data-processing system
    62.
    Granted invention patent (expired)

    Publication No.: US5940611A

    Publication Date: 1999-08-17

    Application No.: US837519

    Filing Date: 1997-04-14

    IPC Classes: G06F9/312 G06F9/38 G06F9/30

    CPC Classes: G06F9/30043 G06F9/3824

    Abstract: A method and system for front-end gathering of store instructions within a processor is disclosed. In accordance with the method and system of the present invention, a store queue within a data-processing system includes a front-end queue and a back-end queue. Multiple entries are provided in the back-end queue, and each entry includes an address field, a byte-count field, and a data field. A determination is first made as to whether or not a data field of a first entry of the front-end queue is filled completely. In response to a determination that the data field of the first entry of the front-end queue is not filled completely, another determination is made as to whether or not an address for a store instruction in a subsequent second entry is equal to an address for the store instruction in the first entry plus a byte count in the first entry. In response to a determination that the address for the store instruction in the subsequent second entry is equal to the address for the store instruction in the first entry plus the byte count in the first entry, the store instruction in the subsequent second entry is collapsed into the store instruction in the first entry.

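    Illustrative sketch (not from the patent): the collapse test in the abstract is simple arithmetic on the first entry's address and byte count, gated by whether its data field is already full. The layout below (an 8-byte data field and a StoreEntry record) is an assumption made only so the check is runnable:

        # Hypothetical sketch of front-end store gathering. DATA_FIELD_BYTES and
        # the StoreEntry layout are illustrative assumptions, not the patented
        # queue design.

        from dataclasses import dataclass

        DATA_FIELD_BYTES = 8   # assumed width of an entry's data field

        @dataclass
        class StoreEntry:
            address: int
            byte_count: int
            data: bytes

        def try_gather(first: StoreEntry, second: StoreEntry) -> bool:
            """Collapse `second` into `first` when the abstract's conditions hold:
            the first entry's data field is not completely filled and the second
            store begins at first.address + first.byte_count."""
            if first.byte_count >= DATA_FIELD_BYTES:
                return False                    # data field already full
            if second.address != first.address + first.byte_count:
                return False                    # stores are not contiguous
            if first.byte_count + second.byte_count > DATA_FIELD_BYTES:
                return False                    # gathered data would overflow
            first.data += second.data           # collapse the two stores
            first.byte_count += second.byte_count
            return True

        if __name__ == "__main__":
            a = StoreEntry(address=0x1000, byte_count=4, data=b"\x01\x02\x03\x04")
            b = StoreEntry(address=0x1004, byte_count=4, data=b"\x05\x06\x07\x08")
            print(try_gather(a, b), hex(a.address), a.byte_count)   # True 0x1000 8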

Method and system for controlling access to a shared resource that each requestor is concurrently assigned at least two pseudo-random priority weights
    63.
    Granted invention patent (expired)

    Publication No.: US5931924A

    Publication Date: 1999-08-03

    Application No.: US839437

    Filing Date: 1997-04-14

    CPC Classes: G06F13/364

    Abstract: A method and system for controlling access to a shared resource in a data processing system are described. According to the method, a number of requests for access to the resource are generated by a number of requesters that share the resource. Each of the requesters is associated with a priority weight that indicates a probability that the associated requester will be assigned a highest current priority. Each requester is then assigned a current priority that is determined substantially randomly with respect to previous priorities of the requesters. In response to the current priorities of the requesters, a request for access to the resource is granted. In one embodiment, a requester corresponding to a granted request is signaled that its request has been granted, and a requester corresponding to a rejected request is signaled that its request was not granted.

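    Illustrative sketch (not from the patent): one way to picture the arbitration is to draw a current priority for every requester substantially at random, biased by its assigned weight, and grant the pending request with the highest draw. The use of random.random() and the scaling rule below are assumptions for illustration only:

        # Hypothetical sketch of weighted pseudo-random arbitration; the patent
        # describes a hardware scheme, and this scaling rule is only one way to
        # bias the random draw.

        import random

        def arbitrate(requests, weights):
            """requests: requester id -> True if it currently wants the resource
            weights:  requester id -> priority weight (bias toward winning)
            Returns the id of the granted requester, or None if none request."""
            current_priority = {r: random.random() * weights[r] for r in requests}
            pending = [r for r, wants in requests.items() if wants]
            if not pending:
                return None
            return max(pending, key=lambda r: current_priority[r])

        if __name__ == "__main__":
            reqs = {"cpu0": True, "cpu1": True, "dma": True}
            wts = {"cpu0": 2.0, "cpu1": 1.0, "dma": 1.0}   # cpu0 should win most often
            grants = [arbitrate(reqs, wts) for _ in range(10000)]
            print({r: grants.count(r) for r in reqs})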

Method and system for allocating data among cache memories within a symmetric multiprocessor data-processing system
    64.
    Granted invention patent (expired)

    Publication No.: US5893163A

    Publication Date: 1999-04-06

    Application No.: US992135

    Filing Date: 1997-12-17

    IPC Classes: G06F12/08

    CPC Classes: G06F12/084 G06F12/0811

    Abstract: A method and system for allocating data among cache memories within a symmetric multiprocessor data-processing system are disclosed. The symmetric multiprocessor data-processing system includes a system memory and multiple processing units, wherein each of the processing units has a cache memory. The system memory is divided into a number of segments, wherein the number of segments is equal to the total number of cache memories. Each of these segments is represented by one of the cache memories such that a cache memory is responsible to cache data from its associated segment within the system memory.

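    Illustrative sketch (not from the patent): the allocation reduces to splitting the physical address range into as many equal segments as there are caches and making cache i responsible for segment i. The memory size and cache count below are made-up parameters:

        # Minimal sketch of the segment-to-cache mapping described in the
        # abstract; MEMORY_BYTES and NUM_CACHES are illustrative values.

        MEMORY_BYTES = 1 << 30                        # assume 1 GiB of system memory
        NUM_CACHES = 4                                # one cache per processing unit
        SEGMENT_BYTES = MEMORY_BYTES // NUM_CACHES    # number of segments == number of caches

        def responsible_cache(address: int) -> int:
            """Return the index of the cache that caches data for this address;
            each cache covers exactly one contiguous segment of system memory."""
            if not 0 <= address < MEMORY_BYTES:
                raise ValueError("address outside system memory")
            return address // SEGMENT_BYTES

        if __name__ == "__main__":
            for addr in (0x0000_0000, 0x1000_0000, 0x2FFF_FFFF, 0x3000_0000):
                print(hex(addr), "-> cache", responsible_cache(addr))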

Data processing system having demand based write through cache with enforced ordering
    65.
    Granted invention patent (expired)

    Publication No.: US5796979A

    Publication Date: 1998-08-18

    Application No.: US730994

    Filing Date: 1996-10-16

    IPC Classes: G06F12/08 G06F13/12

    Abstract: A data processing system includes a processor, a system memory, one or more input/output channel controllers (IOCC), and a system bus connecting the processor, the memory and the IOCCs together for communicating instructions, address and data between the various elements of a system. The IOCC includes a paged cache storage having a number of lines wherein each line of the page may be, for example, 32 bytes. Each page in the cache also has several attribute bits for that page, including the so-called WIM attribute bits. The W bit is for controlling write through operations; the I bit controls cache inhibit; and the M bit controls memory coherency. Since the IOCC is unaware of these page table attribute bits for the cache lines being DMAed to system memory, the IOCC must maintain memory consistency and cache coherency without sacrificing performance. For DMA writes of data to system memory, new cache attributes called global, cachable and demand based write through are created. Individual writes within a cache line are gathered by the IOCC and only written to system memory when the I/O bus master accesses a different cache line or relinquishes the I/O bus.

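    Illustrative sketch (not from the patent): the gathering behavior at the end of the abstract buffers writes that fall within one cache line and pushes them to system memory only when the I/O bus master touches a different line or releases the bus. The 32-byte line size matches the abstract; the class and method names are hypothetical:

        # Rough sketch of IOCC-style write gathering. The 32-byte cache line
        # comes from the abstract; everything else is an illustrative assumption.

        CACHE_LINE_BYTES = 32

        class WriteGatherBuffer:
            def __init__(self, system_memory: bytearray):
                self.memory = system_memory
                self.line_addr = None    # cache line currently being gathered
                self.pending = {}        # offset within the line -> byte value

            def dma_write(self, address: int, data: bytes):
                line = address & ~(CACHE_LINE_BYTES - 1)
                if self.line_addr is not None and line != self.line_addr:
                    self.flush()         # bus master moved to a different line
                self.line_addr = line
                for i, b in enumerate(data):
                    self.pending[(address + i) - line] = b

            def bus_released(self):
                self.flush()             # I/O bus master relinquished the bus

            def flush(self):
                for offset, b in self.pending.items():
                    self.memory[self.line_addr + offset] = b
                self.pending.clear()
                self.line_addr = None

        if __name__ == "__main__":
            mem = bytearray(128)
            iocc = WriteGatherBuffer(mem)
            iocc.dma_write(0x20, b"\xAA\xBB")   # gathered, not yet in memory
            iocc.dma_write(0x24, b"\xCC")       # same line, still gathered
            iocc.dma_write(0x40, b"\xDD")       # new line, so the old one is flushed
            iocc.bus_released()                 # flush the final line as well
            print(mem[0x20:0x28].hex(), mem[0x40:0x41].hex())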

Low latency error reporting for high performance bus
    66.
    Granted invention patent (expired)

    Publication No.: US5771247A

    Publication Date: 1998-06-23

    Application No.: US611439

    Filing Date: 1996-03-04

    IPC Classes: G06F11/10

    Abstract: A system and method are provided that use a determination of bad data parity and the state of an error signal (Derr--) as a functional signal indicating a specific type of error in a particular system component. If the Derr-- signal is active, the parity error recognized by the CPU was caused by a correctable condition in a data providing device. In this instance, the processor will read the corrected data from a buffer without reissuing a fetch request. When the CPU finds a parity error but Derr-- is not active, a more serious fault condition is identified (bus error or uncorrectable multibit error) requiring a machine level interrupt, or the like. And, when no parity error is found by the CPU and Derr-- is not active, then the data is known to be valid and the parity/ECC latency is eliminated, thereby saving processing cycle time.

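    Illustrative sketch (not from the patent): the abstract enumerates three of the four combinations of "CPU saw a parity error" and "Derr-- asserted", and each combination maps to a different response. The action labels below are hypothetical:

        # Sketch of the error-classification logic implied by the abstract; the
        # returned strings are made-up labels, not terms from the patent.

        def classify(parity_error: bool, derr_active: bool) -> str:
            if parity_error:
                if derr_active:
                    # Correctable condition in the data-providing device: the CPU
                    # re-reads corrected data from a buffer without a new fetch.
                    return "read corrected data from buffer"
                # Derr-- inactive: bus error or uncorrectable multibit error.
                return "raise machine-level interrupt"
            if not derr_active:
                # No parity error and Derr-- inactive: data is valid, and the
                # parity/ECC latency is eliminated.
                return "use data immediately"
            # The abstract does not spell out this fourth combination.
            return "unspecified in the abstract"

        if __name__ == "__main__":
            for pe in (False, True):
                for derr in (False, True):
                    print(f"parity_error={pe}, derr_active={derr} -> {classify(pe, derr)}")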

Event-based dynamic resource provisioning
    67.
    Granted invention patent (in force)

    Publication No.: US08977752B2

    Publication Date: 2015-03-10

    Application No.: US12424893

    Filing Date: 2009-04-16

    IPC Classes: G06F15/173 G06F15/16 G06F9/50

    CPC Classes: G06F9/5011 G06F9/5061

    Abstract: Disclosed are a method, a system and a computer program product for automatically allocating and de-allocating resources for jobs executed or processed by one or more supercomputer systems. In one or more embodiments, a supercomputing system can process multiple jobs with respective supercomputing resources. A global resource manager can automatically allocate additional resources to a first job and de-allocate resources from a second job. In one or more embodiments, the global resource manager can provide the de-allocated resources to the first job as additional supercomputing resources. In one or more embodiments, the first job can use the additional supercomputing resources to perform data analysis at a higher resolution, and the additional resources can compensate for an amount of time the higher resolution analysis would take using originally allocated supercomputing resources.

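    Illustrative sketch (not from the patent): the rebalancing step moves resources away from one job and hands them to another as additional supercomputing resources. The toy global resource manager below, with made-up job names and node counts, only shows that bookkeeping:

        # Toy sketch of a global resource manager shifting nodes between two jobs.
        # Job names, node counts, and the triggering event are illustrative.

        class GlobalResourceManager:
            def __init__(self):
                self.allocation = {}     # job name -> number of allocated nodes

            def submit(self, job: str, nodes: int):
                self.allocation[job] = nodes

            def rebalance(self, donor: str, recipient: str, nodes: int) -> int:
                """De-allocate up to `nodes` from the donor job and grant them to
                the recipient job as additional resources."""
                moved = min(nodes, self.allocation.get(donor, 0))
                self.allocation[donor] = self.allocation.get(donor, 0) - moved
                self.allocation[recipient] = self.allocation.get(recipient, 0) + moved
                return moved

        if __name__ == "__main__":
            grm = GlobalResourceManager()
            grm.submit("climate_model", 512)     # first job: wants higher resolution
            grm.submit("batch_analysis", 512)    # second job: can give up nodes
            grm.rebalance(donor="batch_analysis", recipient="climate_model", nodes=256)
            print(grm.allocation)                # {'climate_model': 768, 'batch_analysis': 256}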

Data processing system with backplane and processor books configurable to support both technical and commercial workloads
    70.
    Granted invention patent (expired)

    Publication No.: US07526631B2

    Publication Date: 2009-04-28

    Application No.: US10425421

    Filing Date: 2003-04-28

    IPC Classes: G06F15/00 G06F15/76

    CPC Classes: G06F15/8007

    Abstract: A processor book designed to support both commercial workloads and technical workloads based on a dynamic or static mechanism of reconfiguring the external wiring interconnect. The processor book is configured as a building block for commercial workload processing systems with external connector buses (ECBs). The processor book is also provided with routing logic to enable the ECBs to be utilized for either book-to-book routing or routing within the same processor book. A table-specific wiring scheme is provided for coupling the ECBs running off the chips of one MCM to the chips of the second MCM on the processor book so that the chips of the first MCM are connected directly to the chips of a second MCM that is logically furthest away and vice versa. Once the wiring of the ECBs is completed according to the wiring scheme, the operational and functional characteristics reflect those of a processor book configured for technical workloads.

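    Illustrative sketch (not from the patent): one plausible reading of the "logically furthest away" rule is a reversed pairing between the chips of the two MCMs. The table generated below is a hypothetical illustration of such a pairing, not the patent's actual wiring table, and the chip count is assumed:

        # Hypothetical "furthest away" ECB wiring table between two MCMs on a
        # processor book: chip i on MCM0 pairs with the chip of MCM1 that is
        # logically furthest from it. The chip count per MCM is assumed.

        CHIPS_PER_MCM = 4

        def wiring_table(chips_per_mcm: int):
            """Return (MCM0 chip, MCM1 chip) pairings in which each chip connects
            to the logically furthest chip on the other module."""
            return [(i, chips_per_mcm - 1 - i) for i in range(chips_per_mcm)]

        if __name__ == "__main__":
            for src, dst in wiring_table(CHIPS_PER_MCM):
                print(f"MCM0 chip {src}  <-- ECB -->  MCM1 chip {dst}")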