Edram Macro Disablement in Cache Memory
    11.
    Invention Application (Expired)

    Publication No.: US20110320862A1

    Publication Date: 2011-12-29

    Application No.: US12822367

    Filing Date: 2010-06-24

    IPC Classification: G06F11/27 G06F11/20 G06F11/00

    Abstract: Embedded dynamic random access memory (EDRAM) macro disablement in a cache memory includes isolating an EDRAM macro of a cache memory bank, the cache memory bank being divided into at least three rows of a plurality of EDRAM macros, the EDRAM macro being associated with one of the at least three rows; iteratively testing each line of the EDRAM macro, the testing including attempting at least one write operation at each line of the EDRAM macro; determining if an error occurred during the testing; and disabling write operations for an entire row of EDRAM macros associated with the EDRAM macro based on the determining.

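    The abstract describes a simple flow: isolate a macro, write-test each of its lines, and fence off the macro's whole row if anything fails. A minimal C sketch of that flow follows; the helper names (isolate_macro, write_line, disable_row_writes) and the macro geometry are illustrative assumptions, not the patented hardware interface.

        #include <stdbool.h>
        #include <stdint.h>

        #define LINES_PER_MACRO 256   /* assumed macro geometry */

        /* Hypothetical hardware hooks; names are illustrative only. */
        bool isolate_macro(int row, int macro);
        bool write_line(int row, int macro, int line, uint64_t pattern);
        void disable_row_writes(int row);

        /* Test one EDRAM macro; on any error, disable writes for its whole row. */
        void test_and_disable(int row, int macro)
        {
            bool error = !isolate_macro(row, macro);

            /* Iteratively test each line with at least one write operation. */
            for (int line = 0; !error && line < LINES_PER_MACRO; line++) {
                if (!write_line(row, macro, line, 0xA5A5A5A5A5A5A5A5ULL))
                    error = true;
            }

            /* Disable write operations for the entire row of EDRAM macros. */
            if (error)
                disable_row_writes(row);
        }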

    Method and apparatus for parallel and serial data transfer
    12.
    Granted Patent (In Force)

    Publication No.: US08122297B2

    Publication Date: 2012-02-21

    Application No.: US11874232

    Filing Date: 2007-10-18

    IPC Classification: G06F11/00

    CPC Classification: G06F11/10

    Abstract: A method and apparatus are disclosed for performing maintenance operations in a system using address, data, and controls which are transported through the system, allowing parallel and serial operations to co-exist without the parallel operations being slowed down by the serial ones. It also provides for the use of common shifters, engines, and protocols, as well as efficient conversion of ECC to parity and parity to ECC as needed in the system. The invention also provides for error detection and isolation, both locally and in the reported status. The invention provides for large maintenance address and data spaces (typically 64 bits of address and 64 bits of data per address supported).

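    The abstract mentions converting ECC-protected data to parity protection and back as it moves through the system. As a rough illustration only (the patent does not specify this scheme), the sketch below generates per-byte odd parity for a 64-bit doubleword that is assumed to have already been checked and corrected by its ECC.

        #include <stdint.h>

        /* Odd parity for one byte: returns 1 when the byte has an even number
         * of 1 bits, so the byte plus its parity bit always carries an odd count. */
        static uint8_t odd_parity(uint8_t b)
        {
            b ^= b >> 4;
            b ^= b >> 2;
            b ^= b >> 1;
            return (uint8_t)(~b & 1u);
        }

        /* Per-byte parity bits for a doubleword already corrected via its ECC. */
        uint8_t parity_bits_for_doubleword(uint64_t data)
        {
            uint8_t bits = 0;
            for (int i = 0; i < 8; i++)
                bits |= (uint8_t)(odd_parity((uint8_t)(data >> (8 * i))) << i);
            return bits;
        }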

    METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR DATA BUFFERS PARTITIONED FROM A CACHE ARRAY
    13.
    Invention Application (In Force)

    Publication No.: US20090240891A1

    Publication Date: 2009-09-24

    Application No.: US12051244

    Filing Date: 2008-03-19

    IPC Classification: G06F12/00

    CPC Classification: G06F12/126 G06F2212/2515

    Abstract: Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method, in a processor, for providing data buffers partitioned from a cache array, the method including clearing cache directories associated with the processor to an initial state, obtaining a selected directory state from a control register preloaded by the service processor, and, in response to the control register including the desired cache state, sending load commands with an address and data, loading cache lines and cache line directory entries into the cache, and storing the specified data in the corresponding cache line.

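    The claimed method is a fixed sequence: clear the cache directories, read the desired directory state from a control register preloaded by the service processor, then issue load commands that install cache lines, their directory entries, and the specified data. A hedged C sketch of that sequence follows; every function name and the 256-byte line size are assumptions made for illustration.

        #include <stdbool.h>
        #include <stdint.h>

        #define LINE_BYTES 256   /* assumed cache line size */

        /* Hypothetical interfaces standing in for the cache hardware and the
         * service-processor-preloaded control register. */
        void     clear_cache_directories(void);
        uint32_t read_control_register(void);
        bool     control_register_has_state(uint32_t cr);
        void     install_cache_line(uint64_t addr, const uint8_t *data,
                                    uint32_t directory_state);

        /* Carve data buffers out of the cache array at a fixed address range. */
        void init_partitioned_buffers(uint64_t base, int lines,
                                      const uint8_t line_data[][LINE_BYTES])
        {
            clear_cache_directories();              /* directories to initial state */

            uint32_t cr = read_control_register();  /* selected directory state */
            if (!control_register_has_state(cr))
                return;

            /* Load commands: install each cache line, its directory entry,
             * and store the specified data in the corresponding line. */
            for (int i = 0; i < lines; i++)
                install_cache_line(base + (uint64_t)i * LINE_BYTES,
                                   line_data[i], cr);
        }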

    Disowning cache entries on aging out of the entry
    14.
    Invention Application (Expired)

    Publication No.: US20070174554A1

    Publication Date: 2007-07-26

    Application No.: US11339196

    Filing Date: 2006-01-25

    IPC Classification: G06F12/00

    Abstract: Caching in which portions of data are stored in slower main memory and are transferred to faster memory located between one or more processors and the main memory. The cache is such that an individual cache system must communicate with other associated cache systems, or check with such cache systems, to determine whether they contain a copy of a given cached location prior to or upon modification or appropriation of data at that location. The cache further includes provisions for determining when the data stored in a particular memory location may be replaced.

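    The key constraint in the abstract is that a cache must check its associated caches for a copy of a location before (or upon) modifying it. The short C sketch below shows that cross-check; the peer-query functions and the fixed cache count are assumptions, and the protocol in the patent is considerably richer.

        #include <stdbool.h>
        #include <stdint.h>

        #define NUM_CACHES 4   /* assumed number of associated cache systems */

        /* Hypothetical peer-cache queries; illustrative only. */
        bool peer_holds_copy(int cache_id, uint64_t addr);
        void peer_invalidate(int cache_id, uint64_t addr);

        /* Before cache `self` modifies `addr`, ask every associated cache
         * whether it holds a copy and invalidate any copies found. */
        void acquire_for_modification(int self, uint64_t addr)
        {
            for (int c = 0; c < NUM_CACHES; c++) {
                if (c == self)
                    continue;
                if (peer_holds_copy(c, addr))
                    peer_invalidate(c, addr);
            }
        }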

    Bus protocol for a switchless distributed shared memory computer system
    15.
    Granted Patent (Expired)

    Publication No.: US06988173B2

    Publication Date: 2006-01-17

    Application No.: US10435878

    Filing Date: 2003-05-12

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0831 G06F12/0813

    Abstract: A bus protocol is disclosed for a symmetric multiprocessing computer system consisting of a plurality of nodes, each of which contains a multitude of processors, I/O devices, main memory and a system controller comprising an integrated switch with a top-level cache. The nodes are interconnected by a dual concentric ring topology. The bus protocol is used to exchange snoop requests and addresses, data, coherency information and operational status between nodes in a manner that allows partial coherency results to be passed in parallel with a snoop request and address as an operation is forwarded along each ring. Each node combines its own coherency results with the partial coherency results it received prior to forwarding the snoop request, address and updated partial coherency results to the next node on the ring. The protocol allows each node in the system to see the final coherency results without requiring the requesting node to broadcast these results to all the other nodes in the system. The bus protocol also allows data to be returned on either of the two rings, with the ring selection determined by the relative placement of the source and destination nodes on each ring, in order to control latency and data bus utilization.

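    The distinctive step in the protocol is that partial coherency results ride along with the snoop as it is forwarded around each ring, and every node folds in its own result before passing the message on. The C sketch below shows one way that accumulate-and-forward step could look; the response encoding, the merge rule, and the function names are assumptions for illustration.

        #include <stdint.h>

        /* Assumed coherency response encoding, weakest to strongest. */
        typedef enum { RESP_MISS = 0, RESP_SHARED = 1, RESP_EXCLUSIVE = 2 } coh_resp_t;

        typedef struct {
            uint64_t   addr;        /* snoop address */
            int        requester;   /* node that launched the operation */
            coh_resp_t partial;     /* coherency results gathered so far */
        } ring_snoop_t;

        /* Hypothetical hooks for the local lookup and ring forwarding. */
        coh_resp_t local_snoop(uint64_t addr);
        void       forward_on_ring(const ring_snoop_t *msg);

        /* Combine this node's own result with the partial result received,
         * then pass the updated snoop to the next node on the ring. */
        void handle_ring_snoop(ring_snoop_t msg)
        {
            coh_resp_t mine = local_snoop(msg.addr);
            if (mine > msg.partial)          /* keep the strongest response */
                msg.partial = mine;
            forward_on_ring(&msg);
        }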

    False exception for cancelled delayed requests
    16.
    Granted Patent (Expired)

    Publication No.: US06219758B1

    Publication Date: 2001-04-17

    Application No.: US09047579

    Filing Date: 1998-03-25

    IPC Classification: G06F12/00

    CPC Classification: G06F12/1054

    Abstract: A central processor uses virtual addresses to access data via cache logic including a DAT and ART, and the cache logic accesses data in the hierarchical storage subsystem using absolute addresses; part of the first level of the cache memory includes a translator from virtual or real addresses to absolute addresses. When a data fetch is requested and the requested data are not resident in the first level of cache, the request for data is delayed and may be forwarded to a lower level of said hierarchical memory, and a delayed request may result in cancellation of any process during a delayed request that has the ability to send back an exception. A delayed request may be rescinded if the central processor has reached an interruptible stage in its pipeline logic, at which point a false exception is forced, clearing all the wait states while the central processor ignores the false exception. Forcing of an exception occurs during dynamic address translation (DAT) or during access register translation (ART). A cancellation signal for a data request to the storage subsystem is settable by the first hierarchical level of cache logic. A false exception signal to the first level cache is settable by the storage subsystem logic.

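    The mechanism hinges on one decision: if a delayed fetch is still outstanding when the central processor reaches an interruptible stage, the request is rescinded and a false exception is forced so the wait states clear, and the processor then ignores that exception. A minimal C sketch of that decision follows; the state flags and function names are assumptions, not the pipeline's actual signals.

        #include <stdbool.h>

        /* Hypothetical pipeline/cache-control state; illustrative only. */
        struct cpu_state {
            bool delayed_request_pending;  /* fetch forwarded to lower-level storage */
            bool at_interruptible_stage;   /* pipeline reached an interruptible point */
            bool waiting_on_data;          /* wait states held for the delayed fetch */
        };

        void cancel_storage_request(void); /* cancellation signal to the storage subsystem */
        void force_false_exception(void);  /* clears the wait states; the CPU ignores it */

        void check_delayed_request(struct cpu_state *cpu)
        {
            if (cpu->delayed_request_pending && cpu->at_interruptible_stage) {
                cancel_storage_request();          /* rescind the delayed request */
                force_false_exception();           /* flush the wait states */
                cpu->waiting_on_data = false;
                cpu->delayed_request_pending = false;
            }
        }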

    Computer system deadlock request resolution using timed pulses
    17.
    Granted Patent (Expired)

    Publication No.: US6151655A

    Publication Date: 2000-11-21

    Application No.: US70432

    Filing Date: 1998-04-30

    CPC Classification: G06F9/524

    Abstract: Disclosed is a hardware mechanism for detecting and avoiding potential deadlocks among requestors in a multiprocessor system consisting of a plurality of CPs and I/O adapters connected to one or more shared storage controllers (SCs). Requests to each storage controller originate from external sources such as the CPs, the I/O adapters, and the other SC, as well as from internal sources, such as the hardware facilities used to process fetches and stores between the SC and main memory. All requests must be granted priority before beginning to execute, using a ranked priority order scheme. Specific sequences of requests may cause deadlocks, either due to high-priority requests using priority cycles and locking out low-priority requests, or as a result of requests of any priority level busying resources needed for the completion of other requests. The deadlock resolution mechanism described here monitors the length of time a request has been valid in the storage controller without completing, by checking the request register valid bits and utilizing a timed pulse, which is a subset of the pulse used to detect hangs within the SC. If the valid bit for a request register is on and two timed pulses are received, an internal hang detect latch is set. If the valid bit is reset at any time, the detection logic and the internal hang detect latch are reset. When the internal hang detect latch is set, requests in progress are allowed to complete, and new requests are held in an inactive state, until the request which detected the internal hang is able to complete.

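    The detection logic reduces to a small per-request state machine: a valid bit sampled on a timed pulse, a pulse counter, and an internal hang detect latch that blocks new requests until the hung one completes. A C sketch of that state machine follows; the two-pulse threshold comes from the abstract, while the structure and names are assumptions.

        #include <stdbool.h>

        /* Per-request hang-detect state inside the storage controller (SC). */
        struct request_slot {
            bool valid;          /* request register valid bit */
            int  pulses_seen;    /* timed pulses observed while valid */
            bool internal_hang;  /* internal hang detect latch */
        };

        /* Called once per timed pulse (a subset of the SC hang-detect pulse). */
        void on_timed_pulse(struct request_slot *r)
        {
            if (!r->valid) {
                r->pulses_seen = 0;      /* valid bit reset: clear detection logic */
                r->internal_hang = false;
                return;
            }
            if (++r->pulses_seen >= 2)   /* two pulses while still valid */
                r->internal_hang = true;
        }

        /* While any internal hang latch is set, requests in progress may finish,
         * but new requests are held inactive until the hung request completes. */
        bool accept_new_request(const struct request_slot *slots, int n)
        {
            for (int i = 0; i < n; i++)
                if (slots[i].internal_hang)
                    return false;
            return true;
        }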

    System serialization with early release of individual processor
    18.
    Granted Patent (Expired)

    Publication No.: US6119219A

    Publication Date: 2000-09-12

    Application No.: US70595

    Filing Date: 1998-04-30

    Abstract: A pipelined multiprocessor system for ESA/390 operations which executes a simple instruction set in a hardware-controlled execution unit and executes a complex instruction set in a milli-mode architected state with a millicode sequence of simple instructions in the hardware-controlled execution unit, comprising a plurality of CPU processors, each of which is part of said multiprocessing system and capable of generating and responding to a quiesce request, and controls for system operations which allow the CPUs in the ESA/390 system to process the local buffer update portion of IPTE and SSKE operations without waiting for all other processors to reach an interruptible point, and then to continue program execution with minor temporary restrictions on operations until the IPTE or SSKE operation is globally completed. In addition, Licensed Internal Code (LIC) sequences are defined which allow these IPTE and SSKE operations to co-exist with other operations that require conventional system quiescing (i.e. all processors must pause together), and allow for CPU retry actions on any of the CPUs in the system at any point in the operation.

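    The point of the controls is that the CPU issuing an IPTE or SSKE updates its own buffers immediately and keeps running under a minor temporary restriction, rather than waiting for every CPU to reach an interruptible point; the restriction lifts once the operation completes globally. The C sketch below is only a rough model of that idea under assumed names; the real sequence is implemented in ESA/390 millicode and hardware controls.

        #include <stdatomic.h>
        #include <stdbool.h>

        /* Set while an IPTE/SSKE-style operation has not yet completed globally;
         * other logic consults it to apply the minor temporary restrictions. */
        atomic_bool op_globally_pending;

        void purge_local_tlb_entry(unsigned long vaddr);  /* local buffer update */
        void broadcast_invalidate(unsigned long vaddr);   /* sent to the other CPUs */

        /* Issuing CPU: do the local portion up front and continue executing
         * without waiting for all other processors to pause. */
        void issue_ipte(unsigned long vaddr)
        {
            atomic_store(&op_globally_pending, true);
            purge_local_tlb_entry(vaddr);
            broadcast_invalidate(vaddr);   /* completes asynchronously */
        }

        /* Called once every CPU has honored the invalidate: lift the restriction. */
        void ipte_globally_complete(void)
        {
            atomic_store(&op_globally_pending, false);
        }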

Method of resolving deadlocks between competing requests in a multiprocessor using global hang pulse logic
    19.
    Granted Patent (Expired)

    Publication No.: US6073182A

    Publication Date: 2000-06-06

    Application No.: US70664

    Filing Date: 1998-04-30

    CPC Classification: G06F13/1663

    Abstract: A method using a global hang pulse logic mechanism detects and resolves deadlocks among requesters to the storage controller of a symmetric multiprocessor system in which multiple central processors and I/O adapters are connected to one or more shared storage controllers. Deadlocks may occur in such a system due to specific sequences of requests, either because high-priority requests use priority cycles and lock out low-priority requests, or because requests of any priority level make resources needed for the completion of other requests too busy. The mechanism logic monitors the length of time a request has been valid in the storage controller without completing, by checking the request register valid bits and by utilizing a timed pulse which is a subset of the pulse used to detect hangs within the storage controller. If the valid bit for a request register remains on and two timed pulses are received, an internal hang detect latch is set; if the valid bit is reset at any time, the detection logic and the internal hang detect latch are reset. Logic which allows requests in progress to complete and holds new requests in an inactive state is activated when the internal hang latch is set, and remains active until the request which detected the internal hang is able to complete, thus resetting the internal hang detect latch.


    Method, system and computer program product for data buffers partitioned from a cache array
    20.
    Granted Patent (In Force)

    Publication No.: US08250305B2

    Publication Date: 2012-08-21

    Application No.: US12051244

    Filing Date: 2008-03-19

    IPC Classification: G06F13/00 G06F13/28

    CPC Classification: G06F12/126 G06F2212/2515

    Abstract: Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method, in a processor, for providing data buffers partitioned from a cache array, the method including clearing cache directories associated with the processor to an initial state, obtaining a selected directory state from a control register preloaded by the service processor, and, in response to the control register including the desired cache state, sending load commands with an address and data, loading cache lines and cache line directory entries into the cache, and storing the specified data in the corresponding cache line.
