Disowning cache entries on aging out of the entry
    1.
    Patent application
    Disowning cache entries on aging out of the entry (Expired)

    Publication No.: US20070174554A1

    Publication Date: 2007-07-26

    Application No.: US11339196

    Filing Date: 2006-01-25

    IPC Classes: G06F12/00

    Abstract: Caching in which portions of data are stored in slower main memory and are transferred to faster memory located between one or more processors and the main memory. The cache is such that an individual cache system must communicate with, or check with, other associated cache systems to determine whether they contain a copy of a given cached location prior to or upon modification or appropriation of data at that location. The cache further includes provisions for determining when the data stored in a particular memory location may be replaced.

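    As a rough illustration of the kind of protocol the abstract sketches (not code from the patent; the class, method, and parameter names below are invented for the example), the Python snippet models caches that check with their peer caches before modifying a location and that disown entries once they age out:

```python
class CacheSystem:
    """Toy snooping cache: peers are checked before a line is modified, and an
    entry that ages out is disowned (dropped locally so it can be replaced)."""

    def __init__(self, name, max_age=4):
        self.name = name
        self.peers = []      # other CacheSystem instances to check with
        self.lines = {}      # address -> {"data": ..., "age": int}
        self.max_age = max_age

    def holds(self, addr):
        return addr in self.lines

    def modify(self, addr, data):
        # Check with the associated cache systems before appropriating the line.
        for peer in self.peers:
            if peer.holds(addr):
                del peer.lines[addr]          # peer's copy is given up
        self.lines[addr] = {"data": data, "age": 0}

    def tick(self):
        # Age every entry; an entry past max_age is disowned and becomes replaceable.
        for addr in list(self.lines):
            self.lines[addr]["age"] += 1
            if self.lines[addr]["age"] > self.max_age:
                del self.lines[addr]


c0, c1 = CacheSystem("c0"), CacheSystem("c1")
c0.peers, c1.peers = [c1], [c0]
c1.modify(0x100, "old")
c0.modify(0x100, "new")       # c1 is checked first and drops its copy
assert c0.holds(0x100) and not c1.holds(0x100)
```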

    History based line install
    2.
    Patent application
    History based line install (Pending, published)

    Publication No.: US20070180193A1

    Publication Date: 2007-08-02

    Application No.: US11342993

    Filing Date: 2006-01-30

    IPC Classes: G06F12/00

    Abstract: Using a local change bit to direct the install state of a data line. A multi-processor system having a plurality of individual processors, where each of the processors has an associated L1 cache, and the multi-processor system has at least one shared main memory and at least one shared L2 cache. The method described herein involves writing a data line into an L2 cache, with a local change bit directing the install state of the data line.

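    A minimal sketch, assuming a simple two-state mapping, of how a local change bit could direct the install state of a line written into a shared L2; the state names and the install rule are illustrative assumptions, not the patent's definitions:

```python
# Hypothetical install states; the mapping from the local change bit to an
# install state is an assumption made for illustration.
SHARED, EXCLUSIVE_MODIFIED = "shared", "exclusive-modified"

class L2Cache:
    def __init__(self):
        self.lines = {}   # address -> (data, install state)

    def install(self, addr, data, local_change_bit):
        # The local change bit supplied with the write directs the state in
        # which the data line is installed in the shared L2.
        state = EXCLUSIVE_MODIFIED if local_change_bit else SHARED
        self.lines[addr] = (data, state)
        return state

l2 = L2Cache()
print(l2.install(0x40, b"\x00" * 128, local_change_bit=1))   # exclusive-modified
print(l2.install(0x80, b"\x00" * 128, local_change_bit=0))   # shared
```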

    Separate data/coherency caches in a shared memory multiprocessor system
    3.
    Patent application
    Separate data/coherency caches in a shared memory multiprocessor system (Active)

    Publication No.: US20070168619A1

    Publication Date: 2007-07-19

    Application No.: US11334280

    Filing Date: 2006-01-18

    IPC Classes: G06F13/28 G06F12/00

    CPC Classes: G06F12/0824

    Abstract: The system and method described herein is a dual system directory structure that performs the roles of system cache, i.e., data, and system control, i.e., coherency. The system includes two system cache directories. These two cache directories are equal in size and collectively large enough to contain all of the processor cache directory entries, but only one of them hosts system-cache data backing the most recently accessed fraction of data; the other cache directory retains only addresses, including addresses of lines LRUed out and the processor using the data. By this expedient, only the directory known to be backed by system-cached data is evaluated for system cache data hits.

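    The split between a data-backed directory and an address-only coherency directory can be illustrated with a small Python model (all names invented; the LRU handling is simplified to an ordered dictionary):

```python
from collections import OrderedDict

class DualDirectory:
    """Two directories: only `data_dir` is backed by system-cache data, so only it
    is evaluated for data hits; `addr_dir` keeps addresses (e.g. of lines LRUed
    out of the data side) purely for coherency checking."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data_dir = OrderedDict()  # address -> data, oldest first
        self.addr_dir = set()          # addresses tracked for coherency only

    def access(self, addr, data):
        if addr in self.data_dir:                 # data hit: only data_dir is checked
            self.data_dir.move_to_end(addr)
            return self.data_dir[addr]
        if len(self.data_dir) >= self.capacity:   # LRU the oldest line out,
            victim, _ = self.data_dir.popitem(last=False)
            self.addr_dir.add(victim)             # but keep its address for coherency
        self.data_dir[addr] = data
        self.addr_dir.discard(addr)
        return data

    def coherency_check(self, addr):
        # Coherency has to consider both directories.
        return addr in self.data_dir or addr in self.addr_dir

d = DualDirectory(capacity=2)
d.access(0x100, "a"); d.access(0x200, "b"); d.access(0x300, "c")   # 0x100 is LRUed out
print(d.coherency_check(0x100), 0x100 in d.data_dir)               # True False
```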

    Method and apparatus for implementing a combined data/coherency cache
    4.
    Patent application
    Method and apparatus for implementing a combined data/coherency cache (Active)

    Publication No.: US20060184744A1

    Publication Date: 2006-08-17

    Application No.: US11056809

    Filing Date: 2005-02-11

    IPC Classes: G06F13/28

    Abstract: A method and apparatus for implementing a combined data/coherency cache for a shared memory multi-processor. The combined data/coherency cache includes a system cache with a number of entries. The method includes building a system cache directory with a number of entries equal to the number of entries of the system cache. The building includes designating a number of sub-entries for each entry, determined by the number of sub-entries operable for performing system cache coherency functions. The building also includes providing a sub-entry logic designator for each entry, and mapping one of the sub-entries of each entry to the system cache via the sub-entry logic designator.

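    A hedged sketch of the directory layout the abstract describes: each directory entry carries several sub-entries for coherency plus a designator naming the one sub-entry currently backed by system-cache data. The sub-entry count and all identifiers are assumptions made for the example:

```python
SUBENTRIES_PER_ENTRY = 4   # assumed value, purely for illustration

class DirectoryEntry:
    def __init__(self):
        self.sub_entries = [None] * SUBENTRIES_PER_ENTRY   # coherency info per sub-entry
        self.cached_sub = None                              # sub-entry logic designator

class CombinedDirectory:
    def __init__(self, num_entries):
        # One directory entry per system-cache entry.
        self.entries = [DirectoryEntry() for _ in range(num_entries)]
        self.system_cache = [None] * num_entries             # data storage

    def map_to_cache(self, entry_idx, sub_idx, data):
        """Point entry `entry_idx`'s designator at sub-entry `sub_idx` and place
        that sub-entry's data in the corresponding system-cache slot."""
        e = self.entries[entry_idx]
        e.cached_sub = sub_idx
        self.system_cache[entry_idx] = data

d = CombinedDirectory(num_entries=2)
d.map_to_cache(entry_idx=0, sub_idx=2, data=b"line data")
print(d.entries[0].cached_sub, d.system_cache[0])            # 2 b'line data'
```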

    Least recently used (LRU) compartment capture in a cache memory system
    5.
    Granted patent
    Least recently used (LRU) compartment capture in a cache memory system (Active)

    Publication No.: US08180970B2

    Publication Date: 2012-05-15

    Application No.: US12035906

    Filing Date: 2008-02-22

    IPC Classes: G06F12/00 G06F13/00 G06F13/28

    CPC Classes: G06F12/123 G06F12/0859

    Abstract: A two-pipe-pass method for least recently used (LRU) compartment capture in a multiprocessor system. The method includes receiving a fetch request via a requesting processor and accessing a cache directory based on the received fetch request; performing a first pipe pass by determining whether a fetch hit or a fetch miss has occurred in the cache directory and, when a fetch miss has occurred, determining an LRU compartment associated with a specified congruence class of the cache directory based on the fetch request received; and performing a second pipe pass by using the determined LRU compartment and the specified congruence class to access the cache directory and to select an LRU address to be cast out of the cache directory.

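    The two-pipe-pass flow can be sketched as follows (illustrative only; the directory model, associativity handling, and names are simplifications, not the patent's implementation): pass one detects the miss and captures the LRU compartment for the congruence class, and pass two reuses that compartment to pick the castout address:

```python
from collections import OrderedDict

class CacheDirectory:
    """Per congruence class, an ordered map of compartment -> resident address,
    oldest first, standing in for real LRU state."""

    def __init__(self):
        self.classes = {}   # congruence class -> OrderedDict(compartment -> address)

    def first_pass(self, congruence_class, addr):
        """Pipe pass 1: detect hit/miss and, on a miss, capture the LRU compartment."""
        entries = self.classes.setdefault(congruence_class, OrderedDict())
        for compartment, resident in entries.items():
            if resident == addr:
                return "hit", compartment
        lru_compartment = next(iter(entries), 0)    # oldest compartment (0 if empty)
        return "miss", lru_compartment

    def second_pass(self, congruence_class, lru_compartment):
        """Pipe pass 2: reuse the captured compartment to pick the castout address."""
        entries = self.classes.get(congruence_class, OrderedDict())
        return entries.get(lru_compartment)          # address to cast out, or None

d = CacheDirectory()
d.classes[0] = OrderedDict({1: 0xAAA0, 3: 0xBBB0})   # compartment -> resident address
print(d.first_pass(0, 0xCCC0))                        # -> ('miss', 1)
print(d.second_pass(0, 1))                            # -> 43680 (0xAAA0, the castout)
```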

    Method and apparatus for parallel and serial data transfer
    6.
    Granted patent
    Method and apparatus for parallel and serial data transfer (Active)

    Publication No.: US08122297B2

    Publication Date: 2012-02-21

    Application No.: US11874232

    Filing Date: 2007-10-18

    IPC Classes: G06F11/00

    CPC Classes: G06F11/10

    Abstract: A method and apparatus are disclosed for performing maintenance operations in a system using address, data, and controls that are transported through the system, allowing parallel and serial operations to co-exist without the parallel operations being slowed down by the serial ones. It also provides for the use of common shifters, engines, and protocols, as well as efficient conversion of ECC to parity and parity to ECC as needed in the system. The invention also provides for error detection and isolation, both locally and in the reported status. The invention provides for large maintenance address and data spaces (typically 64 bits of address and 64 bits of data per supported address).

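    One detail the abstract mentions, converting ECC-protected data to parity protection, can be illustrated with a small sketch; the per-byte even-parity scheme below is an assumption chosen for the example rather than anything specified by the patent:

```python
def byte_parity(value: int, width_bytes: int = 8) -> int:
    """Return one even-parity bit per byte of a 64-bit data word, packed LSB-first.
    Converting ECC to parity here simply means dropping the (already-checked) ECC
    check bits and regenerating per-byte parity for the data."""
    parity_bits = 0
    for i in range(width_bytes):
        byte = (value >> (8 * i)) & 0xFF
        parity_bits |= (bin(byte).count("1") & 1) << i   # 1 if the byte has odd ones
    return parity_bits

data = 0x0123_4567_89AB_CDEF
print(f"parity bits: {byte_parity(data):08b}")
```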

    METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR DATA BUFFERS PARTITIONED FROM A CACHE ARRAY
    7.
    Patent application
    METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR DATA BUFFERS PARTITIONED FROM A CACHE ARRAY (Active)

    Publication No.: US20090240891A1

    Publication Date: 2009-09-24

    Application No.: US12051244

    Filing Date: 2008-03-19

    IPC Classes: G06F12/00

    CPC Classes: G06F12/126 G06F2212/2515

    Abstract: Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method in a processor for providing data buffers partitioned from a cache array, the method including clearing cache directories associated with the processor to an initial state, obtaining a selected directory state from a control register preloaded by the service processor and, in response to the control register including the desired cache state, sending load commands with an address and data, loading cache lines and cache line directory entries into the cache, and storing the specified data in the corresponding cache line.

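    A minimal sketch of the buffer-partitioning flow described in the abstract, with invented names: the control register is preloaded with the desired directory state, and subsequent load commands install lines and directory entries using that state:

```python
class CacheWithBuffers:
    def __init__(self, num_lines):
        self.directory = ["invalid"] * num_lines   # directories cleared to an initial state
        self.data = [None] * num_lines
        self.control_register = None               # preloaded directory state

    def preload_control_register(self, state):
        # In the abstract this is done by the service processor.
        self.control_register = state

    def load_command(self, line_idx, data):
        # Install the cache line and its directory entry using the preloaded state,
        # carving the line out of the cache array as a data buffer.
        self.directory[line_idx] = self.control_register
        self.data[line_idx] = data

cache = CacheWithBuffers(num_lines=8)
cache.preload_control_register("buffer-pinned")    # hypothetical state name
cache.load_command(0, b"scratch data")
print(cache.directory[0], cache.data[0])
```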

    Coherency management for a "switchless" distributed shared memory computer system
    8.
    Patent application
    Coherency management for a "switchless" distributed shared memory computer system (Expired)

    Publication No.: US20060184750A1

    Publication Date: 2006-08-17

    Application No.: US11402599

    Filing Date: 2006-04-12

    IPC Classes: G06F12/00

    CPC Classes: G06F12/0813 G06F12/0831

    Abstract: A shared memory symmetrical processing system including a plurality of nodes, each having a system control element for routing internodal communications. A first ring and a second ring interconnect the plurality of nodes, wherein data in said first ring flows in the opposite direction with respect to said second ring. A receiver receives a plurality of incoming messages via the first or second ring and merges a plurality of incoming message responses with a local outgoing message response to provide a merged response. Each of the plurality of nodes includes any combination of the following: at least one processor, cache memory, a plurality of I/O adapters, and main memory. The system control element includes a plurality of controllers for maintaining coherency in the system.

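    The response-merging step can be sketched roughly as follows; the response values and their priority ordering are assumptions for illustration, not taken from the patent:

```python
# Assumed coherency response priorities (stronger responses win the merge).
RESPONSE_ORDER = {"miss": 0, "shared": 1, "exclusive": 2}

def merge_responses(incoming_responses, local_response):
    """Combine the partial responses that arrived on a ring with this node's
    own outgoing response, keeping the strongest result seen so far."""
    return max(incoming_responses + [local_response], key=RESPONSE_ORDER.get)

nodes = ["N0", "N1", "N2", "N3"]
ring0 = nodes                        # first ring direction
ring1 = list(reversed(nodes))        # second ring: data flows the opposite way

# Node N2 merges what arrived from N0 and N1 with its own snoop result.
print(merge_responses(["miss", "shared"], "exclusive"))   # -> exclusive
```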

    Bus protocol for a switchless distributed shared memory computer system
    9.
    Granted patent
    Bus protocol for a switchless distributed shared memory computer system (Expired)

    Publication No.: US06988173B2

    Publication Date: 2006-01-17

    Application No.: US10435878

    Filing Date: 2003-05-12

    IPC Classes: G06F12/00

    CPC Classes: G06F12/0831 G06F12/0813

    Abstract: A bus protocol is disclosed for a symmetric multiprocessing computer system consisting of a plurality of nodes, each of which contains a multitude of processors, I/O devices, main memory and a system controller comprising an integrated switch with a top level cache. The nodes are interconnected by a dual concentric ring topology. The bus protocol is used to exchange snoop requests and addresses, data, coherency information and operational status between nodes in a manner that allows partial coherency results to be passed in parallel with a snoop request and address as an operation is forwarded along each ring. Each node combines its own coherency results with the partial coherency results it received before forwarding the snoop request, address and updated partial coherency results to the next node on the ring. The protocol allows each node in the system to see the final coherency results without requiring the requesting node to broadcast these results to all the other nodes in the system. The bus protocol also allows data to be returned on one of the two rings, with the ring selection determined by the relative placement of the source and destination nodes on each ring, in order to control latency and data bus utilization.

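    A hedged sketch of two ideas from the abstract, with invented helper names: partial coherency results accumulate as a snoop is forwarded node to node around the ring, and the return ring for data is chosen from the relative placement of source and destination (interpreted here, as an assumption, as the shorter hop count):

```python
def forward_snoop(ring, requester, address, snoop_fn):
    """Forward a snoop around the ring; each node merges its own coherency
    result into the partial result before passing it on, so the requester
    ends up with the final result without a separate broadcast."""
    partial = set()
    idx = ring.index(requester)
    for step in range(1, len(ring)):
        node = ring[(idx + step) % len(ring)]
        partial |= snoop_fn(node, address)     # node adds its own result
    return partial

def pick_return_ring(ring0, ring1, source, destination):
    """Choose whichever ring gives the shorter source->destination hop count."""
    def hops(ring):
        return (ring.index(destination) - ring.index(source)) % len(ring)
    return ring0 if hops(ring0) <= hops(ring1) else ring1

nodes = ["N0", "N1", "N2", "N3"]
held = {"N2": {0x80}}                          # pretend only N2 holds line 0x80
result = forward_snoop(
    nodes, "N0", 0x80,
    lambda node, addr: {"hit"} if addr in held.get(node, set()) else set())
print(result)                                  # -> {'hit'}
print(pick_return_ring(nodes, list(reversed(nodes)), "N0", "N3"))
```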

    False exception for cancelled delayed requests
    10.
    Granted patent
    False exception for cancelled delayed requests (Expired)

    Publication No.: US06219758B1

    Publication Date: 2001-04-17

    Application No.: US09047579

    Filing Date: 1998-03-25

    IPC Classes: G06F12/00

    CPC Classes: G06F12/1054

    Abstract: A central processor uses virtual addresses to access data via cache logic including a DAT and an ART; the cache logic accesses data in the hierarchical storage subsystem using absolute addresses, and part of the first level of the cache memory includes a translator from virtual or real addresses to absolute addresses. When a data fetch request is sent and the requested data are not resident in the first level of cache, the request is delayed and may be forwarded to a lower level of said hierarchical memory, and a delayed request may result in cancellation of any process, during the delayed request, that has the ability to send back an exception. A delayed request may be rescinded if the central processor has reached an interruptible stage in its pipeline logic, at which point a false exception is forced, clearing all the wait states while the central processor ignores the false exception. Forcing of an exception occurs during dynamic address translation (DAT) or during access register translation (ART). A cancellation signal for the data request to the storage subsystem is settable by the first hierarchical level of cache logic. A false exception signal to the first-level cache is settable by the storage subsystem logic.

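    A minimal sketch of the cancellation idea, with invented names: a delayed request rescinded while the processor is at an interruptible stage is completed by forcing a false exception that clears the wait state, which the processor then ignores:

```python
class DelayedRequest:
    """A fetch that missed the first-level cache and was forwarded downward."""
    def __init__(self, addr):
        self.addr = addr
        self.waiting = True

def rescind(request, processor_interruptible):
    # Only rescind once the central processor has reached an interruptible
    # stage in its pipeline; the forced false exception clears the wait state
    # and the processor simply ignores it.
    if processor_interruptible:
        request.waiting = False
        return "false-exception"
    return "still-pending"          # otherwise the request keeps waiting for data

req = DelayedRequest(0x1000)
print(rescind(req, processor_interruptible=True))   # -> false-exception
print(req.waiting)                                   # -> False (wait state cleared)
```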