DISOWNING CACHE ENTRIES ON AGING OUT OF THE ENTRY
    41.
    Invention application
    Status: In force

    Publication No.: US20100030965A1

    Publication Date: 2010-02-04

    Application No.: US12435468

    Filing Date: 2009-05-05

    IPC Classification: G06F12/08 G06F12/00

    Abstract: Caching in which portions of data are stored in slower main memory and are transferred to faster memory located between one or more processors and the main memory. The cache is such that an individual cache system must communicate with other associated cache systems, or check with such cache systems, to determine whether they contain a copy of a given cached location prior to or upon modification or appropriation of data at that location. The cache further includes provisions for determining when the data stored in a particular memory location may be replaced.

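    Illustrative sketch (not part of the patent text): a minimal C fragment showing the two behaviors the abstract describes, a cross-check with associated cache systems before data at a cached location is modified, and an aging-based test for when a line may be replaced. The helpers peer_has_copy() and invalidate_peer_copies() and the age-counter policy are assumptions made for illustration.

        #include <stdbool.h>
        #include <stdint.h>

        struct cache_line {
            uint64_t tag;
            bool     valid;
            bool     dirty;
            uint32_t age;        /* bumped while unreferenced, cleared on use */
        };

        /* Hypothetical hooks into the other cache systems on the fabric. */
        bool peer_has_copy(uint64_t addr);
        void invalidate_peer_copies(uint64_t addr);

        /* Before modifying a cached location, check the associated caches
         * and remove their copies, as the abstract requires. */
        void write_line(struct cache_line *l, uint64_t addr)
        {
            if (peer_has_copy(addr))
                invalidate_peer_copies(addr);
            l->tag   = addr;
            l->valid = true;
            l->dirty = true;
            l->age   = 0;
        }

        /* Replacement decision: a line may be replaced once it has aged
         * past a threshold without being referenced. */
        bool may_replace(const struct cache_line *l, uint32_t threshold)
        {
            return !l->valid || l->age >= threshold;
        }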

Method and Apparatus for Parallel and Serial Data Transfer
    43.
    Invention application
    Status: In force

    Publication No.: US20090106588A1

    Publication Date: 2009-04-23

    Application No.: US11874232

    Filing Date: 2007-10-18

    IPC Classification: G06F11/14 G06F9/46

    CPC Classification: G06F11/10

    Abstract: A method and apparatus are disclosed for performing maintenance operations in a system using address, data, and controls which are transported through the system, allowing parallel and serial operations to co-exist without the parallel operations being slowed down by the serial ones. It also provides for the use of common shifters, engines, and protocols, as well as efficient conversion of ECC to parity and parity to ECC as needed in the system. The invention also provides for error detection and isolation, both locally and in the reported status. The invention provides for large maintenance address and data spaces (typically 64 bits of address and 64 bits of data per address supported).

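    Illustrative sketch (not part of the patent text): one way to picture the ECC-to-parity conversion mentioned in the abstract, in C. The data is first checked and corrected under ECC, then a per-byte parity bit is generated for the downstream parity-protected domain. The helper ecc_correct() and the odd-parity choice are assumptions for illustration only.

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical ECC check/correct step: returns corrected data and
         * sets *ue when the error was uncorrectable. */
        uint64_t ecc_correct(uint64_t data, uint8_t check_bits, bool *ue);

        /* Odd parity over one byte. */
        static uint8_t byte_parity(uint8_t b)
        {
            b ^= b >> 4;
            b ^= b >> 2;
            b ^= b >> 1;
            return (b & 1u) ^ 1u;     /* odd-parity bit */
        }

        /* Convert an ECC-protected doubleword to byte-parity protection:
         * correct it under ECC first, then attach one parity bit per byte. */
        uint64_t ecc_to_parity(uint64_t data, uint8_t check_bits,
                               uint8_t parity_out[8], bool *ue)
        {
            uint64_t fixed = ecc_correct(data, check_bits, ue);
            for (int i = 0; i < 8; i++)
                parity_out[i] = byte_parity((uint8_t)(fixed >> (8 * i)));
            return fixed;
        }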

Storage System and Associated Methods
    44.
    Invention application
    Status: Lapsed

    Publication No.: US20090083491A1

    Publication Date: 2009-03-26

    Application No.: US11861765

    Filing Date: 2007-09-26

    IPC Classification: G06F12/08

    Abstract: A storage system may include storage, a main pipeline to carry data for the storage, and a store pipeline to carry data for the storage. The storage system may also include a controller to prioritize data storage requests for the storage based upon available interleaves and which pipeline is associated with the data storage requests.

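    Illustrative sketch (not part of the patent text): a hedged C fragment of a controller that prioritizes storage requests by available interleaves and by the pipeline carrying each request. The preference given to the main pipeline here is an assumption made for the sketch, not a rule stated in the abstract.

        #include <stdbool.h>
        #include <stddef.h>

        enum pipeline { MAIN_PIPE, STORE_PIPE };

        struct request {
            unsigned      interleave;   /* target interleave of this store  */
            enum pipeline pipe;         /* which pipeline carries the data  */
            bool          valid;
        };

        /* Pick the next request whose target interleave is currently free,
         * preferring main-pipeline requests over store-pipeline ones. */
        int select_request(const struct request *q, size_t n,
                           const bool *interleave_busy)
        {
            int fallback = -1;
            for (size_t i = 0; i < n; i++) {
                if (!q[i].valid || interleave_busy[q[i].interleave])
                    continue;
                if (q[i].pipe == MAIN_PIPE)
                    return (int)i;      /* preferred pipeline, issue it now */
                if (fallback < 0)
                    fallback = (int)i;  /* first eligible store-pipe request */
            }
            return fallback;            /* -1 if nothing can be issued */
        }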

Separate data and coherency cache directories in a shared cache in a multiprocessor system
    45.
    Invention grant
    Status: In force

    Publication No.: US07475193B2

    Publication Date: 2009-01-06

    Application No.: US11334280

    Filing Date: 2006-01-18

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0824

    Abstract: A dual system shared cache directory structure for a cache memory performs the role of an inclusive shared system cache, i.e., data, and system control, i.e., coherency. The system includes two separate system cache directories in the shared system cache. The two separate cache directories are substantially equal in size and collectively large enough to contain all of the processor cache directory entries, but with only one of these separate cache directories hosting system-cache data to back the most recent fraction of data accessed by the processors. The other cache directory retains only addresses, including addresses of lines LRUed out from the first cache directory and the identity of the processor using the data. Thus by this expedient, only the directory known to be backed by system cached data will be evaluated for system cache memory data.

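    Illustrative sketch (not part of the patent text): a simplified C view of the two directories, a data-backed directory whose entries are guaranteed to have their line present in the shared system cache, and an address-only coherency directory that keeps LRUed-out addresses plus the identity of the owning processor. The direct-mapped indexing is an assumption made to keep the sketch short.

        #include <stdbool.h>
        #include <stdint.h>

        #define DIR_ENTRIES 4096

        /* Data-backed directory: entries here are guaranteed to have their
         * line present in the shared system cache. */
        struct data_dir_entry { uint64_t addr; bool valid; };

        /* Address-only coherency directory: lines LRUed out of the data
         * directory, plus which processor still holds the data. */
        struct coh_dir_entry  { uint64_t addr; unsigned owner_cpu; bool valid; };

        static struct data_dir_entry data_dir[DIR_ENTRIES];
        static struct coh_dir_entry  coh_dir[DIR_ENTRIES];

        /* Only the data-backed directory is consulted for cached data; a hit
         * in the coherency directory just identifies the owning processor so
         * the line can be sourced or invalidated there. */
        bool lookup(uint64_t addr, bool *data_backed, unsigned *owner_cpu)
        {
            unsigned idx = (unsigned)(addr % DIR_ENTRIES);
            if (data_dir[idx].valid && data_dir[idx].addr == addr) {
                *data_backed = true;
                return true;
            }
            if (coh_dir[idx].valid && coh_dir[idx].addr == addr) {
                *data_backed = false;
                *owner_cpu = coh_dir[idx].owner_cpu;
                return true;
            }
            return false;
        }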

Computer system UE recovery logic
    46.
    Invention grant
    Status: Lapsed

    Publication No.: US6163857A

    Publication Date: 2000-12-19

    Application No.: US70389

    Filing Date: 1998-04-30

    IPC Classification: G06F11/10 G06F11/00

    CPC Classification: G06F11/1064

    Abstract: A computer system having central processors (CPs), an associated L2 cache, and processor memory arrays (PMAs) is provided with store logic and fetch logic used to detect and correct data errors and to write the resulting data to the associated cache. The store and fetch logic blocks UEs from the cache for CP stores, for PMA (mainstore) fetches/loads, and for cache-to-cache loads, and the uncorrectable error recovery cache fetch and store logic injects 'Special UEs' into the cache when loads cannot be blocked and abends CP jobs for UEs during CP stores, for UEs from PMA, for UEs from remote cache, and for UEs from local cache. This logic performs reconfiguring of memory when UEs are detected in memory and also blocks cache data propagation on UEs for CP fetches, for cache-to-cache transfers if data is unchanged, and for PMA castouts if data is unchanged, as well as forces castouts when UEs appear on changed cache data; injects 'Special UEs' for UEs detected on changed cache data; invalidates the cache when UEs are detected in the local cache; and deletes only those cache entries that have repeated failures.

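    Illustrative sketch (not part of the patent text): a small C decision table distilled from the abstract, showing when a UE can simply be blocked (unchanged data, so a clean copy exists elsewhere), when a 'Special UE' must be injected and a castout forced (changed data), and that entries are deleted only after repeated failures. The threshold parameter is an assumption for illustration.

        #include <stdbool.h>

        enum ue_action {
            PROPAGATE_DATA,     /* no UE: forward the data normally           */
            BLOCK_PROPAGATION,  /* UE on unchanged data: refetch a clean copy */
            INJECT_SPECIAL_UE,  /* UE on changed data: mark it, force castout */
        };

        /* Unchanged lines with a UE can be blocked outright, while changed
         * lines must carry a special UE mark out with the forced castout. */
        enum ue_action handle_fetch_ue(bool ue_detected, bool line_changed)
        {
            if (!ue_detected)
                return PROPAGATE_DATA;
            return line_changed ? INJECT_SPECIAL_UE : BLOCK_PROPAGATION;
        }

        /* A cache entry is deleted only after repeated failures. */
        bool should_delete_entry(unsigned failure_count, unsigned threshold)
        {
            return failure_count >= threshold;
        }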

Computer architecture incorporating processor clusters and hierarchical cache memories
    47.
    Invention grant
    Status: Lapsed

    Publication No.: US5752264A

    Publication Date: 1998-05-12

    Application No.: US698192

    Filing Date: 1996-08-15

    IPC Classification: G06F12/08

    CPC Classification: G06F12/0811 G06F12/084

    Abstract: A hierarchical cache architecture that reduces traffic on a main memory bus while overcoming the disadvantages of prior systems. The architecture includes a plurality of level one caches of the store-through type; each level one cache is associated with a processor and may be incorporated into the processor. Subsets (or "clusters") of processors, along with their associated level one caches, are formed, and a level two cache is provided for each cluster. Each processor/level one cache pair within a cluster is coupled to the cluster's level two cache through a dedicated bus. By configuring the processors and caches in this manner, not only is the speed advantage normally associated with the use of cache memory realized, but the number of memory bus accesses is reduced without the disadvantages associated with the use of store-in type caches at level one and without the disadvantages associated with the use of a shared cache bus.

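    Illustrative sketch (not part of the patent text): the topology described above, expressed as C data structures, with store-through L1 caches private to each processor, one shared L2 per cluster reached over dedicated buses, and only the L2 caches on the main memory bus. The cluster and processor counts are arbitrary values chosen for the sketch.

        #define CPUS_PER_CLUSTER 4
        #define NUM_CLUSTERS     2

        /* Store-through L1, private to one processor. */
        struct l1_cache { unsigned cpu_id; /* ... line arrays ... */ };

        /* Shared L2, one per cluster, reached over dedicated per-CPU buses. */
        struct l2_cache { unsigned cluster_id; /* ... line arrays ... */ };

        struct cluster {
            struct l1_cache l1[CPUS_PER_CLUSTER];  /* one L1 per processor   */
            struct l2_cache l2;                    /* one L2 for the cluster */
        };

        /* Only the L2 caches sit on the main memory bus, so L1 store-through
         * traffic stays inside its cluster and memory-bus accesses drop. */
        struct system {
            struct cluster clusters[NUM_CLUSTERS];
        };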

Cache coherency protocol for allowing parallel data fetches and eviction to the same addressable index
    48.
    Invention grant
    Status: In force

    Publication No.: US09003125B2

    Publication Date: 2015-04-07

    Application No.: US13523535

    Filing Date: 2012-06-14

    IPC Classification: G06F12/08

    Abstract: A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class.

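    Illustrative sketch (not part of the patent text): the locking and marking sequence from the abstract in C form. On a miss the whole congruence class is locked, a victim set is chosen and marked, and the class-wide lock is then released so other transactions can use the unmarked sets; the marked bit is cleared when the filling transaction completes. Victim selection and the atomicity of the lock are simplified away here.

        #include <stdbool.h>

        #define SETS_PER_CLASS 8

        struct congruence_class {
            bool locked;                   /* blocks all access while a set is chosen */
            bool marked[SETS_PER_CLASS];   /* per-set "transaction in progress" bits  */
        };

        /* On a miss: lock the whole class, mark the chosen set, then drop the
         * class-wide lock so other transactions may use the unmarked sets. */
        int begin_fill(struct congruence_class *cc, int victim_set)
        {
            cc->locked = true;
            cc->marked[victim_set] = true;
            cc->locked = false;
            return victim_set;
        }

        /* Other transactions may touch any set in the class except a marked one. */
        bool may_access(const struct congruence_class *cc, int set)
        {
            return !cc->locked && !cc->marked[set];
        }

        /* When the filling transaction finishes, the marked bit is cleared. */
        void complete_fill(struct congruence_class *cc, int set)
        {
            cc->marked[set] = false;
        }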

CACHE COHERENCY PROTOCOL FOR ALLOWING PARALLEL DATA FETCHES AND EVICTION TO THE SAME ADDRESSABLE INDEX
    49.
    Invention application
    Status: In force

    Publication No.: US20130339622A1

    Publication Date: 2013-12-19

    Application No.: US13523535

    Filing Date: 2012-06-14

    IPC Classification: G06F12/08

    Abstract: A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class.


EDRAM macro disablement in cache memory
    50.
    Invention grant
    Status: Lapsed

    Publication No.: US08381019B2

    Publication Date: 2013-02-19

    Application No.: US12822367

    Filing Date: 2010-06-24

    IPC Classification: G06F11/00

    Abstract: Embedded dynamic random access memory (EDRAM) macro disablement in a cache memory includes isolating an EDRAM macro of a cache memory bank, the cache memory bank being divided into at least three rows of a plurality of EDRAM macros and the EDRAM macro being associated with one of the at least three rows; iteratively testing each line of the EDRAM macro, the testing including attempting at least one write operation at each line of the EDRAM macro; determining if an error occurred during the testing; and disabling write operations for an entire row of EDRAM macros associated with the EDRAM macro based on the determining.

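    Illustrative sketch (not part of the patent text): the iterative test-and-fence flow from the abstract in C. Each line of the isolated EDRAM macro gets at least one test write; if any write fails to stick, write operations are disabled for the entire row containing that macro. The helpers macro_line_write()/macro_line_read(), the line count, and the test pattern are assumptions for illustration.

        #include <stdbool.h>
        #include <stdint.h>

        #define LINES_PER_MACRO 256

        /* Hypothetical hooks: write a test pattern to one line of an isolated
         * EDRAM macro and read it back; names are illustrative only. */
        bool macro_line_write(unsigned macro, unsigned line, uint64_t pattern);
        uint64_t macro_line_read(unsigned macro, unsigned line);

        /* Iteratively test every line of the isolated macro; report failure if
         * any test write does not read back correctly. */
        bool test_macro(unsigned macro)
        {
            for (unsigned line = 0; line < LINES_PER_MACRO; line++) {
                uint64_t pattern = 0xA5A5A5A5A5A5A5A5ull ^ line;
                if (!macro_line_write(macro, line, pattern) ||
                    macro_line_read(macro, line) != pattern)
                    return false;    /* error detected during the test */
            }
            return true;
        }

        /* On failure, disable write operations for the entire row that
         * contains the bad macro. */
        void disable_row_writes(bool row_write_enable[], unsigned row)
        {
            row_write_enable[row] = false;
        }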