System for controlling access to external cache memories of differing size
    1.
    Invention grant (expired)

    Publication No.: US06604173B1

    Publication date: 2003-08-05

    Application No.: US08560227

    Filing date: 1995-11-21

    IPC classification: G06F12/04

    Abstract: A method for controlling access to at least one external cache memory in a processing system, the at least one external cache memory having a number of lines of data and a number of bytes per line of data, the method includes determining a smallest cache memory size for use in the at least one external cache memory, and configuring a tag array of the at least one external cache memory to support the smallest determined cache memory size. A system for controlling access to at least one external cache memory in a processing system, the at least one external cache memory having a number of lines of data and a number of bytes per line of data, includes a circuit for configuring each tag field of a plurality of tag fields in a tag array in the at least one external cache memory to have a number of bits sufficient to support a smallest determined cache memory, and utilizing each tag field to determine whether data being accessed resides in the at least one external cache memory.
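    The abstract's tag-sizing idea can be sketched as follows. The 32-bit address width, the direct-mapped model, and the function name are illustrative assumptions, not details from the patent: sizing the tag field for the smallest supported cache yields the widest tag, which then also covers any larger cache.

    ```python
    def tag_bits(addr_bits, cache_bytes):
        """Tag width needed for a cache of cache_bytes.

        A direct-mapped cache of cache_bytes consumes log2(cache_bytes)
        address bits as index/offset, so the tag must hold the rest.
        (Hypothetical model, not the patent's circuit.)
        """
        index_offset_bits = cache_bytes.bit_length() - 1  # log2 for powers of two
        return addr_bits - index_offset_bits

    # Configuring the tag array for the smallest supported cache (say 256 KB)
    # gives a tag wide enough for a larger cache (say 1 MB) as well.
    smallest = tag_bits(32, 256 * 1024)   # 14 bits
    larger = tag_bits(32, 1024 * 1024)    # 12 bits
    ```

    Because the smaller cache needs the wider tag, a tag array built for it can simply ignore its extra bits when a larger cache is installed.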


    Shared L2 support for inclusion property in split L1 data and
instruction caches
    2.
    Invention grant (expired)

    Publication No.: US5694573A

    Publication date: 1997-12-02

    Application No.: US781922

    Filing date: 1996-12-30

    IPC classification: G06F12/08

    CPC classification: G06F12/0811 G06F12/0848

    Abstract: A multi-processor data processing system has a multi-level cache wherein each processor has a split high level (e.g., level one or L1) cache composed of a data cache (DCache) and an instruction cache (ICache). A shared lower level (e.g., level two or L2) cache includes a cache array which is a superset of the cache lines in all L1 caches. There is a directory of L2 cache lines such that each line has a set of inclusion bits indicating if the line is residing in any of the L1 caches. A directory management system requires only N+2 inclusion bits per L2 line, where N is the number of processors having L1 caches sharing the L2 cache.
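    The N+2 inclusion-bit budget can be modeled as below. The abstract does not spell out the encoding, so this sketch assumes one plausible arrangement: N per-processor presence bits plus one "resides in a DCache" bit and one "resides in an ICache" bit; the class and method names are likewise hypothetical.

    ```python
    class L2DirectoryEntry:
        """Inclusion state for one L2 line using N + 2 bits (assumed
        encoding: N presence bits, one DCache bit, one ICache bit)."""

        def __init__(self, n_processors):
            self.present = [False] * n_processors  # N bits: which L1s hold it
            self.in_dcache = False                 # +1 bit
            self.in_icache = False                 # +1 bit

        def record_fill(self, proc, is_instruction):
            """Note that processor `proc` pulled this line into its L1."""
            self.present[proc] = True
            if is_instruction:
                self.in_icache = True
            else:
                self.in_dcache = True

        def holders(self):
            """Processors whose L1 may hold the line (for invalidations)."""
            return [p for p, bit in enumerate(self.present) if bit]

    entry = L2DirectoryEntry(n_processors=4)
    entry.record_fill(proc=1, is_instruction=False)  # DCache fill on CPU 1
    entry.record_fill(proc=3, is_instruction=True)   # ICache fill on CPU 3
    ```

    The point of the tight bit budget is that a full scheme (one bit per L1 cache, i.e. 2N bits for split caches) is avoided while invalidations can still be steered to the right processors.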


    High performance/low cost access hazard detection in pipelined cache
controller using comparators with a width shorter than and independent
of total width of memory address
    3.
    Invention grant (expired)

    Publication No.: US5692151A

    Publication date: 1997-11-25

    Application No.: US337715

    Filing date: 1994-11-14

    IPC classification: G06F12/08

    CPC classification: G06F12/0895 G06F12/0855

    Abstract: An access hazard detection technique in a pipelined cache controller sustains high throughput in a frequently accessed cache but without the cost normally associated with such access hazard detection. If a previous request (a request in a pipeline stage other than the first) has already resulted in a cache hit, matches the new request in both the Congruence Class Index and Set Index fields, and the new request is also a hit, the address collision logic signals a positive detection. This scheme makes use of the fact that (1) the hit condition, (2) the identical Congruence Class Index, and (3) the identical Set Index of two requests are sufficient to determine that they are referencing the same cache content. Implementation of this scheme results in a significant hardware saving and a significant performance boost.
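    The comparison rule above can be sketched as a predicate. The dictionary field names and widths are illustrative assumptions; the essential point, taken from the abstract, is that only the short index fields are compared, never the full memory address.

    ```python
    def collision(prev, new):
        """Hazard check for a pipelined cache controller: two requests
        that both hit and agree on congruence-class index and set index
        must reference the same cache entry, so comparing those short
        fields suffices.  (Sketch; field names are assumptions.)"""
        return (prev["hit"] and new["hit"]
                and prev["cc_index"] == new["cc_index"]
                and prev["set_index"] == new["set_index"])

    prev_req = {"hit": True, "cc_index": 0x2A, "set_index": 3}  # still in pipe
    new_req = {"hit": True, "cc_index": 0x2A, "set_index": 3}   # incoming
    ```

    The comparator width is thus fixed by the cache geometry (index plus way bits), independent of how wide the machine's addresses are, which is where the hardware saving comes from.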


    Hierarchical cache arrangement wherein the replacement of an LRU entry
in a second level cache is prevented when the cache entry is the only
inclusive entry in the first level cache
    4.
    Invention grant (expired)

    Publication No.: US5584013A

    Publication date: 1996-12-10

    Application No.: US353010

    Filing date: 1994-12-09

    IPC classification: G06F12/08 G06F12/12 G06F13/00

    CPC classification: G06F12/0811

    Abstract: The present invention provides balanced cache performance in a data processing system. The data processing system includes a first processor, a second processor, a first cache memory, a second cache memory and a control circuit. The first processor is connected to the first cache memory, which serves as a first level cache for the first processor. The second processor and the first cache memory are connected to the second cache memory, which serves as a second level cache for the first processor and as a first level cache for the second processor. Replacement of a set in the second cache memory results in the set being invalidated in the first cache memory. The control circuit is connected to the second level cache and prevents replacing from a second level cache congruence class all sets that are in the first cache.
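    The protected-replacement rule can be sketched as a victim-selection loop. The list-and-set data layout and the function name are illustrative assumptions; the rule itself (skip any set still held inclusively in the first-level cache, even the LRU one) is from the abstract.

    ```python
    def pick_victim(lru_order, in_l1):
        """Choose a replacement victim in an L2 congruence class, walking
        from least- to most-recently used but skipping any way whose line
        is still held in the first-level cache (sketch of the abstract's
        rule; data layout is an assumption)."""
        for way in lru_order:  # least recently used first
            if way not in in_l1:
                return way
        return None  # every way is inclusive in L1; replacement is prevented

    # The LRU entry (way 2) is protected because L1 still holds it,
    # so the next-oldest unprotected way is chosen instead.
    victim = pick_victim(lru_order=[2, 0, 3, 1], in_l1={2})
    ```

    Without this guard, evicting the LRU line would force an invalidation in the first-level cache, hurting the first processor to serve the second; the guard is what the title means by preventing replacement of the only inclusive entry.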


    Multiprocessor system with shared cache and data input/output circuitry
for transferring data amount greater than system bus capacity
    5.
    Invention grant (expired)

    Publication No.: US5581734A

    Publication date: 1996-12-03

    Application No.: US101144

    Filing date: 1993-08-02

    IPC classification: G06F12/08 G06F15/167

    Abstract: A high performance shared cache is provided to support multiprocessor systems and allow maximum parallelism in accessing the cache by the processors, servicing one processor request in each machine cycle, reducing system response time and increasing system throughput. The shared cache of the present invention uses the additional performance optimization techniques of pipelining cache operations (loads and stores) and burst-mode data accesses. By including built-in pipeline stages, the cache can service one request from any processing element every machine cycle. This reduces system response time and increases throughput. With burst-mode data accesses, the widest possible data can be stored to, and retrieved from, the cache in one cache access operation. One portion of the data is held in logic in the cache (on the chip), while another portion (corresponding to the system bus width) is transferred to the requesting element (processor or memory) in one cycle. The held portion of the data can then be transferred in the following machine cycle.
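    The burst-mode transfer can be sketched as chunking one wide cache access into bus-width portions delivered over successive cycles. Modeling the line as a byte string and the function name are illustrative assumptions.

    ```python
    def burst_transfer(line, bus_width):
        """Split one wide cache access into bus-width portions sent over
        successive machine cycles: the first portion goes out immediately,
        the held remainder follows in later cycles (sketch of the
        abstract's scheme; byte-string modeling is an assumption)."""
        return [line[i:i + bus_width] for i in range(0, len(line), bus_width)]

    # A 16-byte cache access on an 8-byte system bus takes two cycles:
    # one portion is transferred at once, the other is held on-chip and
    # transferred in the following cycle.
    cycles = burst_transfer(b"ABCDEFGHIJKLMNOP", bus_width=8)
    ```

    The benefit is that the cache array is read or written only once for a transfer wider than the system bus, freeing it to accept a different processor's request in the next cycle.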


    Method and systems for executing load instructions that achieve sequential load consistency
    6.
    Invention grant (expired)

    Publication No.: US07376816B2

    Publication date: 2008-05-20

    Application No.: US10988310

    Filing date: 2004-11-12

    IPC classification: G06F9/30 G06F9/40 G06F15/00

    CPC classification: G06F9/383 G06F12/0855

    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
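    The rule in this abstract (shared by the related family entries below) can be sketched as follows. The queue-of-addresses model and the function name are illustrative assumptions; the behavior — a load that hits the cache is still queued, its hit signal ignored, whenever an earlier queued load names the same address — is from the abstract.

    ```python
    def handle_load(addr, cache_hit, load_queue):
        """Sequential-load-consistency sketch: if an earlier queued load
        targets the same address, ignore the cache hit and queue this
        load behind it so same-address loads complete in program order.
        (Queue-of-addresses model is an assumption.)"""
        if any(prev == addr for prev in load_queue):
            load_queue.append(addr)  # ignore the hit; wait in order
            return "queued"
        return "hit" if cache_hit else "miss"

    # A prior load to 0x1000 is still pending, so the new load to 0x1000
    # is queued even though it hits the cache.
    queue = [0x1000]
    result = handle_load(0x1000, cache_hit=True, load_queue=queue)
    ```

    Letting the younger load complete from the cache while the older one is still pending could return data out of order for the same location; queuing it behind the older load preserves sequential load consistency.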


    Systems for executing load instructions that achieve sequential load consistency
    7.
    Invention grant (expired)

    Publication No.: US07730290B2

    Publication date: 2010-06-01

    Application No.: US12036992

    Filing date: 2008-02-25

    CPC classification: G06F9/383 G06F12/0855

    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.


    SYSTEMS FOR EXECUTING LOAD INSTRUCTIONS THAT ACHIEVE SEQUENTIAL LOAD CONSISTENCY
    8.
    Invention application (expired)

    Publication No.: US20080148017A1

    Publication date: 2008-06-19

    Application No.: US12036992

    Filing date: 2008-02-25

    IPC classification: G06F9/312

    CPC classification: G06F9/383 G06F12/0855

    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.


    Systems and methods for executing load instructions that avoid order violations
    9.
    Invention grant (expired)

    Publication No.: US07302527B2

    Publication date: 2007-11-27

    Application No.: US10988284

    Filing date: 2004-11-12

    IPC classification: G06F12/00

    Abstract: Methods for executing load instructions are disclosed. In one method, a load instruction and corresponding thread information are received. Address information of the load instruction is used to generate an address of the needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load and/or store instruction specifying the same address. If such a previous load/store instruction is found, the thread information is used to determine if the previous load/store instruction is from the same thread. If the previous load/store instruction is from the same thread, the cache hit signal is ignored, and the load instruction is stored in the queue. A load/store unit is also described.
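    This thread-aware variant can be sketched as below. The tuple-based queue entries and the function name are illustrative assumptions; the behavior from the abstract is that the hit signal is ignored, and the load queued, only when a prior queued load/store to the same address came from the same thread.

    ```python
    def handle_load_smt(addr, thread, cache_hit, queue):
        """Thread-aware ordering sketch: a same-address match in the
        queue forces queuing only when it comes from the same thread;
        cross-thread matches do not block the load.  (Tuple-based queue
        entries are an assumption.)"""
        same_thread_match = any(a == addr and t == thread for a, t in queue)
        if same_thread_match:
            queue.append((addr, thread))
            return "queued"
        return "hit" if cache_hit else "miss"

    q = [(0x1000, 0)]  # a pending access to 0x1000 from thread 0
    # Same address from a different thread completes from the cache...
    other_thread = handle_load_smt(0x1000, thread=1, cache_hit=True, queue=q)
    # ...but from the same thread it is queued behind the older access.
    same_thread = handle_load_smt(0x1000, thread=0, cache_hit=True, queue=q)
    ```

    Checking the thread avoids needlessly serializing independent threads that happen to touch the same address, while still preserving per-thread program order.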
