1. System and method utilizing speculative cache access for improved performance (granted, in force)

    Publication No.: US06647464B2

    Publication date: 2003-11-11

    Application No.: US09507546

    Filing date: 2000-02-18

    IPC class: G06F 12/00

    CPC class: G06F12/0855

    Abstract: A system and method are disclosed which provide a cache structure that allows early access to the cache structure's data. In response to receiving a memory access request, the disclosed cache begins an access to a cache level's data before a determination has been made as to whether a true hit has been achieved for that cache level. That is, cache data is speculatively accessed before a determination is made as to whether the memory address required to satisfy a received memory access request is truly present in the cache. In a preferred embodiment, the cache does determine whether a memory address required to satisfy a received memory access request is truly present in the cache structure (i.e., whether a “true” cache hit is achieved); however, that determination is not made before the cache data begins to be accessed. Rather, the determination of whether a true cache hit is achieved is performed in parallel with the access of the cache structure's data. A preferred embodiment thus implements a parallel path, beginning the cache data access while the true-hit determination is still in progress. The cache data is therefore retrieved early from the cache structure and is available in a timely manner for use by a requesting execution unit.
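
The parallel tag-check/data-read idea can be illustrated with a minimal sketch of a direct-mapped cache. All class and method names here are illustrative, not from the patent, and the "parallelism" is modeled simply by starting the data-array read before the tag comparison completes:

```python
class SpeculativeCache:
    """Minimal sketch of a direct-mapped cache that reads its data array
    speculatively, without first waiting for the tag check (illustrative
    names; a model of the idea, not the patented implementation)."""

    def __init__(self, num_sets=4, line_size=16):
        self.num_sets = num_sets
        self.line_size = line_size
        self.tags = [None] * num_sets                   # tag array
        self.data = [b"\x00" * line_size] * num_sets    # data array

    def fill(self, addr, line):
        """Install a cache line for the given address."""
        idx = (addr // self.line_size) % self.num_sets
        self.tags[idx] = addr // (self.line_size * self.num_sets)
        self.data[idx] = line

    def access(self, addr):
        """Return (line, hit). The data read begins immediately; the tag
        comparison that decides whether it was a true hit runs alongside."""
        idx = (addr // self.line_size) % self.num_sets
        tag = addr // (self.line_size * self.num_sets)
        # Speculative path: begin the data-array read right away.
        speculative_line = self.data[idx]
        # True-hit determination proceeds in parallel with the read above.
        true_hit = (self.tags[idx] == tag)
        # Speculatively read data is forwarded only on a true hit.
        return (speculative_line, True) if true_hit else (None, False)
```

On a true hit the speculatively read line is already available for the requester; on a miss it is simply discarded.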

2. Multiple issue algorithm with over subscription avoidance feature to get high bandwidth through cache pipeline (granted, in force)

    Publication No.: US06427189B1

    Publication date: 2002-07-30

    Application No.: US09510973

    Filing date: 2000-02-21

    IPC class: G06F 13/00

    CPC class: G06F12/0846 G06F12/0897

    Abstract: A multi-level cache structure and associated method of operating the cache structure are disclosed. The cache structure uses a queue for holding address information for a plurality of memory access requests as a plurality of entries. The queue includes issuing logic for determining which entries should be issued. The issuing logic further comprises find first logic for determining which entries meet a predetermined criterion and selecting a plurality of those entries as issuing entries. The issuing logic also comprises lost logic that delays the issuing of a selected entry for a predetermined time period based upon a delay criterion. The delay criterion may, for example, comprise a conflict between issuing resources, such as ports. Thus, in response to an issuing entry being oversubscribed, the issuing of such entry may be delayed for a predetermined time period (e.g., one clock cycle) to allow the resource conflict to clear.
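
The find-first selection with oversubscription avoidance can be sketched as a single selection pass over the queue. The entry fields (`ready`, `port`) and the one-port-per-cycle rule are illustrative assumptions, not details taken from the patent:

```python
def select_issuing_entries(queue):
    """Sketch of find-first issue selection with oversubscription
    avoidance. Entries are dicts with illustrative 'ready' and 'port'
    fields. At most one entry may claim a given port per cycle; a ready
    entry whose port is already claimed is oversubscribed and is held
    back (e.g., for one clock cycle) until the conflict clears."""
    claimed_ports = set()
    issuing, delayed = [], []
    for entry in queue:          # find-first: scan entries in queue order
        if not entry["ready"]:
            continue             # entry does not meet the issue criterion
        if entry["port"] in claimed_ports:
            delayed.append(entry)    # oversubscribed: delay this cycle
        else:
            claimed_ports.add(entry["port"])
            issuing.append(entry)    # first ready entry for this port
    return issuing, delayed
```

A delayed entry simply stays in the queue and competes again on the next cycle, when the conflicting resource is free.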

3. L1 cache memory (granted, in force)

    Publication No.: US06507892B1

    Publication date: 2003-01-14

    Application No.: US09510285

    Filing date: 2000-02-21

    IPC class: G06F 13/00

    CPC class: G06F12/0857 G06F12/0831

    Abstract: The inventive cache processes multiple access requests simultaneously by using separate queuing structures for data and instructions. The inventive cache uses ordering mechanisms that guarantee program order when there are address conflicts and architectural ordering requirements. The queuing structures are snoopable by other processors of a multiprocessor system. The inventive cache has a tag access bypass around the queuing structures, to allow for speculative checking by other levels of cache and for lower latency when the queues are empty. The inventive cache allows for at least four accesses to be processed simultaneously. The results of the access can be sent to multiple consumers. The multiported nature of the inventive cache allows for a very high bandwidth to be processed through this cache with a low latency.
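
The separate data/instruction queues and the tag-access bypass can be sketched as follows. Class and method names are illustrative assumptions; the sketch only models the queue-empty bypass and per-queue ordering, not the snooping or multiported aspects:

```python
from collections import deque

class QueuedL1:
    """Sketch of an L1 front end with separate data and instruction
    queues and a tag-access bypass when the relevant queue is empty
    (illustrative names, not the patent's own structures)."""

    def __init__(self):
        self.inst_queue = deque()
        self.data_queue = deque()

    def submit(self, addr, is_instruction):
        """Accept a request. If nothing is queued ahead of it, the tag
        access starts immediately (the bypass path, for lower latency);
        the request is enqueued either way to preserve program order."""
        queue = self.inst_queue if is_instruction else self.data_queue
        bypass_result = self.tag_access(addr) if not queue else None
        queue.append(addr)
        return bypass_result

    def drain_one(self, is_instruction):
        """Service the oldest queued request of the given kind."""
        queue = self.inst_queue if is_instruction else self.data_queue
        return self.tag_access(queue.popleft()) if queue else None

    def tag_access(self, addr):
        # Placeholder for the actual tag lookup.
        return ("tag-access", addr)
```

Keeping the two queues separate lets a data request and an instruction fetch each take the bypass independently, since neither waits behind the other's queue.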

4. Carry look-ahead for bi-endian adder (granted, in force)

    Publication No.: US06470374B1

    Publication date: 2002-10-22

    Application No.: US09510129

    Filing date: 2000-02-21

    IPC class: G06F 7/50

    Abstract: The inventive adder can perform carry look-ahead calculations for a bi-endian adder in a cache memory system. The adder can add one of +/−1, 4, 8, or 16 to a loaded value from memory, and the operation can be a 4 or 8 byte add. The inventive adder comprises a plurality of byte adder cells and carry look-ahead (CLA) logic. The adder cells determine which of themselves is the least significant bit (LSB) byte adder cell. The LSB cell then adds one of the increment values to its loaded value. The other cells add 0x00 or 0xFF, depending upon the sign of the increment value, to a loaded value from memory. Each adder performs two adds, one for a carry-in of 0 and the other for a carry-in of 1. Both results are sent to a MUX. The CLA logic determines each of the carries and provides a selection control signal to each MUX of the different cells.
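
The byte-cell scheme can be sketched numerically. Each cell precomputes both possible sums, and the carry selects between them; the sketch below selects the results in a sequential loop for clarity, whereas the patent's CLA logic derives all the carries in parallel. Function names and the fixed little-endian byte order are illustrative assumptions:

```python
def byte_adder_cell(a, b):
    """One byte adder cell: compute the sum for both a carry-in of 0 and
    a carry-in of 1, returning (sum, carry-out) for each case."""
    s0, s1 = a + b, a + b + 1
    return (s0 & 0xFF, s0 >> 8), (s1 & 0xFF, s1 >> 8)

def add_increment(value, increment, width=4):
    """Sketch of the byte-sliced add of a small increment (e.g. +/-1, 4,
    8, or 16) to a 'width'-byte loaded value. The LSB cell adds the
    increment byte; the other cells add 0x00 or 0xFF depending on the
    increment's sign (sign extension of the two's-complement increment);
    a MUX per cell then selects between the two precomputed sums."""
    fill = 0x00 if increment >= 0 else 0xFF
    carry, result = 0, 0
    for i in range(width):                      # LSB byte first
        a = (value >> (8 * i)) & 0xFF
        b = (increment & 0xFF) if i == 0 else fill
        (sum0, c0), (sum1, c1) = byte_adder_cell(a, b)
        s, carry = (sum1, c1) if carry else (sum0, c0)   # MUX select
        result |= s << (8 * i)
    return result                               # wraps modulo 2**(8*width)
```

Because each cell's two candidate sums are ready before its carry-in is known, the add completes as soon as the look-ahead logic resolves the carries, rather than after a full ripple through every byte.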
