Method and system for early tag accesses for lower-level caches in parallel with first-level cache
    1.
    Granted Patent
    Method and system for early tag accesses for lower-level caches in parallel with first-level cache (Active)

    Publication No.: US06427188B1

    Publication Date: 2002-07-30

    Application No.: US09501396

    Filing Date: 2000-02-09

    IPC Class: G06F 12/00

    Abstract: A system and method are disclosed which determine in parallel, for multiple levels of a multi-level cache, whether any one of those levels is capable of satisfying a memory access request. Tags for multiple levels of a multi-level cache are accessed in parallel to determine whether the address for a memory access request is contained within any of the multiple levels. For instance, in a preferred embodiment, the tags for the first level of cache and the tags for the second level of cache are accessed in parallel. Additional levels of cache tags, up to N levels, may also be accessed in parallel with the first-level cache tags. Thus, by the end of the access of the first-level cache tags, it is known whether the memory access request can be satisfied by the first level, the second level, or any of the additional N levels of cache accessed in parallel. Additionally, in a preferred embodiment, the multi-level cache is arranged such that the data array of a level of cache is accessed only if it is determined that such level of cache is capable of satisfying a received memory access request. Additionally, in a preferred embodiment, the multi-level cache is partitioned into N ways of associativity, and only a single way of a data array is accessed to satisfy a memory access request, thereby saving power and leaving the remaining ways of the data array free to satisfy other instructions.

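The parallel tag lookup described above can be sketched in software. This is a toy model under invented names (none of the identifiers come from the patent): every level's tag array is probed for the same request — in hardware these probes happen in the same cycle — and a data array is touched only for the one level, and the one way, that hit.

```python
def probe_tags(tag_arrays, set_index, tag):
    """Probe each level's tags for one set; in hardware these lookups
    occur in parallel. Returns (level, way) of a hit, else None."""
    for level, sets in enumerate(tag_arrays):
        for way, stored in enumerate(sets[set_index]):
            if stored == tag:
                return level, way
    return None

def cached_read(tag_arrays, data_arrays, set_index, tag):
    """Read under the patent's power-saving discipline: no data array is
    accessed on a miss, and on a hit only the single hitting way is read."""
    hit = probe_tags(tag_arrays, set_index, tag)
    if hit is None:
        return None                            # miss in all N levels
    level, way = hit
    return data_arrays[level][set_index][way]  # single-way data access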

    Cache chain structure to implement high bandwidth low latency cache memory subsystem
    2.
    Granted Patent
    Cache chain structure to implement high bandwidth low latency cache memory subsystem (Active)

    Publication No.: US06557078B1

    Publication Date: 2003-04-29

    Application No.: US09510283

    Filing Date: 2000-02-21

    IPC Class: G06F 13/00

    Abstract: The inventive cache uses a queuing structure which provides out-of-order cache memory access support for multiple accesses, as well as support for managing bank conflicts and address conflicts. The inventive cache can support four data accesses that hit per clock, one access that misses the L1 cache per clock, and one instruction access per clock. The responses are interspersed in the pipeline so that conflicts in the queue are minimized. Non-conflicting accesses are not inhibited; conflicting accesses, however, are held up until the conflict clears. The inventive cache provides out-of-order support after the retirement stage of a pipeline.

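The queue discipline above can be sketched as follows — a hypothetical model with invented names, not the patent's implementation: up to four data accesses issue per cycle, but an access whose bank is already claimed this cycle is held in the queue until the conflict clears, while non-conflicting younger accesses are not inhibited.

```python
def issue_cycle(queue, max_issue=4):
    """Remove and return up to max_issue entries whose banks don't
    conflict this cycle; conflicting entries wait for a later cycle."""
    issued, busy_banks, remaining = [], set(), []
    for entry in queue:
        if len(issued) < max_issue and entry["bank"] not in busy_banks:
            busy_banks.add(entry["bank"])   # claim the bank this cycle
            issued.append(entry)
        else:
            remaining.append(entry)         # conflict or over budget: hold
    queue[:] = remaining
    return issued
```

Two same-bank entries issue on consecutive cycles, while an entry on another bank proceeds unimpeded.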

    Masking error detection/correction latency in multilevel cache transfers
    3.
    Granted Patent
    Masking error detection/correction latency in multilevel cache transfers (Active)

    Publication No.: US06874116B2

    Publication Date: 2005-03-29

    Application No.: US10443103

    Filing Date: 2003-05-22

    CPC Class: G06F 11/1064; G06F 12/0897

    Abstract: A method, and a corresponding apparatus, mask error detection and correction latency during multilevel cache transfers. The method includes the steps of transferring error-protection-encoded data lines from a first cache; checking the encoded data lines for errors, where the checking completes after the transferring begins; receiving the encoded data lines in a second cache; and, upon detecting an error in a data line, preventing further transfer of that data line from the second cache.

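The masking idea above can be modeled in a few lines. This is a toy sketch — a parity bit stands in for whatever error-protection code the patent assumes, and all names are invented: each line moves into the second cache before its check completes, and a line found bad is marked unusable so it cannot be forwarded onward.

```python
def parity_ok(word):
    """Stand-in error check: even number of set bits passes."""
    return bin(word).count("1") % 2 == 0

def transfer(lines):
    """Move (addr, word) lines into a toy second cache. The transfer is
    not delayed by the check; a late-detected error only blocks further
    transfer of that line out of the second cache."""
    second_cache = {}
    for addr, word in lines:
        second_cache[addr] = {"word": word, "usable": True}  # move first
        if not parity_ok(word):            # check completes afterwards
            second_cache[addr]["usable"] = False             # poison it
    return second_cache
```

The good line (even parity) remains forwardable; the bad one is retained but blocked.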

    Masking error detection/correction latency in multilevel cache transfers
    4.
    Granted Patent
    Masking error detection/correction latency in multilevel cache transfers (Active)

    Publication No.: US06591393B1

    Publication Date: 2003-07-08

    Application No.: US09507208

    Filing Date: 2000-02-18

    IPC Class: G11C 29/00

    CPC Class: G06F 11/1064; G06F 12/0897

    Abstract: Methods and apparatus mask the latency of error detection and/or error correction applied to data transferred between a first memory and a second memory. The method comprises determining whether there is an error in a data unit in the first memory; transferring data based on the data unit from the first memory to a second memory, wherein the transferring step commences before completion of the determining step; and disabling at least part of the second memory if the determining step detects an error in the data unit. The disabling step may be accomplished, for example, by disabling the buffering of an address of the data unit or by stalling the second memory.

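One of the disabling options named above — suppressing the buffering of the data unit's address — can be sketched as a toy model (all names invented, not from the patent): the second memory records the address of each arriving data unit, and that recording is skipped when the late-arriving error check fires, leaving the erroneous unit effectively invisible.

```python
class AddressBuffer:
    """Toy model of the second memory's address buffer."""

    def __init__(self):
        self.addresses = []

    def accept(self, addr, error_detected):
        """Buffer addr unless the (late) check flagged its data unit;
        an unbuffered address cannot be looked up later."""
        if error_detected:
            return False               # buffering disabled for this unit
        self.addresses.append(addr)
        return True
```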

    L1 cache memory
    5.
    Granted Patent
    L1 cache memory (Active)

    Publication No.: US06507892B1

    Publication Date: 2003-01-14

    Application No.: US09510285

    Filing Date: 2000-02-21

    IPC Class: G06F 13/00

    CPC Class: G06F 12/0857; G06F 12/0831

    Abstract: The inventive cache processes multiple access requests simultaneously by using separate queuing structures for data and instructions. It uses ordering mechanisms that guarantee program order when there are address conflicts and architectural ordering requirements. The queuing structures are snoopable by other processors of a multiprocessor system. The cache has a tag-access bypass around the queuing structures, allowing speculative checking by other levels of cache and lower latency when the queues are empty. At least four accesses can be processed simultaneously, and the results of an access can be sent to multiple consumers. The multiported nature of the inventive cache allows very high bandwidth to be processed through the cache with low latency.

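The separate queues and the tag-access bypass can be sketched as follows — a hypothetical model with invented names: a request is routed to the data or instruction queue by type, and when the chosen queue happens to be empty the request bypasses it and goes straight to the lookup for lower latency.

```python
def access(cache, queues, kind, addr):
    """Route a request ('data' or 'inst') to its queue, or bypass the
    queue and perform the lookup immediately if the queue is empty.
    Returns the data on a bypass hit, else None (request is queued)."""
    queue = queues[kind]
    if not queue:
        return cache.get(addr)         # bypass: immediate tag/data access
    queue.append((kind, addr))         # wait behind older accesses
    return None
```

An empty data queue lets a load complete at once, while an instruction fetch behind an older queued fetch must wait its turn.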

    Unified cache port consolidation
    6.
    Granted Patent
    Unified cache port consolidation (Active)

    Publication No.: US06704820B1

    Publication Date: 2004-03-09

    Application No.: US09507033

    Filing Date: 2000-02-18

    IPC Class: G06F 12/00

    CPC Class: G06F 12/0857

    Abstract: A method and apparatus consolidate ports on a unified cache. The apparatus uses a plurality of access connections with a single port of a memory. The apparatus comprises a multiplexor and a logic circuit. The multiplexor is connected to the plurality of access connections and has a control input and a memory connection. The logic circuit produces an output signal tied to the control input. In another form, the apparatus comprises means for selectively coupling a single one of the plurality of access connections to the memory, and means for controlling the means for coupling. Preferably, the plurality of access connections comprise a data connection and an instruction connection, and the memory is a cache memory. The method uses a single memory access connection for a plurality of access types: it accepts one or more memory access requests on one or more of a plurality of connections, and if memory access requests are simultaneously active on two or more of those connections, it selects one of the simultaneously active connections and connects it to the single memory access connection.

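The multiplexor-plus-control arrangement can be sketched as a toy function (invented names; the patent leaves the selection policy open, so a fixed data-first priority is assumed here): two access connections share one cache port, and when both are active the control signal grants one and defers the other.

```python
def port_mux(data_req, inst_req, prefer_data=True):
    """Mux two access connections onto one memory port.
    Returns (granted, deferred); deferred is None if there is no
    simultaneous activity on both connections."""
    if data_req is not None and inst_req is not None:
        # both connections active: control logic picks one
        return (data_req, inst_req) if prefer_data else (inst_req, data_req)
    return (data_req if data_req is not None else inst_req, None)
```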

    Multiple issue algorithm with over subscription avoidance feature to get high bandwidth through cache pipeline
    7.
    Granted Patent
    Multiple issue algorithm with over subscription avoidance feature to get high bandwidth through cache pipeline (Active)

    Publication No.: US06427189B1

    Publication Date: 2002-07-30

    Application No.: US09510973

    Filing Date: 2000-02-21

    IPC Class: G06F 13/00

    CPC Class: G06F 12/0846; G06F 12/0897

    Abstract: A multi-level cache structure and an associated method of operating the cache structure are disclosed. The cache structure uses a queue that holds address information for a plurality of memory access requests as a plurality of entries. The queue includes issuing logic for determining which entries should be issued. The issuing logic further comprises find-first logic for determining which entries meet a predetermined criterion and selecting a plurality of those entries as issuing entries. The issuing logic also comprises lost logic that delays the issuing of a selected entry for a predetermined time period based upon a delay criterion. The delay criterion may, for example, comprise a conflict between issuing resources, such as ports. Thus, in response to an issuing entry being oversubscribed, the issuing of that entry may be delayed for a predetermined time period (e.g., one clock cycle) to allow the resource conflict to clear.

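The find-first selection with oversubscription avoidance can be sketched as a toy cycle model (invented names and data shapes, not from the patent): ready entries are scanned in age order, each claims a slot on its issue port, and a pick that would oversubscribe a port is delayed to the next cycle so the conflict can clear.

```python
def select_issues(entries, port_capacity):
    """entries: age-ordered list of {"port": p, "ready": bool}.
    port_capacity: dict of port -> slots available this cycle.
    Returns (issuing, delayed)."""
    free = dict(port_capacity)
    issuing, delayed = [], []
    for e in entries:
        if not e["ready"]:
            continue                   # fails the issue criterion
        if free.get(e["port"], 0) > 0:
            free[e["port"]] -= 1       # claim a port slot
            issuing.append(e)
        else:
            delayed.append(e)          # oversubscribed: retry next cycle
    return issuing, delayed
```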

    Apparatus and method using a semaphore buffer for semaphore instructions
    8.
    Granted Patent
    Apparatus and method using a semaphore buffer for semaphore instructions (Expired)

    Publication No.: US5696939A

    Publication Date: 1997-12-09

    Application No.: US536534

    Filing Date: 1995-09-29

    Abstract: A simplified semaphore method and apparatus for the simultaneous execution of multiple semaphore instructions and for enforcement of the necessary ordering. A central processing unit having an instruction pipeline is coupled with a data cache arrangement comprising a semaphore buffer, a data cache, and a semaphore execution unit. An initial semaphore instruction having one or more operands, together with a semaphore address, is transmitted from the instruction pipeline to the semaphore buffer, and in turn from the semaphore buffer to the semaphore execution unit. The semaphore address is transmitted from the instruction pipeline to the data cache to retrieve the initial semaphore data stored at the location, in a data line of the data cache, identified by that address. The semaphore instruction is executed within the semaphore execution unit by operating upon the initial semaphore data and the one or more semaphore operands so as to produce processed semaphore data, which is then stored in the data cache. Since the semaphore buffer provides entries for multiple semaphore instructions, it can initiate simultaneous execution of multiple semaphore instructions as needed.

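The per-instruction flow above is a fetch-and-op, which a toy sketch (invented names) can make concrete: the buffer entry supplies the operation and operand, the data cache supplies the initial semaphore data at the semaphore address, and the processed result is stored back while the initial value is returned.

```python
def execute_semaphore(data_cache, addr, op, operand):
    """Read-modify-write on one cached location, modeling one semaphore
    buffer entry being executed: returns the initial semaphore data and
    leaves the processed semaphore data in the cache."""
    initial = data_cache[addr]             # initial semaphore data
    data_cache[addr] = op(initial, operand)  # processed semaphore data
    return initial
```

A fetch-and-add is then one call with an addition operator.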

    Method and apparatus for queue issue pointer
    9.
    Granted Patent
    Method and apparatus for queue issue pointer (Expired)

    Publication No.: US06826573B1

    Publication Date: 2004-11-30

    Application No.: US09504205

    Filing Date: 2000-02-15

    IPC Class: G06F 7/00

    Abstract: A method of generating an issue pointer for issuing data structures from a queue. A signal is generated that indicates where, within the queue, the data structures that desire to issue are located. The signal is checked at the queue location pointed to by the issue pointer. If the issue pointer is pointing to the location that issued on the previous queue issue, the pointer's position is then incremented if no data structure has shifted into that queue location since the previous issue, or held if a data structure has shifted into the location since the previous issue.

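The pointer-update rule reduces to a small decision, sketched here under invented names (a toy reading of the abstract, not the patent's circuit): after an issue from the pointed-to slot, the pointer advances only if nothing has shifted into that slot since; if the queue shifted, the pointer holds, because a new unissued entry now occupies the slot.

```python
def next_issue_pointer(ptr, issued_at_ptr, shifted_into_slot):
    """Compute the issue pointer for the next cycle."""
    if issued_at_ptr and not shifted_into_slot:
        return ptr + 1     # advance past the entry that just issued
    return ptr             # hold: slot refilled by a shift (or no issue)
```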

    Cache address conflict mechanism without store buffers
    10.
    Granted Patent
    Cache address conflict mechanism without store buffers (Active)

    Publication No.: US06539457B1

    Publication Date: 2003-03-25

    Application No.: US09510279

    Filing Date: 2000-02-21

    IPC Class: G06F 12/00

    CPC Class: G06F 12/0897

    Abstract: The inventive cache manages address conflicts and maintains program order without using a store buffer. It utilizes an issue algorithm to ensure that accesses issued in the same clock are actually issued in an order consistent with program order. This is enabled by performing address comparisons before the accesses are inserted into the queue. Additionally, when accesses are separated by one or more clocks, address comparisons are performed, and an access that would read data from the cache memory array before a prior update has actually reached the array is canceled. This guarantees that program order is maintained, as an access is not allowed to complete until it is assured of receiving the most recent data when the array is accessed.

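The pre-insertion address comparison and cancellation can be sketched as a toy scheduler (all names invented): accesses are walked in program order, addresses of not-yet-performed stores are tracked, and a load that would read the array before an older same-address store has updated it is cancelled so it can retry later with fresh data.

```python
def schedule(accesses, pending_store_addrs):
    """accesses: program-ordered list of {"op": "load"|"store", "addr": a}.
    pending_store_addrs: addresses of older stores not yet in the array.
    Returns (proceed, cancelled)."""
    pending = set(pending_store_addrs)
    proceed, cancelled = [], []
    for acc in accesses:
        if acc["op"] == "store":
            pending.add(acc["addr"])   # array not yet updated
            proceed.append(acc)
        elif acc["addr"] in pending:
            cancelled.append(acc)      # would read stale data: cancel
        else:
            proceed.append(acc)        # non-conflicting: not inhibited
    return proceed, cancelled
```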