    1. Latching annihilation based logic gate

    Publication No.: US06583650B2

    Publication Date: 2003-06-24

    Application No.: US09989230

    Filing Date: 2001-11-20

    IPC Class: H03K19/0948

    CPC Class: H03K19/0963

    Abstract: The present invention provides a precharge circuit that has a first precharged node, a second precharged node, and a latch device. The first precharged node is charged to a high value during a precharge state. In response to a transition from the precharge state to an evaluate state, it either discharges to a low value or remains charged at its high value. The second precharged node takes on a value in the evaluate state that is based on the value of the first precharged node when the circuit transitions to the evaluate state. The latch device is connected to the second precharged node to latch this value in the evaluate state. With the latch device, this value is not affected by the first precharged node once the circuit has sufficiently transitioned to the evaluate state.

    2. Cache address conflict mechanism without store buffers
    Invention Grant (Active)

    Publication No.: US06539457B1

    Publication Date: 2003-03-25

    Application No.: US09510279

    Filing Date: 2000-02-21

    IPC Class: G06F12/00

    CPC Class: G06F12/0897

    Abstract: The inventive cache manages address conflicts and maintains program order without using a store buffer. The cache utilizes an issue algorithm to ensure that accesses issued in the same clock are actually issued in an order consistent with program order. This is enabled by performing address comparisons prior to insertion of the accesses into the queue. Additionally, when accesses are separated by one or more clocks, address comparisons are performed, and accesses that would read data from the cache memory array before a prior update has actually updated the array are canceled. This guarantees that program order is maintained, as an access is not allowed to complete until it is assured that the most recent data will be received upon access of the array.

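    The pre-insertion address comparison described in the abstract can be illustrated with a small conceptual model (the class and method names below are hypothetical, not taken from the patent):

```python
# Conceptual sketch only: an issue queue that preserves program order by
# comparing addresses before insertion, cancelling a load that would read
# the array ahead of a pending store to the same address.
class IssueQueue:
    def __init__(self):
        self.entries = []  # (op, address) pairs held in program order

    def insert(self, op, address):
        # Address comparison prior to insertion: a load matching the address
        # of an earlier pending store is cancelled (to be reissued once the
        # store has updated the array), so program order is maintained.
        if op == "load" and any(e_op == "store" and e_addr == address
                                for e_op, e_addr in self.entries):
            return "cancelled"
        self.entries.append((op, address))
        return "issued"
```

    Here a cancelled load simply reports its status; real hardware would recirculate it until the conflicting store completes.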

    3. Latching annihilation based logic gate
    Invention Grant (Active)

    Publication No.: US06459304B1

    Publication Date: 2002-10-01

    Application No.: US09510975

    Filing Date: 2000-02-21

    IPC Class: H03K19/0948

    CPC Class: H03K19/0963

    Abstract: The present invention provides a precharge circuit that has a first precharged node, a second precharged node, and a latch device. The first precharged node is charged to a high value during a precharge state. In response to a transition from the precharge state to an evaluate state, it either discharges to a low value or remains charged at its high value. The second precharged node takes on a value in the evaluate state that is based on the value of the first precharged node when the circuit transitions to the evaluate state. The latch device is connected to the second precharged node to latch this value in the evaluate state. With the latch device, this value is not affected by the first precharged node once the circuit has sufficiently transitioned to the evaluate state.


    4. System and method for enabling/disabling SRAM banks for memory access
    Invention Grant (Active)

    Publication No.: US06285579B1

    Publication Date: 2001-09-04

    Application No.: US09505561

    Filing Date: 2000-02-17

    IPC Class: G11C11/00

    CPC Class: G11C11/419

    Abstract: A system and method are provided which enable a data carrier, such as a BIT line, to be held at a desired value while a memory access (e.g., a read or write operation) of SRAM is performed in an efficient manner. In a preferred embodiment, cross-coupled PFETs are implemented to hold the BIT line at a desired value during a memory access of SRAM. As a result, a preferred embodiment enables a BIT line to transition from a high voltage value to a low voltage value free from conflict. That is, in a preferred embodiment, a holder PFET is not attempting to hold the BIT line high while the SRAM or an outside source (e.g., a “writing source”) is attempting to drive the BIT line to a low voltage value. Also, in a preferred embodiment, the BIT and NBIT lines (i.e., a complementary data carrier) can be driven to “true” low and “true” high voltage values. Accordingly, in a preferred embodiment, complex circuitry, such as a sense amp, is not required to detect whether a value on the lines is a logic 0 or a logic 1. Therefore, a preferred embodiment enables memory access requests (e.g., read and write operations) to be serviced in a more timely manner than is achieved with prior art implementations. Furthermore, a preferred embodiment requires less power than prior art implementations. Moreover, a preferred embodiment utilizes fewer components, and therefore consumes less surface area, than prior art implementations.


    5. System and method utilizing speculative cache access for improved performance
    Invention Grant

    Publication No.: US06647464B2

    Publication Date: 2003-11-11

    Application No.: US09507546

    Filing Date: 2000-02-18

    IPC Class: G06F12/00

    CPC Class: G06F12/0855

    Abstract: A system and method are disclosed which provide a cache structure that allows early access to the cache structure's data. A cache design is disclosed that, in response to receiving a memory access request, begins an access to a cache level's data before a determination has been made as to whether a true hit has been achieved for that cache level. That is, the cache data is speculatively accessed before it is determined whether a memory address required to satisfy a received memory access request is truly present in the cache. In a preferred embodiment, the cache is implemented to make a determination as to whether a memory address required to satisfy a received memory access request is truly present in the cache structure (i.e., whether a “true” cache hit is achieved), but such a determination is not made before the cache data begins to be accessed. Rather, the determination of whether a true cache hit is achieved is performed in parallel with the access of the cache structure's data. Therefore, a preferred embodiment implements a parallel path, beginning the cache data access while the determination is being made as to whether a true cache hit has been achieved. Thus, the cache data is retrieved early from the cache structure and is available in a timely manner for use by a requesting execution unit.
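    The parallel path described above, reading the data array while the tag comparison is still in flight, can be sketched as follows (the direct-mapped organization and field widths are illustrative assumptions, not taken from the patent):

```python
# Illustrative sketch: a direct-mapped cache lookup where the data array is
# read speculatively, in parallel with the tag comparison that decides
# whether a "true" hit occurred.
def cache_access(tags, data, address, index_bits=4, offset_bits=2):
    index = (address >> offset_bits) & ((1 << index_bits) - 1)
    tag = address >> (offset_bits + index_bits)
    speculative_data = data[index]   # data access begins immediately...
    hit = tags[index] == tag         # ...while the hit check runs "in parallel"
    return speculative_data, hit     # data is already available on a true hit
```

    On a miss the speculatively read data is simply discarded, which is why starting the read early costs nothing in correctness.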

    6. Integrated weak write test mode (WWTM)
    Invention Grant (Active)

    Publication No.: US06192001B1

    Publication Date: 2001-02-20

    Application No.: US09510287

    Filing Date: 2000-02-21

    IPC Class: G11C8/10

    CPC Class: G11C11/419

    Abstract: The present invention integrates a WWTM circuit with the write driver circuitry, which is an inherent part of any conventional SRAM design. Thus, a circuit for writing data into, and weak write testing, a memory cell is provided. In one embodiment, the circuit comprises a write driver that has an output for applying a write or a weak write output signal at the memory cell. The write driver has first and second selectable operating modes. In the first mode, the write driver is set to apply a weak write output signal from the output for performing a weak write test on the cell. In the second mode, the write driver is set to apply a normal write output signal that is sufficiently strong to write a data value into the cell when the cell is healthy.


    7. Circuit and circuit design method
    Invention Grant (Lapsed)

    Publication No.: US07698673B2

    Publication Date: 2010-04-13

    Application No.: US10940703

    Filing Date: 2004-09-14

    IPC Class: G06F9/45 G06F17/50

    CPC Class: G06F17/5045 H03K19/0963

    Abstract: One disclosed embodiment may comprise a design method for a dynamic circuit system. The method may include providing a design for a single-stage network comprising a pull-down network that is configured to perform a desired logic function according to a plurality of inputs. The method may also include designing a multi-stage network that includes at least two stages, each of the at least two stages including a pull-down network that receives a respective portion of the plurality of inputs, and each of the at least two stages cooperating to perform the desired logic function.


    8. Cache chain structure to implement high bandwidth low latency cache memory subsystem
    Invention Grant (Active)

    Publication No.: US06557078B1

    Publication Date: 2003-04-29

    Application No.: US09510283

    Filing Date: 2000-02-21

    IPC Class: G06F13/00

    Abstract: The inventive cache uses a queuing structure which provides out-of-order cache memory access support for multiple accesses, as well as support for managing bank conflicts and address conflicts. The inventive cache can support four data accesses that are hits per clock, support one access that misses the L1 cache every clock, and support one instruction access every clock. The responses are interspersed in the pipeline so that conflicts in the queue are minimized. Non-conflicting accesses are not inhibited; conflicting accesses, however, are held up until the conflict clears. The inventive cache provides out-of-order support after the retirement stage of a pipeline.


    9. Virtual address bypassing using local page mask
    Invention Grant (Active)

    Publication No.: US06446187B1

    Publication Date: 2002-09-03

    Application No.: US09507432

    Filing Date: 2000-02-19

    IPC Class: G06F12/00

    Abstract: A cache with a translation lookaside buffer (TLB) that reduces the time required for retrieval of a physical address from the TLB when accessing the cache in a system that supports variable page sizes. The TLB includes a content addressable memory (CAM) containing the virtual page numbers corresponding to pages in the cache, and a random access memory (RAM) storing the physical page numbers of the pages corresponding to the virtual page numbers in the CAM. The physical page number RAM stores a page mask along with the physical page numbers, and includes local multiplexers which perform virtual address bypassing of the physical page number when the page has been masked.

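    The local page-mask bypass can be modeled in a few lines (the 12-bit base offset and the field layout are assumptions for illustration, not details from the patent):

```python
# Conceptual model: forming a physical address from a TLB entry that stores
# a page mask alongside the physical page number. Virtual-page-number bits
# covered by the mask bypass translation and are taken directly from the
# virtual address, which is how variable page sizes are supported without
# waiting on a full translation of those bits.
def translate(virtual_addr, physical_page_number, page_mask, offset_bits=12):
    vpn = virtual_addr >> offset_bits
    offset = virtual_addr & ((1 << offset_bits) - 1)
    # Masked VPN bits pass through (the local multiplexer selects the
    # virtual bits); unmasked bits come from the stored physical page number.
    ppn = (physical_page_number & ~page_mask) | (vpn & page_mask)
    return (ppn << offset_bits) | offset
```

    With a zero mask this is an ordinary 4 KB translation; a nonzero mask widens the effective page by letting low VPN bits flow straight from the virtual address.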

    10. Multiple issue algorithm with over subscription avoidance feature to get high bandwidth through cache pipeline
    Invention Grant (Active)

    Publication No.: US06427189B1

    Publication Date: 2002-07-30

    Application No.: US09510973

    Filing Date: 2000-02-21

    IPC Class: G06F13/00

    CPC Class: G06F12/0846 G06F12/0897

    Abstract: A multi-level cache structure and an associated method of operating the cache structure are disclosed. The cache structure uses a queue for holding address information for a plurality of memory access requests as a plurality of entries. The queue includes issuing logic for determining which entries should be issued. The issuing logic further comprises find first logic for determining which entries meet predetermined criteria and selecting a plurality of those entries as issuing entries. The issuing logic also comprises lost logic that delays the issuing of a selected entry for a predetermined time period based upon a delay criterion. The delay criterion may, for example, comprise a conflict between issuing resources, such as ports. Thus, in response to an issuing entry being oversubscribed, the issuing of that entry may be delayed for a predetermined time period (e.g., one clock cycle) to allow the resource conflict to clear.

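    The oversubscription-avoidance idea, selecting entries in priority order but pushing one back a clock when its issue resource is over-committed, can be sketched like this (the bank/port resource model is a simplifying assumption):

```python
# Rough sketch: "find first" selection of ready entries, with a one-cycle
# delay for any entry whose issue resource (here, a bank port) is already
# fully subscribed this clock.
def select_issues(entries, ports_per_bank=1):
    used_ports = {}            # bank -> ports consumed this clock
    issued, delayed = [], []
    for entry in entries:      # entries assumed already in priority order
        bank = entry["bank"]
        if used_ports.get(bank, 0) < ports_per_bank:
            used_ports[bank] = used_ports.get(bank, 0) + 1
            issued.append(entry["id"])
        else:
            delayed.append(entry["id"])  # retry next clock; conflict clears
    return issued, delayed
```

    A delayed entry is not dropped, only deferred, so throughput stays high while the port conflict resolves in the following cycle.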