Apparatus and method for caching lock conditions in a multi-processor
system
    1.
    Granted invention patent (Expired)

    Publication No.: US6006299A

    Publication Date: 1999-12-21

    Application No.: US204592

    Filing Date: 1994-03-01

    IPC Classification: G06F9/46 G06F13/08

    CPC Classification: G06F9/52

    Abstract: In a computer system, an apparatus for handling lock conditions wherein a first instruction executed by a first processor operates on data that is shared with a second processor, while the second processor is locked out of simultaneously executing a second instruction that operates on the same data. A lock bit is set when the first processor begins execution of the first instruction. Thereupon, the second processor is prevented from executing its instruction until the first processor has finished processing the shared data, so the second processor queues its request in a buffer. The lock bit is cleared after the first processor has completed execution of its instruction. The first processor then checks the buffer for any outstanding requests and, in response to the second processor's queued request, transmits a signal to the second processor indicating that the data is no longer locked.

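    As a rough illustration of the scheme described above, the following C sketch models the lock bit, the request buffer, and the release-time signalling in software; the names (lock_unit_t, try_acquire, release) and the single-threaded setting are assumptions for illustration, not the patent's hardware interface.

        /* Minimal software analogue: a lock bit guards the shared data, a
         * blocked requester queues itself in a buffer, and the releasing
         * processor checks that buffer and signals each queued waiter that
         * the data is no longer locked. */
        #include <stdatomic.h>
        #include <stdio.h>

        #define QUEUE_SIZE 4

        typedef struct {
            atomic_int lock_bit;              /* 1 = shared data is locked   */
            int queued[QUEUE_SIZE];           /* IDs of queued processors    */
            int queue_len;
        } lock_unit_t;

        /* Processor `id` starts an instruction that touches the shared data. */
        static int try_acquire(lock_unit_t *u, int id)
        {
            int expected = 0;
            if (atomic_compare_exchange_strong(&u->lock_bit, &expected, 1))
                return 1;                     /* lock bit was clear: proceed */
            u->queued[u->queue_len++] = id;   /* otherwise queue the request */
            return 0;
        }

        /* The owner finishes its instruction: clear the lock bit, then check
         * the buffer for outstanding requests and signal each one. */
        static void release(lock_unit_t *u, int owner_id)
        {
            atomic_store(&u->lock_bit, 0);
            for (int i = 0; i < u->queue_len; i++)
                printf("P%d -> P%d: data is no longer locked\n",
                       owner_id, u->queued[i]);
            u->queue_len = 0;
        }

        int main(void)
        {
            lock_unit_t u = { .queue_len = 0 };
            atomic_init(&u.lock_bit, 0);
            try_acquire(&u, 0);               /* P0 sets the lock bit        */
            if (!try_acquire(&u, 1))          /* P1 is blocked and queues    */
                printf("P1 queued while P0 holds the lock\n");
            release(&u, 0);                   /* P0 clears the bit, signals  */
            return 0;
        }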

    Apparatus for maintaining multilevel cache hierarchy coherency in a
multiprocessor computer system
    2.
    Granted invention patent (Expired)

    Publication No.: US5715428A

    Publication Date: 1998-02-03

    Application No.: US639719

    Filing Date: 1996-04-29

    IPC Classification: G06F12/08 G06F13/00

    CPC Classification: G06F12/0831 G06F12/0811

    Abstract: A computer system comprising a plurality of caching agents with a cache hierarchy, the caching agents sharing memory across a system bus and issuing memory access requests in accordance with a protocol wherein a cache line has a present state that is one of a plurality of line states. The line states include a modified (M) state, wherein a line held by a first caching agent in M state has data more recent than any other copy in the system; an exclusive (E) state, wherein a first caching agent holding a line in E state is the only agent in the system with a copy of that line's data and may modify the data independently of the other agents coupled to the system bus; a shared (S) state, wherein a line in S state indicates that more than one agent has a copy of the line's data; and an invalid (I) state, indicating that the line does not exist in the cache. A read or a write to a line in I state results in a cache miss. The present invention associates states with lines and defines rules governing state transitions. State transitions depend on both processor-generated activities and activities by other bus agents, including other processors. Data consistency is guaranteed in systems having multiple levels of cache, shared memory, and/or multiple active agents, such that no agent ever reads stale data and actions are serialized as needed.

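    The four line states and transition rules described above are essentially the MESI protocol; the C sketch below is a simplified software model of the transitions for local accesses and snooped bus requests. The event names and the next_state function are illustrative assumptions, not the patent's exact rules.

        /* Simplified MESI transition model.  The next state of a cache line
         * depends on whether the access is made by the local processor or is
         * snooped from another bus agent. */
        typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } line_state_t;

        typedef enum {
            LOCAL_READ,    /* this agent reads the line                    */
            LOCAL_WRITE,   /* this agent writes the line                   */
            SNOOP_READ,    /* another agent's read is observed on the bus  */
            SNOOP_WRITE    /* another agent's write/invalidate is observed */
        } event_t;

        /* others_have_copy models the snoop response on a read miss: does
         * any other agent already hold the line when we fill it?           */
        static line_state_t next_state(line_state_t s, event_t e, int others_have_copy)
        {
            switch (e) {
            case LOCAL_READ:
                if (s == INVALID)             /* miss: fill as S or E        */
                    return others_have_copy ? SHARED : EXCLUSIVE;
                return s;                     /* hit: state unchanged        */
            case LOCAL_WRITE:
                return MODIFIED;              /* E->M silently; from S or I
                                                 only after other copies are
                                                 invalidated on the bus      */
            case SNOOP_READ:
                if (s == MODIFIED || s == EXCLUSIVE)
                    return SHARED;            /* supply data, then share     */
                return s;
            case SNOOP_WRITE:
                return INVALID;               /* another writer invalidates  */
            }
            return s;
        }

    A read or write that finds the line in the I state is the cache-miss case mentioned in the abstract; in this model it is the INVALID branch of LOCAL_READ (or the invalidate-then-write path of LOCAL_WRITE).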

    Write combining buffer for sequentially addressed partial line
operations originating from a single instruction
    3.
    Granted invention patent (Expired)

    Publication No.: US5630075A

    Publication Date: 1997-05-13

    Application No.: US450397

    Filing Date: 1995-05-25

    IPC Classification: G06F12/08 G06F13/12 G06F5/06

    CPC Classification: G06F12/0804 G06F13/126

    Abstract: A microprocessor having a bus for the transmission of data, an execution unit for processing data and instructions, a memory for storing data and instructions, and a write combining buffer for combining the data of at least two write commands into a single data set, wherein the combined data set is transmitted over the bus in one clock cycle rather than two or more clock cycles. Bus traffic is thereby minimized. The write combining buffer is comprised of a single line having a 32-byte data portion, a tag portion, and a validity portion. The tag entry specifies the address corresponding to the data currently stored in the data portion. There is one valid bit corresponding to each byte of the data portion, which specifies whether that byte currently contains useful data. So long as subsequent write operations to the write combining buffer result in hits, the data is written to the buffer's data portion. But when a miss occurs, the line is reallocated and the old data is written to main memory. Thereupon, the valid bits are cleared, and the new data and its address are written to the write combining buffer.

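    A minimal C model of the single-line write combining buffer described above follows: a 32-byte data portion, a tag holding the line address, and one valid bit per byte. The struct and function names (wc_buffer_t, wc_store) are assumed for illustration.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define LINE_BYTES 32

        /* Single-line write combining buffer: 32-byte data portion, a tag
         * giving the line address of the buffered data, one valid bit per byte. */
        typedef struct {
            uint8_t  data[LINE_BYTES];
            uint64_t tag;        /* line-aligned address of buffered data   */
            uint32_t valid;      /* bit i set => data[i] holds useful data  */
        } wc_buffer_t;

        /* Stand-in for writing the old line to main memory on a miss. */
        static void writeback_line(const wc_buffer_t *b)
        {
            printf("write back line 0x%llx, valid mask 0x%08x\n",
                   (unsigned long long)b->tag, (unsigned)b->valid);
        }

        /* Combine a partial-line store (assumed not to cross a line boundary)
         * into the buffer: a hit merges the bytes; a miss flushes the old line,
         * clears the valid bits, and reallocates the buffer for the new address. */
        static void wc_store(wc_buffer_t *b, uint64_t addr, const uint8_t *src, unsigned len)
        {
            uint64_t tag    = addr & ~(uint64_t)(LINE_BYTES - 1);
            unsigned offset = (unsigned)(addr & (LINE_BYTES - 1));

            if (b->valid != 0 && b->tag != tag) {   /* miss: evict old line */
                writeback_line(b);
                b->valid = 0;
            }
            b->tag = tag;
            memcpy(&b->data[offset], src, len);     /* merge the new bytes  */
            for (unsigned i = 0; i < len; i++)
                b->valid |= 1u << (offset + i);     /* mark them valid      */
        }

        int main(void)
        {
            wc_buffer_t wcb = { .valid = 0 };
            uint8_t chunk[8] = {0};
            wc_store(&wcb, 0x1000, chunk, 8);   /* partial-line write           */
            wc_store(&wcb, 0x1008, chunk, 8);   /* sequential address: combined */
            wc_store(&wcb, 0x2000, chunk, 8);   /* new line: flush + reallocate */
            return 0;
        }

    The last call is the miss case from the abstract: the old 0x1000 line is written to memory, the valid bits are cleared, and the buffer is reallocated for the 0x2000 line.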

    Method and apparatus for implementing a single clock cycle line
replacement in a data cache unit
    4.
    Granted invention patent (Expired)

    Publication No.: US5526510A

    Publication Date: 1996-06-11

    Application No.: US315889

    Filing Date: 1994-09-30

    IPC Classification: G06F12/08

    CPC Classification: G06F12/0831 G06F12/0859

    Abstract: The data cache unit includes a separate fill buffer and a separate write-back buffer. The fill buffer stores one or more cache lines for transference into the data cache banks of the data cache unit. The write-back buffer stores a single cache line evicted from the data cache banks prior to write-back to main memory. Circuitry is provided for transferring a cache line from the fill buffer into the data cache banks while simultaneously transferring a victim cache line from the data cache banks into the write-back buffer. This allows the overall replace operation to be performed in a single clock cycle. In a particular implementation, the data cache unit is employed within a microprocessor capable of speculative and out-of-order processing of memory instructions. Moreover, the microprocessor is incorporated within a multiprocessor computer system wherein each microprocessor is capable of snooping the cache lines of the data cache units of every other microprocessor. The data cache unit is also a non-blocking cache.

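    The simultaneous transfer can be pictured with a short C sketch: in one step the victim line moves from the cache bank into the write-back buffer while the incoming line moves from the fill buffer into the bank. The structures and the replace_line routine are illustrative of the idea, not the actual circuitry.

        #include <stdint.h>

        #define LINE_BYTES 32
        #define NUM_SETS   128

        typedef struct { uint64_t tag; uint8_t data[LINE_BYTES]; int valid; } line_t;

        typedef struct {
            line_t cache[NUM_SETS];    /* data cache bank                        */
            line_t fill_buffer;        /* line returned from memory after a miss */
            line_t writeback_buffer;   /* victim line waiting to go to memory    */
        } dcache_t;

        /* Models the single-cycle replace: the victim moves from the cache bank
         * into the write-back buffer at the same time as the fill-buffer line is
         * installed in the bank, so no extra cycle is needed to stage the victim. */
        static void replace_line(dcache_t *dc, unsigned set)
        {
            dc->writeback_buffer  = dc->cache[set];   /* evict victim to WB buffer */
            dc->cache[set]        = dc->fill_buffer;  /* install the new line      */
            dc->fill_buffer.valid = 0;                /* fill buffer is now free   */
        }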

    Methods and apparatus for caching data in a non-blocking manner using a
plurality of fill buffers
    5.
    Granted invention patent (Expired)

    Publication No.: US5671444A

    Publication Date: 1997-09-23

    Application No.: US731545

    Filing Date: 1996-10-15

    IPC Classification: G06F12/08 G06F13/00

    CPC Classification: G06F12/0859 G06F12/0831

    Abstract: A data cache and a plurality of companion fill buffers having corresponding tag matching circuitry are provided to a computer system. Each fill buffer independently stores and tracks a replacement cache line being filled with data returning from main memory in response to a cache miss. When the cache fill is completed, the replacement cache line is output to the cache tag and data arrays of the data cache if the memory locations are cacheable and the cache line has not been snoop hit while the cache fill was in progress. Additionally, the fill buffers are organized and provided with sufficient address and data ports, as well as selectors, to allow them to respond to subsequent processor loads and stores, and to external snoops that hit their cache lines, while the cache fills are in progress. As a result, the cache tag and data arrays of the data cache can continue to serve subsequent processor loads and stores, and external snoops, while one or more cache fills are in progress, without ever having to stall the processor.

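    The non-blocking behaviour comes from the tag matching on the fill buffers: a later load, store, or snoop that misses the cache arrays can still hit a line that is in flight in a fill buffer. The C fragment below is an illustrative sketch of that lookup; names and sizes are assumptions.

        #include <stdint.h>
        #include <stddef.h>

        #define NUM_FILL_BUFFERS 4
        #define LINE_BYTES       32

        /* Each fill buffer independently tracks one replacement cache line
         * being filled with data returning from memory after a miss. */
        typedef struct {
            uint64_t tag;              /* line address being filled             */
            uint8_t  data[LINE_BYTES]; /* bytes received so far                 */
            uint32_t bytes_valid;      /* per-byte valid mask                   */
            int      in_use;           /* a miss is outstanding in this entry   */
            int      snoop_hit;        /* line was snooped while being filled   */
            int      cacheable;        /* may be promoted to the cache arrays   */
        } fill_buffer_t;

        /* Tag-matching step for a subsequent load/store or an external snoop
         * that missed the cache tag array: check whether the line is already
         * in flight.  Returns the matching buffer, or NULL if a new fill can
         * be started, so the cache arrays never have to stall the processor. */
        static fill_buffer_t *fb_lookup(fill_buffer_t fb[NUM_FILL_BUFFERS],
                                        uint64_t addr)
        {
            uint64_t tag = addr & ~(uint64_t)(LINE_BYTES - 1);
            for (int i = 0; i < NUM_FILL_BUFFERS; i++)
                if (fb[i].in_use && fb[i].tag == tag)
                    return &fb[i];
            return NULL;
        }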

    Logic verification in large systems
    6.
    Granted invention patent (Expired)

    Publication No.: US07171347B2

    Publication Date: 2007-01-30

    Application No.: US09347690

    Filing Date: 1999-07-02

    IPC Classification: G06F17/50 G06F9/455 G06F9/45

    CPC Classification: G06F17/5022

    Abstract: A method of preparing a circuit model for simulation comprises decomposing the circuit model, which has a number of latches, into a plurality of extended latch boundary components and partitioning the plurality of extended latch boundary components. Decomposing and partitioning the circuit model may include decomposing hierarchical cells of the circuit model and using a constructive bin-packing heuristic to partition the plurality of extended latch boundary components. The partitioned circuit model is compiled and simulated on a uniprocessor, a multiprocessor, or a distributed-processing computer system.

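    The "constructive bin-packing heuristic" for partitioning can be illustrated with a first-fit-decreasing sketch in C: component sizes (for example, gate counts of the extended latch boundary components) are sorted and each component is placed into the first partition with room. This is a generic illustration of such a heuristic, not the patent's specific algorithm.

        #include <stdlib.h>

        #define MAX_PARTS 64

        /* Comparator: sort component sizes in decreasing order. */
        static int cmp_desc(const void *a, const void *b)
        {
            return *(const int *)b - *(const int *)a;
        }

        /* First-fit-decreasing bin packing.  sizes[] is sorted in place,
         * largest first, and each component goes into the first partition
         * whose load stays within `capacity` (an oversized component gets a
         * partition of its own).  part_of[i] receives the partition index of
         * the i-th largest component; the return value is the number of
         * partitions used. */
        static int pack_components(int *sizes, int n, int capacity, int *part_of)
        {
            int load[MAX_PARTS] = {0};
            int nparts = 0;

            qsort(sizes, (size_t)n, sizeof sizes[0], cmp_desc);
            for (int i = 0; i < n; i++) {
                int p = 0;
                while (p < nparts && load[p] + sizes[i] > capacity)
                    p++;                      /* first partition with room */
                if (p == nparts)
                    nparts++;                 /* open a new partition      */
                load[p] += sizes[i];
                part_of[i] = p;
            }
            return nparts;
        }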

    Cache memory system having data and tag arrays and multi-purpose buffer
assembly with multiple line buffers
    8.
    Granted invention patent (Expired)

    Publication No.: US5680572A

    Publication Date: 1997-10-21

    Application No.: US680109

    Filing Date: 1996-07-15

    IPC Classification: G06F12/08

    CPC Classification: G06F12/0859

    Abstract: A data cache and a plurality of companion fill buffers having corresponding tag matching circuitry are provided to a computer system. Each fill buffer independently stores and tracks a replacement cache line being filled with data returning from main memory in response to a cache miss. When the cache fill is completed, the replacement cache line is output to the cache tag and data arrays of the data cache if the memory locations are cacheable and the cache line has not been snoop hit while the cache fill was in progress. Additionally, the fill buffers are organized and provided with sufficient address and data ports, as well as selectors, to allow them to respond to subsequent processor loads and stores, and to external snoops that hit their cache lines, while the cache fills are in progress. As a result, the cache tag and data arrays of the data cache can continue to serve subsequent processor loads and stores, and external snoops, while one or more cache fills are in progress, without ever having to stall the processor.


    Method and apparatus for performing distributed simulation utilizing a simulation backplane
    9.
    Granted invention patent (In force)

    Publication No.: US07319947B1

    Publication Date: 2008-01-15

    Application No.: US09470875

    Filing Date: 1999-12-22

    IPC Classification: G06F17/50 G06F9/45 G06G7/62

    CPC Classification: G06F17/5022

    Abstract: A method and apparatus for performing distributed simulation is presented. According to an embodiment of the present invention, simulators are interfaced to a simulation backplane via simulator-dependent interfaces (SDIs). The simulators exchange messages via the simulation backplane and the SDIs. The SDIs convert the exchanged messages between a data format supported by the backplane and a data format supported by the simulator to which the interface is connected. By interfacing the simulators with the backplane via SDIs, the validation environment may be changed without reconfiguring the backplane.

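    The role of a simulator-dependent interface can be sketched in C as a pair of conversion callbacks between a simulator's native message format and the common backplane format; swapping a simulator then only means registering a different SDI, without reconfiguring the backplane. The structures and field names below are assumptions for illustration.

        #include <stdint.h>

        /* Common message format carried by the simulation backplane. */
        typedef struct {
            uint32_t signal_id;
            uint64_t timestamp;
            uint64_t value;
        } backplane_msg_t;

        /* Opaque simulator-native message; its layout is known only to its SDI. */
        typedef struct sim_msg sim_msg_t;

        /* Simulator-dependent interface: converts messages in both directions
         * between one simulator's native format and the backplane format. */
        typedef struct {
            const char *simulator_name;
            void (*to_backplane)(const sim_msg_t *in, backplane_msg_t *out);
            void (*from_backplane)(const backplane_msg_t *in, sim_msg_t *out);
        } sdi_t;

        /* Backplane delivery path: the sending SDI normalizes the message and
         * the receiving SDI denormalizes it, so neither simulator needs to know
         * the other's data format. */
        static void backplane_forward(const sdi_t *src, const sim_msg_t *msg,
                                      const sdi_t *dst, sim_msg_t *out)
        {
            backplane_msg_t common;
            src->to_backplane(msg, &common);
            dst->from_backplane(&common, out);
        }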

    Word line decoder for dual-port cache memory
    10.
    Granted invention patent (In force)

    Publication No.: US06198684B1

    Publication Date: 2001-03-06

    Application No.: US09471654

    Filing Date: 1999-12-23

    IPC Classification: G11C8/00

    CPC Classification: G11C8/16

    Abstract: In one embodiment, a memory cell having a first port and a second port is provided. A first word line is associated with the first port, and a second word line is associated with the second port. A first driver is associated with the first word line, and a second driver is associated with the second word line. A decoder is associated with the first and second drivers.

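    A small C model of the arrangement: each port has its own row address, decoder output, and word-line driver, so the two ports of a cell can be selected independently. The one-hot decode and the names below are illustrative assumptions.

        #include <stdint.h>

        #define NUM_ROWS 256   /* word lines per port (illustrative size) */

        /* Two independent one-hot word-line vectors, one per port. */
        typedef struct {
            uint8_t wordline_a[NUM_ROWS];   /* driven by the first-port driver  */
            uint8_t wordline_b[NUM_ROWS];   /* driven by the second-port driver */
        } wordline_drivers_t;

        /* The decoder takes each port's row address and asserts exactly one
         * word line through that port's driver, so both ports can access
         * (different) rows in the same cycle. */
        static void decode(wordline_drivers_t *wl, unsigned row_a, unsigned row_b)
        {
            for (unsigned r = 0; r < NUM_ROWS; r++) {
                wl->wordline_a[r] = (uint8_t)(r == row_a);
                wl->wordline_b[r] = (uint8_t)(r == row_b);
            }
        }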