Method and system for implementing remstat protocol under inclusion and non-inclusion of L1 data in L2 cache to prevent read-read deadlock
    1.
    Granted patent
    Method and system for implementing remstat protocol under inclusion and non-inclusion of L1 data in L2 cache to prevent read-read deadlock (Expired)

    Publication number: US06587930B1

    Publication date: 2003-07-01

    Application number: US09404400

    Filing date: 1999-09-23

    IPC class: G06F12/00

    CPC class: G06F12/0811 G06F12/0833

    Abstract: A distributed system structure for a large-way, multi-bus, multiprocessor system using a bus-based cache-coherence protocol is provided. The distributed system structure contains an address switch, multiple memory subsystems, and multiple master devices, either processors, I/O agents, or coherent memory adapters, organized into a set of nodes supported by a node controller. The node controller receives commands from a master device, communicates with a master device as another master device or as a slave device, and queues commands received from a master device. The system allows for the implementation of a bus protocol that reports the state of a cache line to a master device along with the first beat of data delivery for a cacheable coherent Read. Since the achievement of coherency is distributed in time and space, the issue of data integrity is addressed through a variety of actions. In one implementation, the node controller helps to maintain cache coherency for commands by blocking a master device from receiving certain transactions so as to prevent Read-Read deadlocks. In another implementation, the master devices use a bus protocol that prevents Read-Read deadlocks in a distributed, multi-bus, multiprocessor system.

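The node controller's blocking behavior can be illustrated with a minimal sketch (not the patented implementation; the class and method names are invented for illustration): cacheable Reads to the same line are serialized, so a second reader is held off until the first Read's data and coherency state have been delivered, and two masters can never end up waiting on each other's responses.

```python
from collections import deque

class NodeController:
    """Illustrative sketch: serialize Reads per cache line to avoid
    Read-Read deadlocks. Not the patented remstat protocol itself."""

    def __init__(self):
        self.pending = {}  # cache line -> deque of waiting master ids

    def issue_read(self, master, line):
        """Return True if the Read may proceed now, False if blocked."""
        queue = self.pending.setdefault(line, deque())
        queue.append(master)
        return queue[0] == master  # only the oldest outstanding Read proceeds

    def complete_read(self, master, line):
        """Called once data and cache-line state reach the master.
        Returns the next master to unblock, or None."""
        queue = self.pending[line]
        assert queue[0] == master
        queue.popleft()
        return queue[0] if queue else None
```

For example, if P0 and P1 both issue a Read to the same line, P1 is blocked until P0's Read completes, at which point the controller releases P1.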

    System and method that progressively prefetches additional lines to a distributed stream buffer as the sequentiality of the memory accessing is demonstrated
    2.
    Granted patent
    System and method that progressively prefetches additional lines to a distributed stream buffer as the sequentiality of the memory accessing is demonstrated (Expired)

    Publication number: US5664147A

    Publication date: 1997-09-02

    Application number: US519031

    Filing date: 1995-08-24

    IPC class: G06F9/38 G06F12/08

    Abstract: Within a data processing system implementing L1 and L2 caches and stream filters and buffers, prefetching of cache lines is performed in a progressive manner. In one mode, data may not be prefetched. In a second mode, two cache lines are prefetched wherein one line is prefetched into the L1 cache and the next line is prefetched into a stream buffer. In a third mode, more than two cache lines are prefetched at a time. As a result, additional cache lines are progressively prefetched to a data cache as the sequentiality of the accessing of cache lines in memory is demonstrated through sequential addressing requests along a data stream. Furthermore, the stream is physically distributed. In other words, at least one line, but not all lines, of the stream are placed within the cache.

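The progressive escalation through the three modes can be sketched as follows (an illustrative model only; the depths and the run-length thresholds are assumptions, not the patented values): prefetch depth grows as consecutive sequential accesses confirm a stream.

```python
class StreamPrefetcher:
    """Illustrative sketch of progressive prefetching: the prefetch
    depth increases as sequentiality is demonstrated. Thresholds and
    depths are invented for illustration."""

    def __init__(self):
        self.last_line = None
        self.run = 0  # consecutive sequential accesses observed

    def access(self, line):
        """Return the list of lines to prefetch for this access."""
        if self.last_line is not None and line == self.last_line + 1:
            self.run += 1
        else:
            self.run = 0  # stream broken: fall back to no prefetch
        self.last_line = line
        if self.run == 0:
            return []                    # mode 1: no prefetch
        if self.run == 1:
            return [line + 1, line + 2]  # mode 2: one to L1, one to stream buffer
        return [line + i for i in range(1, 5)]  # mode 3: deeper prefetch
```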

    System and method for prefetching data using a hardware prefetch mechanism
    3.
    Granted patent
    System and method for prefetching data using a hardware prefetch mechanism (Expired)

    Publication number: US06535962B1

    Publication date: 2003-03-18

    Application number: US09435860

    Filing date: 1999-11-08

    IPC class: G06F12/00

    Abstract: A data processing system includes a processor having a first level cache and a prefetch engine. Coupled to the processor are a second level cache, a third level cache, and a system memory. Prefetching of cache lines is performed into each of the first, second, and third level caches by the prefetch engine. Prefetch requests from the prefetch engine to the second and third level caches are performed over a private prefetch request bus, which is separate from the bus system that transfers data from the various cache levels to the processor.

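The separation of request and data traffic can be modeled with a small sketch (purely illustrative; the queue structure and line offsets are assumptions): prefetch requests to L2 and L3 travel on their own bus, so they never contend with demand fetches for the data path back to the core.

```python
from collections import deque

class PrefetchEngine:
    """Illustrative sketch: prefetch requests to the lower cache levels
    use a private request bus, separate from the data-return bus."""

    def __init__(self):
        self.private_prefetch_bus = deque()  # requests to L2/L3 only
        self.data_bus = deque()              # demand traffic to the core

    def demand_load(self, line):
        # the demand fetch uses the ordinary data path
        self.data_bus.append(("demand", line))
        # deeper prefetches are queued on the private bus (offsets invented)
        self.private_prefetch_bus.append(("L2", line + 1))
        self.private_prefetch_bus.append(("L3", line + 2))
```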

    Fixed snoop response time for source-clocked multiprocessor busses
    5.
    Granted patent
    Fixed snoop response time for source-clocked multiprocessor busses (Expired)

    Publication number: US07171445B2

    Publication date: 2007-01-30

    Application number: US10042103

    Filing date: 2002-01-07

    IPC class: G06F1/12 G06F13/40

    CPC class: G06F12/0831

    Abstract: An interfacing logic is implemented in one or more processors and a memory controller in a multiprocessor system. The interfacing logic enables all processors to receive snoops and snoop responses substantially at the same time by delaying data transmitted over faster busses before the data is provided to a local logic at a receiving end of the faster busses. The interfacing logic comprises two or more paths of a multiplexer component connected to a storage component. The storage components are connected to another multiplexer component for selecting one of the two or more paths. Preferably, a bus control logic in the receiving end determines how much delay is performed to compensate for delay differences between data busses.

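The delay-compensation rule reduces to simple arithmetic, sketched below (an illustrative calculation, not the hardware design): each receiver inserts enough delay stages that every bus presents its snoop to the local logic at the latency of the slowest bus.

```python
def delay_stages(bus_latencies):
    """Illustrative sketch: given per-bus latencies in cycles, return
    how many delay stages each receiving end must insert so all snoops
    arrive at the local logic at the same total latency."""
    slowest = max(bus_latencies.values())
    return {bus: slowest - lat for bus, lat in bus_latencies.items()}
```

For example, with bus latencies of 2, 5, and 3 cycles, the 5-cycle bus needs no added delay, while the others are padded to match it.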

    Method and apparatus for livelock prevention in a multiprocessor system
    6.
    Granted patent
    Method and apparatus for livelock prevention in a multiprocessor system (Expired)

    Publication number: US06968431B2

    Publication date: 2005-11-22

    Application number: US09998397

    Filing date: 2001-11-15

    CPC class: G06F12/0831 G06F12/0813

    Abstract: In a multiprocessor system using snooping protocols, system command conflicts are prevented by comparing processor commands with prior snoops within a specified time-defined window. A determination is then made as to whether a command issued by a given processor is likely to cause a system conflict with another command issued within said specified time-defined window. If so, the time of execution of any such snoop command determined as being likely to cause a system conflict is delayed. This approach uses address bus arbitration rules to prevent system livelocks due to both coherency and resource conflicts.


    Software prefetch system and method for predetermining amount of streamed data
    8.
    Granted patent
    Software prefetch system and method for predetermining amount of streamed data (Expired)

    Publication number: US06574712B1

    Publication date: 2003-06-03

    Application number: US09550180

    Filing date: 2000-04-14

    IPC class: G06F12/08

    Abstract: A data processing system includes a processor having a first level cache and a prefetch engine. Coupled to the processor are a second level cache, a third level cache, and a system memory. Prefetching of cache lines is performed into each of the first, second, and third level caches by the prefetch engine. Prefetch requests from the prefetch engine to the second and third level caches are performed over a private prefetch request bus, which is separate from the bus system that transfers data from the various cache levels to the processor. A software instruction is used to accelerate the prefetch process by overriding the normal functionality of the hardware prefetch engine. The instruction also limits the amount of data to be prefetched.

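The software-override behavior can be reduced to a small sketch (a hypothetical encoding, not the actual instruction): when a software hint supplies a line count, it replaces the hardware engine's current ramp depth and caps the total number of lines streamed, so known-short streams stop prefetching early.

```python
def prefetch_plan(start, hw_depth, sw_hint_lines=None):
    """Illustrative sketch: a software prefetch hint (hypothetical
    parameter sw_hint_lines) overrides the hardware ramp depth and
    limits how many lines beyond `start` are streamed."""
    depth = sw_hint_lines if sw_hint_lines is not None else hw_depth
    return [start + i for i in range(1, depth + 1)]
```

With no hint, the hardware depth governs; a hint of 1 stops the stream after a single line even if the hardware had ramped up to a deeper mode.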

    System and method for indicating that a processor has prefetched data into a primary cache and not into a secondary cache
    10.
    Granted patent
    System and method for indicating that a processor has prefetched data into a primary cache and not into a secondary cache (Expired)

    Publication number: US5758119A

    Publication date: 1998-05-26

    Application number: US518347

    Filing date: 1995-08-23

    IPC class: G06F12/08

    CPC class: G06F12/0862 G06F12/0897

    Abstract: Within a data processing system implementing L1 and L2 caches and stream filters and buffers, prefetching of cache lines is performed in a progressive manner. In one mode, data may not be prefetched. In a second mode, two cache lines are prefetched wherein one line is prefetched into the L1 cache and the next line is prefetched into a stream buffer. In a third mode, more than two cache lines are prefetched at a time. In the third mode, cache lines may be prefetched to the L1 cache and not the L2 cache, resulting in no inclusion between the L1 and L2 caches. A directory field entry provides an indication of whether or not a particular cache line in the L1 cache is also included in the L2 cache.

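The directory-bit mechanism can be sketched in a few lines (an illustrative model; the entry layout and eviction policy shown are assumptions): each L1 directory entry carries an extra bit recording whether the line is also present in L2, which matters on eviction because a line prefetched into L1 only has no L2 copy to fall back on.

```python
class L1Directory:
    """Illustrative sketch: an inclusion bit per L1 entry indicates
    whether the line is also held in L2."""

    def __init__(self):
        self.in_l2 = {}  # cache line -> True if the line is also in L2

    def install(self, line, also_in_l2):
        # also_in_l2 is False for lines prefetched into L1 only (mode 3)
        self.in_l2[line] = also_in_l2

    def evict_target(self, line):
        """Where must this line go when cast out of L1?"""
        # without an L2 copy, eviction must go all the way to memory
        return "L2" if self.in_l2.pop(line) else "memory"
```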