DIRECTORY CACHE ALLOCATION BASED ON SNOOP RESPONSE INFORMATION
    2.
    Invention application
    DIRECTORY CACHE ALLOCATION BASED ON SNOOP RESPONSE INFORMATION (Pending - Published)

    Publication number: US20100332762A1

    Publication date: 2010-12-30

    Application number: US12495722

    Filing date: 2009-06-30

    IPC classification: G06F12/08 G06F12/00

    CPC classification: G06F12/082

    Abstract: Methods and apparatus relating to directory cache allocation based on snoop response information are described. In one embodiment, an entry in a directory cache may be allocated for an address in response to a determination that another caching agent has a copy of the data corresponding to the address. Other embodiments are also disclosed.

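The allocation policy this abstract describes can be illustrated with a small simulation. This is a hedged sketch only: the class and method names (`DirectoryCache`, `on_request`, `has_copy`) are hypothetical and not taken from the patent; the point is simply that a directory-cache entry is allocated only when a snoop response shows another caching agent holds a copy.

```python
# Illustrative sketch (hypothetical names): allocate a directory-cache entry
# for an address only when a snoop of the peer caching agents hits, so the
# limited directory-cache capacity tracks lines that are actually shared.

class DirectoryCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # address -> set of agent names holding a copy

    def on_request(self, address, peers):
        """Snoop peer agents; allocate an entry only on a snoop hit."""
        holders = {p.name for p in peers if p.has_copy(address)}
        if holders:
            # Another agent has the data: record the sharers.
            if address not in self.entries and len(self.entries) >= self.capacity:
                self.entries.pop(next(iter(self.entries)))  # naive eviction
            self.entries[address] = holders
        # On a snoop miss, skip allocation: memory supplies the data and no
        # directory-cache entry is consumed for an unshared line.
        return holders

class Agent:
    def __init__(self, name, lines):
        self.name, self.lines = name, set(lines)

    def has_copy(self, address):
        return address in self.lines
```

The snoop-miss path is what distinguishes this policy from unconditional allocation: requests for unshared data never displace entries that track genuinely shared lines.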

    System and method for avoiding deadlock
    3.
    Granted patent
    System and method for avoiding deadlock (In force)

    Publication number: US07203775B2

    Publication date: 2007-04-10

    Application number: US10337833

    Filing date: 2003-01-07

    IPC classification: G06F3/00

    CPC classification: G06F9/524

    Abstract: A system and method avoid deadlock, such as circular routing deadlock, in a computer system by providing a virtual buffer at main memory. The computer system has an interconnection network that couples a plurality of processors having access to main memory. The interconnection network includes one or more routing agents, each having at least one buffer for storing packets that are to be forwarded. When the routing agent's buffer becomes full, thereby preventing it from accepting any additional packets, the routing agent transfers at least one packet into the virtual buffer. By transferring a packet out of the buffer, the routing agent frees up space allowing it to accept a new packet. If the newly accepted packet also results in the buffer becoming full, another packet is transferred into the virtual buffer. This process is repeated until the deadlock condition is resolved. Packets are then retrieved from the virtual buffer.

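The spill-and-retrieve cycle in this abstract can be sketched as follows. All names here (`RoutingAgent`, `accept`, `drain_virtual`) are illustrative assumptions, not terms from the patent; the virtual buffer is modeled as an unbounded queue standing in for main-memory backing store.

```python
# Hypothetical sketch of the virtual-buffer escape: when a routing agent's
# fixed buffer fills, it transfers a packet into a virtual buffer (backed by
# main memory in the patent) rather than refusing the new packet, which could
# otherwise sustain a circular-wait deadlock.
from collections import deque

class RoutingAgent:
    def __init__(self, buffer_size):
        self.buffer = deque()
        self.buffer_size = buffer_size
        self.virtual_buffer = deque()  # stands in for main-memory storage

    def accept(self, packet):
        if len(self.buffer) == self.buffer_size:
            # Buffer full: spill the oldest packet to the virtual buffer,
            # freeing space so the incoming packet can still be accepted.
            self.virtual_buffer.append(self.buffer.popleft())
        self.buffer.append(packet)

    def drain_virtual(self):
        """Once forward progress resumes, pull spilled packets back."""
        while self.virtual_buffer and len(self.buffer) < self.buffer_size:
            self.buffer.append(self.virtual_buffer.popleft())
```

Because the spill target is main memory rather than another network buffer, accepting a packet never depends on a downstream buffer emptying first, which is what breaks the circular dependency.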

    Mechanism for selectively imposing inter-reference order between page-table fetches and corresponding data fetches
    4.
    Granted patent
    Mechanism for selectively imposing inter-reference order between page-table fetches and corresponding data fetches (Expired)

    Publication number: US06286090B1

    Publication date: 2001-09-04

    Application number: US09084621

    Filing date: 1998-05-26

    IPC classification: G06F12/00

    CPC classification: G06F12/1054 G06F12/0813

    Abstract: A technique selectively imposes inter-reference ordering between memory reference operations issued by a processor of a multiprocessor system to addresses within a page pertaining to a page table entry (PTE) that is affected by a translation buffer (TB) miss flow routine. The TB miss flow is used to retrieve information contained in the PTE for mapping a virtual address to a physical address and, subsequently, to allow retrieval of data at the mapped physical address. The PTE that is retrieved in response to a memory reference (read) operation is not loaded into the TB until a commit-signal associated with that read operation is returned to the processor. Once the PTE and associated commit-signal are returned, the processor loads the PTE into the TB so that it can be used for a subsequent read operation directed to the data at the physical address.


    Transaction references for requests in a multi-processor network
    5.
    Granted patent
    Transaction references for requests in a multi-processor network (Expired)

    Publication number: US07856534B2

    Publication date: 2010-12-21

    Application number: US10758352

    Filing date: 2004-01-15

    IPC classification: G06F12/00

    CPC classification: G06F12/0828 G06F12/0831

    Abstract: One disclosed embodiment may comprise a system that includes a home node that provides a transaction reference to a requester in response to a request from the requester. The requester provides an acknowledgement message to the home node in response to the transaction reference, the transaction reference enabling the requester to determine the order of requests at the home node relative to its own request.

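The request/reference/acknowledgement handshake can be sketched as a minimal model. This is an assumption-laden illustration: the transaction reference is modeled as a simple sequence number, and every class and method name (`HomeNode`, `handle_ack`, `ordered_before`) is hypothetical rather than taken from the patent.

```python
# Hypothetical model of the handshake: the home node hands each requester a
# transaction reference (here a sequence number), the requester acknowledges
# it, and the reference lets the requester place its own request in the home
# node's global order.

class HomeNode:
    def __init__(self):
        self.next_ref = 0
        self.pending = {}  # transaction reference -> requester name

    def handle_request(self, requester):
        ref = self.next_ref          # transaction reference for this request
        self.next_ref += 1
        self.pending[ref] = requester
        return ref

    def handle_ack(self, ref):
        # The requester's acknowledgement closes out this reference.
        return self.pending.pop(ref)

class Requester:
    def __init__(self, name):
        self.name = name

    def request(self, home):
        ref = home.handle_request(self.name)
        home.handle_ack(ref)         # acknowledge the transaction reference
        return ref

    @staticmethod
    def ordered_before(ref, other_ref):
        """A lower reference was ordered earlier at the home node."""
        return ref < other_ref
```

Because the home node assigns references in arrival order, comparing two references is enough for a requester to reason about where its request falls relative to others, without any global clock.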

    Channel-based late race resolution mechanism for a computer system
    7.
    Granted patent
    Channel-based late race resolution mechanism for a computer system (In force)

    Publication number: US07000080B2

    Publication date: 2006-02-14

    Application number: US10263836

    Filing date: 2002-10-03

    IPC classification: G06F12/00

    Abstract: A channel-based mechanism resolves race conditions in a computer system between a first processor writing modified data back to memory and a second processor trying to obtain a copy of the modified data. In addition to a Q0 channel for carrying requests for data, a Q1 channel for carrying probes in response to Q0 requests, and a Q2 channel for carrying responses to Q0 requests, a new channel, the QWB channel, which has a higher priority than Q1 but lower than Q2, is also defined. When a forwarded Read command from the second processor results in a miss at the first processor's cache, because the requested memory block was written back to memory, the first processor issues a Loop command to memory on the QWB virtual channel. In response to the Loop command, memory sends the written-back version of the memory block to the second processor.

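The miss-then-Loop resolution path can be illustrated with a small model. A hedged sketch, not the patent's implementation: the classes and methods below (`Memory.loop`, `forwarded_read`, `receive`) are invented names, and channel priorities are omitted; only the late-race control flow is shown.

```python
# Hypothetical sketch of the late race: a forwarded Read misses at the first
# processor because the block was already written back, so that processor
# issues a Loop command on the QWB channel and memory forwards the
# written-back block to the second (requesting) processor.

class Memory:
    def __init__(self):
        self.blocks = {}

    def writeback(self, addr, data):
        self.blocks[addr] = data

    def loop(self, addr, requester):
        # QWB-channel Loop command: send the written-back block onward.
        requester.receive(addr, self.blocks[addr])

class Processor:
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def receive(self, addr, data):
        self.cache[addr] = data

    def forwarded_read(self, addr, requester, memory):
        if addr in self.cache:
            requester.receive(addr, self.cache[addr])  # normal probe hit
        else:
            memory.loop(addr, requester)  # late race: miss, fall back to Loop
```

The key design point the abstract makes is that the Loop traffic rides a dedicated channel (QWB) prioritized between Q1 and Q2, so resolving the race cannot itself be blocked behind ordinary probe traffic.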