12. Probe speculative address file
    Granted invention patent (Expired)

    Publication No.: US08438335B2

    Publication Date: 2013-05-07

    Application No.: US12892476

    Filing Date: 2010-09-28

    IPC Class: G06F12/00

    CPC Class: G06F12/0815 G06F2212/507

    Abstract: An apparatus to resolve cache coherency is presented. In one embodiment, the apparatus includes a microprocessor comprising one or more processing cores. The apparatus also includes a probe speculative address file unit, coupled to a cache memory, comprising a plurality of entries. Each entry includes a timer and a tag associated with a memory line. The apparatus further includes control logic to determine whether to service an incoming probe based at least in part on a timer value.

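    To make the mechanism concrete, here is a minimal C++ sketch of a probe speculative address file as the abstract describes it: each entry pairs a memory-line tag with a timer, and control logic consults the timer to decide whether an incoming probe is serviced now or deferred. The entry layout, timer policy, and all names (PsafEntry, should_service_probe) are illustrative assumptions, not the patented implementation.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct PsafEntry {
            uint64_t tag   = 0;    // tag of the memory line being tracked
            uint32_t timer = 0;    // remaining cycles for which probes to this line are deferred
            bool     valid = false;
        };

        class ProbeSpeculativeAddressFile {
        public:
            explicit ProbeSpeculativeAddressFile(std::size_t entries) : file_(entries) {}

            // Record a speculatively accessed line with an initial timer value.
            void track(uint64_t tag, uint32_t timer_cycles) {
                for (auto& e : file_) {
                    if (!e.valid) { e = {tag, timer_cycles, true}; return; }
                }
                // File full: the simplest policy is to decline to track (assumption).
            }

            // Age the timers once per cycle.
            void tick() {
                for (auto& e : file_) {
                    if (e.valid && e.timer > 0) --e.timer;
                }
            }

            // Control logic: service the probe unless a matching entry's timer
            // is still running, in which case the probe is deferred.
            bool should_service_probe(uint64_t probe_tag) const {
                for (const auto& e : file_) {
                    if (e.valid && e.tag == probe_tag && e.timer > 0) return false;  // defer
                }
                return true;  // no live entry: service the probe now
            }

        private:
            std::vector<PsafEntry> file_;
        };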

13. Cache spill management techniques using cache spill prediction
    Granted invention patent (Expired)

    Publication No.: US08407421B2

    Publication Date: 2013-03-26

    Application No.: US12639214

    Filing Date: 2009-12-16

    IPC Class: G06F12/00

    CPC Class: G06F12/0806 G06F12/12

    Abstract: An apparatus and method are described herein for intelligently spilling cache lines. The usefulness of cache lines previously spilled from a source cache is learned, such that later evictions of useful cache lines from the source cache are intelligently selected for spill. Furthermore, another learning mechanism, cache spill prediction, may be implemented separately or in conjunction with usefulness prediction. Cache spill prediction is capable of learning the effectiveness of remote caches at holding spilled cache lines for the source cache. As a result, cache lines can be intelligently selected for spill and intelligently distributed among remote caches based on the effectiveness of each remote cache at holding spilled cache lines for the source cache.

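    As a rough illustration of the learning mechanisms mentioned above, the C++ sketch below uses simple 2-bit saturating counters, one per remote cache, trained on whether lines spilled to that cache were later used, so the source cache can pick an effective spill target or decide not to spill at all. The counter widths, thresholds, and names (SpillPredictor, worth_spilling) are assumptions for illustration only.

        #include <array>
        #include <cstdint>

        constexpr int kRemoteCaches = 4;   // assumed number of peer caches

        struct SpillPredictor {
            // One 2-bit saturating counter per remote cache: higher = more effective target.
            std::array<uint8_t, kRemoteCaches> effectiveness{};

            // Feedback: a line spilled to `cache` was either hit remotely (useful)
            // or evicted from the remote cache without being used.
            void train(int cache, bool spilled_line_was_used) {
                uint8_t& c = effectiveness[cache];
                if (spilled_line_was_used) { if (c < 3) ++c; }
                else                       { if (c > 0) --c; }
            }

            // Pick the remote cache currently predicted to hold spilled lines best.
            int choose_spill_target() const {
                int best = 0;
                for (int i = 1; i < kRemoteCaches; ++i)
                    if (effectiveness[i] > effectiveness[best]) best = i;
                return best;
            }

            // Spill an evicted line only if some remote cache looks effective at all.
            bool worth_spilling() const {
                return effectiveness[choose_spill_target()] >= 2;
            }
        };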

14. Systems and methods for executing across at least one memory barrier employing speculative fills
    Granted invention patent (In force)

    Publication No.: US07360069B2

    Publication Date: 2008-04-15

    Application No.: US10756639

    Filing Date: 2004-01-13

    IPC Class: G06F9/00

    Abstract: Multi-processor systems and methods are provided. One embodiment relates to a multi-processor system that may comprise a processor having a processor pipeline that executes program instructions across at least one memory barrier with data from speculative data fills that are provided in response to source requests, and a log that retains executed load instruction entries associated with executed program instructions. The executed load instruction entries may be retired if a cache line associated with data of the speculative data fill has not been invalidated in an epoch different from the epoch in which the executed load instruction was executed.

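    The retirement condition in the abstract can be sketched in a few lines of C++: executed loads that consumed speculative fill data are logged together with the epoch in which they executed, and the oldest entry retires only if its cache line has not been invalidated in a different epoch. The epoch bookkeeping and names (LoadLogEntry, try_retire_oldest) are illustrative assumptions rather than the patented design.

        #include <cstdint>
        #include <deque>
        #include <unordered_map>

        struct LoadLogEntry {
            uint64_t line_addr;    // cache line that supplied the speculative fill
            uint64_t exec_epoch;   // epoch in which the load executed
        };

        class SpeculativeLoadLog {
        public:
            void record_load(uint64_t line_addr, uint64_t epoch) {
                log_.push_back({line_addr, epoch});
            }

            // Coherence side: remember the epoch in which a line was last invalidated.
            void record_invalidation(uint64_t line_addr, uint64_t epoch) {
                last_invalidation_[line_addr] = epoch;
            }

            // Retire the oldest logged load unless its line was invalidated in an
            // epoch different from the one in which the load executed.
            bool try_retire_oldest() {
                if (log_.empty()) return false;
                const LoadLogEntry& e = log_.front();
                auto it = last_invalidation_.find(e.line_addr);
                bool invalidated_in_other_epoch =
                    it != last_invalidation_.end() && it->second != e.exec_epoch;
                if (invalidated_in_other_epoch) return false;   // speculation failed: replay instead
                log_.pop_front();
                return true;
            }

        private:
            std::deque<LoadLogEntry> log_;
            std::unordered_map<uint64_t, uint64_t> last_invalidation_;
        };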

15. Mechanism for selectively imposing interference order between page-table fetches and corresponding data fetches
    Granted invention patent (Expired)

    Publication No.: US06286090B1

    Publication Date: 2001-09-04

    Application No.: US09084621

    Filing Date: 1998-05-26

    IPC Class: G06F12/00

    CPC Class: G06F12/1054 G06F12/0813

    Abstract: A technique selectively imposes inter-reference ordering between memory reference operations issued by a processor of a multiprocessor system to addresses within a page pertaining to a page table entry (PTE) that is affected by a translation buffer (TB) miss flow routine. The TB miss flow is used to retrieve information contained in the PTE for mapping a virtual address to a physical address and, subsequently, to allow retrieval of data at the mapped physical address. The PTE that is retrieved in response to a memory reference (read) operation is not loaded into the TB until a commit-signal associated with that read operation is returned to the processor. Once the PTE and associated commit-signal are returned, the processor loads the PTE into the TB so that it can be used for a subsequent read operation directed to the data at the physical address.

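    The ordering rule in the abstract (the PTE is not installed into the TB until the commit-signal for its read returns) can be sketched as control flow in C++. Everything here, including the MemorySystem and TranslationBuffer stand-ins, is a hypothetical software model, not the hardware mechanism itself.

        #include <cstdint>
        #include <unordered_map>

        struct Pte { uint64_t frame = 0; bool valid = false; };

        // Stand-in memory system: a real one would issue the read and deliver the
        // commit-signal later; this model asserts it immediately so the sketch runs.
        struct MemorySystem {
            Pte read_pte(uint64_t va, bool* commit_seen) {
                *commit_seen = true;
                return Pte{va >> 12, true};
            }
            uint64_t read_data(uint64_t pa) { return pa; }   // placeholder data fetch
        };

        struct TranslationBuffer {
            std::unordered_map<uint64_t, Pte> entries;
            void install(uint64_t vpn, const Pte& pte) { entries[vpn] = pte; }
            uint64_t translate(uint64_t va) const {
                const Pte& p = entries.at(va >> 12);
                return (p.frame << 12) | (va & 0xfff);
            }
        };

        uint64_t tb_miss_then_load(MemorySystem& mem, TranslationBuffer& tb, uint64_t va) {
            bool commit_seen = false;
            Pte pte = mem.read_pte(va, &commit_seen);

            // Key ordering rule from the abstract: do not load the PTE into the TB
            // until the commit-signal associated with the PTE read has returned.
            while (!commit_seen) { /* stall the miss flow (illustrative) */ }

            tb.install(va >> 12, pte);                 // TB now holds the mapping
            return mem.read_data(tb.translate(va));    // subsequent data fetch uses it
        }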

16. High performance recoverable communication method and apparatus for write-only networks
    Granted invention patent (Expired)

    Publication No.: US6049889A

    Publication Date: 2000-04-11

    Application No.: US6115

    Filing Date: 1998-01-13

    IPC Class: H04L29/06 H04L29/14 G06F3/00

    CPC Class: H04L29/06 H04L69/40

    Abstract: A multi-node computer network includes a plurality of nodes coupled together via a data link. Each of the nodes includes a local memory, which further comprises a shared memory. Certain items of data that are to be shared by the nodes are stored in the shared portion of memory. Associated with each of the shared data items is a data structure. When a node sharing data with other nodes in the system seeks to modify the data, it transmits the modifications over the data link to the other nodes in the network. Each update is received in order by each node in the cluster. As part of the last transmission by the modifying node, an acknowledgement request is sent to the receiving nodes in the cluster. Each node that receives the acknowledgement request returns an acknowledgement to the sending node. The returned acknowledgement is written to the data structure associated with the shared data item. If there is an error during the transmission of the message, the receiving node does not transmit an acknowledgement, and the sending node is thereby notified that an error has occurred.

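    A simplified C++ sketch of the acknowledgement scheme: updates to a shared item are broadcast in order, the final transmission carries an acknowledgement request, and each receiving node writes its acknowledgement into the data structure associated with that item; a node that saw a transmission error stays silent, which the sender detects as a missing acknowledgement. The node count, types, and function names are assumptions.

        #include <array>
        #include <cstdint>

        constexpr int kNodes = 4;   // assumed cluster size

        struct SharedItemState {
            uint64_t data = 0;                    // the shared data item itself
            std::array<bool, kNodes> acked{};     // per-node acknowledgement slots
        };

        // Receiver side: on clean receipt of the acknowledgement request, write the
        // acknowledgement back into the sender-visible structure; on error, stay silent.
        void on_ack_request(SharedItemState& item, int self, bool receive_ok) {
            if (receive_ok) item.acked[self] = true;
        }

        // Sender side: after broadcasting the updates and the acknowledgement request,
        // poll the slots; any missing acknowledgement indicates a transmission error.
        bool all_nodes_acknowledged(const SharedItemState& item, int sender) {
            for (int n = 0; n < kNodes; ++n) {
                if (n != sender && !item.acked[n]) return false;
            }
            return true;
        }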

17. Multi-index multi-way set-associative cache
    Granted invention patent (Expired)

    Publication No.: US5509135A

    Publication Date: 1996-04-16

    Application No.: US951623

    Filing Date: 1992-09-25

    IPC Class: G06F12/08 G06F13/00

    CPC Class: G06F12/0864

    Abstract: A plurality of indexes are provided for a multi-way set-associative cache of a computer system. The cache is organized as a plurality of blocks for storing data which are copies of main memory data. Each block has an associated tag for uniquely identifying the block. The blocks and the tags are addressed by indexes. The indexes are generated by a Boolean hashing function which converts a memory address to cache indexes by combining the bits of the memory address using an exclusive OR function. Different combinations of bits are used to generate a plurality of different indexes to address the tags and the associated blocks to transfer data between the cache and the central processing unit of the computer system.

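    The index generation is the concrete algorithmic piece here, so a short C++ sketch may help: a Boolean hashing function folds address bits together with exclusive OR, and different bit combinations produce the several indexes used to address the tag and data arrays. The index width, offset bits, and specific bit groupings below are illustrative assumptions, not the patented mapping.

        #include <cstdint>

        constexpr unsigned kIndexBits = 10;                       // 1024-entry arrays (assumed)
        constexpr uint32_t kIndexMask = (1u << kIndexBits) - 1;

        // Fold the block address with XOR; `shift` selects which bit groups are combined.
        inline uint32_t xor_fold(uint64_t addr, unsigned shift) {
            uint64_t block = addr >> 6;    // drop an assumed 64-byte block offset
            return static_cast<uint32_t>(
                (block ^ (block >> shift) ^ (block >> (2 * shift))) & kIndexMask);
        }

        // Each way of the multi-way cache is addressed with its own index, derived
        // from a different combination of the memory address bits.
        struct MultiIndex {
            uint32_t way0;
            uint32_t way1;
            uint32_t way2;
        };

        inline MultiIndex make_indexes(uint64_t addr) {
            return {
                xor_fold(addr, kIndexBits),        // bit combination 1
                xor_fold(addr, kIndexBits + 3),    // bit combination 2
                xor_fold(addr, kIndexBits + 7),    // bit combination 3
            };
        }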

18. Set prediction cache memory system using bits of the main memory address
    Granted invention patent (Expired)

    Publication No.: US5235697A

    Publication Date: 1993-08-10

    Application No.: US956827

    Filing Date: 1992-10-05

    IPC Class: G06F12/08

    CPC Class: G06F12/0864 G06F2212/6082

    Abstract: The set-prediction cache memory system comprises an extension of a set-associative cache memory system which operates in parallel with the set-associative structure to increase the overall speed of the cache memory while maintaining its performance. The set prediction cache memory system includes a plurality of data RAMs and a plurality of tag RAMs to store data and data tags, respectively. Also included in the system are tag store comparators to compare the tag data contained in a specific tag RAM location with a second index comprising a predetermined second portion of a main memory address. The elements of the set prediction cache memory system which operate in parallel with the set-associative cache memory include: a set-prediction RAM which receives at least one third index comprising a predetermined third portion of the main memory address, and stores such third index to essentially predict the data cache RAM holding the data indexed by the third index; a data-select multiplexer which receives the prediction index and selects a data output from the data cache RAM indexed by the prediction index; and a mispredict logic device to determine whether the set prediction RAM predicted the correct data cache RAM and, if not, to issue a mispredict signal which may comprise a write data signal, the write data signal containing information intended to correct the prediction index contained in the set prediction RAM.

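    A behavioural C++ sketch of the prediction path described above: a set-prediction RAM indexed by part of the address guesses which set (way) holds the line, the data-select step uses that guess immediately, and the tag comparison confirms it; on a wrong guess, the mispredict logic writes the correct set back into the prediction RAM. Array sizes and index choices are assumptions.

        #include <array>
        #include <cstdint>

        constexpr unsigned kWays        = 4;
        constexpr unsigned kSets        = 256;
        constexpr unsigned kPredEntries = 1024;

        struct Line { uint64_t tag = 0; uint64_t data = 0; bool valid = false; };

        struct SetPredictionCache {
            std::array<std::array<Line, kSets>, kWays> ways{};    // data RAMs + tag RAMs
            std::array<uint8_t, kPredEntries> prediction_ram{};   // predicted way per entry

            // Returns true on hit; *data receives the selected line's contents.
            bool read(uint64_t addr, uint64_t* data) {
                uint32_t set  = (addr >> 6) & (kSets - 1);          // second index (assumed)
                uint32_t pidx = (addr >> 6) & (kPredEntries - 1);   // third index (assumed)
                uint64_t tag  = addr >> 14;

                // Fast path: trust the prediction RAM and select that way's data output.
                uint8_t predicted = prediction_ram[pidx] % kWays;
                const Line& guess = ways[predicted][set];
                if (guess.valid && guess.tag == tag) { *data = guess.data; return true; }

                // Mispredict logic: check the remaining ways; on a hit elsewhere,
                // write the correct way back into the prediction RAM.
                for (uint8_t w = 0; w < kWays; ++w) {
                    const Line& l = ways[w][set];
                    if (w != predicted && l.valid && l.tag == tag) {
                        prediction_ram[pidx] = w;   // the corrective "write data"
                        *data = l.data;
                        return true;
                    }
                }
                return false;   // genuine cache miss
            }
        };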

19. Transaction references for requests in a multi-processor network
    Granted invention patent (Expired)

    Publication No.: US07856534B2

    Publication Date: 2010-12-21

    Application No.: US10758352

    Filing Date: 2004-01-15

    IPC Class: G06F12/00

    CPC Class: G06F12/0828 G06F12/0831

    Abstract: One disclosed embodiment may comprise a system that includes a home node that provides a transaction reference to a requester in response to a request from the requester. The requester provides an acknowledgement message to the home node in response to the transaction reference, the transaction reference enabling the requester to determine an order of requests at the home node relative to the request from the requester.

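    A minimal C++ protocol sketch of the exchange described above: the home node returns a transaction reference (modeled here as the sequence position it assigned to the request), the requester acknowledges it, and the requester can then compare references to determine how other requests were ordered at the home node relative to its own. The message shapes and names are assumptions.

        #include <cstdint>

        struct TransactionReference {
            uint64_t order_at_home = 0;   // position assigned to the request at the home node
        };

        struct HomeNode {
            uint64_t next_order = 0;

            // On receiving a request, assign it a place in the home node's order and
            // hand that back to the requester as the transaction reference.
            TransactionReference handle_request() { return {next_order++}; }
        };

        struct Requester {
            TransactionReference my_ref{};

            // On receiving the transaction reference, record it and send an
            // acknowledgement back to the home node (modeled by the return value).
            bool on_transaction_reference(TransactionReference ref) {
                my_ref = ref;
                return true;
            }

            // Decide whether another observed request was ordered before this
            // requester's own request at the home node.
            bool ordered_before_mine(TransactionReference other) const {
                return other.order_at_home < my_ref.order_at_home;
            }
        };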

20. Source request arbitration
    Granted invention patent (In force)

    Publication No.: US07340565B2

    Publication Date: 2008-03-04

    Application No.: US10755919

    Filing Date: 2004-01-13

    IPC Class: G06F9/00 G06F9/38 G06F13/00

    Abstract: Multiprocessor systems and methods are disclosed. One embodiment may comprise a plurality of processor cores. A given processor core may be operative to generate a request for desired data in response to a cache miss at a local cache. A shared cache structure may provide at least one speculative data fill and a coherent data fill of the desired data to at least one of the plurality of processor cores in response to a request from the at least one processor core. A processor scoreboard arbitrates the requests for the desired data. A speculative data fill of the desired data is provided to the at least one processor core. The coherent data fill of the desired data may be provided to the at least one processor core in a determined order.
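    A high-level C++ sketch of the arbitration flow described above: a processor scoreboard tracks outstanding source requests, a speculative data fill can be handed to a requesting core right away, and the coherent data fill is delivered in the order the scoreboard determines (here simply arrival order). The queueing policy and names are illustrative assumptions.

        #include <cstdint>
        #include <deque>

        struct SourceRequest {
            int      core_id;     // requesting processor core
            uint64_t line_addr;   // desired cache line
        };

        class ProcessorScoreboard {
        public:
            // A core missed in its local cache and requests the line from the shared cache.
            void enqueue(const SourceRequest& req) { pending_.push_back(req); }

            // Speculative fill: any pending requester of the line may receive
            // speculative data immediately, before coherence is resolved.
            bool speculative_fill_target(uint64_t line_addr, int* core_id) const {
                for (const auto& r : pending_) {
                    if (r.line_addr == line_addr) { *core_id = r.core_id; return true; }
                }
                return false;
            }

            // Coherent fill: delivered in the determined order (arrival order here),
            // retiring the request from the scoreboard.
            bool next_coherent_fill(SourceRequest* out) {
                if (pending_.empty()) return false;
                *out = pending_.front();
                pending_.pop_front();
                return true;
            }

        private:
            std::deque<SourceRequest> pending_;
        };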

    摘要翻译: 公开了多处理器系统和方法。 一个实施例可以包括多个处理器核。 给定的处理器核心可以用于响应于本地高速缓存处的高速缓存未命中而产生对期望数据的请求。 响应于来自至少一个处理器核心的请求,共享高速缓存结构可以向所述多个处理器核心中的至少一个提供期望数据的至少一个推测数据填充和相干数据填充。 处理器记分板对所需数据的请求进行仲裁。 将所需数据的推测数据填充提供给至少一个处理器核。 期望数据的相干数据填充可以以确定的顺序提供给至少一个处理器核心。