BUFFERED INTERCONNECT FOR HIGHLY SCALABLE ON-DIE FABRIC

    Publication No.: US20190236038A1

    Publication Date: 2019-08-01

    Application No.: US16227364

    Filing Date: 2018-12-20

    IPC Classification: G06F13/20 G06F13/40

    CPC Classification: G06F13/20 G06F13/4027

    Abstract: Buffered interconnects for highly scalable on-die fabric and associated methods and apparatus. A plurality of nodes on a die are interconnected via an on-die fabric. The nodes and fabric are configured to implement forwarding of credited messages from source nodes to destination nodes using forwarding paths partitioned into a plurality of segments, wherein separate credit loops are implemented for each segment. Under one fabric configuration implementing an approach called multi-level crediting, the nodes are configured in a two-dimensional grid and messages are forwarded using vertical and horizontal segments, wherein a first segment is between a source node and a turn node in the same row or column and a second segment is between the turn node and a destination node. Under another approach called buffered mesh, buffering and credit management facilities are provided at each node and adjacent nodes are configured to implement credit loops for forwarding messages between the nodes. The fabrics may comprise various topologies, including 2D mesh topologies and ring interconnect structures. Moreover, multi-level crediting and buffered mesh may be used for forwarding messages across dies.
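    The multi-level crediting scheme described in the abstract can be sketched as follows. This is an illustrative model only; the class names, credit counts, and routing order (row first, then column) are assumptions, not details from the patent.

    ```python
    # Sketch of multi-level crediting on a 2D grid: the path from source to
    # destination is split at a turn node, and each segment has its own
    # independent credit loop governing buffer space at its receiver.

    class CreditLoop:
        """Credit loop for one forwarding segment (one sender/receiver pair)."""
        def __init__(self, credits):
            self.credits = credits      # receiver buffer slots still available

        def try_send(self):
            if self.credits == 0:
                return False            # back-pressure: wait for a credit return
            self.credits -= 1
            return True

        def return_credit(self):
            self.credits += 1           # receiver drained one buffer slot


    def forward(src, dst, loops):
        """Forward a message src -> turn node -> dst.

        The turn node shares src's row and dst's column; segment 1 is
        src -> turn, segment 2 is turn -> dst, each with its own credit loop.
        (A real fabric would buffer at the turn node while awaiting a
        segment-2 credit; here a stalled segment simply returns None.)
        """
        turn = (src[0], dst[1])
        for seg in ((src, turn), (turn, dst)):
            if not loops[seg].try_send():
                return None             # no credit available on this segment
        return [src, turn, dst]


    src, dst = (0, 0), (2, 3)
    turn = (0, 3)
    loops = {(src, turn): CreditLoop(credits=1), (turn, dst): CreditLoop(credits=1)}
    path = forward(src, dst, loops)     # consumes one credit on each segment
    blocked = forward(src, dst, loops)  # credits exhausted -> None until returned
    ```

    Because each segment's credit loop is independent, credits only have to cover the round-trip of one segment rather than the full source-to-destination path, which is what makes the scheme scale with grid size.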

    2. METHOD, APPARATUS AND SYSTEM FOR OPTIMIZING CACHE MEMORY TRANSACTION HANDLING IN A PROCESSOR
    Patent application (in force)

    Publication No.: US20160283382A1

    Publication Date: 2016-09-29

    Application No.: US14669248

    Filing Date: 2015-03-26

    IPC Classification: G06F12/08

    Abstract: In one embodiment, a processor includes a caching home agent (CHA) coupled to a core and a cache memory and includes a cache controller having a cache pipeline and a home agent having a home agent pipeline. The CHA may: receive, in the home agent pipeline, information from an external agent responsive to a miss for data in the cache memory; issue a global ordering signal from the home agent pipeline to a requester of the data to inform the requester of receipt of the data; and report issuance of the global ordering signal to the cache pipeline, to prevent the cache pipeline from issuing a duplicate global ordering signal to the requester. Other embodiments are described and claimed.
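    The interaction between the two pipelines can be sketched as below. All names here are hypothetical; the sketch only illustrates the suppression handshake the abstract describes, in which the home agent pipeline issues the global ordering (GO) signal and reports that fact so the cache pipeline does not issue a second one.

    ```python
    # Illustrative model of the CHA's duplicate-GO suppression: the home agent
    # pipeline issues GO when miss data returns, then reports the issuance so
    # the cache pipeline skips its own GO for the same request.

    class CachingHomeAgent:
        def __init__(self):
            self.go_issued = set()      # request IDs already globally ordered
            self.signals = []           # log of (issuing pipeline, requester)

        def home_agent_fill(self, requester, req_id):
            """Miss data arrives from the external agent; home agent sends GO."""
            self.signals.append(("home_agent_GO", requester))
            self.go_issued.add(req_id)  # reported to the cache pipeline

        def cache_pipeline_complete(self, requester, req_id):
            """Cache pipeline finishes; issue GO only if nobody has yet."""
            if req_id not in self.go_issued:
                self.signals.append(("cache_GO", requester))


    cha = CachingHomeAgent()
    cha.home_agent_fill("core0", req_id=7)          # GO from home agent pipeline
    cha.cache_pipeline_complete("core0", req_id=7)  # suppressed: no duplicate GO
    # cha.signals == [("home_agent_GO", "core0")]
    ```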

    3. METHOD AND APPARATUS FOR DISTRIBUTED SNOOP FILTERING
    Patent application (in force)

    Publication No.: US20160092366A1

    Publication Date: 2016-03-31

    Application No.: US14497740

    Filing Date: 2014-09-26

    IPC Classification: G06F12/08

    Abstract: An apparatus and method are described for distributed snoop filtering. For example, one embodiment of a processor comprises: a plurality of cores to execute instructions and process data; first snoop logic to track a first plurality of cache lines stored in a mid-level cache (MLC) accessible by one or more of the cores, the first snoop logic to allocate entries for cache lines stored in the MLC and to deallocate entries for cache lines evicted from the MLC, wherein at least some of the cache lines evicted from the MLC are retained in a level 1 (L1) cache; and second snoop logic to track a second plurality of cache lines stored in a non-inclusive last level cache (NI LLC), the second snoop logic to allocate entries in the NI LLC for cache lines evicted from the MLC and to deallocate entries for cache lines stored in the MLC, wherein the second snoop logic is to store and maintain a first set of core valid bits to identify cores containing copies of the cache lines stored in the NI LLC.
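    The division of labor between the two snoop logics can be sketched as follows. This is a simplified model under assumed names; the patent's actual structures (entry formats, eviction policies) are not specified here.

    ```python
    # Sketch of the two-level snoop tracking described above: one tracker for
    # lines resident in an MLC, a second for lines in the NI LLC, with
    # core-valid bits recording which cores may still hold an L1 copy.

    class DistributedSnoopFilter:
        def __init__(self, num_cores):
            self.mlc_tracker = {}       # line -> set of cores holding it in MLC
            self.llc_tracker = {}       # line -> core-valid bits (NI LLC copies)
            self.num_cores = num_cores

        def mlc_fill(self, line, core):
            # First snoop logic: allocate on MLC fill; the NI LLC entry (if
            # any) is deallocated because the line now resides in an MLC.
            self.mlc_tracker.setdefault(line, set()).add(core)
            self.llc_tracker.pop(line, None)

        def mlc_evict(self, line, core, retained_in_l1):
            # Deallocate the MLC entry; second snoop logic allocates an
            # NI LLC entry, setting the core-valid bit if an L1 copy remains.
            cores = self.mlc_tracker.get(line, set())
            cores.discard(core)
            if not cores:
                self.mlc_tracker.pop(line, None)
            bits = self.llc_tracker.setdefault(line, [False] * self.num_cores)
            bits[core] = retained_in_l1

        def snoop_targets(self, line):
            # Cores that must be snooped for a request to this line.
            if line in self.mlc_tracker:
                return sorted(self.mlc_tracker[line])
            bits = self.llc_tracker.get(line, [])
            return [c for c, v in enumerate(bits) if v]


    sf = DistributedSnoopFilter(num_cores=4)
    sf.mlc_fill(0x40, core=1)
    sf.mlc_evict(0x40, core=1, retained_in_l1=True)  # line moves to NI LLC
    targets = sf.snoop_targets(0x40)                 # core 1: L1 may still hold it
    ```

    The core-valid bits are what make the LLC non-inclusive yet still filterable: even though the line left the MLC, the filter remembers which cores might retain an L1 copy and snoops only those.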

    7. Apparatus, system, and methods for facilitating one-way ordering of messages
    Granted patent (in force)

    Publication No.: US08554851B2

    Publication Date: 2013-10-08

    Application No.: US12889802

    Filing Date: 2010-09-24

    IPC Classification: G06F15/16

    CPC Classification: H04L67/10 G06F15/17325

    Abstract: Methods, apparatus and systems for facilitating one-way ordering of otherwise independent message classes. A one-way message ordering mechanism facilitates one-way ordering of messages of different message classes sent between interconnects employing independent pathways for the message classes. In one aspect, messages of a second message class may not pass messages of a first message class. Moreover, when messages of the first and second classes are received in sequence, the ordering mechanism ensures that messages of the first class are forwarded to, and received at, a next hop prior to forwarding messages of the second class.
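    The one-way constraint, in which class-2 messages may not pass class-1 messages but not vice versa, can be sketched as below. The queue structure and names are illustrative assumptions, not the patent's mechanism.

    ```python
    # Sketch of one-way ordering across two message classes on independent
    # pathways: a class-2 message received after a class-1 message is held
    # until that class-1 message has been received at the next hop.

    from collections import deque

    class OneWayOrderer:
        def __init__(self):
            self.pending_class1 = deque()  # class-1 msgs not yet at next hop
            self.held_class2 = deque()     # class-2 msgs held behind class-1
            self.forwarded = []            # (class, payload) in delivery order

        def receive(self, msg_class, payload):
            if msg_class == 1:
                self.pending_class1.append(payload)
            elif self.pending_class1:
                self.held_class2.append(payload)    # may not pass class-1
            else:
                self.forwarded.append((2, payload)) # nothing ahead; pass through

        def class1_delivered(self):
            """Next hop received the oldest class-1 message; release class-2."""
            self.forwarded.append((1, self.pending_class1.popleft()))
            if not self.pending_class1:
                while self.held_class2:
                    self.forwarded.append((2, self.held_class2.popleft()))


    o = OneWayOrderer()
    o.receive(1, "write")
    o.receive(2, "flag")    # held: must not pass the earlier class-1 write
    o.class1_delivered()    # write reaches next hop, then flag is released
    ```

    Note the asymmetry that gives the mechanism its name: a class-2 message arriving with no class-1 message pending is forwarded immediately, so class-1 traffic is never delayed by class-2 ordering.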
