1. SELF-AWARE, PEER-TO-PEER CACHE TRANSFERS BETWEEN LOCAL, SHARED CACHE MEMORIES IN A MULTI-PROCESSOR SYSTEM
    Invention application; status: pending, published

    Publication No.: WO2017222791A1

    Publication date: 2017-12-28

    Application No.: PCT/US2017/035905

    Filing date: 2017-06-05

    Abstract: Self-aware, peer-to-peer cache transfers between local, shared cache memories in a multi-processor system are disclosed. A shared cache memory system is provided comprising local shared cache memories accessible by an associated central processing unit (CPU) and other CPUs in a peer-to-peer manner. When a CPU desires to request a cache transfer (e.g., in response to a cache eviction), the CPU acting as a master CPU issues a cache transfer request. In response, target CPUs issue snoop responses indicating their willingness to accept the cache transfer. The target CPUs also use the snoop responses to be self-aware of the willingness of other target CPUs to accept the cache transfer. The target CPUs willing to accept the cache transfer use a predefined target CPU selection scheme to determine which of them accepts the cache transfer. This can avoid a CPU making multiple requests to find a target CPU for a cache transfer.
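    The selection step described in this abstract lends itself to a small illustration: every target CPU observes the same set of snoop responses, so each willing target can apply the same deterministic rule and exactly one of them self-selects as the acceptor, with no further arbitration by the master. The C++ sketch below models that idea only; the names (SnoopResponse, select_acceptor) and the lowest-CPU-ID tie-break rule are assumptions for illustration, not details taken from the application.

```cpp
#include <iostream>
#include <vector>

// Hypothetical model of the snoop-response-based target selection sketched in
// the abstract. All names and the selection rule are illustrative assumptions.
struct SnoopResponse {
    int cpu_id;
    bool willing;  // this target CPU has room to accept the evicted cache line
};

// Predefined target-CPU selection scheme (assumed here: lowest willing CPU id
// wins). Every target evaluates the same rule over the same observed snoop
// responses, so exactly one target accepts the transfer.
int select_acceptor(const std::vector<SnoopResponse>& responses) {
    int winner = -1;
    for (const auto& r : responses) {
        if (r.willing && (winner == -1 || r.cpu_id < winner)) {
            winner = r.cpu_id;
        }
    }
    return winner;  // -1: no peer accepted; master falls back (e.g., write-back to memory)
}

int main() {
    // Master CPU 0 evicts a line and broadcasts a cache-transfer request;
    // targets 1-3 answer with snoop responses that all targets observe.
    std::vector<SnoopResponse> responses = {{1, false}, {2, true}, {3, true}};
    std::cout << "CPU " << select_acceptor(responses)
              << " accepts the cache transfer\n";  // prints CPU 2
    return 0;
}
```

    Because the rule is deterministic over a shared view of the responses, the master does not need to retry the request against other candidate targets, which is the point the abstract makes about avoiding multiple requests.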


3. SELF-HEALING COARSE-GRAINED SNOOP FILTER
    Invention application; status: pending, published

    Publication No.: WO2017155659A1

    Publication date: 2017-09-14

    Application No.: PCT/US2017/017168

    Filing date: 2017-02-09

    Abstract: The disclosure relates to filtering snoops in coherent multiprocessor systems. For example, in response to a request to update a target memory location at a Level-2 (L2) cache shared among multiple local processing units each having a Level-1 (L1) cache, a lookup based on the target memory location may be performed in a snoop filter that tracks entries in the L1 caches. If the lookup misses the snoop filter and the snoop filter lacks space to store a new entry, a victim entry to evict from the snoop filter may be selected and a request to invalidate every cache line that maps to the victim entry may be sent to at least one of the processing units with one or more cache lines that map to the victim entry. The victim entry may then be replaced in the snoop filter with the new entry corresponding to the target memory location.
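    The eviction path in this abstract can likewise be sketched: on a filter miss with no free entry, a victim is chosen, back-invalidations are sent to every L1 that may hold lines in the victim's range, and the new entry takes the victim's place. The C++ model below is illustrative only; the region granularity, the tiny capacity, the presence bitmask, and the victim-selection policy are assumptions rather than details from the application.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Hypothetical coarse-grained snoop filter. Region size, capacity, and the
// victim policy are illustrative assumptions, not taken from the application.
constexpr uint64_t kRegionBits = 12;  // one entry covers a 4 KiB region
constexpr size_t   kCapacity   = 4;   // tiny filter so evictions occur quickly

struct FilterEntry {
    uint32_t presence = 0;  // bitmask: which L1 caches may hold lines of this region
};

class CoarseSnoopFilter {
public:
    // Record that `cpu`'s L1 may now hold a line at `addr`, evicting a filter
    // entry (with back-invalidation) if no space is left for a new region.
    void access(uint64_t addr, unsigned cpu) {
        const uint64_t region = addr >> kRegionBits;
        auto it = entries_.find(region);
        if (it == entries_.end()) {
            if (entries_.size() == kCapacity) evict_victim();
            it = entries_.emplace(region, FilterEntry{}).first;  // install new entry
        }
        it->second.presence |= (1u << cpu);
    }

private:
    void evict_victim() {
        auto victim = entries_.begin();  // arbitrary victim; stand-in for a real policy
        // Back-invalidate: every L1 whose presence bit is set must invalidate
        // all of its cache lines that map to the victim region.
        for (unsigned cpu = 0; cpu < 32; ++cpu) {
            if (victim->second.presence & (1u << cpu)) {
                std::cout << "invalidate region 0x" << std::hex << victim->first
                          << std::dec << " in L1 of CPU " << cpu << "\n";
            }
        }
        entries_.erase(victim);
    }

    std::unordered_map<uint64_t, FilterEntry> entries_;
};

int main() {
    CoarseSnoopFilter filter;
    // Touch six distinct regions from two CPUs; the fifth and sixth accesses
    // miss a full filter and trigger victim eviction plus back-invalidation.
    for (uint64_t region = 0; region < 6; ++region) {
        filter.access(region << kRegionBits, static_cast<unsigned>(region % 2));
    }
    return 0;
}
```

    A coarse-grained entry covers many lines, so evicting one entry can force several L1 invalidations; the sketch makes that visible by printing one invalidation message per affected CPU.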

