PROVIDING SCALABLE DYNAMIC RANDOM ACCESS MEMORY (DRAM) CACHE MANAGEMENT USING TAG DIRECTORY CACHES
    1.
    Invention application
    Status: Pending - Published

    Publication number: WO2017127196A1

    Publication date: 2017-07-27

    Application number: PCT/US2016/067532

    Filing date: 2016-12-19

    Abstract: Scalable dynamic random access memory (DRAM) cache management using tag directory caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in a high-bandwidth memory. The DRAM cache management circuit comprises a tag directory cache and a tag directory cache directory. The tag directory cache stores tags of frequently accessed cache lines in the DRAM cache, while the tag directory cache directory stores tags for the tag directory cache itself. The DRAM cache management circuit uses the tag directory cache and the tag directory cache directory to determine whether data associated with a memory address is cached in the DRAM cache of the high-bandwidth memory. Based on these two structures, the DRAM cache management circuit may determine whether a memory operation can be performed using the DRAM cache and/or a system memory DRAM.
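    The two-level lookup the abstract describes can be illustrated in software. The following is a minimal sketch, not the patented circuit: the direct-mapped layout, the set/tag address split, and all names (`DramCacheManager`, `tdc`, `fill`) are assumptions of this sketch. A `None` result models the case where the tag directory cache cannot decide and the full DRAM-cache tag store would have to be probed.

```python
class DramCacheManager:
    """Hypothetical sketch of the tag-lookup path described in the abstract.
    A tag directory cache (TDC) holds tags of frequently accessed DRAM-cache
    lines; a TDC directory records which sets the TDC currently covers."""

    def __init__(self, num_sets=1024, line_bytes=64):
        self.num_sets = num_sets
        self.line_bytes = line_bytes
        self.tdc = {}               # set index -> cached tag
        self.tdc_directory = set()  # set indices currently tracked by the TDC

    def split(self, address):
        # Assumed direct-mapped split: line offset, then set index, then tag.
        index = (address // self.line_bytes) % self.num_sets
        tag = address // (self.line_bytes * self.num_sets)
        return tag, index

    def lookup(self, address):
        """Return True (known hit), False (known miss), or None when the TDC
        does not cover this set and the DRAM-cache tags must be probed."""
        tag, index = self.split(address)
        if index not in self.tdc_directory:
            return None
        return self.tdc.get(index) == tag

    def fill(self, address):
        """Record a frequently accessed line's tag in the TDC."""
        tag, index = self.split(address)
        self.tdc[index] = tag
        self.tdc_directory.add(index)
```

    In this model, a `lookup` that returns `True` or `False` avoids touching the DRAM-cache tag store at all, which is the bandwidth saving the tag directory cache is meant to provide.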

    SELF-AWARE, PEER-TO-PEER CACHE TRANSFERS BETWEEN LOCAL, SHARED CACHE MEMORIES IN A MULTI-PROCESSOR SYSTEM
    2.
    Invention application
    Status: Pending - Published

    Publication number: WO2017222791A1

    Publication date: 2017-12-28

    Application number: PCT/US2017/035905

    Filing date: 2017-06-05

    Abstract: Self-aware, peer-to-peer cache transfers between local, shared cache memories in a multi-processor system are disclosed. A shared cache memory system is provided comprising local shared cache memories accessible by an associated central processing unit (CPU) and by other CPUs in a peer-to-peer manner. When a CPU desires to request a cache transfer (e.g., in response to a cache eviction), the CPU acting as a master CPU issues a cache transfer request. In response, target CPUs issue snoop responses indicating their willingness to accept the cache transfer. Because each target CPU also observes the snoop responses of the others, every target CPU is aware of which peers are willing to accept the transfer. The target CPUs willing to accept the cache transfer then use a predefined target CPU selection scheme to determine which of them accepts it. This can avoid a CPU making multiple requests to find a target CPU for a cache transfer.
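    The key point of the abstract is that every willing target can compute the same acceptor locally, because all targets see all snoop responses. A minimal sketch, assuming a hypothetical selection scheme (lowest-numbered willing CPU; the real patent leaves the scheme open):

```python
def select_acceptor(snoop_responses):
    """Deterministic selection computed identically by every observer.
    snoop_responses maps CPU id -> willingness (True/False); the assumed
    scheme picks the lowest-numbered willing CPU."""
    willing = [cpu for cpu, ok in sorted(snoop_responses.items()) if ok]
    return willing[0] if willing else None

def cpu_accepts(my_id, snoop_responses):
    """Each target CPU decides on its own, with no extra messages, whether
    it is the one that accepts the cache transfer."""
    return select_acceptor(snoop_responses) == my_id
```

    Because the selection is a pure function of the shared snoop responses, no follow-up arbitration round-trip between the master and the targets is needed.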

    MAINTAINING CACHE COHERENCY USING CONDITIONAL INTERVENTION AMONG MULTIPLE MASTER DEVICES
    3.
    Invention application
    Status: Pending - Published

    Publication number: WO2017053087A1

    Publication date: 2017-03-30

    Application number: PCT/US2016/050987

    Filing date: 2016-09-09

    Abstract: Maintaining cache coherency using conditional intervention among multiple master devices is disclosed. In one aspect, a conditional intervention circuit is configured to receive intervention responses from multiple snooping master devices. To select a snooping master device to provide intervention data, the conditional intervention circuit determines how many snooping master devices have a cache line granule size the same as or larger than that of the requesting master device. If exactly one snooping master device has a same or larger cache line granule size, that snooping master device is selected. If more than one snooping master device qualifies, a snooping master device is selected based on alternative criteria. The intervention responses provided by the unselected snooping master devices are canceled by the conditional intervention circuit, and intervention data from the selected snooping master device is provided to the requesting master device.
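    The selection rule in the abstract reduces to a filter plus a tie-break. A minimal sketch, with the tie-break criterion (lowest device ID) and all names being assumptions of the sketch rather than the patent's actual alternative criteria:

```python
def choose_intervener(requester_granule, snooper_granules, tiebreak=min):
    """Hypothetical sketch of conditional intervention selection.
    snooper_granules maps device id -> cache line granule size (bytes).
    Keep only snoopers whose granule is >= the requester's; if exactly one
    remains it wins, otherwise an alternative criterion (here: lowest id)
    breaks the tie. Returns the chosen device id, or None if none qualify."""
    eligible = [dev for dev, g in snooper_granules.items()
                if g >= requester_granule]
    if not eligible:
        return None
    if len(eligible) == 1:
        return eligible[0]
    return tiebreak(eligible)
```

    The granule-size filter matters because a snooper with a smaller granule could not supply a full cache line's worth of intervention data to the requester.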

    AVOIDING DEADLOCKS IN PROCESSOR-BASED SYSTEMS EMPLOYING RETRY AND IN-ORDER-RESPONSE NON-RETRY BUS COHERENCY PROTOCOLS
    4.
    Invention application
    Status: Pending - Published

    Publication number: WO2017053086A1

    Publication date: 2017-03-30

    Application number: PCT/US2016/050961

    Filing date: 2016-09-09

    Abstract: Aspects disclosed herein include avoiding deadlocks in processor-based systems employing retry and in-order-response non-retry bus coherency protocols. In this regard, an interface bridge circuit is communicatively coupled to a first core device that implements a retry bus coherency protocol, and a second core device that implements an in-order-response non-retry bus coherency protocol. The interface bridge circuit receives a snoop command from the first core device, and forwards the snoop command to the second core device. While the snoop command is pending, the interface bridge circuit detects a potential deadlock condition between the first core device and the second core device. In response to detecting the potential deadlock condition, the interface bridge circuit is configured to send a retry response to the first core device. This enables the first core device to continue processing, thereby eliminating the potential deadlock condition.
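    The bridge's behavior can be modeled as a small state machine. This is an illustrative sketch only: the deadlock-detection condition (a request from the non-retry core colliding with an in-flight snoop to it) and all names are assumptions, since the abstract does not define the exact detection logic.

```python
class InterfaceBridge:
    """Hypothetical sketch of the interface bridge circuit: it sits between
    a retry-protocol core (which may be told to retry) and an in-order,
    non-retry core (which may not)."""

    def __init__(self):
        self.pending_snoops = set()  # addresses with snoops in flight

    def forward_snoop(self, address):
        """Forward a snoop from the retry-protocol core and track it."""
        self.pending_snoops.add(address)
        return "SNOOP_FORWARDED"

    def request_from_nonretry_core(self, address):
        """A conflicting request while the snoop is pending is treated as a
        potential deadlock: the bridge answers the retry-protocol core with
        a retry response, letting it continue processing."""
        if address in self.pending_snoops:
            return "RETRY_TO_FIRST_CORE"
        return "PROCEED"

    def snoop_complete(self, address):
        self.pending_snoops.discard(address)
```

    The retry response is only ever sent toward the core that implements the retry protocol; the in-order non-retry core is never asked to retry, which is what makes the combination deadlock-free in this model.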

    BRIDGING STRONGLY ORDERED WRITE TRANSACTIONS TO DEVICES IN WEAKLY ORDERED DOMAINS, AND RELATED APPARATUSES, METHODS, AND COMPUTER-READABLE MEDIA
    5.
    Invention application
    Status: Pending - Published

    Publication number: WO2016040034A1

    Publication date: 2016-03-17

    Application number: PCT/US2015/047727

    Filing date: 2015-08-31

    Abstract: Bridging strongly ordered write transactions to devices in weakly ordered domains, and related apparatuses, methods, and computer-readable media are disclosed. In one aspect, a host bridge device is configured to receive strongly ordered write transactions from one or more strongly ordered producer devices. The host bridge device issues the strongly ordered write transactions to one or more consumer devices within a weakly ordered domain. The host bridge device detects a first write transaction that is not accepted by a first consumer device of the one or more consumer devices. For each of one or more write transactions issued subsequent to the first write transaction and accepted by a respective consumer device, the host bridge device sends a cancellation message to the respective consumer device. The host bridge device replays the first write transaction and the one or more write transactions that were issued subsequent to the first write transaction.
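    The cancel-and-replay sequence in the abstract can be sketched as follows. The acceptance model (`accept_fn(write, attempt)`), the unbounded replay loop, and the action-log representation are assumptions of this sketch, not the patented host bridge design:

```python
def bridge_strongly_ordered_writes(writes, accept_fn):
    """Hypothetical sketch: issue writes in order; when one is rejected,
    cancel every later write a consumer already accepted, then replay the
    rejected write and everything after it. accept_fn(write, attempt)
    models whether the consumer accepts the write on a given attempt.
    Returns the ordered log of ("issue", write, ok) / ("cancel", write)
    actions. (A real bridge would bound the number of replays.)"""
    log, attempt, start = [], 0, 0
    while start < len(writes):
        results = []
        for w in writes[start:]:
            ok = accept_fn(w, attempt)
            log.append(("issue", w, ok))
            results.append(ok)
        if all(results):
            return log
        first_fail = results.index(False)
        # Cancel later writes that were nevertheless accepted, so strong
        # ordering is preserved from the consumers' point of view.
        for k, ok in enumerate(results):
            if k > first_fail and ok:
                log.append(("cancel", writes[start + k]))
        start += first_fail   # replay from the rejected write onward
        attempt += 1
    return log
```

    The cancellation step is what lets the bridge pipeline writes into the weakly ordered domain optimistically: ordering is repaired after the fact only when a rejection actually occurs.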
