DISTRIBUTED FAIRNESS PROTOCOL FOR INTERCONNECT NETWORKS

    Publication No.: US20200073835A1

    Publication Date: 2020-03-05

    Application No.: US16678199

    Application Date: 2019-11-08

    Abstract: A system is disclosed, including a plurality of access units, a plurality of circuit nodes each coupled to a respective access unit, and a plurality of data processing nodes each coupled to a respective access unit. A particular data processing node may be configured to generate a plurality of data transactions. The particular data processing node may also be configured to determine an availability of a coupled access unit. In response to a determination that the coupled access unit is unavailable, the particular data processing node may be configured to halt a transfer of the plurality of data transactions to the coupled access unit and assert a halt indicator signal. In response to a determination that the coupled access unit is available, the particular data processing node may be configured to transfer the plurality of data transactions to the coupled access unit.
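
    The abstract describes the node-side protocol in prose. The following is a minimal C sketch of that flow under assumed names (transaction_t, access_unit_t, processing_node_t, and node_try_transfer are illustrative, not terms defined by the patent): the node halts transfers and asserts a halt indicator while its coupled access unit is unavailable, and sends a pending transaction once the unit reports availability.

    /* Minimal sketch (assumed names, not the patent's) of the behavior the
     * abstract describes for a single data processing node. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int payload;                     /* contents of one data transaction */
    } transaction_t;

    typedef struct {
        bool available;                  /* whether the unit can accept a transaction */
    } access_unit_t;

    typedef struct {
        access_unit_t *coupled_unit;     /* access unit this node is coupled to */
        bool halt_indicator;             /* asserted while transfers are halted */
    } processing_node_t;

    /* Attempt to send one transaction; returns true if it was transferred. */
    static bool node_try_transfer(processing_node_t *node, const transaction_t *txn)
    {
        if (!node->coupled_unit->available) {
            node->halt_indicator = true;     /* unavailable: halt and assert the indicator */
            return false;
        }
        node->halt_indicator = false;        /* available: clear the indicator and transfer */
        printf("transferred transaction %d\n", txn->payload);
        return true;
    }

    int main(void)
    {
        access_unit_t unit = { .available = false };
        processing_node_t node = { .coupled_unit = &unit, .halt_indicator = false };
        transaction_t txn = { .payload = 42 };

        node_try_transfer(&node, &txn);  /* halted, halt indicator asserted */
        unit.available = true;
        node_try_transfer(&node, &txn);  /* transferred once the unit is available */
        return 0;
    }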

    Storage, access, and management of random numbers generated by a central random number generator and dispensed to hardware threads of cores

    Publication No.: US09971565B2

    Publication Date: 2018-05-15

    Application No.: US14706213

    Application Date: 2015-05-07

    CPC classification number: G06F7/58 G06F7/582

    Abstract: Random numbers within a processor may be scarce, especially when multiple hardware threads are consuming them. A local random number buffer can be used by an execution core to better manage allocation and consumption of random numbers. The buffer may operate in a number of modes, and allow any hardware thread to use a random number under some conditions. In other conditions, only certain hardware threads may be allowed to consume a random number. The local random number buffer may have a dynamic pool of entries usable by any hardware thread, as well as reserved entries usable by only particular hardware threads. Further, a user-level instruction is disclosed that can be stored in a wait queue in response to a random number being unavailable, rather than having the instruction's request for a random number simply be denied. The random number buffer may also boost performance and reduce latency.
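
    As a rough illustration of the buffering scheme the abstract describes, the C sketch below models a shared dynamic pool that any hardware thread may draw from plus one reserved entry per thread. The names (rng_buffer_t, rng_consume) and sizes are assumptions for illustration; a real design might park an unsatisfied request in a wait queue rather than denying it, as the abstract notes.

    /* Minimal software model (assumed names and sizes) of a local random
     * number buffer with a shared dynamic pool and per-thread reserved entries. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_THREADS  4
    #define POOL_ENTRIES 8

    typedef struct {
        unsigned long dynamic_pool[POOL_ENTRIES]; /* entries usable by any thread */
        int dynamic_count;
        unsigned long reserved[NUM_THREADS];      /* one reserved entry per thread */
        bool reserved_valid[NUM_THREADS];
    } rng_buffer_t;

    /* Hand a random number to a hardware thread, preferring its reserved entry
     * and falling back to the shared pool.  Returns false when none is available
     * (a real design could queue the requesting instruction instead). */
    static bool rng_consume(rng_buffer_t *buf, int thread_id, unsigned long *out)
    {
        if (buf->reserved_valid[thread_id]) {
            *out = buf->reserved[thread_id];
            buf->reserved_valid[thread_id] = false;
            return true;
        }
        if (buf->dynamic_count > 0) {
            *out = buf->dynamic_pool[--buf->dynamic_count];
            return true;
        }
        return false;
    }

    int main(void)
    {
        rng_buffer_t buf = { .dynamic_pool = { 12345UL }, .dynamic_count = 1 };
        buf.reserved[0] = 777UL;
        buf.reserved_valid[0] = true;

        unsigned long value;
        if (rng_consume(&buf, 0, &value))  /* thread 0 uses its reserved entry */
            printf("thread 0 got %lu\n", value);
        if (rng_consume(&buf, 1, &value))  /* thread 1 falls back to the shared pool */
            printf("thread 1 got %lu\n", value);
        if (!rng_consume(&buf, 2, &value)) /* nothing left: candidate for the wait queue */
            printf("thread 2 must wait\n");
        return 0;
    }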

    Distributed fairness protocol for interconnect networks

    Publication No.: US10474601B2

    Publication Date: 2019-11-12

    Application No.: US15425025

    Application Date: 2017-02-06

    Abstract: A system is disclosed, including a plurality of access units, a plurality of circuit nodes each coupled to a respective access unit, and a plurality of data processing nodes each coupled to a respective access unit. A particular data processing node may be configured to generate a plurality of data transactions. The particular data processing node may also be configured to determine an availability of a coupled access unit. In response to a determination that the coupled access unit is unavailable, the particular data processing node may be configured to halt a transfer of the plurality of data transactions to the coupled access unit and assert a halt indicator signal. In response to a determination that the coupled access unit is available, the particular data processing node may be configured to transfer the plurality of data transactions to the coupled access unit.

    DISTRIBUTED FAIRNESS PROTOCOL FOR INTERCONNECT NETWORKS

    Publication No.: US20180225239A1

    Publication Date: 2018-08-09

    Application No.: US15425025

    Application Date: 2017-02-06

    CPC classification number: G06F13/36 G06F13/24 G06F13/4068

    Abstract: A system is disclosed, including a plurality of access units, a plurality of circuit nodes each coupled to a respective access unit, and a plurality of data processing nodes each coupled to a respective access unit. A particular data processing node may be configured to generate a plurality of data transactions. The particular data processing node may also be configured to determine an availability of a coupled access unit. In response to a determination that the coupled access unit is unavailable, the particular data processing node may be configured to halt a transfer of the plurality of data transactions to the coupled access unit and assert a halt indicator signal. In response to a determination that the coupled access unit is available, the particular data processing node may be configured to transfer the plurality of data transactions to the coupled access unit.

    RANDOM NUMBER STORAGE, ACCESS, AND MANAGEMENT (invention application, granted)

    Publication No.: US20160328209A1

    Publication Date: 2016-11-10

    Application No.: US14706213

    Application Date: 2015-05-07

    CPC classification number: G06F7/58 G06F7/582

    Abstract: Random numbers within a processor may be scarce, especially when multiple hardware threads are consuming them. A local random number buffer can be used by an execution core to better manage allocation and consumption of random numbers. The buffer may operate in a number of modes, and allow any hardware thread to use a random number under some conditions. In other conditions, only certain hardware threads may be allowed to consume a random number. The local random number buffer may have a dynamic pool of entries usable by any hardware thread, as well as reserved entries usable by only particular hardware threads. Further, a user-level instruction is disclosed that can be stored in a wait queue in response to a random number being unavailable, rather than having the instruction's request for a random number simply be denied. The random number buffer may also boost performance and reduce latency.

    Non-Temporal Write Combining Using Cache Resources (invention application, granted)

    Publication No.: US20160314069A1

    Publication Date: 2016-10-27

    Application No.: US14691971

    Application Date: 2015-04-21

    Abstract: A method and apparatus for performing non-temporal write combining using existing cache resources is disclosed. In one embodiment, a method includes executing a first thread on a processor core, the first thread including a first block initialization store (BIS) instruction. A cache query may be performed responsive to the BIS instruction, and if the query results in a cache miss, a cache line may be installed in a cache in an unordered dirty state in which it is exclusively owned by the first thread. The first BIS instruction and one or more additional BIS instructions may write data from the first processor core into the first cache line. After a cache coherence response is received, the state of the first cache line may be changed to an ordered dirty state in which it is no longer exclusive to the first thread.
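
    To make the state sequence concrete, here is a minimal C sketch of the cache-line transitions the abstract walks through; the enum values and function names (bis_miss_install, coherence_response) are illustrative assumptions rather than the patent's terminology. A BIS miss installs the line in an unordered dirty state exclusive to the storing thread, further BIS stores combine into it, and the coherence response promotes it to an ordered dirty state.

    /* Minimal sketch (assumed names) of the cache-line state transitions the
     * abstract describes for block initialization store (BIS) write combining. */
    #include <stdio.h>

    typedef enum {
        LINE_INVALID,
        LINE_UNORDERED_DIRTY, /* installed on a BIS miss, exclusive to one thread */
        LINE_ORDERED_DIRTY    /* after the cache coherence response is received */
    } line_state_t;

    typedef struct {
        line_state_t state;
        int owner_thread;       /* meaningful only while LINE_UNORDERED_DIRTY */
        unsigned char data[64]; /* the cache line being write-combined */
    } cache_line_t;

    /* A BIS store that misses installs the line in the unordered dirty state. */
    static void bis_miss_install(cache_line_t *line, int thread_id)
    {
        line->state = LINE_UNORDERED_DIRTY;
        line->owner_thread = thread_id;
    }

    /* Subsequent BIS stores from the owning thread combine into the same line. */
    static void bis_write(cache_line_t *line, int offset, unsigned char value)
    {
        line->data[offset] = value;
    }

    /* The coherence response promotes the line to ordered dirty, ending the
     * owning thread's exclusive use of it. */
    static void coherence_response(cache_line_t *line)
    {
        line->state = LINE_ORDERED_DIRTY;
        line->owner_thread = -1;
    }

    int main(void)
    {
        cache_line_t line = { .state = LINE_INVALID, .owner_thread = -1 };
        bis_miss_install(&line, 0);   /* first BIS instruction misses in the cache */
        bis_write(&line, 0, 0xAB);    /* additional BIS stores write into the line */
        coherence_response(&line);    /* line becomes ordered dirty */
        printf("final state: %d\n", line.state);
        return 0;
    }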

    Distributed fairness protocol for interconnect networks

    Publication No.: US11126577B2

    Publication Date: 2021-09-21

    Application No.: US16678199

    Application Date: 2019-11-08

    Abstract: A system is disclosed, including a plurality of access units, a plurality of circuit nodes each coupled to a respective access unit, and a plurality of data processing nodes each coupled to a respective access unit. A particular data processing node may be configured to generate a plurality of data transactions. The particular data processing node may also be configured to determine an availability of a coupled access unit. In response to a determination that the coupled access unit is unavailable, the particular data processing node may be configured to halt a transfer of the plurality of data transactions to the coupled access unit and assert a halt indicator signal. In response to a determination that the coupled access unit is available, the particular data processing node may be configured to transfer the plurality of data transactions to the coupled access unit.
