EVENT DRIVEN AND TIME HOPPING NEURAL NETWORK
    43.
    Invention Application

    Publication No.: US20180189648A1

    Publication Date: 2018-07-05

    Application No.: US15394976

    Filing Date: 2016-12-30

    CPC classification number: G06N3/08 G06N3/049

    Abstract: In one embodiment, a processor is to store a membrane potential of a neural unit of a neural network; and calculate, at a particular time-step of the neural network, a change to the membrane potential of the neural unit occurring over multiple time-steps that have elapsed since the last time-step at which the membrane potential was updated, wherein each of the multiple time-steps that have elapsed since the last time-step is associated with at least one input to the neural unit that affects the membrane potential of the neural unit.
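The abstract describes updating a membrane potential lazily: rather than advancing the neuron at every time-step, the accumulated change over all elapsed time-steps is applied at once when an input event arrives. A minimal sketch of that idea, using a leaky integrate-and-fire neuron with an assumed exponential per-step decay (the decay model, threshold, and reset behavior are illustrative, not taken from the patent):

```python
class EventDrivenNeuron:
    """Sketch of a time-hopping neural unit: the membrane potential is
    only brought up to date when an input arrives, applying the decay
    for all time-steps elapsed since the last update in one operation."""

    def __init__(self, decay=0.9, threshold=1.0):
        self.decay = decay          # per-time-step leak factor (assumed)
        self.threshold = threshold  # firing threshold (assumed)
        self.potential = 0.0
        self.last_update = 0        # time-step of the last update

    def receive(self, t, weight):
        """Apply an input of strength `weight` arriving at time-step t."""
        elapsed = t - self.last_update
        # "Hop" over the elapsed time-steps with one exponentiation
        # instead of iterating step by step.
        self.potential *= self.decay ** elapsed
        self.potential += weight
        self.last_update = t
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset on spike (assumed reset rule)
            return True             # spike emitted
        return False
```

Because the potential decays multiplicatively, skipping `elapsed` quiet steps costs one exponentiation regardless of how many steps passed, which is the efficiency the event-driven formulation targets.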

ADAPTIVELY SWITCHED NETWORK-ON-CHIP
    46.
    Invention Application (Granted)

    Publication No.: US20160182405A1

    Publication Date: 2016-06-23

    Application No.: US14579729

    Filing Date: 2014-12-22

    CPC classification number: H04L49/109 H04L45/60 H04L49/60

    Abstract: A packet-switched reservation request to be associated with a first data stream is received. A communication mode is selected. The communication mode is to be either a circuit-switched mode or a packet-switched mode. At least a portion of the first data stream is communicated in accordance with the communication mode.

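The abstract leaves the selection policy open; a common rationale for choosing between the two modes is that circuit switching pays a one-time path-setup cost but then moves data faster than per-flit packet switching. The sketch below illustrates such a cost-amortization policy; the function name, cycle costs, and thresholds are hypothetical, not from the patent:

```python
def select_mode(stream_length_flits, setup_cost_cycles=8,
                circuit_cycles_per_flit=1, packet_cycles_per_flit=3):
    """Illustrative policy for an adaptively switched NoC: pick
    circuit-switched mode when the stream is long enough to amortize
    the circuit setup cost, otherwise stay packet-switched."""
    circuit_cost = setup_cost_cycles + stream_length_flits * circuit_cycles_per_flit
    packet_cost = stream_length_flits * packet_cycles_per_flit
    return "circuit" if circuit_cost < packet_cost else "packet"
```

Under these illustrative costs, a long stream amortizes the setup overhead and selects the circuit-switched mode, while a short burst stays packet-switched.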

SYSTEM FOR MULTICAST AND REDUCTION COMMUNICATIONS ON A NETWORK-ON-CHIP
    47.
    Invention Application (Granted)

    Publication No.: US20160182245A1

    Publication Date: 2016-06-23

    Application No.: US14574294

    Filing Date: 2014-12-17

    CPC classification number: H04L12/1886 H04L45/16 H04L49/109 H04L49/3009

    Abstract: A multicast message that is to originate from a source is received. The multicast message comprises an identifier. A plurality of directions in which the multicast message is to fork at the router are stored. A plurality of messages from the directions in which the multicast message is to fork are received. The received messages are to comprise the identifier. The plurality of messages are aggregated into an aggregate message and sent towards the source.

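The mechanism in the abstract pairs a multicast with its reduction: the router remembers the directions in which a message forked, matches replies by identifier, and aggregates them into one message headed back to the source. A minimal sketch of that bookkeeping, with sum as an illustrative aggregation operator and all names assumed:

```python
class ReductionRouter:
    """Sketch of a NoC router supporting multicast-and-reduce: the fork
    directions of each multicast are stored per message identifier, and
    replies arriving from those directions are aggregated into a single
    message sent back toward the source."""

    def __init__(self):
        self.forks = {}    # message id -> set of directions still pending
        self.partial = {}  # message id -> running aggregate

    def multicast(self, msg_id, directions):
        # Record where the multicast forked so replies can be matched.
        self.forks[msg_id] = set(directions)
        self.partial[msg_id] = 0

    def reply(self, msg_id, direction, value):
        """Accept one reply; return the aggregate once every fork has
        answered, else None. Summation is an illustrative choice."""
        self.partial[msg_id] += value
        self.forks[msg_id].discard(direction)
        if not self.forks[msg_id]:
            return self.partial.pop(msg_id)  # forward toward the source
        return None
```

Aggregating at the fork point means only one message travels back over each shared link, which is the bandwidth saving the reduction scheme provides.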

POINTER CHASING ACROSS DISTRIBUTED MEMORY
    48.
    Invention Application (Granted)

    Publication No.: US20160179670A1

    Publication Date: 2016-06-23

    Application No.: US14573968

    Filing Date: 2014-12-17

    Abstract: A first pointer dereferencer receives a location of a portion of a first node of a data structure. The first node is to be stored in a first storage element. A first pointer is obtained from the first node of the data structure. A location of a portion of a second node of the data structure is determined based on the first pointer. The second node is to be stored in a second storage element. The location of the portion of the second node of the data structure is sent to a second pointer dereferencer that is to access the portion of the second node from the second storage element.

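The key idea in the abstract is that each dereferencer operates next to its own storage element and forwards the next node's location to the dereferencer owning that element, instead of shipping every pointer back to one requester. A toy model of that hand-off, where storage elements are dicts mapping an address to a `(payload, next_location)` pair and forwarding is modeled as a method call between peers (all structure assumed for illustration):

```python
class PointerDereferencer:
    """Sketch: each dereferencer owns one storage element. Given an
    address in its element, it reads the node and hands the next
    pointer to the peer that owns the next node. The patent's message
    passing between dereferencers is modeled here as direct calls."""

    def __init__(self, storage):
        self.storage = storage  # local storage element: addr -> (payload, next)

    def chase(self, addr, peers):
        payload, next_loc = self.storage[addr]
        trail = [payload]
        if next_loc is not None:
            owner, local_addr = next_loc   # which element holds the next node
            trail += peers[owner].chase(local_addr, peers)
        return trail
```

Each hop is resolved locally to the data, so a chain spanning several storage elements incurs one forwarded location per hop rather than a full round trip per pointer.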

    Compute near memory convolution accelerator

    Publication No.: US11726950B2

    Publication Date: 2023-08-15

    Application No.: US16586975

    Filing Date: 2019-09-28

    CPC classification number: G06F15/8046 G06F17/153 G06N3/063

    Abstract: A compute near memory (CNM) convolution accelerator enables a convolutional neural network (CNN) to use dedicated acceleration to achieve efficient in-place convolution operations with less impact on memory and energy consumption. A 2D convolution operation is reformulated as 1D row-wise convolution. The 1D row-wise convolution enables the CNM convolution accelerator to process input activations row-by-row, while using the weights one-by-one. Lightweight access circuits provide the ability to stream both weights and input rows as vectors to MAC units, which in turn enables modules of the CNM convolution accelerator to implement convolution for both [1×1] and chosen [n×n] sized filters.
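The row-wise reformulation in the abstract computes a 2D convolution as a sum of 1D convolutions, streaming one input row and one weight row at a time to the MAC units. A minimal reference sketch of that decomposition in plain Python, assuming unit stride and "valid" padding (the accelerator's actual dataflow and filter sizes are not reproduced here):

```python
def conv2d_rowwise(image, kernel):
    """Sketch of 2D convolution reformulated as 1D row-wise convolution:
    for each output row, stream the kernel rows one by one against the
    matching input rows and accumulate their 1D convolutions."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):
        for kr in range(kh):            # use the weight rows one by one
            row = image[r + kr]         # stream the matching input row
            krow = kernel[kr]
            for c in range(ow):         # 1D row-wise multiply-accumulate
                out[r][c] += sum(row[c + j] * krow[j] for j in range(kw))
    return out
```

Because each inner pass touches only one input row and one weight row, both can be streamed as vectors to MAC units, which is the access pattern the lightweight circuits in the abstract enable.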

    Compute near memory with backend memory

    Publication No.: US11251186B2

    Publication Date: 2022-02-15

    Application No.: US16827542

    Filing Date: 2020-03-23

    Abstract: Examples herein relate to a memory device comprising an eDRAM memory cell, the eDRAM memory cell can include a write circuit formed at least partially over a storage cell and a read circuit formed at least partially under the storage cell; a compute near memory device bonded to the memory device; a processor; and an interface from the memory device to the processor. In some examples, circuitry included to provide an output of the memory device that emulates the output read rate of an SRAM memory device comprises one or more of: a controller, a multiplexer, or a register. A surface of the memory device can be bonded to a compute near memory device or other circuitry. In some examples, a layer with read circuitry can be bonded to a layer with storage cells. Any layers can be bonded together using the techniques described herein.
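The register-and-multiplexer arrangement mentioned in the abstract can be understood as rate matching: a wide, slower backend read is latched into a register, and a multiplexer then serves narrower words from that register on consecutive cycles so the interface sees a steady SRAM-like read rate. A toy behavioral model of that idea (the class, row widths, and timing are illustrative assumptions, not the patent's circuit):

```python
class RateMatchedOutput:
    """Toy model of rate matching a slower wide memory to a steady
    word-by-word output: each backend access latches a whole row into
    a register, and a multiplexer selects one word per read."""

    def __init__(self, backend_rows):
        self.backend = backend_rows  # each row is a list of words
        self.register = []           # latched row awaiting readout
        self.next_row = 0

    def read_word(self):
        if not self.register:        # refill: one wide backend read
            self.register = list(self.backend[self.next_row])
            self.next_row += 1
        return self.register.pop(0)  # mux selects the next word
```

Serving several words per backend access is what lets a slower storage array present the per-word read rate of a faster one at the interface.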
