PARTIAL SUM MANAGEMENT AND RECONFIGURABLE SYSTOLIC FLOW ARCHITECTURES FOR IN-MEMORY COMPUTATION

    Publication Number: US20230047364A1

    Publication Date: 2023-02-16

    Application Number: US17398791

    Filing Date: 2021-08-10

    Abstract: Methods and apparatus for performing machine learning tasks, and in particular, a neural-network-processing architecture and circuits for improved handling of partial accumulation results in weight-stationary operations, such as operations occurring in compute-in-memory (CIM) processing elements (PEs). One example PE circuit for machine learning generally includes an accumulator circuit; a flip-flop array having an input coupled to an output of the accumulator circuit; a write register; and a first multiplexer having a first input coupled to an output of the write register, a second input coupled to an output of the flip-flop array, and an output coupled to a first input of the accumulator circuit.
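The dataflow described above can be sketched behaviorally: the multiplexer lets the accumulator resume either from its own latched output (the flip-flop array) or from a partial sum restored through the write register. This is a minimal illustrative model, not the patented circuit; all class and method names are assumptions.

```python
# Behavioral sketch (illustrative only, not the patented circuit): a
# weight-stationary PE whose accumulator input is selected by a multiplexer
# between a locally latched partial sum (flip-flop array) and an externally
# written partial sum (write register).

class PartialSumPE:
    def __init__(self):
        self.ff_array = 0   # flip-flop array latching the accumulator output
        self.write_reg = 0  # write register for partial sums loaded from outside

    def load_partial_sum(self, value):
        """Restore a previously spilled partial sum via the write register."""
        self.write_reg = value

    def accumulate(self, mac_result, select_write_reg=False):
        # First multiplexer: pick the restored partial sum or the latched one.
        prev = self.write_reg if select_write_reg else self.ff_array
        # Accumulator adds the new MAC result to the selected partial sum;
        # the flip-flop array latches the new accumulator output.
        self.ff_array = prev + mac_result
        return self.ff_array

pe = PartialSumPE()
pe.accumulate(5)          # latched partial sum becomes 5
pe.load_partial_sum(100)  # partial sum restored from outside the PE
print(pe.accumulate(5, select_write_reg=True))  # 105
```

Selecting the write register on the first cycle of a new accumulation pass is what allows partial sums to be spilled and later resumed without losing weight-stationarity.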

    HYBRID MACHINE LEARNING ARCHITECTURE WITH NEURAL PROCESSING UNIT AND COMPUTE-IN-MEMORY PROCESSING ELEMENTS

    Publication Number: US20230025068A1

    Publication Date: 2023-01-26

    Application Number: US17813834

    Filing Date: 2022-07-20

    Abstract: Methods and apparatus for performing machine learning tasks, and in particular, a hybrid architecture that includes both neural processing unit (NPU) and compute-in-memory (CIM) elements. One example neural-network-processing circuit generally includes a plurality of CIM processing elements (PEs), a plurality of neural processing unit (NPU) PEs, and a bus coupled to the plurality of CIM PEs and to the plurality of NPU PEs. One example method for neural network processing generally includes processing data in a neural-network-processing circuit comprising a plurality of CIM PEs, a plurality of NPU PEs, and a bus coupled to the plurality of CIM PEs and to the plurality of NPU PEs; and transferring the processed data between at least one of the plurality of CIM PEs and at least one of the plurality of NPU PEs via the bus.
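The claimed flow, processing in one PE type and transferring the result over the shared bus to the other, can be sketched as follows. This is a hedged behavioral model only; the class names and the example operations (a scaling step for CIM, a ReLU for NPU) are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the hybrid architecture: CIM PEs and NPU PEs
# coupled to one shared bus, with processed data moved between the two
# PE types over that bus.

class Bus:
    def __init__(self):
        self.payload = None

    def transfer(self, data):
        self.payload = data
        return self.payload

class CimPE:
    def process(self, x):
        # Stand-in for a weight-stationary multiply-accumulate done in memory.
        return [2 * v for v in x]

class NpuPE:
    def process(self, x):
        # Stand-in for a conventional NPU step, e.g. a ReLU activation.
        return [max(0, v) for v in x]

bus = Bus()
cim, npu = CimPE(), NpuPE()
partial = cim.process([-1, 3])               # CIM PE produces a partial result
result = npu.process(bus.transfer(partial))  # moved over the bus to an NPU PE
print(result)  # [0, 6]
```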

    SUB-FIN DEVICE ISOLATION (Invention Application, In Force)

    Publication Number: US20160181161A1

    Publication Date: 2016-06-23

    Application Number: US14581244

    Filing Date: 2014-12-23

    Abstract: A fin-based structure may include fins on a surface of a semiconductor substrate. Each of the fins may include a doped portion proximate to the surface of the semiconductor substrate. The fin-based structure may also include an isolation layer disposed between the fins and on the surface of the semiconductor substrate. The fin-based structure may also include a recessed isolation liner on sidewalls of the doped portion of the fins. An unlined doped portion of the fins may extend from the recessed isolation liner to an active portion of the fins at a surface of the isolation layer. The isolation layer is disposed on the unlined doped portion of the fins.


    DIGITAL COMPUTE IN MEMORY (Invention Application)

    Publication Number: US20230037054A1

    Publication Date: 2023-02-02

    Application Number: US17816285

    Filing Date: 2022-07-29

    Abstract: Certain aspects generally relate to performing machine learning tasks, and in particular, to computation-in-memory architectures and operations. One aspect provides a circuit for in-memory computation. The circuit generally includes multiple bit-lines, multiple word-lines, an array of compute-in-memory cells, and a plurality of accumulators, each accumulator being coupled to a respective one of the multiple bit-lines. Each compute-in-memory cell is coupled to one of the bit-lines and to one of the word-lines and is configured to store a weight bit of a neural network.
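The structure described, cells at bit-line/word-line crossings each storing one weight bit, with one accumulator per bit-line, can be sketched as a popcount-style column reduction. This is an illustrative model under assumed names, not the patent's implementation.

```python
# Illustrative sketch of a digital CIM array: rows correspond to word-lines
# (which broadcast input activation bits) and columns to bit-lines, with
# each cell storing one weight bit and each bit-line feeding its own
# accumulator.

def cim_column_dot(weight_bits, activation_bits):
    """Bitwise AND in each cell, popcount-style sum down one bit-line."""
    return sum(w & a for w, a in zip(weight_bits, activation_bits))

weights = [      # one weight bit per compute-in-memory cell
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
]
activations = [1, 1, 0]  # one activation bit driven per word-line

# One accumulator coupled to each respective bit-line.
accumulators = [0, 0, 0]
for col in range(3):
    column = [weights[row][col] for row in range(3)]
    accumulators[col] += cim_column_dot(column, activations)

print(accumulators)  # [2, 1, 1]
```

Per-bit-line accumulators let each column's partial dot product build up locally before any cross-column shift-and-add weighting is applied.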

    FOLDING COLUMN ADDER ARCHITECTURE FOR DIGITAL COMPUTE IN MEMORY

    Publication Number: US20230031841A1

    Publication Date: 2023-02-02

    Application Number: US17391718

    Filing Date: 2021-08-02

    Abstract: Certain aspects provide an apparatus for performing machine learning tasks and, in particular, relate to computation-in-memory architectures. One aspect provides a circuit for in-memory computation. The circuit generally includes: a plurality of memory cells on each of multiple columns of a memory, the plurality of memory cells being configured to store multiple bits representing weights of a neural network, wherein the plurality of memory cells on each of the multiple columns are on different word-lines of the memory; multiple addition circuits, each coupled to a respective one of the multiple columns; a first adder circuit coupled to outputs of at least two of the multiple addition circuits; and an accumulator coupled to an output of the first adder circuit.
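The folding idea, per-column addition circuits whose outputs a first adder combines before a single accumulator, can be sketched behaviorally. This is a minimal model under assumed names and a two-column fold; it is not the patented circuit.

```python
# Illustrative sketch of a folding column-adder: each column's cells are
# summed by a per-column addition circuit, a first adder "folds" two column
# sums together, and one accumulator integrates that output across cycles.

def column_sum(cells, inputs):
    """Per-column addition circuit: sum of cell-bit AND input-bit products."""
    return sum(c & i for c, i in zip(cells, inputs))

# Two columns of memory cells on different word-lines, storing weight bits.
col_a = [1, 0, 1, 1]
col_b = [0, 1, 1, 0]

accumulator = 0
for inputs in ([1, 1, 0, 1], [0, 1, 1, 1]):  # one input vector per cycle
    # First adder circuit: fold the two column sums into one value.
    folded = column_sum(col_a, inputs) + column_sum(col_b, inputs)
    accumulator += folded

print(accumulator)  # 7
```

Folding column sums before accumulation trades one wide adder tree for fewer, shared adders, which is the area saving the architecture targets.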

    LOW ENERGY AND SMALL FORM FACTOR PACKAGE

    Publication Number: US20250087640A1

    Publication Date: 2025-03-13

    Application Number: US18465900

    Filing Date: 2023-09-12

    Abstract: Disclosed are packages that may include first and second substrates with first and second chips therebetween. The first chip may be a logic chip and the second chip may be a processing near memory (PNM) chip. The active side of the first chip may face the first substrate and the active side of the second chip may face the second substrate. The first chip may be encapsulated by a first mold, and the second chip may be encapsulated by a second mold. The first and/or the second molds may be thermally conductive. A third chip (e.g., a memory) may be on the second substrate opposite the second chip. The second substrate may include very short vertical connections that connect the active sides of the second and third chips.
