PERFORMING XNOR EQUIVALENT OPERATIONS BY ADJUSTING COLUMN THRESHOLDS OF A COMPUTE-IN-MEMORY ARRAY

    Publication No.: US20210073619A1

    Publication Date: 2021-03-11

    Application No.: US16565308

    Application Date: 2019-09-09

    Abstract: A method performs XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network. The method includes adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The method also includes calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weights. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array.
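The threshold-adjustment identity this abstract relies on can be sketched behaviorally. The sketch below (function names and vector sizes are illustrative, not from the filing) shows that thresholding an XNOR popcount is equivalent to thresholding the AND-accumulation a standard compute-in-memory column natively produces, once the threshold is shifted by a function of the weight and activation sums:

```python
import numpy as np

def xnor_popcount(w, x):
    """Number of bit positions where stored weight and activation agree."""
    return int(np.sum(w == x))

def and_accumulate(w, x):
    """What an AND-based compute-in-memory column natively produces."""
    return int(np.sum(w & x))

def adjusted_threshold(t_xnor, w, x):
    """Map an XNOR-domain threshold to the AND domain.

    From XNOR(a, b) = 1 - a - b + 2ab over bits:
        sum(XNOR) = N - sum(w) - sum(x) + 2 * sum(w & x),
    so  sum(XNOR) >= t  <=>  sum(w & x) >= (t - N + sum(w) + sum(x)) / 2.
    """
    n = len(w)
    return (t_xnor - n + int(np.sum(w)) + int(np.sum(x))) / 2.0

rng = np.random.default_rng(0)
w = rng.integers(0, 2, size=64)
x = rng.integers(0, 2, size=64)
for t in range(65):
    # XNOR-domain decision and adjusted AND-domain decision always agree
    assert (xnor_popcount(w, x) >= t) == (and_accumulate(w, x) >= adjusted_threshold(t, w, x))
```

This is only a numerical model of the equivalence; the patented method realizes the adjusted threshold and conversion bias in analog column circuitry.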

    DIFFERENTIAL ONE-TIME-PROGRAMMABLE (OTP) MEMORY ARRAY
    13.
    Invention Application (In Force)

    Publication No.: US20160268002A1

    Publication Date: 2016-09-15

    Application No.: US14656699

    Application Date: 2015-03-12

    CPC classification number: G11C17/08 G11C7/04 G11C17/12 G11C17/123

    Abstract: An OTP memory array includes a plurality of differential P-channel metal-oxide-semiconductor (PMOS) OTP memory cells that are programmable and readable in predetermined states of program and read operations. The array provides sufficient margins against global process and temperature variations while remaining compatible with standard logic fin-shaped field-effect transistor (FinFET) processes, obviating the need for additional masks and their associated costs.

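Why a differential cell gives margin against global variation can be shown with a minimal behavioral sketch (currents and names below are illustrative assumptions, not the actual circuit): a process-corner or temperature shift moves both branches of the complementary pair together, so the sign of their difference, which is all the sense amplifier decides on, survives the shift.

```python
def differential_read(i_main, i_comp):
    """Sense-amplifier decision: which branch of the complementary
    pair carries more current determines the stored bit."""
    return 1 if i_main > i_comp else 0

# Illustrative branch currents (arbitrary units) for a cell programmed
# to '1': the programmed branch conducts strongly, its complement weakly.
i_main, i_comp = 8.0, 2.0

# A global shift moves both branches together, so the differential
# decision, and hence the read margin, is preserved.
for shift in (-1.5, 0.0, 3.0):
    assert differential_read(i_main + shift, i_comp + shift) == 1
```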

    MERGING LITHOGRAPHY PROCESSES FOR GATE PATTERNING
    15.
    Invention Application (In Force)

    Publication No.: US20150145070A1

    Publication Date: 2015-05-28

    Application No.: US14283168

    Application Date: 2014-05-20

    Abstract: Methods for fabricating devices on a die, and the resulting devices, are provided. A method may include patterning, with a first process, a first region to create a first gate having a first gate length and a first contacted polysilicon pitch (CPP), where the first CPP is smaller than a single-pattern lithographic limit. The method also includes patterning, with a second process, the first region to create a second gate having a second gate length or a second CPP. The second CPP is smaller than the single-pattern lithographic limit, and the second gate length differs from the first gate length.


    DIGITAL COMPUTE IN MEMORY
    17.
    Invention Application

    Publication No.: US20230037054A1

    Publication Date: 2023-02-02

    Application No.: US17816285

    Application Date: 2022-07-29

    Abstract: Certain aspects generally relate to performing machine learning tasks, and in particular, to computation-in-memory architectures and operations. One aspect provides a circuit for in-memory computation. The circuit generally includes multiple bit-lines, multiple word-lines, an array of compute-in-memory cells, and a plurality of accumulators, each accumulator being coupled to a respective one of the multiple bit-lines. Each compute-in-memory cell is coupled to one of the bit-lines and to one of the word-lines and is configured to store a weight bit of a neural network.
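Behaviorally, the circuit described here reduces to a bitwise multiply-accumulate: every cell ANDs its stored weight bit with the input bit driven on its word-line, and the accumulator on each bit-line sums its column. A minimal sketch (array shapes and names are our assumptions, not from the filing):

```python
import numpy as np

def digital_cim(weights, activations):
    """Behavioral model of the described in-memory computation.

    weights:     (word_lines, bit_lines) array of stored weight bits.
    activations: (word_lines,) input bits driven on the word-lines.
    """
    cell_outputs = weights & activations[:, None]  # AND at every cell
    return cell_outputs.sum(axis=0)                # per-bit-line accumulator

w = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])
x = np.array([1, 0, 1, 1])
print(digital_cim(w, x))  # prints [2 2 3]: column sums over active word-lines
```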

    FOLDING COLUMN ADDER ARCHITECTURE FOR DIGITAL COMPUTE IN MEMORY

    Publication No.: US20230031841A1

    Publication Date: 2023-02-02

    Application No.: US17391718

    Application Date: 2021-08-02

    Abstract: Certain aspects provide an apparatus for performing machine learning tasks, and in particular, to computation-in-memory architectures. One aspect provides a circuit for in-memory computation. The circuit generally includes: a plurality of memory cells on each of multiple columns of a memory, the plurality of memory cells being configured to store multiple bits representing weights of a neural network, wherein the plurality of memory cells on each of the multiple columns are on different word-lines of the memory; multiple addition circuits, each coupled to a respective one of the multiple columns; a first adder circuit coupled to outputs of at least two of the multiple addition circuits; and an accumulator coupled to an output of the first adder circuit.
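The folding idea can be sketched behaviorally under one assumption of ours (not stated in the filing): column j stores weight bit j, LSB first. Per-column addition circuits produce column sums, a first-level adder folds each pair of adjacent columns into one value, and a single accumulator applies the remaining bit significance:

```python
import numpy as np

def column_sum(weight_col, activations):
    """Per-column addition circuit: sum of cell outputs on one column."""
    return int(np.sum(weight_col & activations))

def folded_accumulate(weights, activations):
    """Fold pairs of column sums through a first adder, then accumulate.

    Assumes an even number of columns and that column j stores weight
    bit j (LSB first), so the sum of column j carries significance 2**j.
    """
    sums = [column_sum(weights[:, j], activations) for j in range(weights.shape[1])]
    acc = 0
    for j in range(0, len(sums), 2):
        folded = sums[j] + (sums[j + 1] << 1)  # first adder: fold a column pair
        acc += folded << j                     # accumulator restores significance
    return acc

rng = np.random.default_rng(1)
w = rng.integers(0, 2, size=(8, 4))  # 8 word-lines, 4-bit weights, LSB first
x = rng.integers(0, 2, size=8)
reference = int(np.sum((w @ (1 << np.arange(4))) * x))  # plain integer dot product
assert folded_accumulate(w, x) == reference
```

Folding halves the number of values the accumulator must handle per cycle, which is the architectural point of inserting the first adder between the column addition circuits and the accumulator.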
