    Neuromorphic Device
    31.
    Invention Application
    Status: Pending (Published)

    Publication No.: WO2022240138A1

    Publication Date: 2022-11-17

    Application No.: PCT/KR2022/006655

    Filing Date: 2022-05-10

    Abstract: The present invention relates to a neuromorphic device, an electronic device that mimics the structure and function of the human nervous system, which performs higher-order functions such as cognition, memory, learning, computation, and inference. The neuromorphic device according to the present invention includes nano-patterns corresponding to a self-assembled structure, in particular the self-assembled structure of a block copolymer, such that synapses are connected in parallel and positioned randomly, and synaptic expression occurs stochastically, thereby realizing the characteristics of a nervous system.

    Data Processing Method and Apparatus
    32.
    Invention Application

    Publication No.: WO2022237865A1

    Publication Date: 2022-11-17

    Application No.: PCT/CN2022/092404

    Filing Date: 2022-05-12

    Inventor: 郭佳 杨晨阳 王坚

    Abstract: Embodiments of the present application relate to a data processing method and apparatus. The method includes: obtaining first data, the first data being determined according to an optimization objective of an optimization problem and second data; and inputting the first data into a first machine learning model to obtain a first inference result. This can reduce the amount of training data required in the machine learning process and lower training complexity.

    SYSTEM, METHOD, AND COMPUTER DEVICE FOR TRANSISTOR-BASED NEURAL NETWORKS
    33.
    Invention Application

    Publication No.: WO2022232947A1

    Publication Date: 2022-11-10

    Application No.: PCT/CA2022/050717

    Filing Date: 2022-05-06

    Applicant: BLUMIND INC.

    Abstract: Provided are computer systems, methods, and devices for operating an artificial neural network. The system includes neurons. Each neuron includes: a plurality of synapses comprising charge-trapped transistors for processing input signals; an accumulation block for receiving drain currents from the plurality of synapses, the drain currents produced as multiplication outputs of the synapses and representing an amount of voltage multiplied by time; a capacitor for accumulating charge from the drain currents, acting as short-term memory for the accumulated signals; a discharge pulse generator for generating an output signal by discharging the accumulated charge during a discharging cycle; and a comparator for comparing an input voltage with a reference voltage. The comparator produces a first output if the input voltage is above the reference voltage and a second output if the input voltage is below the reference voltage.
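
The neuron the abstract describes can be summarized behaviorally: weighted synapse currents are accumulated as charge on a capacitor, and a comparator fires once the capacitor voltage crosses a reference. A minimal sketch of that behavior follows; all parameter names and values are illustrative assumptions, not taken from the patent.

```python
# Behavioral sketch of the accumulate-and-fire neuron described in the
# abstract. Parameter names and constants are illustrative assumptions.

def neuron_step(inputs, weights, charge, dt=1e-6, c=1e-12, v_ref=0.5):
    """Accumulate synapse drain currents on a capacitor and compare the
    resulting voltage against a reference."""
    # Each synapse multiplies its input by a stored weight (modeling the
    # charge-trapped transistor's programmed threshold shift).
    drain_currents = [w * x for w, x in zip(weights, inputs)]
    # The accumulation block sums the drain currents; charge = current * time.
    charge += sum(drain_currents) * dt
    v_cap = charge / c  # capacitor voltage acts as short-term memory
    # Comparator: first output above the reference, second output below.
    output = 1 if v_cap > v_ref else 0
    return output, charge

out, q = neuron_step(inputs=[1.0, 0.5], weights=[2e-7, 1e-7], charge=0.0)
```

Repeated calls that carry `charge` forward model integration over several input presentations until the comparator threshold is reached.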

    ARTIFICIAL NEURAL NETWORK RETRAINING IN MEMORY
    34.
    Invention Application

    Publication No.: WO2022231816A1

    Publication Date: 2022-11-03

    Application No.: PCT/US2022/023801

    Filing Date: 2022-04-07

    Abstract: An artificial neural network can be allocated to memory and operated. Performance of the artificial neural network can be periodically evaluated. The evaluation can include inputting a representative dataset to the artificial neural network and comparing an output of the artificial neural network to a known output for the representative dataset. The artificial neural network can be retrained at least partially in response to the evaluation yielding a sub-threshold result.

    Intermediate Representation Method and Apparatus for Neural Network Model Computation
    35.
    Invention Application

    Publication No.: WO2022222839A1

    Publication Date: 2022-10-27

    Application No.: PCT/CN2022/086809

    Filing Date: 2022-04-14

    Abstract: Disclosed are an intermediate representation method and apparatus for neural network model computation, comprising the following steps: S1: parsing an input model file to obtain topology information of the neural network; S2: constructing a logical computation graph; S21: deriving physical layout information for each operator in the logical computation graph; S22: deriving meta-attributes for each operator; S23: deriving description information for the input and output logical tensors of each operator; S3: constructing a physical computation graph; S31: generating the physical computation graph; and further steps. The disclosed meta-attribute-based intermediate representation for neural network model computation natively supports data parallelism, model parallelism, and pipeline parallelism at the operator level. The method and apparatus take computation expressions as the basic unit, treat tensors as the data flowing through the computation graph composed of those expressions, and realize the computation process of the neural network model by graph construction.
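
The S1–S3 pipeline can be illustrated with a toy data structure: each logical operator carries topology links plus the derived layout, meta-attribute, and tensor-description fields. The class and attribute names below are invented for illustration and do not reflect the actual apparatus.

```python
# Toy illustration of logical-graph construction (S2, S21-S23).
# Class and attribute names are invented; they are not from the patent.
from dataclasses import dataclass

@dataclass
class LogicalOp:
    name: str
    inputs: list                       # upstream operator names (topology, S1)
    placement: str = "cpu"             # physical layout info derived in S21
    meta_attr: str = "data_parallel"   # meta-attribute derived in S22
    out_shape: tuple = ()              # logical tensor description, S23

def build_logical_graph(topology):
    """S2: construct a logical computation graph from parsed topology,
    one LogicalOp node per operator."""
    graph = {}
    for name, inputs, shape in topology:
        graph[name] = LogicalOp(name, inputs, out_shape=shape)
    return graph

topo = [("matmul0", [], (4, 8)), ("relu0", ["matmul0"], (4, 8))]
g = build_logical_graph(topo)
```

A physical computation graph (S3) would then be generated from these nodes by expanding each operator according to its placement and meta-attribute.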

    Data Processing Method Based on Memristor Array, and Electronic Device
    36.
    Invention Application

    Publication No.: WO2022222498A1

    Publication Date: 2022-10-27

    Application No.: PCT/CN2021/137845

    Filing Date: 2021-12-14

    Applicant: Tsinghua University

    Abstract: A data processing method based on a memristor array (901), and an electronic device (900). The method comprises: obtaining a plurality of first analog signals; configuring the memristor array (901) by writing data of a convolution parameter matrix, corresponding to the convolution operation, into the memristor array (901); and inputting the plurality of first analog signals into a plurality of column signal inputs of the configured memristor array (901), controlling the memristor array (901) to perform convolution on the analog signals, and obtaining, at a plurality of row signal outputs of the memristor array (901), a plurality of second analog signals resulting from the convolution. By mapping the convolution parameter matrix multiple times onto different memristor sub-arrays within the array (901), the method obtains all results of the convolution operation in a single computation, which greatly reduces the time required for shifting, lowers power consumption, and increases computation speed.
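
Electrically, each crossbar line computes a dot product: conductances (the mapped kernel weights) multiply input voltages, and the resulting currents sum along the line. Mapping shifted copies of the kernel into multiple sub-arrays yields every output tap in one pass. A minimal numeric sketch with a 1-D kernel follows; the values and the unshifted (correlation-style) kernel mapping are illustrative assumptions.

```python
# Numeric sketch of crossbar convolution: shifted copies of the kernel are
# mapped into the conductance matrix so all output taps are produced in a
# single analog multiply-accumulate pass. Values are illustrative.

def crossbar_conv1d(signal, kernel):
    """Map the kernel into a Toeplitz-like conductance matrix and compute
    every convolution output in one matrix-vector product."""
    n, k = len(signal), len(kernel)
    n_out = n - k + 1
    # Each sub-array row holds one shifted copy of the kernel (one tap).
    conductance = [[0.0] * n for _ in range(n_out)]
    for row in range(n_out):
        for j, w in enumerate(kernel):
            conductance[row][row + j] = w
    # One pass: currents sum along each line (I = G * V).
    return [sum(g * v for g, v in zip(row, signal)) for row in conductance]

out = crossbar_conv1d([1.0, 2.0, 3.0, 4.0], [1.0, -1.0])
```

Because every shifted kernel copy is resident in the array at once, no serial shifting of the input window is needed, which is the time and power saving the abstract claims.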

    DEEP NEURAL NETWORK TRAINING
    37.
    Invention Application

    Publication No.: WO2022214309A1

    Publication Date: 2022-10-13

    Application No.: PCT/EP2022/057494

    Filing Date: 2022-03-22

    Inventor: GOKMEN, Tayfun

    Abstract: In a method of training a deep neural network, a processor initializes an element of an A matrix. The element may include a resistive processing unit. A processor determines incremental weight updates by updating the element with activation values and error values from a weight matrix multiplied by a chopper value. A processor reads an update voltage from the element. A processor determines a chopper product by multiplying the update voltage by the chopper value. A processor stores an element of a hidden matrix. The element of the hidden matrix may include a summation of continuous iterations of the chopper product. A processor updates a corresponding element of a weight matrix based on the element of the hidden matrix reaching a threshold state.
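
The per-element update loop in the abstract can be sketched as scalar arithmetic: updates scaled by a chopper value accumulate on the A-matrix element, its read-back is demodulated by the same chopper into a hidden value, and the weight changes only when the hidden value reaches a threshold. Variable names and the threshold are assumptions, and the analog read/update of the resistive processing unit is replaced by plain arithmetic.

```python
# Scalar sketch of the chopper-stabilized update for one matrix element.
# The analog RPU read/update is replaced by arithmetic; names and the
# threshold value are assumptions, not from the patent.

def train_element(updates, choppers, threshold=1.0):
    """Accumulate chopper products into a hidden element h; emit a weight
    update whenever |h| reaches the threshold state."""
    a = 0.0      # A-matrix element (resistive processing unit)
    h = 0.0      # hidden-matrix element
    w = 0.0      # weight-matrix element
    for u, c in zip(updates, choppers):
        a += u * c            # incremental update scaled by chopper value
        v = a                 # "read the update voltage" from the element
        h += v * c            # chopper product: demodulate with same chopper
        if abs(h) >= threshold:   # hidden element reached threshold state
            w += threshold if h > 0 else -threshold
            h = 0.0           # restart accumulation after the transfer
    return w, h

w, h = train_element(updates=[0.3, 0.3, 0.3], choppers=[1, 1, 1])
```

Multiplying by the chopper on both write and read means any constant offset in the element alternates in sign under a sign-flipping chopper sequence and tends to cancel in h, while the true signal accumulates.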

    Neural-Network-Based Operation Method and Apparatus
    38.
    Invention Application

    Publication No.: WO2022206138A1

    Publication Date: 2022-10-06

    Application No.: PCT/CN2022/073040

    Filing Date: 2022-01-20

    Inventor: 蒲朝飞 张楠赓

    Abstract: Provided are a neural-network-based operation method and apparatus. In a specific implementation: an original image is obtained, and the total number of operation cycles and the image matrix for each cycle are computed from the sizes of the convolution kernel and the original image, the image matrix comprising multiple rows and columns of image data (S110); for the image matrix of each cycle, a plurality of operation units fetch image data in parallel according to an operation instruction and multiply the image data by pre-stored weight data to obtain intermediate data (S120); the intermediate data output by the operation units are summed to obtain the operation result for each cycle (S130); and the results over the total number of cycles are aggregated to obtain the target operation result (S140). This speeds up overall computation per unit time, simplifies the data-read logic, and reduces the data bandwidth demand on a single operation unit. Convolution of arbitrary kernel size can be performed, improving convolution efficiency and, in turn, image processing speed.
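
The S110–S140 flow amounts to: split the input into per-cycle image matrices, have parallel units multiply each element by pre-stored weights, sum the unit outputs per cycle, then aggregate across cycles. A toy sketch follows; the shapes, weight layout, and function names are illustrative assumptions.

```python
# Toy sketch of the cycle-based operation flow (S110-S140). Shapes, the
# weight layout, and names are illustrative assumptions.

def run_cycles(cycle_matrices, weights):
    """For each cycle: operation units multiply image data by pre-stored
    weights in parallel (S120), unit outputs are summed into a per-cycle
    result (S130), and results over all cycles are aggregated (S140)."""
    results = []
    for matrix in cycle_matrices:                 # one image matrix per cycle
        intermediates = [x * w                    # parallel operation units
                         for row in matrix
                         for x, w in zip(row, weights)]
        results.append(sum(intermediates))        # per-cycle result (S130)
    return results, sum(results)                  # target result (S140)

cycles = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
per_cycle, total = run_cycles(cycles, weights=[1, 0.5])
```

Because each unit only ever reads its own slice of the per-cycle matrix, the per-unit data bandwidth stays fixed regardless of the kernel size, which is the bandwidth claim in the abstract.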
