-
Publication Number: WO2022240138A1
Publication Date: 2022-11-17
Application Number: PCT/KR2022/006655
Application Date: 2022-05-10
Applicant: Korea Advanced Institute of Science and Technology (KAIST)
IPC: H01L45/00 , G06N3/063 , H01L45/122 , H01L45/1253
Abstract: The present invention relates to a neuromorphic device, an electronic device that mimics the structure and function of the human nervous system, which performs high-level functions such as cognition, memory, learning, computation, and reasoning. The neuromorphic device according to the present invention includes a nanopattern corresponding to a self-assembled structure, in particular the self-assembled structure of a block copolymer, such that the synapses are connected in parallel and randomly positioned, and synaptic expression occurs probabilistically, thereby realizing the characteristics of a nervous system.
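A minimal behavioral sketch in Python of the stochastic synaptic expression described above: many parallel synapses with random strengths, each expressing with some probability per input pulse. The synapse count, expression probability, and weight distribution are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Assumed parameters: N parallel synapses at random positions, each
# expressing independently with a fixed probability per input pulse.
n_synapses = 1000
expression_prob = 0.3                            # assumed expression probability
weights = rng.uniform(0.0, 1.0, n_synapses)      # random synaptic strengths

def stochastic_response(input_pulse: float) -> float:
    """Sum the contributions of the synapses that stochastically express."""
    expressed = rng.random(n_synapses) < expression_prob
    return float(input_pulse * np.sum(weights[expressed]))

# Repeated identical pulses yield a distribution of responses rather than
# a fixed value, mimicking the probabilistic behaviour the abstract describes.
print([round(stochastic_response(1.0), 2) for _ in range(5)])
```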
-
Publication Number: WO2022232947A1
Publication Date: 2022-11-10
Application Number: PCT/CA2022/050717
Application Date: 2022-05-06
Applicant: BLUMIND INC.
Inventor: GOSSON, John Linden , LEVINSON, Roger
Abstract: Provided are computer systems, methods, and devices for operating an artificial neural network. The system includes neurons. Each neuron includes a plurality of synapses including charge-trapped transistors for processing input signals; an accumulation block for receiving drain currents from the plurality of synapses, the drain currents produced as the multiplication output of the plurality of synapses and representing an amount of voltage multiplied by time; a capacitor for accumulating charge from the drain currents to act as short-term memory for the accumulated signals; a discharge pulse generator for generating an output signal by discharging the accumulated charge during a discharging cycle; and a comparator for comparing an input voltage with a reference voltage. The comparator produces a first output if the input voltage is above the reference voltage and a second output if the input voltage is below it.
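The neuron's signal chain can be sketched behaviorally: synapse drain currents proportional to weight times input voltage deposit charge (current times pulse width) on a capacitor, and a comparator fires against a reference voltage. This is a plain Python idealization; the capacitance, reference voltage, and weight encoding are assumed, and the real device operates in the analog domain.

```python
import numpy as np

class AnalogNeuronModel:
    """Behavioral sketch: synapse currents charge a capacitor acting as
    short-term memory; a comparator fires when the accumulated voltage
    crosses a reference, and the charge is then discharged."""

    def __init__(self, weights, capacitance=1e-9, v_ref=1.0):
        self.weights = np.asarray(weights)  # stand-ins for charge-trapped transistor states
        self.c = capacitance                # assumed capacitance
        self.v_ref = v_ref                  # assumed reference voltage
        self.charge = 0.0

    def accumulate(self, input_voltages, pulse_width):
        # Drain current of each synapse ~ weight * input voltage; the charge
        # contributed is current * pulse width (voltage multiplied by time).
        currents = self.weights * np.asarray(input_voltages)
        self.charge += currents.sum() * pulse_width

    def compare_and_discharge(self):
        v_cap = self.charge / self.c
        if v_cap > self.v_ref:   # comparator: first output
            self.charge = 0.0    # discharging cycle produces the output pulse
            return 1
        return 0                 # comparator: second output

neuron = AnalogNeuronModel(weights=[0.2, 0.5, 0.8])
neuron.accumulate([1.0, 1.0, 0.5], pulse_width=1e-9)
print(neuron.compare_and_discharge())
```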
-
Publication Number: WO2022231816A1
Publication Date: 2022-11-03
Application Number: PCT/US2022/023801
Application Date: 2022-04-07
Applicant: MICRON TECHNOLOGY, INC.
Inventor: KALE, Poorna , TIKU, Saideep
Abstract: An artificial neural network can be allocated to memory and operated. Performance of the artificial neural network can be periodically evaluated. The evaluation can include inputting a representative dataset to the artificial neural network and comparing an output of the artificial neural network to a known output for the representative dataset. The artificial neural network can be retrained at least partially in response to the evaluation yielding a sub-threshold result.
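The evaluate-then-retrain loop the abstract describes can be sketched in a few lines; PyTorch and the 0.9 accuracy threshold are assumptions for illustration, since the claim itself is framework-agnostic.

```python
import torch  # assumed framework; the claim itself is framework-agnostic

def evaluate(model, rep_inputs, known_outputs):
    """Compare the network's output to the known output for the
    representative dataset; return accuracy as the evaluation metric."""
    with torch.no_grad():
        preds = model(rep_inputs).argmax(dim=1)
    return (preds == known_outputs).float().mean().item()

def evaluate_and_maybe_retrain(model, rep_inputs, known_outputs,
                               retrain_fn, threshold=0.9):
    # Retrain, at least partially, when evaluation yields a sub-threshold result.
    accuracy = evaluate(model, rep_inputs, known_outputs)
    if accuracy < threshold:
        retrain_fn(model)  # e.g. fine-tune only the final layers
    return accuracy
```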
-
Publication Number: WO2022222839A1
Publication Date: 2022-10-27
Application Number: PCT/CN2022/086809
Application Date: 2022-04-14
Applicant: Zhejiang Lab
Abstract: Disclosed are an intermediate representation method and apparatus for neural network model computation, comprising the following steps. S1: parse the input model file to obtain the topology of the neural network; S2: construct a logical computation graph; S21: derive the physical layout information of each operator in the logical computation graph; S22: derive the meta-attributes of each operator in the logical computation graph; S23: derive the description information of the input and output logical tensors of each operator in the logical computation graph; S3: construct a physical computation graph; S31: generate the physical computation graph; and further steps. The disclosed meta-attribute-based intermediate representation for neural network model computation natively supports data parallelism, model parallelism, and pipeline parallelism at the operator level. The disclosed method and apparatus take computation expressions as the basic unit, treat tensors as the data flowing through the computation graph composed of computation expressions, and realize the computation process of the neural network model by graph construction.
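A toy version of such a meta-attribute-based IR might look like the following; the class names, the meta-attribute vocabulary (split/broadcast/partial_sum), and the placement strings are invented for illustration and are not the patent's actual data structures.

```python
from dataclasses import dataclass, field

# Each operator records how its tensors may be distributed; this is what
# lets a compiler derive data, model, and pipeline parallelism natively.

@dataclass
class LogicalTensor:
    name: str
    shape: tuple

@dataclass
class Operator:
    name: str
    inputs: list
    outputs: list
    meta_attr: str = "split(0)"   # e.g. "split(axis)", "broadcast", "partial_sum"
    placement: str = "device:0"   # physical layout information

@dataclass
class Graph:
    ops: list = field(default_factory=list)

    def add(self, op):
        self.ops.append(op)
        return op

# Step S2: build the logical graph and attach per-operator attributes.
g = Graph()
x = LogicalTensor("x", (64, 128))
w = LogicalTensor("w", (128, 256))
y = LogicalTensor("y", (64, 256))
g.add(Operator("matmul", [x, w], [y], meta_attr="split(0)", placement="device:0"))

# Step S3 would instantiate one physical operator per device according to
# meta_attr and placement (omitted in this sketch).
```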
-
Publication Number: WO2022222498A1
Publication Date: 2022-10-27
Application Number: PCT/CN2021/137845
Application Date: 2021-12-14
Applicant: Tsinghua University
Abstract: A data processing method based on a memristor array (901), and an electronic apparatus (900). The data processing method based on the memristor array (901) comprises: acquiring a plurality of first analog signals; configuring the memristor array (901) by writing data corresponding to the convolution parameter matrix of a convolution operation into the memristor array (901); inputting the plurality of first analog signals into a plurality of column signal input terminals of the configured memristor array (901); and controlling the memristor array (901) to perform the convolution operation on the plurality of analog signals, so as to obtain, at a plurality of row signal output terminals of the memristor array (901), a plurality of second analog signals resulting from the convolution. By mapping the convolution parameter matrix multiple times onto different memristor sub-arrays of the memristor array (901), the method obtains all results of the convolution operation in a single computation, greatly reducing the time required for shifting, lowering power consumption, and increasing computation speed.
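The single-pass trick can be emulated numerically: replicating the kernel at shifted positions (one copy per memristor sub-array) turns the convolution into one matrix-vector product, so every output appears at once. The sketch below uses a 1-D signal and an idealized dot product in place of the analog array.

```python
import numpy as np

def toeplitz_weight_matrix(kernel, input_len):
    """Replicate the kernel at shifted row positions, one copy per
    sub-array, so a single array operation yields every output."""
    k = len(kernel)
    out_len = input_len - k + 1
    W = np.zeros((out_len, input_len))
    for row in range(out_len):
        W[row, row:row + k] = kernel   # shifted copy of the convolution parameters
    return W

# One pass: the input vector drives the column lines; each row line
# outputs one convolution result (emulated here with a dot product).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # first analog signals
kernel = np.array([0.5, -1.0, 0.5])       # convolution parameter matrix
W = toeplitz_weight_matrix(kernel, len(x))
second_signals = W @ x                     # all outputs in one computation
print(second_signals)                      # equals np.convolve(x, kernel[::-1], 'valid')
```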
-
Publication Number: WO2022214309A1
Publication Date: 2022-10-13
Application Number: PCT/EP2022/057494
Application Date: 2022-03-22
Inventor: GOKMEN, Tayfun
Abstract: In a method of training a deep neural network, a processor initializes an element of an A matrix. The element may include a resistive processing unit. A processor determines incremental weight updates by updating the element with activation values and error values from a weight matrix multiplied by a chopper value. A processor reads an update voltage from the element. A processor determines a chopper product by multiplying the update voltage by the chopper value. A processor stores an element of a hidden matrix. The element of the hidden matrix may include a summation of continuous iterations of the chopper product. A processor updates a corresponding element of a weight matrix based on the element of the hidden matrix reaching a threshold state.
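A rough numerical sketch of the chopper scheme, assuming the usual rank-1 outer-product update used with resistive processing units; the threshold value and the idealized noiseless read of the A matrix are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 4, 3
A = np.zeros((n, m))    # A matrix (fast RPU elements)
H = np.zeros((n, m))    # hidden matrix: running sum of chopper products
W = np.zeros((n, m))    # weight matrix
threshold = 1.0         # hidden-element threshold state (assumed value)

for step in range(100):
    activation = rng.standard_normal(n)
    error = rng.standard_normal(m)
    chopper = rng.choice([-1.0, 1.0])           # random chopper value
    A += chopper * np.outer(activation, error)  # incremental update of the A element
    update_voltage = A                          # idealized read of the element
    H += chopper * update_voltage               # chopper product accumulates in H
    hit = np.abs(H) >= threshold                # hidden element reaches threshold state
    W[hit] += np.sign(H[hit]) * threshold       # update corresponding weight element
    H[hit] = 0.0
```

Because the chopper flips sign randomly, correlated noise in the A element averages out in H while the true gradient signal accumulates, which is the point of multiplying both the update and the read-back by the same chopper value.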
-
Publication Number: WO2022212107A1
Publication Date: 2022-10-06
Application Number: PCT/US2022/021221
Application Date: 2022-03-21
Applicant: SAMBANOVA SYSTEMS, INC.
Inventor: NAMA, Tejas Nagendra Babu , CHAPHEKAR, Ruddhi , SIVARAMAKRISHNAN, Ram , PRABHAKAR, Raghu , JAIRATH, Sumti , WANG, Junjue , LIANG, Kaizhao , FUCHS, Adi , MUSADDIQ, Matheen , SUJEETH, Arvind Krishna
Abstract: Disclosed is a data processing system that includes compile time logic configured to section a graph into a sequence of sections and to configure each section such that an input layer of the section processes an input, one or more intermediate layers of the section process corresponding one or more intermediate outputs, and a final layer of the section generates a final output. The final output has a non-overlapping final tiling configuration, the one or more intermediate outputs have corresponding one or more overlapping intermediate tiling configurations, and the input has an overlapping input tiling configuration. The compile time logic is further configured to determine the various tiling configurations by starting from the final layer, reverse traversing through the one or more intermediate layers, and ending with the input layer.
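The reverse traversal reduces to receptive-field arithmetic: starting from a non-overlapping tile of the final output, each layer enlarges the required input slice by its kernel halo, which is why the intermediate and input tilings overlap. A sketch with assumed kernel/stride values:

```python
def required_input_extent(out_start, out_end, kernel, stride):
    """Receptive-field arithmetic for one layer: the input slice needed
    to produce output indices [out_start, out_end)."""
    return out_start * stride, (out_end - 1) * stride + kernel

# Walk backwards from the final layer of a section to its input layer.
layers_reversed = [
    (3, 1),  # final layer (kernel, stride) -- assumed values
    (3, 1),  # intermediate layer
    (5, 2),  # input layer
]
tile = (0, 8)  # one tile of the final output: indices [0, 8), non-overlapping
for kernel, stride in layers_reversed:
    tile = required_input_extent(tile[0], tile[1], kernel, stride)
    print("needs input slice", tile)
# Adjacent final tiles map to input slices that overlap by the halo,
# giving the overlapping intermediate and input tiling configurations.
```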
-
Publication Number: WO2022206138A1
Publication Date: 2022-10-06
Application Number: PCT/CN2022/073040
Application Date: 2022-01-20
Applicant: Canaan Bright Sight Co., Ltd. (嘉楠明芯(北京)科技有限公司)
Abstract: Provided are a neural-network-based computation method and apparatus. A specific implementation: acquire an original image, and compute, from the size of the convolution kernel and the size of the original image, the total number of computation cycles and the image matrix corresponding to each computation cycle, the image matrix comprising multiple rows and columns of image data (S110); for the image matrix of each computation cycle, a plurality of computation units fetch image data in parallel according to a computation instruction and multiply the image data by pre-stored weight data to obtain intermediate data (S120); sum the intermediate data output by the plurality of computation units to obtain the computation result of each cycle (S130); and aggregate all computation results over the total number of cycles to obtain the target computation result (S140). This speeds up overall computation per unit time, simplifies the data-fetching logic, and reduces the bandwidth demand on a single computation unit. Convolutions of arbitrary size can be performed, improving convolution efficiency and hence image processing speed.
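One plausible reading of the cycle scheme, sketched in Python: the kernel taps are distributed over the computation units, each cycle the units multiply shifted image data by their pre-stored weights in parallel, and the per-cycle sums accumulate into the target result. The unit count and the tap-per-unit assignment are assumptions.

```python
import numpy as np

def conv_by_cycles(image, kernel, n_units=4):
    """Emulate the scheme: per cycle, several units multiply image data by
    pre-stored weights in parallel; per-cycle sums accumulate into the
    target computation result."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    taps = [(r, c) for r in range(kh) for c in range(kw)]
    n_cycles = -(-len(taps) // n_units)   # total cycles = taps / units, rounded up
    result = np.zeros((oh, ow))
    for cycle in range(n_cycles):
        batch = taps[cycle * n_units:(cycle + 1) * n_units]
        # Each unit handles one tap: a shifted image window times its weight.
        partial = sum(kernel[r, c] * image[r:r + oh, c:c + ow] for r, c in batch)
        result += partial                 # sum of the units' intermediate data
    return result

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0
print(conv_by_cycles(image, kernel))
```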
-
Publication Number: WO2022203809A1
Publication Date: 2022-09-29
Application Number: PCT/US2022/017855
Application Date: 2022-02-25
Applicant: QUALCOMM INCORPORATED
IPC: G06N3/063
Abstract: Various embodiments include methods and devices for processing a neural network by an artificial intelligence (AI) processor. Embodiments may include receiving AI processor operating condition information, dynamically adjusting an AI quantization level for a segment of a neural network in response to the operating condition information, and processing the segment of the neural network using the adjusted AI quantization level.
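A sketch of the adjustment policy; the operating-condition keys (temperature, battery), the thresholds, and the halving policy are all invented for illustration, since the abstract does not specify them.

```python
def adjust_quantization(operating_conditions, current_bits=8):
    """Pick a quantization level for a network segment from AI processor
    operating-condition information (keys and thresholds are assumptions)."""
    temp = operating_conditions.get("temperature_c", 40)
    battery = operating_conditions.get("battery_pct", 100)
    if temp > 85 or battery < 15:
        return max(4, current_bits // 2)  # coarser quantization to cut power
    return current_bits

def process_segment(segment_weights, bits):
    # Uniform symmetric quantization of the segment at the chosen level.
    scale = max(abs(w) for w in segment_weights) / (2 ** (bits - 1) - 1)
    return [round(w / scale) * scale for w in segment_weights]

bits = adjust_quantization({"temperature_c": 90, "battery_pct": 50})
print(bits, process_segment([0.5, -1.2, 0.03], bits))
```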
-