-
Publication No.: US11443797B2
Publication Date: 2022-09-13
Application No.: US16798166
Filing Date: 2020-02-21
Applicant: MACRONIX International Co., Ltd.
Inventor: Shu-Yin Ho, Hsiang-Pang Li, Yao-Wen Kang, Chun-Feng Wu, Yuan-Hao Chang, Tei-Wei Kuo
IPC: G11C11/54, G11C11/4091, G06N3/06, G06N3/08, G06F7/544, G11C11/408, G11C11/4094
Abstract: A method and an apparatus for neural network computation using adaptive data representation, adapted for a processor to perform multiply-and-accumulate operations on a memory having a crossbar architecture, are provided. The memory comprises multiple input and output lines crossing each other, multiple cells respectively disposed at the intersections of the input and output lines, and multiple sense amplifiers respectively connected to the output lines. In the method, the input cycle for the kth bits of the input data is adaptively divided into multiple sub-cycles, wherein the number of sub-cycles is determined according to the value of k. The kth bits of the input data are input to the input lines over the sub-cycles, and the computation results on the output lines are sensed by the sense amplifiers. The computation results sensed in each sub-cycle are combined to obtain the output data corresponding to the kth bits of the input data.
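The mechanism described in this abstract can be pictured with a short software sketch: for each bit position k, the kth bits of all input words drive the crossbar rows over several sub-cycles, the column sums sensed in each sub-cycle are combined, and the combined result is weighted by 2^k before being accumulated into the output. The Python below is a minimal simulation under assumptions of mine, not the patented circuit: the `adaptive_mac` and `subcycles_for_bit` names are invented for illustration, and the rule mapping k to a sub-cycle count is assumed, since the abstract only states that the count depends on k.

```python
import numpy as np

def adaptive_mac(inputs, weights, n_bits=8, subcycles_for_bit=None):
    """Bit-serial multiply-and-accumulate on a simulated crossbar.

    inputs  : (n_lines,) unsigned integers, one word per input line.
    weights : (n_lines, n_outputs) cell values stored at the crosspoints.
    subcycles_for_bit : maps bit index k to a number of sub-cycles; the
        abstract only says this count depends on k, so the default rule
        below is an assumption.
    """
    n_lines, n_outputs = weights.shape
    if subcycles_for_bit is None:
        # Assumed policy: more sub-cycles for more significant bits, so fewer
        # rows are summed at once where a sensing error would cost the most.
        subcycles_for_bit = lambda k: 1 + k // 2

    outputs = np.zeros(n_outputs, dtype=np.int64)
    for k in range(n_bits):                       # one input cycle per bit position k
        kth_bits = (inputs >> k) & 1              # kth bit of every input word
        groups = np.array_split(np.arange(n_lines), subcycles_for_bit(k))
        combined = np.zeros(n_outputs, dtype=np.int64)
        for rows in groups:                       # one sub-cycle per row group
            # the "sense amplifiers" read the column sums of the active rows
            combined += kth_bits[rows] @ weights[rows]
        outputs += combined << k                  # weight the combined result by 2^k
    return outputs

# Sanity check against a plain matrix-vector product.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=16)
w = rng.integers(0, 4, size=(16, 4))
assert np.array_equal(adaptive_mac(x, w), x @ w)
```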
-
Publication No.: US11594277B2
Publication Date: 2023-02-28
Application No.: US17871811
Filing Date: 2022-07-22
Applicant: MACRONIX International Co., Ltd.
Inventor: Shu-Yin Ho, Hsiang-Pang Li, Yao-Wen Kang, Chun-Feng Wu, Yuan-Hao Chang, Tei-Wei Kuo
IPC: G11C11/54, G06N3/06, G06N3/08, G06F7/544, G11C11/4091, G11C11/408, G11C11/4094
Abstract: A method for neural network computation using adaptive data representation, adapted for a processor to perform multiply-and-accumulate operations on a memory having a crossbar architecture, is provided. The memory comprises multiple input and output lines crossing each other, multiple cells respectively disposed at the intersections of the input and output lines, and multiple sense amplifiers respectively connected to the output lines. In the method, the input cycle for the kth bits of the input data is adaptively divided into multiple sub-cycles, wherein the number of sub-cycles is determined according to the value of k. The kth bits of the input data are input to the input lines over the sub-cycles, and the computation results on the output lines are sensed by the sense amplifiers. The computation results sensed in each sub-cycle are combined to obtain the output data corresponding to the kth bits of the input data.
-
Publication No.: US20220359003A1
Publication Date: 2022-11-10
Application No.: US17871811
Filing Date: 2022-07-22
Applicant: MACRONIX International Co., Ltd.
Inventor: Shu-Yin Ho, Hsiang-Pang Li, Yao-Wen Kang, Chun-Feng Wu, Yuan-Hao Chang, Tei-Wei Kuo
IPC: G11C11/54, G11C11/4091, G11C11/408, G06N3/06, G06N3/08, G06F7/544, G11C11/4094
Abstract: A method for neural network computation using adaptive data representation, adapted for a processor to perform multiply-and-accumulate operations on a memory having a crossbar architecture, is provided. The memory comprises multiple input and output lines crossing each other, multiple cells respectively disposed at the intersections of the input and output lines, and multiple sense amplifiers respectively connected to the output lines. In the method, the input cycle for the kth bits of the input data is adaptively divided into multiple sub-cycles, wherein the number of sub-cycles is determined according to the value of k. The kth bits of the input data are input to the input lines over the sub-cycles, and the computation results on the output lines are sensed by the sense amplifiers. The computation results sensed in each sub-cycle are combined to obtain the output data corresponding to the kth bits of the input data.
-
Publication No.: US20210158160A1
Publication Date: 2021-05-27
Application No.: US17096575
Filing Date: 2020-11-12
Applicant: MACRONIX International Co., Ltd.
Inventor: Wei-Chen Wang, Shu-Yin Ho, Chien-Chung Ho, Yuan-Hao Chang
Abstract: An operation method of an artificial neural network is provided. The operation method includes: dividing input information into a plurality of pieces of sub-input information and expanding kernel information to generate expanded kernel information; performing a Fast Fourier Transform (FFT) on the sub-input information and the expanded kernel information to generate frequency-domain sub-input information and frequency-domain expanded kernel information, respectively; multiplying the frequency-domain expanded kernel information with each piece of frequency-domain sub-input information to generate a plurality of sub-feature maps; and performing an inverse FFT on the sub-feature maps to provide a plurality of converted sub-feature maps for executing a feature extraction operation of the artificial neural network.
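The flow in this abstract, splitting the input into sub-inputs, expanding (zero-padding) the kernel, transforming both, multiplying the frequency-domain pieces, and inverse-transforming them into sub-feature maps, matches block-wise FFT convolution. The sketch below is a minimal 1-D overlap-add rendering of that flow for illustration only; the `fft_conv_by_blocks` name, the block length, and the exact padding choices are assumptions, since the abstract does not specify how the input is split or how the kernel is expanded.

```python
import numpy as np

def fft_conv_by_blocks(signal, kernel, block_len=64):
    """Block-wise FFT convolution (1-D overlap-add sketch).

    Each block of the input stands in for a "sub-input", the zero-padded
    kernel for the "expanded kernel information", and each inverse FFT
    yields one "sub-feature map" that is added into the output.
    """
    k_len = len(kernel)
    fft_len = block_len + k_len - 1                     # room for a linear convolution
    kernel_f = np.fft.rfft(kernel, fft_len)             # expanded kernel, transformed once

    n_blocks = -(-len(signal) // block_len)             # ceiling division
    out = np.zeros(n_blocks * block_len + k_len - 1)
    for start in range(0, len(signal), block_len):
        sub_input = signal[start:start + block_len]                   # one sub-input
        sub_input_f = np.fft.rfft(sub_input, fft_len)                 # frequency-domain sub-input
        sub_feature = np.fft.irfft(sub_input_f * kernel_f, fft_len)   # one sub-feature map
        out[start:start + fft_len] += sub_feature                     # overlap-add the pieces
    return out[:len(signal) + k_len - 1]

# Sanity check against direct convolution.
rng = np.random.default_rng(1)
x = rng.standard_normal(300)
h = rng.standard_normal(9)
assert np.allclose(fft_conv_by_blocks(x, h), np.convolve(x, h))
```

Transforming the kernel once and reusing `kernel_f` across every block is what makes the block-wise flow attractive for long inputs compared with transforming the whole input at once.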