Abstract:
A method for data processing in a neural network system and a neural network system are provided. The method includes: inputting training data into the neural network system to obtain first output data, and adjusting, based on a deviation between the first output data and target output data, a weight value stored in at least one in-memory computing unit in some of a plurality of neural network arrays in the neural network system using parallel acceleration. These neural network arrays are configured to implement the computation of some of the neural network layers in the neural network system. The method can improve the performance and recognition accuracy of the neural network system.
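The sketch below is a minimal, hypothetical illustration of the training idea described above: only some of the arrays (here, one array per layer) have the weights of their in-memory computing units adjusted from the deviation between the first output data and the target output data. The array sizes, activation function, learning rate, and update mask are assumptions for illustration, not details taken from the application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three arrays, each implementing one layer as a weight matrix.
arrays = [rng.standard_normal((8, 8)) * 0.1 for _ in range(3)]
update_mask = [True, False, True]        # only "some" arrays are adjusted
learning_rate = 0.01

def forward(x):
    activations = [x]
    for w in arrays:
        x = np.tanh(x @ w)               # each array computes one layer
        activations.append(x)
    return activations

def train_step(x, target):
    acts = forward(x)
    first_output = acts[-1]
    delta = first_output - target        # deviation from the target output data
    # Propagate the deviation backwards, updating only the selected arrays.
    for i in reversed(range(len(arrays))):
        grad_pre = delta * (1.0 - acts[i + 1] ** 2)   # tanh derivative
        delta = grad_pre @ arrays[i].T                # propagate before updating
        if update_mask[i]:
            arrays[i] -= learning_rate * np.outer(acts[i], grad_pre)
    return float(np.mean((first_output - target) ** 2))

x = rng.standard_normal(8)
target = rng.standard_normal(8)
print(train_step(x, target))             # squared deviation for this step
```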
Abstract:
This application provides a storage and computing unit and a chip, which may be applied to a neural network system. The storage and computing unit includes a first transistor, a memristor, and a resistance modulation unit. A first port of the resistance modulation unit and a first port of the memristor are connected to a first electrode of the first transistor, and the first electrode of the first transistor is configured to control whether the first transistor is turned on or off. The resistance modulation unit is configured to adjust, based on the resistance of the memristor, a voltage applied to the first electrode of the first transistor. The memristor is configured to store first data, where the resistance of the memristor indicates the first data. When a voltage indicating second data is input to a second electrode of the first transistor, the first transistor outputs a computation result of the first data and the second data from a third electrode of the first transistor. Therefore, data computation throughput can be significantly improved, and the energy consumption of a computing system can be reduced.
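As a rough numerical analogy of the unit's operation (not the circuit itself): the first data is held as a memristor conductance, the second data arrives as a voltage, and the output current approximates their product by Ohm's law, so wiring a column of units together yields a multiply-accumulate. All values and variable names below are illustrative assumptions.

```python
import numpy as np

conductances = np.array([[1.0, 0.5],     # first data, stored per unit (siemens)
                         [0.2, 0.8]])
input_voltages = np.array([0.3, 0.6])    # second data, applied per row (volts)

# Each unit outputs I = G * V; connecting the outputs of a column sums the currents.
column_currents = input_voltages @ conductances
print(column_currents)                   # analog dot products read out per column
```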
Abstract:
Embodiments of this disclosure relate to the multimedia processing field, and provide a video frame interpolation method, apparatus, and device. In the video frame interpolation method of this disclosure, a first image at a first time, a second image at a second time, and sensor data captured by a dynamic vision sensor apparatus are obtained, where the sensor data includes dynamic event data between the first time and the second time. At least one target image is determined based on the first image, the second image, and the sensor data, where the at least one target image corresponds to at least one target time between the first time and the second time. The dynamic event data helps compensate for motion information missing from the existing image data. This enables accurate prediction of an intermediate image and improves the image prediction effect.
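The following is a minimal sketch of the data flow only, since the abstract does not specify the interpolation model: two frames and the dynamic event data between them are combined into an image at a target time. The event-integration and blending steps are illustrative placeholders, and the (timestamp, x, y, polarity) event layout is an assumption.

```python
import numpy as np

def interpolate_frame(first_image, second_image, events, t1, t2, t_target):
    """events: iterable of (timestamp, x, y, polarity) from the dynamic vision sensor."""
    alpha = (t_target - t1) / (t2 - t1)
    # Integrate event polarities up to the target time as a rough brightness-change map.
    change = np.zeros_like(first_image, dtype=float)
    for ts, x, y, polarity in events:
        if t1 <= ts <= t_target:
            change[y, x] += polarity
    # Placeholder fusion: event-compensated first frame blended with the second frame.
    return (1 - alpha) * (first_image + change) + alpha * second_image

frame1 = np.zeros((4, 4))
frame2 = np.ones((4, 4))
events = [(0.25, 1, 2, +1), (0.4, 3, 0, -1)]
print(interpolate_frame(frame1, frame2, events, t1=0.0, t2=1.0, t_target=0.5))
```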
Abstract:
Embodiments of this disclosure provide a data processing method, apparatus, and device, and relate to the field of machine vision technologies. The data processing method of this disclosure includes: receiving an event data stream, where the event data stream includes at least a first event data item and a second event data item, the first event data item includes a first timestamp at which the first event data item is obtained, the second event data item includes a second timestamp at which the second event data item is obtained, and the second event data item is the item obtained most recently before the first event data item; and obtaining a compressed event data stream corresponding to the event data stream, where the compressed event data stream includes at least a first compressed event data item corresponding to the first event data item, the first compressed event data item includes first time information, and the first time information is the time difference between the first timestamp and the second timestamp. In this way, the time information of an event data item is compressed. Because a time difference requires less storage space than a full timestamp, storage space is reduced, and subsequent processing is more efficient and faster.
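A minimal sketch of the timestamp compression described above: each compressed event item stores the difference between its timestamp and the timestamp of the item obtained immediately before it, instead of the absolute timestamp. The (timestamp, x, y, polarity) tuple layout and the function names are illustrative assumptions.

```python
def compress_event_stream(events):
    compressed = []
    prev_ts = None
    for ts, x, y, polarity in events:
        # First time information = difference to the previously obtained item's timestamp.
        delta = 0 if prev_ts is None else ts - prev_ts
        compressed.append((delta, x, y, polarity))
        prev_ts = ts
    return compressed

def decompress_event_stream(compressed, start_ts=0):
    events, ts = [], start_ts
    for delta, x, y, polarity in compressed:
        ts += delta
        events.append((ts, x, y, polarity))
    return events

stream = [(1000000, 5, 7, 1), (1000042, 5, 8, -1), (1000050, 6, 7, 1)]
packed = compress_event_stream(stream)
print(packed)                              # small deltas need far fewer bits than full timestamps
assert decompress_event_stream(packed, start_ts=1000000) == stream
```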
Abstract:
A spiking neural network circuit and a spiking neural network-based calculation method are disclosed. The circuit includes a plurality of decompression modules and a calculation module. The decompression modules are configured to separately obtain, based on information about a plurality of input neurons, a plurality of weight values in a compressed weight matrix and the identifiers of the corresponding output neurons. Each decompression module concurrently obtains the weight values with a same row number in the compressed weight matrix and the identifiers of the output neurons corresponding to those weight values. Each row of the compressed weight matrix has the same quantity of non-zero weight values, and each row of weight values corresponds to one input neuron. The calculation module is configured to separately determine the membrane voltages of the output neurons based on the plurality of weight values. The spiking neural network circuit can improve calculation efficiency.
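A minimal sketch of the decompression-and-accumulation idea, assuming a fixed number of non-zero weights per row (one row per input neuron) stored alongside the identifiers of the output neurons they connect to. The concrete sizes and the plain accumulation step are illustrative; the circuit-level behavior of the application is not reproduced here.

```python
import numpy as np

num_outputs = 6
# One row per input neuron; every row holds the same quantity of non-zero weights.
compressed_weights = np.array([[0.4, 0.1],
                               [0.3, 0.2],
                               [0.5, 0.6]])
output_ids = np.array([[0, 3],
                       [1, 4],
                       [2, 5]])            # output neuron identifiers per weight

def accumulate(spiking_inputs, membrane):
    """spiking_inputs: indices of input neurons that fired in this time step."""
    for i in spiking_inputs:               # each decompression module handles one row
        membrane[output_ids[i]] += compressed_weights[i]
    return membrane

membrane = np.zeros(num_outputs)
membrane = accumulate(spiking_inputs=[0, 2], membrane=membrane)
print(membrane)                            # updated membrane voltages of the output neurons
```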
Abstract:
Embodiments of the present invention disclose an ATCA data exchange system, an exchange board, and a data exchange method. The ATCA data exchange system includes a backplane, at least one exchange board, and at least one service board. The exchange board includes at least one Fabric port group, and each Fabric port group is connected to a service board through the backplane to form a first exchange channel for broadband service data exchange, where the Fabric port group includes four differential transmit and receive port pairs, and each differential transmit and receive port pair includes a pair of differential receive ports and a pair of differential transmit ports. A connector in the Fabric interface of the exchange board includes at least one differential transmit and receive port pair, and each such port pair is connected to a service board through the backplane to form a second exchange channel that is independent of the first exchange channel, so that narrowband service data is exchanged through the second exchange channel independently of the broadband data exchange. Through the embodiments of the present invention, the processing of narrowband data is simplified, and latency is reduced.